LSET Successfully Hosts 'Meet‑Cute with a Machine', the First Episode of Its My Better Bot Half Podcast
LSET launched the first episode of its podcast series My Better Bot Half on 12 July 2025. Titled 'Meet‑Cute with a Machine', the episode explores the emotional and ethical connections between humans and AI. The podcast blends expert insights with imaginative dialogue and aligns closely with LSET's AI and Machine Learning courses. Future episodes will continue to explore human–machine relationships.
London, UK - 21 July 2025 - The London School of Emerging Technology (LSET) has launched the premiere episode of its new podcast series, My Better Bot Half. The debut episode, titled 'Meet‑Cute with a Machine', is now officially available on Spotify, Apple Podcasts, YouTube, and other major platforms.
Hosted by Mayur Ramgir, the episode explores the hilariously complex and at times unsettling relationship between humans and their AI assistants. From ELIZA's scripted charm to Alexa's overly attached behaviour, this rom-com-meets-tech-thriller conversation unpacks emotional tech entanglements and the hidden cybersecurity risks that often go unnoticed.
Special Guests
The episode features two notable guests, both LSET alumni making an impact in the real world:
Their combined insights offer a unique blend of legal perspective, academic depth and cybersecurity relevance, with a touch of humour and storytelling throughout the discussion.
Available in Audio and Video Formats
The episode is accessible in both audio and video formats, available now on:
What to Expect
Interested in AI and Machine Learning?
The topics explored in this episode directly connect with LSET's practical courses in Artificial Intelligence and Machine Learning. From ethical design to AI risk mitigation, students gain real-world skills aligned with future careers.
What's Next?
More episodes from My Better Bot Half are in production, each exploring how technology intersects with identity, emotion, ethics and innovation in daily life.
To stream the episode or follow upcoming releases, visit:
• https://mybetterbothalf.lset.uk/e/meet-cute-with-a-machine/
• https://lset.uk/podcasts/mybetterbothalf/
For media or guest enquiries, please contact [email protected]
Link: https://www.podbean.com/player-v2/?i=qfkd4-13f1eaa-pbblog-playlist&share=1&download=1&fonts=Arial&skin=1&font-color=auto&rtl=0&logo_link=episode_page&btn-skin=7&size=315
Media Contact
Company Name: London School of Emerging Technology
Contact Person: Jim Cohn
Email: Send Email
Phone: 02033699909
Address: 1 Cornhill
City: London
Country: United Kingdom
Website: http://www.lset.uk
Press Release Distributed by ABNewswire.com
To view the original version on ABNewswire visit: LSET Successfully Hosted 'Meet‑Cute with a Machine' The First Episode of My Better Bot Half Podcast