
Latest news with #Siri-like

M3GAN 2.0

Time Out

12 hours ago


Watching a bunch of tech tycoons get set upon by killer AI dolls feels like an easy win for Hollywood right now. Who doesn't want to see thinly veiled versions of Sam Altman and Elon Musk trying to fight off the psychotic fruits of their labours? Form an orderly queue, then, for an unceasingly silly and consistently entertaining sequel that delivers more – quite a lot more – of the knowing, campy shocks that made 2023's original a box-office hit and TikTok sensation.

2.0 picks up a little after M3GAN left off. The murderous robot girl-doll has been vanquished; its creator, repentant toy inventor Gemma (Allison Williams), has emerged from a stretch in prison vowing to bring kids' tech usage under control, with the help of a non-profit run by eligible altruist Christian (SNL's Aristotle Athari, giving major ketamine). Meanwhile, Gemma's niece, Cady (Violet McGraw), has learnt some key life lessons from her doll friend's kill spree. Namely: be more like Steven Seagal.

Even her newfound martial arts skills can't help her, or her aunt, when a power-lusting tech baron (a scene-stealing Jemaine Clement) and the FBI come knocking – the latter looking for help to track down a rogue militarised AI doll called AMELIA (Ivanna Sakhno), made with Gemma's designs. Luckily – or unluckily – M3GAN is still out there in the ether, initially as a Siri-like presence, then, in a winningly daft twist, reembodied as a sulky robot companion toy, and finally as an upgraded version of her old self. But can the calculating M3GAN really be trusted to help? Probably not.

Expanding the canvas (and the runtime) to a convoluted degree, returning writer-director Gerard Johnstone finds inspiration in classics as diverse as Terminator 2, Fritz Lang's Metropolis, Frankenstein and Home Alone in the highly calibrated mayhem that follows. Chucky, too, of course. Like the first movie, it's not especially scary, but the lack of shocks never detracts from the fun as the sarky, super-smart doll slowly reclaims the stage. It works best as a comedy, with one Kate Bush musical moment a major highlight.

M3GAN 2.0 continues to offer up a goofy brand of cautionary tale, too: against AI, tech dependence, and Silicon Valley types who want to stick a chip in their brains. You can take that as seriously as you want to; just don't be surprised to find yourself watching it again on your cellphone one day.

‘Sorry, I didn't get that': AI misunderstands some people's words more than others

Yahoo

27-01-2025


The idea of a humanlike artificial intelligence assistant that you can speak with has been alive in many people's imaginations since the release of ‘Her,' Spike Jonze's 2013 film about a man who falls in love with a Siri-like AI named Samantha. Over the course of the film, the protagonist grapples with the ways in which Samantha, real as she may seem, is not and never will be human.

Twelve years on, this is no longer the stuff of science fiction. Generative AI tools like ChatGPT and digital assistants like Apple's Siri and Amazon's Alexa help people get driving directions, make grocery lists, and plenty else. But just like Samantha, automatic speech recognition systems still cannot do everything that a human listener can.

You have probably had the frustrating experience of calling your bank or utility company and needing to repeat yourself so that the digital customer service bot on the other end of the line can understand you. Maybe you've dictated a note on your phone, only to spend time editing garbled words.

Linguistics and computer science researchers have shown that these systems work worse for some people than for others. They tend to make more errors if you have a non-native or regional accent, are Black, speak African American Vernacular English, code-switch, are a woman, are old or very young, or have a speech impediment (a sketch of how such error rates are measured appears below).

Unlike you or me, automatic speech recognition systems are not what researchers call ‘sympathetic listeners.' Instead of trying to understand you by taking in other useful clues like intonation or facial gestures, they simply give up. Or they take a probabilistic guess, a move that can sometimes result in an error.

As companies and public agencies increasingly adopt automatic speech recognition tools in order to cut costs, people have little choice but to interact with them. But the more these systems come into use in critical fields, ranging from emergency first responders and health care to education and law enforcement, the more likely there will be grave consequences when they fail to recognize what people say.

Imagine sometime in the near future you've been hurt in a car crash. You dial 911 to call for help, but instead of being connected to a human dispatcher, you get a bot that's designed to weed out nonemergency calls. It takes you several rounds to be understood, wasting time and raising your anxiety level at the worst moment.

What causes this kind of error to occur? Some of the inequalities that result from these systems are baked into the reams of linguistic data that developers use to build large language models. Developers train artificial intelligence systems to understand and mimic human language by feeding them vast quantities of text and audio files containing real human speech. But whose speech are they feeding them? If a system scores high accuracy rates when speaking with affluent white Americans in their mid-30s, it is reasonable to guess that it was trained using plenty of audio recordings of people who fit this profile.

With rigorous data collection from a diverse range of sources, AI developers could reduce these errors. But building AI systems that can understand the infinite variations in human speech arising from things like gender, age, race, first vs. second language, socioeconomic status, ability and plenty else requires significant resources and time.

For people who do not speak English – which is to say, most people around the world – the challenges are even greater.
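A brief aside on measurement: the disparities such studies report are usually expressed as word error rate (WER), the number of word substitutions, deletions and insertions in a machine transcript divided by the number of words actually spoken, computed separately for each group of speakers and then compared. Below is a minimal sketch in Python; the transcripts are invented purely for illustration.

# Minimal sketch of word error rate (WER), the standard accuracy metric
# for speech recognition. The example transcripts are invented.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference words,
    computed via word-level edit distance (dynamic programming)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match or substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Comparing a system's output against what a speaker actually said:
reference = "turn the heating on in the living room"
print(word_error_rate(reference, "turn the heating on in the living room"))  # 0.0
print(word_error_rate(reference, "turn the eating on in the living"))        # 0.25

A WER of 0.25 means one word in four came out wrong; a persistent gap in average WER between groups of speakers is what ‘works worse for some people than for others' means in practice.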
Most of the world's largest generative AI systems were built in English, and they work far better in English than in any other language. On paper, AI has lots of civic potential for translation and for increasing people's access to information in different languages, but for now, most languages have a smaller digital footprint, making it difficult for them to power large language models.

Even within languages well served by large language models, like English and Spanish, your experience varies depending on which dialect of the language you speak. Right now, most speech recognition systems and generative AI chatbots reflect the linguistic biases of the datasets they are trained on. They echo prescriptive, sometimes prejudiced notions of ‘correctness' in speech. In fact, AI has been shown to ‘flatten' linguistic diversity. There are now AI startup companies that offer to erase the accents of their users, drawing on the assumption that their primary clientele would be customer service providers with call centers in foreign countries like India or the Philippines. The offering perpetuates the notion that some accents are less valid than others.

AI will presumably get better at processing language, accounting for variables like accents, code-switching and the like. In the U.S., public services are obligated under federal law to guarantee equitable access to services regardless of what language a person speaks. But it is not clear whether that alone will be enough incentive for the tech industry to move toward eliminating linguistic inequities.

Many people might prefer to talk to a real person when asking questions about a bill or a medical issue, or at least to have the ability to opt out of interacting with automated systems when seeking key services. That is not to say that miscommunication never happens in interpersonal communication, but when you speak to a real person, they are primed to be a sympathetic listener. With AI, at least for now, it either works or it doesn't: if the system can process what you say, you are good to go; if it cannot, the onus is on you to make yourself understood.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by Roberto Rey Agudo, Dartmouth College.

Read more:

  • Eliminating bias in AI may be impossible – a computer scientist explains how to tame it instead
  • I unintentionally created a biased AI algorithm 25 years ago – tech companies are still making the same mistake
  • Building machines that work for everyone – how diversity of test subjects is a technology blind spot, and what to do about it

Roberto Rey Agudo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
