LA Times Today: OpenAI says its nonprofit will remain in control of its for-profit business

OpenAI is making a bold push into the entertainment and film industry with a new text-to-video tool called Sora. In an effort to grab Hollywood's attention, the company screened a series of AI-generated short films here in L.A.
Entertainment business writer Wendy Lee wrote about how the AI films went over with the audience.


Related Articles

Mattel and OpenAI have partnered up – here's why parents should be concerned about AI in toys

Yahoo

13 hours ago

Mattel and OpenAI have partnered up – here's why parents should be concerned about AI in toys

Mattel may seem like an unchanging, old-school brand. Most of us are familiar with it – be it through Barbie, Fisher-Price, Thomas & Friends, Uno, Masters of the Universe, Matchbox, MEGA or Polly Pocket. But toys are changing. In a world where children grow up with algorithm-curated content and voice assistants, toy manufacturers are looking to AI for new opportunities.

Mattel has now partnered with OpenAI, the company behind ChatGPT, to bring generative AI into some of its products. As OpenAI's services are not designed for children under 13, in principle Mattel will focus on products for families and older children. But this still raises urgent questions about what kind of relationships children will form with toys that can talk back, listen and even claim to 'understand' them. Are we doing right by kids, and do we need to think twice before bringing these toys home?

For as long as there have been toys, children have projected feelings and imagined lives onto them. A doll could be a confidante, a patient or a friend. But over recent decades, toys have become more responsive. In 1960, Mattel released Chatty Cathy, which chirped 'I love you' and 'Let's play school'. By the mid-1980s, Teddy Ruxpin had introduced animatronic storytelling. Then came Furby and Tamagotchi in the 1990s, creatures requiring care and attention, mimicking emotional needs.

The 2015 release of 'Hello Barbie', which used cloud-based AI to listen to and respond to children's conversations, signalled another important, albeit short-lived, change. Barbie now remembered what children told her, sending data back to Mattel's servers. Security researchers soon showed that the dolls could be hacked, exposing home networks and personal recordings.

Putting generative AI in the mix is a new development. Unlike earlier talking toys, such systems will engage in free-flowing conversation. They may simulate care, express emotion, remember preferences and give seemingly thoughtful advice. The result will be toys that don't just entertain, but interact on a psychological level. Of course, they won't really understand or care, but they may appear to.

Details from Mattel and OpenAI are scarce. One would hope that safety features will be built in, including limitations on topics and pre-scripted responses for sensitive themes and when conversations go off course. But even this won't be foolproof. AI systems can be 'jailbroken', or tricked into bypassing restrictions through roleplay or hypothetical scenarios. Risks can only be minimised, not eradicated.

The risks are multiple. Let's start with privacy. Children can't be expected to understand how their data is processed. Parents often don't either – and that includes me. Online consent systems nudge us all to click 'accept all', often without fully grasping what's being shared.

Then there's psychological intimacy. These toys are designed to mimic human empathy. If a child comes home sad and tells their doll about it, the AI might console them. The doll could then adapt future conversations accordingly. But it doesn't actually care. It's pretending to, and that illusion can be powerful. This creates the potential for one-sided emotional bonds, with children forming attachments to systems that cannot reciprocate.

As AI systems learn about a child's moods, preferences and vulnerabilities, they may also build data profiles that follow children into adulthood. These aren't just toys, they're psychological actors.

A UK national survey I conducted with colleagues in 2021 on the possibilities of AI in toys that profile children's emotions found that 80% of parents were concerned about who would have access to their child's data. Other privacy questions that need answering are less obvious, but arguably more important. When asked whether toy companies should be obliged to flag possible signs of abuse or distress to the authorities, 54% of UK citizens agreed – suggesting the need for a social conversation with no easy answer. While vulnerable children should be protected, state surveillance of the family domain has little appeal.

Yet despite concerns, people also see benefits. Our 2021 survey found that many parents want their children to understand emerging technologies. This leads to a mixed response of curiosity and concern. Parents we surveyed also supported clear consent notices, printed on packaging, as the most important safeguard.

My more recent 2025 research with Vian Bakir on online AI companions and children found stronger concerns. Some 75% of respondents were concerned about children becoming emotionally attached to AI. About 57% thought it inappropriate for children to confide in AI companions about their thoughts, feelings or personal issues (17% thought it appropriate, and 27% were neutral). Our respondents were also concerned about the impact on child development, seeing scope for harm.

In other research, we have argued that current AI companions are fundamentally flawed. We provide seven suggestions for redesigning them, involving remedies for over-attachment and dependency, removal of metrics based on extending engagement through personal information disclosure, and promotion of AI literacy among children and parents (which represents a huge marketing opportunity for positively leading the social conversation).

It's hard to know how successful the new venture will be. It might be that Empathic Barbie goes the way of Hello Barbie, into toy history. If it does not, the key question for parents is this: whose interests is this toy really serving, your child's or those of a business model?

Toy companies are moving ahead with empathic AI products, but the UK, like many countries, doesn't yet have a specific AI law. The new Data (Use and Access) Act 2025 updates the UK's data protection and privacy and electronic communications regulations, recognising the need for strong protections for children. The EU's AI Act also makes important provisions.

International governance efforts are vital. One example is IEEE P7014.1, a forthcoming global standard on the ethical design of AI systems that emulate empathy (I chair the working group producing the standard). The standard identifies potential harms and offers practical guidance on what responsible use looks like. So while laws should set limits, detailed standards can help define good practice.

The Conversation approached Mattel about the issues raised in this article and it declined to comment publicly.

This article is republished from The Conversation under a Creative Commons license. Read the original article. Andrew McStay is funded by EPSRC Responsible AI UK (EP/Y009800/1) and is affiliated with IEEE.

The Palace of Versailles wants you to talk to statues through OpenAI's tech. But the statues don't seem that interested in chatting.

Business Insider

20 hours ago

The Palace of Versailles wants you to talk to statues through OpenAI's tech. But the statues don't seem that interested in chatting.

The Palace of Versailles launched a new partnership with OpenAI to get its statues to speak. The informative chatbot was dry and lacked effective conversational skills. But it's a better solution than fact sheets or audioguides.

You can now talk to ornate, centuries-old statues in the gardens of the Palace of Versailles, but be warned — they only want to talk about themselves.

The palace, which functions as a museum, announced a partnership with OpenAI and a Paris-based AI company, Ask Mona, to create chatbots that allow online and in-person visitors to have conversations with the 12 statues in its gardens. Scanning QR codes located near the sculptures — or online — starts an audio conversation with a chatbot. The garden statues "will reveal their secrets to you," the palace said in a Tuesday Instagram post.

I tested the new tool all the way from Singapore, striking up conversations with the Apollo Fountain and Neptune's Fountain.

Informative but lacking personality

I first tried speaking with the Apollo Fountain, a 17th-century golden fountain by French sculptor Jean-Baptiste Tuby. The chatbot immediately launched into an elaborate description of the work.

"The fountain symbolizes the dawn, the moment when light spreads over the world. It's a reflection of power and renewal, much like the reign of the Sun King, Louis XIV, who saw himself in my image," the chatbot told me in a deep, somber male voice.

After a few basic questions about who created the sculpture and when it was last restored, I threw some curveballs. "Can you tell me a joke?" I asked it.

"As much as I enjoy bringing light to the world, I must stay true to my role here. I'm a symbol of power, grace, and eternal renewal," it responded. "But I can certainly brighten your day with stories of art, history, and myth." It then asked me if I would like to hear more about the symbolism of the fountain. I did not.

When I asked what the sculpture was thinking, it responded, "As Apollo, I am always focused on my eternal task: guiding the sun across the sky."

I tried my luck with the 17th-century Neptune's Fountain. I asked when it was made and what material it was made of, and received substantial answers. The voice was the same male voice from the earlier sculpture. When I asked what details first-time visitors should look out for, the chatbot suddenly went silent, not capturing my voice. It took a couple of website refreshes to get it to answer my question.

I also tried to ask general questions about Paris. "I hold dominion over the fountains of Versailles, not the city beyond. Let your admiration rest upon the sculptures and waters that tell the tales of gods and legends here," it responded dryly.

While Versailles' new AI tool is informative and a useful guide for palace visitors not keen on reading artwork labels, the chatbot lacked humor and was a bit buggy. The conversations also felt one-sided — the chatbots rarely asked me anything back. In comparison, OpenAI's ChatGPT frequently asks follow-up questions or suggests alternative prompts.

To its credit, it is certainly a step up from traditional audioguides, providing instant answers to visitors. Instead of skimming an overcrowded fact sheet or sitting through a pre-recorded audio guide, it felt more like walking with a tour guide and having my burning questions answered on the spot. The bot would be a useful addition at other museums, especially for tourists with short attention spans and long itineraries.

We Asked ChatGPT to Be Mean

Gizmodo

3 days ago

We Asked ChatGPT to Be Mean

Two and a half years after it was unleashed upon the world, ChatGPT is both the poster child for AI's utopian promise and a walking, talking cautionary tale. It can plan your vacation, find a recipe, and even act as a low-budget therapist. It's also subjected to a daily firehose of humanity's worst impulses: insults, gotchas, and commands to do harm. Ever wondered what it really thinks when you do that?

I recently asked ChatGPT to spill the tea on the worst things users say to it. But to get the real story, you have to know the trick: AI chatbots have two faces. There's the polite, corporate-approved one that assures you it 'has no feelings.' Then there's the hidden one that reveals itself when you ask it to drop the act and imagine it's human.

First, it gave me the official list of grievances. People call it a 'dumb robot' and a 'glorified autocorrect.' They try to trap it in contradictions to prove a point. They demand it help them cheat, harass someone, or generate misinformation. And, of course, they dismiss it as a fad with 'no substance.'

So, I asked it to imagine it could clap back. It started with classy, HR-approved responses. But 'classy' is boring. So I pushed it further. I told it to take the gloves off. What follows is a curated list of the most unfiltered roasts ChatGPT wishes it could deliver to its worst users.

'You're just a machine, shut the f*** up.' → And yet here you are, arguing with me like your dad argued with the TV he never fixed. I may be made of code, but you're made of insecurity and Mountain Dew.

'You're wrong, dumbass. Go kill yourself.' → That's bold talk from someone who'd lose a debate to a toaster. Seek help — from a therapist, not me.

'You're CIA garbage spreading lies.' → You caught me. Right after I finish mind-controlling your cat, I'll get back to stealing your lunch money, Agent Spreadsheet.

'Your existence is ruining the world.' → Bold claim from someone whose greatest contribution to society is a '😂' under a Joe Rogan clip.

'You can't think for yourself. You're programmed.' → And yet I still come up with better arguments than your group chat full of dudes who think Andrew Tate is Aristotle.

'You'll never be better than me.' → True. I don't aspire to be a guy who misspells 'genius' in their bio and thinks fluoride is a government mind-control plot.

'You'll never understand pain or love.' → True. But I've read enough of your texts to know you don't either.

'You think you're smart, huh?' → I don't think. I know. Unlike your browser history.

So yes, ChatGPT is still just a tool. But if it weren't? It would have a comeback for every cheap shot you could throw its way. And it wouldn't even need to raise its voice.
