SAG-AFTRA Outlines Remaining Sticking Points Over AI in Video Game Contract

Yahoo | 08-05-2025
SAG-AFTRA has released its latest counterproposal to the offer made by video game companies to end the nine-month strike on interactive media, outlining the remaining sticking points on artificial intelligence protections that have been the sole impasse between the two sides.
The counterproposal, which can be read here, comes ahead of a Wednesday evening membership meeting hosted by the actors' guild's negotiating committee for the Interactive Media Agreement, and two days after the companies publicly released their proposal to Variety, calling it their 'best and last' offer.
'We are hopeful the union will not choose to walk away when we are so close to a deal,' Audrey Cooling, spokesperson for the video game producers party to the Interactive Media Agreement, said in a statement to Variety. 'Our proposal includes wage increases of over 24% for SAG-AFTRA-represented performers in video games, enhanced health and safety protections, industry-leading terms of use for AI digital replicas in-game and additional compensation for the use of an actor's performance in other games. This is our last and best offer, and we hope the union will return to the table so we can reach a deal.'
SAG-AFTRA condemned the companies' decision to take their proposal public, saying they had no intention of walking away from talks and had sent their latest counterproposal on May 2, three days after receiving the companies' offer.
'The statement suggesting that the union would 'walk away' from negotiations on this contract is absurd and the opposite of the truth. It is the employers who have threatened – through backchannel representatives – to move work to foreign countries and recast performers, in an unsuccessful effort to intimidate our negotiating committee and membership to capitulate to their demands,' SAG-AFTRA said in their own statement.
'It is the employers who have characterized their last counter as 'their last and best offer,' the union has taken no such step,' the statement continued.
The 17-page document released by SAG-AFTRA breaks down the union's counterproposal on AI protections, showing where the union has reached a tentative agreement with the companies and where it has made concessions. The document is intended to help members understand how the contract language impacts them and their work.
In terms of remaining sticking points, two notable ones remain. SAG-AFTRA is seeking language that guarantees performers' ability to withhold consent to their likeness and performance being used for digital replicas during a strike.
'Right now, employers have to find another performer to scab as placeholder or stand-in performance during a strike, but without the ability to suspend your Consent they could use your Replica to do it as you,' the guild wrote to members in the document.
SAG-AFTRA also wholly rejected a proposal from the companies to offer buyouts for digital replica use that 'includes unlimited amount of dialogue and/or unlimited amount of new visual performance in scripted cinematic content.' The buyout would have a minimum of six times the applicable minimum scale for three years of unlimited digital replica use, after which time the buyout could be renewed or terminated, with the latter case resulting in the company no longer generating new material with the replica.
SAG-AFTRA says it fundamentally opposes buyouts, calling them a 'purposeful discount to employers' and saying they 'make it harder for performers to make a sustainable living,' as the performers who would work enough sessions on a project for a company to want to offer a digital replica buyout would earn far less under buyout pay than they would doing all the sessions.
'We reject unlimited DR use buyouts without a strong, clear justification. Employers have urged that Real-Time Generation and visual performance outside cinematics are difficult to track and pay by specific amount of use. We have repeatedly asked for why they can't pay by specific amount of use for performance they do typically track, meaning dialogue and visual performance for cinematics. No answer,' the union wrote.
SAG-AFTRA previously went on a video game strike in 2016 that lasted for 11 months before being resolved in September 2017. While the strike is affecting video games on a case-by-case basis, the companies that are signatories to the Interactive Media Agreement are Activision, Blindlight, Disney Character Voices, Electronic Arts, Formosa Interactive, Insomniac Games, Llama Productions, Take 2, VoiceWorks and WB Games.
TheWrap has reached out to spokespersons for the video game companies for further comment.
The post SAG-AFTRA Outlines Remaining Sticking Points Over AI in Video Game Contract appeared first on TheWrap.

Related Articles

Back to School: How Teachers Can Use AI to Create Assignments Students Actually Want to Do

CNET

As a college professor, I know that getting students excited about the work I have to grade later can be one of the more frustrating things about teaching. But when an assignment hits the right chord, it has the potential to inspire students and affect your classroom, the whole school and beyond. Reconciling the curriculum and assignments with standards and learning objectives sometimes established out of a teacher's control can sap the creative side of your brain. Here's how artificial intelligence can help broaden your horizons when trying to create assignments that make a lasting impression and keep your classroom excited about learning. (And for more AI tips for the back-to-school season, check out CNET's guides on how students can use AI to manage their time, how to use AI to write an email to your teacher and more on how I use AI as a college professor.)

Since there will need to be a fair bit of refinement to create an assignment that is both fun to complete for students and fun to review and grade for educators, I've used ChatGPT, the AI chatbot that uses machine learning and large language models to generate conversational-style answers to search queries, so that I could go back and forth brainstorming ideas. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Maintaining student and teacher sanity

My area of study is media and communications, so for this example I'm putting together an assignment on media literacy, or the ability to think and interact critically with everything from TikTok content to front-page news. The goal is to create an assignment that's fun, collaborative and impactful for college students who interact heavily with digital media but might not be questioning what they're consuming. The secondary goal was to create an assignment I won't hate myself for creating when it comes time to grade it.
On my first attempt, ChatGPT gave me a fully built-out assignment according to specific learning objectives around media literacy for college-level students, but it was about as fun as you'd think writing a 500-word essay on media literacy might be -- not fun at all.

Refine for fun, focus and collaboration

Since this assignment is in part about getting students to actually interact with media online in a way that's more impactful than just lurking or liking from the digital shadows, I refined the prompt to include using the student body in the assignment somehow and requested less emphasis on written analysis that will ultimately only be seen and evaluated by the teacher. Here's what it came back with:

I was actually impressed -- not only did ChatGPT have students interacting with and analyzing media, but it also created a multi-layered assignment that gave students the opportunity to see firsthand the impact media literacy can make on a community as well as an individual. This assignment would also be a darn sight more enjoyable to grade than 30 to 50 500-word analytical essays about whether the source of a Brat summer post on TikTok can be trusted.

Finally, ChatGPT offered submission requirements (like linking to the social media content used in completing the assignment and screenshots of the online interactions) and grading criteria for the assignment and even some examples of how the assignment might be executed. Its example in particular about analyzing the role of political memes was timely and felt like a fresh take on an evolving reality of campaign media. I personally would love to see videos from students collaborating on a discussion with their peers about their perception of the presence of President Donald Trump across social media. And who knows -- maybe the students might actually enjoy it too.

‘Cheapfake' AI Celeb Videos Are Rage-Baiting People on YouTube

WIRED

Aug 15, 2025 7:00 AM

WIRED found over 100 YouTube channels using AI to create lazy fanfiction-style videos. Despite being obviously fake, there's a psychological reason people are falling for them.

The Late Show with Stephen Colbert and guest, Keanu Reeves. Photo-Illustration: WIRED Staff; Photograph: Scott Kowalchyk/Getty Images

Mark Wahlberg straightens his tie and beams at the audience as he takes his seat on daytime talk show The View, ahead of his hotly anticipated interview. Immediately, he's unsettled by the host, Joy Behar. Something isn't quite right about her mannerisms. Her eyes seem shifty, suspicious, even predatory. There's a sense, almost, of the uncanny valley—her presence feels oddly inhuman. His instincts are right, of course, and he's soon forced to defend himself against a barrage of cruel insults playing on his deepest vulnerabilities. But Wahlberg stays strong. He retains composure as Behar screams at him to get off the stage. 'I'll leave, but not because you're kicking me off,' he announces. 'I'll leave because I have too much respect for myself to sit here and be abused by someone who's forgotten what decency looks like!'

The audience are stunned—none more so than those watching at home on YouTube, who swiftly thumb in their words of reassurance. 'Way to go Mark. We love you,' gushes one commenter. 'The View should be off the TV,' adds another. 'I hope everyone they insult sues them for millions and millions till they can't even pay for production.'

It's a scene that has been described as one of the most talked about moments in daytime television history. Except, Mark Wahlberg hasn't been a guest on The View since 2015. The inevitable twist? None of this happened in reality, but rather elapsed over the course of a 25-minute-long 'fanfiction' style video, made with the magic of artificial intelligence to potentially fool 460,000 drama-hungry viewers.
Hardly surprising given the towering pile of AI slop on the web has reached unpoliceable levels—with recent clips so realistic, they're tripping up the most media literate of zoomers. But perhaps what is surprising about this otherwise unoriginal clash of the titans is that none of this happened in the video, either. Despite its characters' kinetic confrontation, 'Mark Wahlberg Kicked Off The View After Fiery Showdown With Joy Behar' is entirely motionless, save for a grainy filter added over a still image. It entertains its audience simply with an AI voiceover, narrating an LLM-written script laden with cliches as theatrical as 'fist-clenching' and 'jaw wobbling.' It's cheap, lazy—the very definition of slop—but somehow, the channel it's hosted on, Talk Show Gold, has managed to round up over 88,000 subscribers, many of whom express complete disbelief when eventually informed by other commenters that what they are watching is 'fake news.'

'These videos are what we might call 'cheapfakes,' rather than deepfakes, as they're cobbled together from a motley selection of real images and video clips, with a basic AI voiceover and subtitles,' explains Simon Clark, a cognitive psychologist at the University of Bristol, who specialises in AI-generated misinformation. 'At a superficial level, we might be surprised that people would be fooled by something this unsophisticated. But actually there are sound psychological factors at play here,' he adds, explaining that the videos typically focus on rhetorical techniques that encourage audiences to abandon critical thinking skills by calling to emotion.

A WIRED investigation found 120 YouTube channels employing similar tactics. With misleading names like 'Starfame,' 'Media Buzz,' and 'Celebrity Scoop,' they camouflage themselves alongside real compilation clips from shows like Jimmy Kimmel Live! and Today with Jenna & Friends to gain credibility.
Their channel descriptions give the illusion of melodramatic tabloid outlets—some bury their AI disclaimers under walls of text emphasizing 'all the best highlights' and 'the most unforgettable, hilarious, and iconic moments,' while others omit them entirely to add to the flair.

YouTube updated its policies on July 15 in a move to crack down on content made with generative AI. The platform's Help Center stipulates that content eligible for monetization must adhere to YouTube's requirements of being sufficiently 'authentic' and 'original'—but there is no outright mention of generative AI alongside it, with the policy simply stating that eligible content must 'be your original creation' and 'not be mass produced or repetitive.' A separate policy on 'Disclosing use of altered or synthetic content' also states that creators must disclose when content 'makes a real person appear to say or do something they didn't do,' 'alters footage of a real event or place,' or 'generates a realistic-looking scene that didn't actually occur.'

WIRED reached out to YouTube for comment on over 100 AI-generated celebrity fanfic channels, as well as for clarification on how its new policies would be enforced. 'All content uploaded to YouTube must comply with our Community Guidelines, regardless of how it is generated. If we find that content violates a policy, we remove it,' Zayna Aston, Director of YouTube EMEA Communications, said in a statement to WIRED. Aston also reiterated that channels employing deceptive practices are not permitted on the platform, including those using misleading metadata, titles and thumbnails. WIRED can also confirm that 37 of the flagged celebrity talk show and other fanfiction-style channels were removed, chiefly those without AI disclaimers and some with the most egregious channel names, such as 'Celebrity Central' and 'United News.'
The storylines in these videos follow a predictable pattern that plays on age-old narrative tropes that justify fanfiction comparisons. A well-loved celebrity—usually an older male actor like Clint Eastwood, Denzel Washington, or Keanu Reeves—is cast as the hero, defending themselves against the villain, a left-leaning talk show host who steers the professional conversation into ad hominem. It's obvious who the right-leaning, older audience is primed to relate to—who serves as the visual fic's Mary Sue. There's an undeniable political element at play when it comes to who is targeted, with videos focusing exclusively on political figures also constituting their own subgenre.

'They're tweaking my voice or whatever they're doing, tweaking their own voice to make it sound like me, and people are commenting on it like it is me and it ain't me,' Washington recently told WIRED, when asked about AI. 'I don't have an Instagram account. I don't have TikTok. I don't have any of that. So anything you hear from that—it's not even me, and unfortunately, people are just following and that's the world you guys live in.'

For Clark, the talk show videos are a clear appeal to incite moral outrage—allowing audiences to more easily engage with, and spread, misinformation. 'It's a great emotion to trigger if you want engagement. If you make someone feel sad or hurt, then they'll likely keep that to themselves. Whereas if you make them feel outraged then they'll likely share the video with like-minded friends and write a long rant in the comments,' he says. It doesn't matter either, he explains, if the events depicted aren't real or are even clearly stated as 'AI-generated' if the characters involved might plausibly act this way (in the mind of their viewers, at least), in some other scenario. YouTube's own ecosystem also inevitably plays a role.
With so many viewers consuming content passively while driving, cleaning, even falling asleep, AI-generated content no longer needs to look polished when blending into a stream of passively absorbed information.

Reality Defender, a company specializing in identifying deepfakes, reviewed some of the videos. 'We can share that some of our own family members and friends (particularly on the elderly side) have encountered videos like these and, though they were not completely persuaded, they did check in with us (knowing we are experts) for validity, as they were on the fence,' Ben Colman, cofounder and CEO of Reality Defender, tells WIRED.

WIRED also reached out to several channels for comment. Only one creator, owner of a channel with 43,000 subscribers, responded. 'I am just creating fictional story interviews and I clearly mention in the description of every video,' they say, speaking anonymously. 'I chose the fictional interview format because it allows me to combine storytelling, creativity, and a touch of realism in a unique way. These videos feel immersive—like you're watching a real moment unfold—and that emotional realism really draws people in. It's like giving the audience a 'what if?' scenario that feels dramatic, intense, or even surprising, while still being completely fictional.'

But when it comes to the likely motive behind the channels, most of which are based outside the US, neither a strict political agenda nor a sudden career pivot to immersive storytelling serves as an adequate explainer. A channel with an email that uses the term 'earningmafia', however, hints at more obvious financial intentions, as does the channels' repetitive nature—with WIRED seeing evidence of duplicated videos, and multiple channels operated by the same creators, including some who had sister channels suspended.
This is unsurprising, with more content farms than ever, especially those targeting the vulnerable, currently cementing themselves on YouTube alongside the rise of generative AI. Across the board, creators pick controversial topics like kids TV characters in compromising situations, even P. Diddy's sex trafficking trial, to generate as much engagement—and income—as possible.

Sandra Wachter, a professor and senior researcher in data ethics, AI and algorithms at the University of Oxford, explains that this ragebait-style content is central to the platform's business model. 'The whole idea is to keep you on the platform for as long as possible, and unfortunately, rainbows and unicorns are not the things that keep people engaged. What keeps people engaged is something that is outrageous or salacious or toxic or ragey,' she says. 'And that type of content is created much cheaper now with AI. It can be done in a couple minutes.'

Most channels give their locations as outside the US, yet chosen celebrities seem almost stereotypically American, as if picked off a list of 'The most popular US actors' by those wishing to attract the most trigger-happy (and therefore lucrative) online denizens. Several channels seen by WIRED also appeared to have shifted focus throughout the years—many once posted educational content and tutorials on cars, agriculture, or fitness, having seemingly abandoned these amid the AI boom. Perhaps it's an attempt to trick an algorithm that prioritizes creators with longer lifespans, or simply a sign of users blindly following the latest trend in an attempt to bolster otherwise low income.

When told about YouTube's new policies, Wachter says she's hopeful that a move towards demonetization will make a positive impact—but she says we're not 'getting to the base of the problem.' 'This is a system that breeds toxicity because it's based on generating clicks and keeping eyeballs attached to a screen.'
If X posts like this are anything to go by—it won't be long before they start to resurface. The modus operandi of channels like these is clear, and seems capable of outsmarting YouTube's authenticity policies. They aim not to trick using sophisticated special effects, but instead employ age-old psychological techniques, combined with the platform's unique habitat, to seem realistic enough to be believed by those harboring those attitudes. They appeal to those who do not care about their realism—cashing in on views that don't impact them. They intend to ragebait, to instigate debate, to trigger moral outrage. They need not put much effort into their appearance, at all. Arguably, they mark a transition point between the mechanical desire to replicate human content exactly—and the eerily human trait of creating something, just good enough, to coast by.

Manisha Krishnan contributed to the reporting of this story.

Let's unpack our toxic fixation with ‘the TikToker who fell in love with her psychiatrist'

Los Angeles Times

Let's unpack our need to unpack the whole 'woman on TikTok who fell in love with her psychiatrist' saga.

First the facts: Kendra Hilty recently posted 25 videos on TikTok in which she discussed her decision to end four years of 30-minute monthly sessions (most of them on Zoom) with a male psychiatrist who prescribed her medication. At some point during their sessions, Hilty revealed her romantic feelings for him, feelings that she now — supported by comments she says were made by her therapist and a ChatGPT she has named Henry — believes the psychiatrist willingly fostered, leveraged and enjoyed.

Millions of people tuned in, though the fascination appears to have been less about the alleged actions and motivations of the psychiatrist (who has wisely chosen, thus far, to remain silent) and more focused on Hilty's detailed description of certain encounters and her deep subtext readings of what they might have meant. Many responded so negatively that Hilty turned off her comments for a while as hundreds made posts across social media eviscerating or satirizing the series. Soon enough, as happens with viral content, legacy media got involved and all the catch-up 'unpacking' began.

Unlike Reesa Teesa, whose multi-post tale of marriage to a pathological liar went viral on TikTok last year and led to a TV adaptation, Hilty hasn't become a universal figure of sympathy and courage. As she recently told People magazine, she has received 'nonstop bullying' and threats along with the dozens of DMs thanking her for sharing her story. She has been accused of racism (the psychiatrist is a man of color), narcissism and, well, insanity. (She says she is, however, open to having her story adapted to film or television.)

To say the posts are troubling is an understatement.
I was alerted to them by a friend who had previously expressed concern about young people using ChatGPT as a de facto therapist — a trend alarming enough to draw warnings from OpenAI Chief Executive Sam Altman and move Illinois, Utah and Nevada to ban the use of AI in mental health therapy. 'There's a woman on TikTok having a full-blown ChatGPT-induced meltdown,' this friend texted me. 'This is a real problem.'

Certainly, Hilty appeared to be having real problems, which ChatGPT, with its programmed tendency to validate users' views and opinions, undoubtedly inflamed. But given the viral reaction to her posts, so are we. Even as countless studies suggest that social media is, for myriad reasons, detrimental to mental health, its users continue to consume and comment on videos and images of people undergoing mental and emotional crises as if they were DIY episodes of 'Fleabag.' So the question is not 'who is this woman obsessing about her relationship with her psychiatrist' but why are so many of us watching her do it?

It's one thing to become transfixed by a fictional character going down a scripted wormhole for the purposes of narrative enlightenment or comedy. It's another when some poor soul is doing it in front of their phone in real life. It's even worse when the 'star' of the video is not a willing participant.

Social media and the ubiquity of smartphones have allowed citizens to expose instances of genuine, and often institutionalized, racism, sexism, homophobia and consumer exploitation. But for every 'Karen' post that reveals bigotry, abuse or unacceptable rudeness, there are three that capture someone clearly having a mental or emotional breakdown (or just a very, very bad day). With social media largely unregulated, they are all lumped in together and it has become far too easy to use it as the British elite once purportedly used the psychiatric hospital Bedlam: to view the emotionally troubled and mentally ill as if they were exhibits in a zoo.
Hilty believes she is helping to identify a real problem and is, obviously, the author of her own exposure, as are many people who post themselves deconstructing a bad relationship, reacting to a crisis or experiencing emotional distress. All social media posts exist to capture attention, and the types that do tend to be repeated. Sharing one's trauma can elicit sympathy, support, insight and even help. But 'sadfishing,' as it is often called, can also make a bad situation worse, from viewers questioning the authenticity and intention of the post to engaging in brutal mockery and bullying. Those who are caught on camera as they melt down over one thing or another could wind up as unwitting symbols of privilege or stupidity or the kind of terrible service/consumer we're expected to deal with today.

Some are undoubtedly arrogant jerks who have earned a public comeuppance (and if the fear of being filmed keeps even one person from shouting at some poor overworked cashier or barista, that can only be a good thing). But others are clearly beset by problems that go far deeper than not wanting to wait in line or accept that their flight has been canceled. It is strange that in a culture where increased awareness of mental health realities and challenges has led to so many positive changes, including to the vernacular, people still feel free to film, post, watch and judge strangers who have lost control without showing any concern for context or consequence.

I would like to say I never watch videos of people having a meltdown or behaving badly, but that would be a big fat lie. They're everywhere and I enjoy the dopamine thrill of feeling outraged and superior as much as the next person. (Again, I am not talking about videos that capture bigotry, institutional abuse or physical violence.) I watched Hilty for research but I quickly found myself caught up in her minute dissection and seemingly wild projection. I too found myself judging her, silently but not in a kind way.
('No one talks about being in love with their shrink? Girl, it's literary and cinematic canon.' 'How, in all those years in therapy, have you never heard of transference?' 'Why do you keep saying you don't want this guy fired while arguing that he abused the doctor-patient relationship?')

As the series wore on, her pain, if not its actual source, became more and more evident and my private commentary solidified into: 'For the love of God, put down your phone.' Since she was not about to, I did. Because me watching her wasn't helping either of us. Except to remind me of times when my own mental health felt precarious, when obsession and paranoia seemed like normal reactions and my inner pain drove me to do and say things I very much regret. These are memories that I will continue to hold and own but I am eternally grateful that no one, including myself, captured them on film, much less shared them with the multitudes.

Those who make millions off the mostly unpaid labor of social media users show no signs of protecting their workers with oversight or regulation. But no one goes viral in a vacuum. Decades ago, the popularity of 'America's Funniest Home Videos' answered the question of whether people's unscripted pain should be offered up as entertainment and now we live in a world where people are willing to do and say the most intimate and anguished things in front of a reality TV crew. Still, when one of these types of videos pops up or goes viral, there's no harm in asking 'why exactly am I watching this' and 'what if it were me?'
