
AI Interview With Deceased Parkland Teen Sparks Outrage
The interview — shared to Acosta's Substack page — depicts Oliver, one of the 17 people killed in the 2018 massacre, remarking on his cause of death and stressing why it's 'important' to talk about gun violence in schools.
The AI — when asked for its 'solution' to gun violence — emphasized a need for a mix of stronger gun-control laws, mental health support and community engagement.
The Oliver bot then spoke in a noticeably higher tone before discussing the late teen's interest in the Miami Heat and Star Wars.
Acosta — who declared that the technology left him 'speechless' — called the conversation 'so insightful,' telling the late teen's father, Manuel Oliver, that it felt like the first time he really got to know his son.
'People say, 'Well, AI, you know, it could be bad, it could cause all these destructive things.' This is an example of how it might actually do some good, it might help some people who have suffered tremendous losses like your family have a way to hold on to who this person was, which I think is a beautiful thing.'
A show you don't want to miss at 4p ET / 1p PT. I'll be having a one of a kind interview with Joaquin Oliver. He died in the Parkland school shooting in 2018. But his parents have created an AI version of their son to deliver a powerful message on gun violence. Plus Texas State…
— Jim Acosta (@Acosta) August 4, 2025
Manuel Oliver and his wife, Patricia Oliver — co-founders of the gun-control advocacy group Change the Ref — have been involved in other projects that demand action against gun violence, including a school shooting video game, a play and a site that uses AI to recreate victims' voices for calls to Congress.
Manuel Oliver — in a video shared to X, formerly Twitter — acknowledged that the AI was his and his wife's idea, adding that Acosta shouldn't be blamed for 'what he was able to do' in the 'interview.'
'If the problem that you have is with the AI, then you have the wrong problem,' he said.
'The real problem is that my son was shot eight years ago. So if you believe that that is not the problem, you are part of the problem.'

Related Articles


Fast Company
16 minutes ago
4 ways states are placing guardrails around AI
U.S. state legislatures are where the action is for placing guardrails around artificial intelligence technologies, given the lack of meaningful federal regulation. The resounding defeat in Congress of a proposed moratorium on state-level AI regulation means states are free to continue filling the gap. Several states have already enacted legislation around the use of AI, and all 50 states introduced AI-related legislation in 2025. Four aspects of AI in particular stand out from a regulatory perspective: government use of AI, AI in health care, facial recognition and generative AI.

Government use of AI

The oversight and responsible use of AI are especially critical in the public sector. Predictive AI — AI that performs statistical analysis to make forecasts — has transformed many governmental functions, from determining social services eligibility to making recommendations on criminal justice sentencing and parole. But the widespread use of algorithmic decision-making could have major hidden costs: AI systems used for government services can produce algorithmic harms such as racial and gender biases.

Recognizing this potential, state legislatures have introduced bills focused on public-sector use of AI, with emphasis on transparency, consumer protections and recognizing the risks of AI deployment. Several states require AI developers to disclose the risks posed by their systems. The Colorado Artificial Intelligence Act includes transparency and disclosure requirements for developers of AI systems involved in making consequential decisions, as well as for those who deploy them. Montana's new 'Right to Compute' law requires AI developers to adopt risk management frameworks — methods for addressing security and privacy in the development process — for AI systems involved in critical infrastructure. Some states have established bodies that provide oversight and regulatory authority, such as those specified in New York's SB 8755 bill.

AI in health care

In the first half of 2025, 34 states introduced over 250 AI-related health bills. The bills generally fall into four categories: disclosure requirements, consumer protection, insurers' use of AI and clinicians' use of AI. Transparency bills define what information AI system developers and the organizations that deploy the systems must disclose. Consumer protection bills aim to keep AI systems from unfairly discriminating against some people and to ensure that users of the systems have a way to contest decisions made using the technology. Bills covering insurers provide oversight of payers' use of AI to make decisions about health care approvals and payments. And bills about clinical uses of AI regulate the technology's use in diagnosing and treating patients.

Facial recognition and surveillance

In the U.S., a long-standing legal doctrine that applies to privacy protection issues, including facial surveillance, is to protect individual autonomy against interference from the government. In this context, facial recognition technologies pose significant privacy challenges as well as risks from potential biases. Facial recognition software, commonly used in predictive policing and national security, has exhibited biases against people of color and consequently is often considered a threat to civil liberties. A pathbreaking study by computer scientists Joy Buolamwini and Timnit Gebru found that facial recognition software poses significant challenges for Black people and other historically disadvantaged minorities: it was less likely to correctly identify darker faces. Bias also creeps into the data used to train these algorithms, for example when the composition of the teams that guide the development of facial recognition software lacks diversity.

By the end of 2024, 15 U.S. states had enacted laws to limit the potential harms from facial recognition. Elements of these state-level regulations include requirements that vendors publish bias test reports and data management practices, as well as the need for human review in the use of these technologies.

Generative AI and foundation models

The widespread use of generative AI has also prompted concerns from lawmakers in many states. Utah's Artificial Intelligence Policy Act requires individuals and organizations to clearly disclose, when asked, that they're using generative AI systems to interact with someone, though the legislature subsequently narrowed the scope to interactions that could involve dispensing advice or collecting sensitive information. Last year, California passed AB 2013, a generative AI law that requires developers to post information on their websites about the data used to train their AI systems, including foundation models — AI models trained on extremely large datasets that can be adapted to a wide range of tasks without additional training. AI developers have typically not been forthcoming about the training data they use, and such legislation could help copyright owners of content used in training AI overcome that lack of transparency.

Trying to fill the gap

In the absence of a comprehensive federal legislative framework, states have tried to address the gap by moving forward with their own legislative efforts. While such a patchwork of laws may complicate AI developers' compliance efforts, I believe that states can provide important and needed oversight on privacy, civil rights and consumer protections. Meanwhile, the Trump administration announced its AI Action Plan on July 23, 2025. The plan says 'The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations . . .' The move could hinder state efforts to regulate AI if states have to weigh regulations that might run afoul of the administration's definition of burdensome against needed federal funding for AI.


The Hill
an hour ago
Doug Ford: Trump is probably most disliked politician in Canada
Ontario Premier Doug Ford said Thursday that President Trump is probably the most disliked politician in Canada after months of trade tension between the two countries.

'What's the general impression of Trump in Canada?' CNN's Wolf Blitzer asked on 'The Situation Room.'

'He's probably the most disliked politician in the world in Canada, because he's attacked his closest family member, and that's the way we look on it,' Ford replied.

'And when I talk to the governors and senators and congresspeople, even Republicans, totally disagree, but they're too scared to come out and say anything, because the president will go after them, outside of a few senators,' he added.

On Wednesday, Ford told reporters he believes Trump will initiate a review of the U.S.-Mexico-Canada Agreement a year early, in 2026, and added that he doesn't trust the American president.

'He's not waiting until 2026. At any given time, President Trump — not that he even follows the rules — he can pull the carpet out from underneath us,' Ford said, according to the Associated Press.

On Sunday, Canadian Minister for U.S.-Canada Trade Dominic LeBlanc expressed optimism about the likelihood of a trade deal between Ottawa and Washington, even as Trump said he would impose 35 percent tariffs on goods from Canada.

'We were obviously, obviously disappointed by that decision. We believe there's a great deal of common ground between the United States and Canada in terms of building two strong economies that work well together,' LeBlanc said on CBS News's 'Face the Nation.'

On Wednesday, Canadian Prime Minister Mark Carney said he and his country's premiers talked that day and that Canada was 'staying focused on building our industrial strength at home.'

'As we work towards a new trade agreement with the United States, we're staying focused on building our industrial strength at home,' Carney said in a post on the social platform X.
'Together, we're strengthening our trading partnerships at home and abroad — including breaking down barriers between provinces and territories — and supporting our industries and workers to meet the demands of new markets,' he added.


New York Post
an hour ago
Chris Cuomo mocked after falling for deepfake video of AOC slamming Sydney Sweeney ad
Veteran newsman Chris Cuomo apparently can't tell the difference between AOC and an AI-OC.

The NewsNation anchor was mercilessly mocked — including by Rep. Alexandria Ocasio-Cortez — after he fell for a deepfake video of the progressive firebrand denouncing Sydney Sweeney's American Eagle ad campaign as 'Nazi propaganda.'

Cuomo posted a video to his X account Wednesday showing the New York pol making crude references to female body parts while speaking about Sweeney's photo shoot on the House floor.

In his post accompanying the ersatz video, which showed AOC wearing a black blazer with her hair in a bun, Cuomo denounced the Democrat for having misplaced priorities.

'Nothing about hamas or people burning jews cars. But sweeney jeans ad? Deserved time on floor of congress? What happd to this party? Fight for small business …not for small culture wars,' he wrote.

Cuomo failed to notice that the AI-generated video bore a clear watermark stating it was 'parody 100% made with AI.' The supposed hard-hitting journalist also apparently forgot that Congress is not in session.

The real Ocasio-Cortez quickly called out the brother of New York mayoral candidate Andrew Cuomo.

'This is a deepfake dude. Please use your critical thinking skills. At this point, you're just reposting Facebook memes and calling it journalism,' she replied to Cuomo's post.

American Eagle's ad campaign features Sweeney making a tongue-in-cheek reference to having 'great jeans,' using wordplay between the denim wear and genetics. It has sparked a culture war online between 'woke' commentators and right-leaning social media users.

After the embarrassing gaffe, Cuomo acknowledged his error and removed the original post. However, his response attempted to shift focus back to his original criticism of the congresswoman regarding the Israel-Hamas war.

'You are correct… that was a deepfake (but it really does sound like you). Thank you for correcting. But now to the central claim: show me you calling on hamas to surrender or addressing the bombing of a car in st louis belonging to the idf american soldier?…dude?' he wrote.

Cuomo was referring to a recent incident in Clayton, Mo., where several cars outside the home of an American who once served in the Israeli military were set on fire and defaced with 'Death to the IDF' graffiti in what authorities are investigating as a hate crime and act of antisemitic intimidation.

The congresswoman, who has criticized Israel's military action in Gaza to root out Hamas terrorists following the Oct. 7 massacre, delivered a sharp response.

'You seem to struggle with knowing how to write an apology. Do you need help? Maybe you should call someone,' she replied.

'I was wrong,' Cuomo admitted. 'AOC gave me a smack today because I tried to give her a smack first.'

But again he went on to insist that the lawmaker 'also ain't right' because she 'ignored the part of the tweet that mattered' — namely his demand that she 'call on Hamas to surrender to end the war they started' as well as condemn the Clayton, Mo., incident.

Cuomo's protestations did little to shield him from being mercilessly mocked online.

Piers Morgan, the British media personality who, like Cuomo, is a former CNN primetime host, suggested Cuomo should focus less on personal conflicts and more on identifying obvious fakes. 'Oh dear,' Morgan wrote on X, adding several laughing emojis. He urged Cuomo to 'perhaps spend less time bitching about me and more time trying to spot obvious fakes…'

Cuomo replied to Morgan, writing: 'You got me…silly clip i didnt pay attn to….and I wont block you for saying so…see how easy that is, my yappy friend?'

Tim Miller from the 'Never Trump' publication The Bulwark expressed concern about the larger implications, writing on X: 'It doesn't auger well for our societal AI future if a professional news anchor gets tricked by a video that has a '100% parody' watermark.'

The Post has sought comment from Cuomo, Ocasio-Cortez and NewsNation.

In June, Cuomo blasted Ocasio-Cortez as 'deranged' after State Assemblyman Zohran Mamdani, the New York City mayoral candidate she endorsed, defeated his brother in the Democratic primary.