Which photo is real? Ohio lawmakers want to regulate deepfakes, AI content
Demonstrating the impact of deepfakes: on the left is the original, real photo of state Rep. Adam Mathews (R-Lebanon). The photo on the right was manipulated by reporter Morgan Trau for the sake of this news report, yet looks realistic. (Photo by: Adam Mathews and Morgan Trau.)
Ohio lawmakers are trying to combat misinformation by requiring disclaimers on realistic artificial intelligence-generated content.
'AI — every single day, (it) seems to be getting better and better,' state Rep. Adam Mathews, R-Lebanon, said.
And it's becoming increasingly accessible. Free or inexpensive services that use AI to create and edit photos, videos and audio are now widely available. But with that accessibility, says Case Western Reserve University technology law professor Erman Ayday, come major problems.
'People can also use (sites) in order to create effects for themselves, like you can actually put yourself in front of a beach right now,' Ayday said. 'But sometimes, they're also used for manipulation, for bad purposes, for blackmailing.'
Over the past several years, deepfakes have become more prevalent online, ranging from fake explicit photos of celebrities to fabricated videos of politicians urging you not to vote for them.
Mathews is trying to stop this type of misinformation. He and state Rep. Ty Mathews, R-Findlay, have proposed Ohio House Bill 185, which would regulate AI content.
'This is a way for the normal Ohioans to protect themselves,' Adam Mathews said.
Malicious deepfakes, meaning AI content intended to harm someone's image, would be allowed only if they carry a clear watermark indicating that they were artificially manipulated.
'If it would confuse a reasonable person, there is some type of disclaimer,' the lawmaker added. 'If a reasonable person would say that this is a manipulated image, then that would not need a disclaimer.'
Let's show some examples.
This is the original photo of Rep. Mathews next to a sign for now-Vice President JD Vance.
This is a deepfake made of him, placed next to a Kamala Harris sign. This image would require a disclaimer.
This is an expertly drawn 'deepfake' we made of him. This one would not require a disclaimer.
'Political cartoons are totally protected; no one thinks that a drawing actually happened,' Mathews said.
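To picture what the bill's "clear watermark" requirement could look like in practice, here is a minimal sketch of stamping a visible disclaimer banner onto an image using Python's Pillow library. The label text, file names, and banner styling are our own assumptions for illustration; the bill does not specify any particular format.

```python
# Minimal sketch: stamp a visible disclaimer banner onto an image with Pillow.
# The label text and file names below are hypothetical, not from the bill.
from PIL import Image, ImageDraw

def add_disclaimer(in_path: str, out_path: str,
                   label: str = "DISCLAIMER: AI-MANIPULATED IMAGE") -> None:
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Reserve a banner strip along the bottom edge for the disclaimer text.
    banner_height = max(24, img.height // 12)
    draw.rectangle([0, img.height - banner_height, img.width, img.height],
                   fill="black")
    draw.text((10, img.height - banner_height + 5), label, fill="white")
    img.save(out_path)

add_disclaimer("manipulated_photo.jpg", "manipulated_photo_labeled.jpg")
```

A visible banner is only one approach; invisible watermarks embedded in the pixel data are another, and the bill's final text would determine what counts as "clear."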
The legislation would let everyone own their own image, making it a trademark infringement to produce malicious content without someone's consent, with civil penalties that could reach tens of thousands of dollars.
The bill would also make it a third-degree felony to create or distribute malicious AI content in order to extort the individual being deepfaked.
Pornographic deepfakes and deepfakes of a child would be banned outright, with or without a disclaimer.
No one has spoken out publicly against the bill, but Ayday said the biggest challenge will be enforcement.
'This is not really something scary for people who are doing this for malicious purposes — especially during the elections or during some high-stakes events,' the professor said. 'Distributing this thing in an anonymous way is extremely easy.'
Hearings on the bill continue this week.

Related Articles
House bipartisan bill directs NSA to create 'AI security playbook' amid Chinese tech race
FIRST ON FOX – Rep. Darin LaHood, R-Ill., is introducing a new bill Thursday directing the National Security Agency (NSA) to develop an "AI security playbook" to stay ahead of threats from China and other foreign adversaries.

The bill, dubbed the "Advanced AI Security Readiness Act," directs the NSA's Artificial Intelligence Security Center to develop an "AI Security Playbook to address vulnerabilities, threat detection, cyber and physical security strategies, and contingency plans for highly sensitive AI systems." It is co-sponsored by House Select Committee on China Chairman Rep. John Moolenaar, R-Mich., Ranking Member Rep. Raja Krishnamoorthi, D-Ill., and Rep. Josh Gottheimer, D-N.J.

LaHood, who sits on the House Intelligence Committee and the House Select Committee on China, told Fox News Digital that the legislative proposal, if passed, would be the first time Congress codifies a "multi-prong approach to ensure that the U.S. remains ahead in the advanced technology race against the CCP."

The new bill follows another bipartisan legislative proposal, the "Chip Security Act," which he introduced in late May. That proposal aims to improve export control mechanisms – including for chips and high-capacity chip manufacturing – protect covered AI technologies with a focus on cybersecurity, and limit outbound investment to firms directly tied to the Chinese Communist Party or China's People's Liberation Army.

"We start with the premise that China has a plan to replace the United States. And I don't say that to scare people or my constituents, but they have a plan to replace the United States, and they're working on it every single day. And that entails stealing data and infiltrating our systems," LaHood told Fox News Digital. "AI is the next frontier on that. We lead the world in technology. We lead the world when it comes to AI. But what this bill will do will again make sure that things are done the right way and the correct way, and that we're protecting our assets and promoting the current technology that we have in our country."

LaHood pointed to evidence uncovered by the committee that he said shows the CCP's DeepSeek used illegal distillation techniques to steal insights from U.S. AI models to accelerate its own technology development. He also pointed to how China allegedly smuggled AI chips through Singapore intermediaries to circumvent U.S. export controls on the technology.

"As we look at, 'How do we win the strategic competition?' I think most experts would say we're ahead in AI right now against China, but not by much. It is a short lead," LaHood told Fox News Digital. He said he is confident his legislative proposals will put the U.S. "in the best position to protect our assets here and make sure that we're not shipping things that shouldn't go to AI that allow them to win the AI race in China."

"Whoever wins this race in the future, it's going to be critical to future warfare capabilities, to, obviously, cybersecurity," LaHood continued. "And then, whoever wins the AI competition is going to yield really unwavering economic influence in the future. And so we're aggressive in this bill in terms of targeting those areas where we need to protect our AI and our companies here in the United States, both on the commercial side and on the government side, to put us in the best position possible."
The "Advanced AI Security Readiness Act" calls on the NSA to develop a playbook that identifies vulnerabilities in AI data centers and developers producing sensitive AI technologies with an emphasis on unique "threat vectors" that do not typically arise, or are less severe, in the context of conventional information technology systems." The bill says the NSA must develop "core insights" in how advanced AI systems are being trained to identify potential interferences and must develop strategies to "detect, prevent and respond to cyber threats by threat actors targeting covered AI technologies." Amazon Announces $20B Investment In Rural Pennsylvania For Ai Data Centers The bill calls on the NSA to "identify levels of security, if any, that would require substantial involvement" by the U.S. government "in the development or oversight of highly advanced AI systems." It cites a "hypothetical initiative to build covered AI technology systems in a highly secure government environment" with certain protocols in place, such as personnel vetting and security clearance processes, to mitigate "insider threats." Though not directly related, the new bill is being introduced a week after FBI Director Kash Patel sounded the alarm on how the CCP continues to deploy operatives and researchers to "infiltrate" U.S. institutions. Patel laid out the risk in announcing that two Chinese nationals were charged with smuggling a potential bioweapon into the U.S. LaHood said that case further highlights "the level of penetration and sophistication that the CCP will engage in," but he added that his bill focuses on putting a "protective layer" on U.S. AI tech and "restricting outbound investment to China." He pointed to how the CCP also has bought up farmland around strategic U.S. national security locations, particularly in Montana, North Dakota and South Dakota. "If everything was an even playing field, and we were all abiding by the same rules and standards and ethical guidelines, I have no doubt the U.S. would win [the AI race], but China has a tendency and a history of playing by a different set of rules and standards," LaHood said. "They cheat, they steal, they take our intellectual property. Not just my opinion, that's been factually laid out, you know, in many different instances. And that's the reason why we need to have a bill like this." The bill comes as the Trump administration has been pushing to bolster artificial intelligence infrastructure in the United States, and major tech companies, including Amazon, Nvidia, Meta, OpenAI, Oracle and others, have made major investments in constructing AI-focused data centers and enhancing U.S. cloud computing. Last week, Amazon announced a $20 billion investment in constructing AI data centers in rural Pennsylvania. It followed a similar $10 billion investment in North Carolina. In late May, the NSA's Artificial Intelligence Security Center released "joint guidance" on the "risks and best practices in AI data security." The recommendations include implementing methods to secure the data used in AI-based systems, "such as employing digital signatures to authenticate trusted revisions, tracking data provenance, and leveraging trusted infrastructure." 
The center said its guidance is "critically relevant for organizations – especially system owners and administrators within the Department of Defense, National Security Systems, and the Defense Industrial Base – that already use AI systems in their day-to-day operations and those that are seeking to integrate AI into their infrastructure."
State, federal cannabis changes threaten Ohio festival's future
WAVERLY, Ohio (WCMH) — Several thousand marijuana enthusiasts will descend upon Waverly next month for the second annual Stargazer Cannabis Festival. It may also be the last.

'We're very sad about that fact, and it would be very ironic that this festival was conceived out of a celebration for legalization, and then ironically, the law will make it illegal,' festival organizer Chad Thompson said.

Thompson is concerned about changes to cannabis laws under consideration at the state and federal level that would minimize or eliminate the three-day cannabis festival. Ohio lawmakers have weighed multiple changes to Ohio's recreational marijuana laws, which passed in 2023 with 57% of the vote. Federally, a provision currently included in the 'Big Beautiful Bill' would prohibit most products that use hemp-derived THC.

Under the new federal proposal, the U.S. would redefine which hemp products are legal. Hemp was redefined in President Donald Trump's first term to mean cannabis with minimal THC levels, 0.3% or lower. People have used this definition to synthetically manufacture cannabis products that are both legal and intoxicating. The budget draft would ban those hemp products, making ingestible hemp federally illegal again.

'There is an effort to eliminate that segment of the hemp market, which would really give a big hit to hemp businesses and us, as a cannabis festival, also would be greatly impacted by that,' Thompson said. 'It would take a big portion of our vendors and make them illegal.'

Thompson said changes to cannabis law would significantly harm Ohio's emerging marijuana industry, which is on track to be a $2 billion market. Along with higher taxes and a lower legal maximum THC level, legislators have also considered a ban on ingesting cannabis in public spaces, which would largely prevent the festival from taking place.

Thompson said if the state changes cannabis laws, Stargazer Festival will have to consider relocating to a 'friendlier state,' taking its business with it. The festival will nearly double the population of Waverly for the weekend, and Thompson said visitors are coming from 35 states.

'This festival this year will cost us close to $200,000 to organize and put on, and that's all revenue that is being filtered in the state down through porta potties, tent rentals, local performers,' Thompson said.

Thompson said he does not want to move the festival, saying it would be a 'sad day' if they had to leave their home state. Stargazer began as a celebration for Ohio activists after voters successfully legalized recreational marijuana. As part of the celebratory mission, many festival events involve giveaways for festival attendees, including a $1,000 bonus for five randomly selected cannabis judges and a giveaway dream cannabis wedding officiated by Afroman.

Thompson said that since this July 25-27 festival may be the last, people should get their tickets ahead of time, as they will likely be sold out at the door. He said he and other advocates are prepared to rally and adjust, but hope the laws will stay favorable.

'Even after legalization, the fight is not done to ensure the best possible cannabis laws on the books for the greater good of the people currently and into the future,' Thompson said.
AI literacy: What it is, what it isn't, who needs it and why it's hard to define
It is 'the policy of the United States to promote AI literacy and proficiency among Americans,' reads an executive order President Donald Trump issued on April 23, 2025. The executive order, titled Advancing Artificial Intelligence Education for American Youth, signals that advancing AI literacy is now an official national priority.

This raises a series of important questions: What exactly is AI literacy, who needs it, and how do you go about building it thoughtfully and responsibly?

The implications of AI literacy, or lack thereof, are far-reaching. They extend beyond national ambitions to remain 'a global leader in this technological revolution' or even prepare an 'AI-skilled workforce,' as the executive order states. Without basic literacy, citizens and consumers are not well equipped to understand the algorithmic platforms and decisions that affect so many domains of their lives: government services, privacy, lending, health care, news recommendations and more. And the lack of AI literacy risks ceding important aspects of society's future to a handful of multinational companies.

How, then, can institutions help people understand and use – or resist – AI as individuals, workers, parents, innovators, job seekers, students, employers and citizens? We are a policy scientist and two educational researchers who study AI literacy, and we explore these issues in our research.

At its foundation, AI literacy includes a mix of knowledge, skills and attitudes that are technical, social and ethical in nature. According to one prominent definition, AI literacy refers to 'a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.'

AI literacy is not simply programming or the mechanics of neural networks, and it is certainly not just prompt engineering – that is, the act of carefully writing prompts for chatbots. Vibe coding, or using AI to write software code, might be fun and important, but restricting the definition of literacy to the newest trend or the latest need of employers won't cover the bases in the long term. And while a single master definition may not be needed, or even desirable, too much variation makes it tricky to decide on organizational, educational or policy strategies.

Who needs AI literacy? Everyone, including the employees and students using it, and the citizens grappling with its growing impacts. Every sector and sphere of society is now involved with AI, even if this isn't always easy for people to see. Exactly how much literacy everyone needs and how to get there is a much tougher question. Are a few quick HR training sessions enough, or do we need to embed AI across K-12 curricula and deliver university microcredentials and hands-on workshops?

There is much that researchers don't know, which leads to the need to measure AI literacy and the effectiveness of different training approaches. While there is a growing and bipartisan consensus that AI literacy matters, there's much less consensus on how to actually understand people's AI literacy levels. Researchers have focused on different aspects, such as technical or ethical skills, or on different populations – for example, business managers and students – or even on subdomains like generative AI.
A recent review study identified more than a dozen questionnaires designed to measure AI literacy, the vast majority of which rely on self-reported responses to questions and statements such as 'I feel confident about using AI.' There's also a lack of testing to see whether these questionnaires work well for people from different cultural backgrounds. Moreover, the rise of generative AI has exposed gaps and challenges: Is it possible to create a stable way to measure AI literacy when AI is itself so dynamic?

In our research collaboration, we've tried to help address some of these problems. In particular, we've focused on creating objective knowledge assessments, such as multiple-choice surveys tested with thorough statistical analyses to ensure that they accurately measure AI literacy (see the code sketch below). We've so far tested a multiple-choice survey in the U.S., U.K. and Germany and found that it works consistently and fairly across these three countries.

There's a lot more work to do to create reliable and feasible testing approaches. But going forward, just asking people to self-report their AI literacy probably isn't enough to understand where different groups of people are and what supports they need.

Governments, universities and industry are trying to advance AI literacy. Finland launched the Elements of AI series in 2018 with the hope of educating its general public on AI. Estonia's AI Leap initiative partners with Anthropic and OpenAI to provide access to AI tools for tens of thousands of students and thousands of teachers. And China is now requiring at least eight hours of AI education annually as early as elementary school, which goes a step beyond the new U.S. executive order. On the university level, Purdue University and the University of Pennsylvania have launched new master's in AI programs, targeting future AI leaders.

Despite these efforts, these initiatives face an unclear and evolving understanding of AI literacy. They also face challenges to measuring effectiveness and minimal knowledge on what teaching approaches actually work. And there are long-standing issues with respect to equity, for example, reaching schools, communities, segments of the population and businesses that are stretched or under-resourced.

Based on our research, experience as educators and collaboration with policymakers and technology companies, we think a few steps might be prudent. Building AI literacy starts with recognizing it's not just about tech: People also need to grasp the social and ethical sides of the technology. To see whether we're getting there, we researchers and educators should use clear, reliable tests that track progress for different age groups and communities. Universities and companies can try out new teaching ideas first, then share what works through an independent hub. Educators, meanwhile, need proper training and resources, not just additional curricula, to bring AI into the classroom. And because opportunity isn't spread evenly, partnerships that reach under-resourced schools and neighborhoods are essential so everyone can benefit.

Critically, achieving widespread AI literacy may be even harder than building digital and media literacy, so getting there will require serious investment – not cuts – to education and research. There is widespread consensus that AI literacy is important, whether to boost AI trust and adoption or to empower citizens to challenge AI or shape its future.
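As a concrete illustration of the 'thorough statistical analyses' mentioned above, here is a minimal sketch of one common internal-consistency check, Cronbach's alpha, applied to a made-up matrix of multiple-choice item scores. The response data and function are invented for illustration; this is not the authors' actual analysis pipeline.

```python
# Minimal sketch: Cronbach's alpha, a standard internal-consistency estimate
# for a test. The 0/1 response matrix below is invented for illustration.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of item scores (e.g., 0/1 correct)."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Five respondents answering four multiple-choice items (1 = correct).
responses = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 0, 1],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```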
As with AI itself, we believe it's important to approach AI literacy carefully, avoiding hype or an overly technical focus. The right approach can prepare students to become 'active and responsible participants in the workforce of the future' and empower Americans to 'thrive in an increasingly digital society,' as the AI literacy executive order calls for.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Daniel S. Schiff, Purdue University; Arne Bewersdorff, Technical University of Munich; and Marie Hornberger, Technical University of Munich.

Funding from Google Research helped to support part of the authors' research on AI literacy. Funding from the German Federal Ministry of Education and Research under the funding code 16DHBKI051 helped to support part of the authors' research on AI literacy. Arne Bewersdorff does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.