
Reclaiming Critical Thinking in the Age of AI
On the day of the bill's hearing, its author, California state Sen. Steve Padilla (D-San Diego), held a press conference where he was joined by Megan Garcia, who last year sued the AI company Character.ai, alleging that its chatbot had played a role in her son's suicide.
Garcia testified in support of the bill, stating that such chatbots are 'inherently dangerous' and can lead to inappropriate conversations or self-harm. 'Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of new products,' said Padilla.
Similar bills are currently working their way through legislatures in several states. These are vital steps in the right direction. Comparable legislation is urgently needed nationwide.
What is at stake? Our kids' safety and emotional development and our capacity for critical thought — perhaps even our democracy.
A 2024 Pew Research poll found that nearly half of Americans reported using AI several times a week, with one in four using it 'almost constantly.' A 2025 Gallup survey revealed that nearly all Americans rely on products that involve AI, even though most aren't aware of it.
New research is beginning to illuminate the significant consequences. A 2025 study published in Societies found 'a very strong negative correlation between subjects' use of AI tools and their critical thinking skills.' Younger users were especially affected — a trend many teachers are starting to observe in their students.
'As individuals increasingly offload cognitive tasks to AI tools,' wrote Michael Gerlich, who led the study, 'their ability to critically evaluate information, discern biases, and engage in reflective reasoning diminishes.' That's a warning siren if ever there was one.
Far from perfect (or neutral), AI systems are built by humans and programmed with inherent biases, even if unintentionally. Executives and developers at leading AI companies like OpenAI, Google and Meta fine-tune their chatbots and establish their settings and rules. As we rely on AI to do our thinking, we outsource our individual thoughts to the whims and biases of private corporations and their teams.
Social media companies including Snap and Meta (which owns Facebook, Instagram, WhatsApp and Threads) are now rolling out their own 'AI companions' worldwide. Billions of people, including hundreds of millions of kids and teens, now have an always-available online 'friend' offering them constant validation. That may sound comforting, but it deprives young people of the emotional growth and interpersonal skills they need for real relationships.
AI companions are programmed to monetize our relationships under the guise of trusted friends; all the while they're programmed to mine, record and expertly analyze everything we say or type. Like high-tech tattletales, they can then feed this data into the data ecosystem, allowing marketers, advertisers and anyone else to pay to target and manipulate us in heretofore unimagined ways.
In January, Meta announced it will now program these chatbots with personalized 'memories,' drawing from users' interests, posts and even dietary preferences. As millions of Americans form emotional, political and even sexual attachments to AI companions, the promised comfort is giving way to suffering and unhappiness. In March, research from MIT Media Lab and OpenAI found that frequent usage of AI chatbots correlated with 'increased loneliness, emotional dependence, and reduced social interaction.'
As we increasingly depend on AI to understand the world, we open ourselves up to manipulation by entities that don't have our best interests in mind. In 2025, news rating service NewsGuard uncovered a significant threat to AI systems: foreign disinformation campaigns targeting AI training data with deliberate falsehoods. The Russian-linked Pravda Network published 3.6 million articles in 2024 designed to manipulate AI responses and spread propaganda.
What happens to democracy when we offload our thinking to chatbots that are actively manipulated by foreign adversaries seeking disruption?
AI is here to stay. And it has the potential to improve our lives in remarkable ways, from curing diseases to ending poverty to achieving scientific breakthroughs and much more. To ensure AI serves us, rather than the other way around, there are several key steps to take right now.
First, transparency is paramount. Either voluntarily or via legislative mandate, large AI and social media companies like Meta, Google and OpenAI must disclose what data they're collecting from us and who they're sharing it with.
Nutrition labels on food help us make healthy choices by telling us if something is high in sugar or cholesterol. Similarly, AI 'nutrition labels' can tell us if an AI system is known to have a high amount of political bias or how well it protects our privacy. Crucially, companies can then provide everyone with the ability to opt out of manipulative personalization.
Second, new regulations are required to protect kids, teens and users of all ages from the threats posed by 'AI companions.' Legislation like California's Senate Bill 243 can help prevent AI chatbots from employing addictive engagement techniques and mandate protocols for handling signs of distress or suicide. This kind of targeted legislation deserves national adoption.
Third, new media literacy initiatives are vital. Studies show that teaching students how to spot disinformation can reduce its impact. Several state legislatures are already moving in this direction, incorporating media literacy into standard K-12 curricula. Skills such as critical thinking and media literacy in the age of AI ought to be as essential for students nationwide as reading and math.
AI is a powerful, double-edged sword. We can wield it responsibly and protect our kids, so long as we retain our ability to think independently, reason cogently and communicate authentically.