
Anxious parents face tough choices on AI
Tal is "very worried" about the future this technology holds for his children -- whether it's deepfakes, "the inability to distinguish between reality and AI," or "the thousands of possible new threats that I wasn't trained to detect."
Mike Brooks, a psychologist from Austin, Texas, who specializes in parenting and technology, worries that parents are keeping their heads in the sand, refusing to grapple with AI.
"They're already overwhelmed with parenting demands," he observed -- from online pornography and TikTok to video games and "just trying to get them out of their rooms and into the real world."
For Marc Watkins, a professor at the University of Mississippi who focuses on AI in teaching, "we've already gone too far" to shield children past a certain age from AI.
Yet some parents are still trying to remain gatekeepers to the technology.
"In my circle of friends and family, I'm the only one exploring AI with my child," remarked Melissa Franklin, a law student in Kentucky and mother of a 7-year-old boy.
"I don't understand the technology behind AI," she said, "but I know it's inevitable, and I'd rather give my son a head start than leave him overwhelmed."
'Benefits and risks'
The path is all the more difficult for parents given the lack of scientific research on AI's effects on users.
Several parents cite a study published in June by MIT, showing that brain activity and memory were more stimulated in individuals not using generative AI than in those who had access to it.
"I'm afraid it will become a shortcut," explained a father of three who preferred to remain anonymous. "After this MIT study, I want them to use it only to deepen their knowledge."
This caution shapes many parents' approaches. Tal prefers to wait before letting his sons use AI tools. Melissa Franklin only allows her son to use AI with her supervision to find information "we can't find in a book, through Google, or on YouTube."
For her, children must be encouraged to "think for themselves," with or without AI.
But one father -- a computer engineer with a 15-year-old -- doesn't believe kids will learn AI skills from their parents anyway.
"That would be like claiming that kids learn how to use TikTok from their parents," he said. It's usually "the other way around."
Watkins, himself a father, says he is "very concerned" about the new forms that generative AI is taking, but considers it necessary to read about the subject and "have in-depth conversations about it with our children."
"They're going to use artificial intelligence," he said, "so I want them to know the potential benefits and risks."
The CEO of AI chip giant Nvidia, Jensen Huang, often speaks of AI as "the greatest equalization force that we have ever known," democratizing learning and knowledge.
But Watkins fears a different reality: "Parents will view this as a technology that will be used if you can afford it, to get your kid ahead of everyone else."
The computer scientist father readily acknowledged this disparity, saying "My son has an advantage because he has two parents with PhDs in computer science, but that's 90 percent due to the fact that we are more affluent than average" -- not their AI knowledge.
"That does have some pretty big implications," Watkins said.

Related Articles


France 24
AI search pushing an already weakened media ecosystem to the brink
"The next three or four years will be incredibly challenging for publishers everywhere. No one is immune from the AI summaries storm gathering on the horizon," warned Matt Karolian, vice president of research and development at Boston Globe Media. "Publishers need to build their own shelters or risk being swept away."

While data remains limited, a recent Pew Research Center study reveals that the AI-generated summaries now appearing regularly in Google searches discourage users from clicking through to source articles. When AI summaries are present, users click on suggested links half as often as in traditional searches. This represents a devastating loss of visitors for online media sites that depend on traffic for both advertising revenue and subscription conversions.

According to Northeastern University professor John Wihbey, these trends "will accelerate, and pretty soon we will have an entirely different web."

The dominance of tech giants like Google and Meta had already slashed online media advertising revenue, forcing publishers to pivot toward paid subscriptions. But Wihbey noted that subscriptions also depend on traffic, and paying subscribers alone aren't sufficient to support major media organizations.

Limited lifelines

The Boston Globe group has begun seeing subscribers sign up through ChatGPT, offering a new touchpoint with potential readers, Karolian said. However, "these remain incredibly modest compared to other platforms, including even smaller search engines." Other AI-powered tools like Perplexity are generating even fewer new subscriptions, he added.

To survive what many see as an inevitable shift, media companies are increasingly adopting GEO (Generative Engine Optimization), a technique that replaces traditional SEO (Search Engine Optimization). This involves providing AI models with clearly labeled content, good structure, comprehensible text, and a strong presence on social networks and forums like Reddit that get crawled by AI companies.
But a fundamental question remains: "Should you allow OpenAI crawlers to basically crawl your website and your content?" asks Thomas Peham, CEO of optimization startup OtterlyAI.

Burned by aggressive data collection from major AI companies, many news publishers have chosen to fight back by blocking AI crawlers from accessing their content. "We just need to ensure that companies using our content are paying fair market value," argued Danielle Coffey, who heads the News/Media Alliance trade organization.

Some progress has been made on this front. Licensing agreements have emerged between major players, such as the New York Times and Amazon, Google and the Associated Press, and Mistral and Agence France-Presse, among others. But the issue is far from resolved, as several major legal battles are underway, most notably the New York Times' blockbuster lawsuit against OpenAI and Microsoft.

Let them crawl

Publishers face a dilemma: blocking AI crawlers protects their content but reduces exposure to potential new readers. Faced with this challenge, "media leaders are increasingly choosing to reopen access," Peham observed.

Yet even with open access, success isn't guaranteed. According to OtterlyAI data, media outlets represent just 29 percent of citations offered by ChatGPT, trailing corporate websites at 36 percent. And while Google search has traditionally privileged sources recognized as reliable, "we don't see this with ChatGPT," Peham noted.

The stakes extend beyond business models. According to the Reuters Institute's 2025 Digital News Report, about 15 percent of people under 25 now use generative AI to get their news. Given ongoing questions about AI sourcing and reliability, this trend risks confusing readers about information origins and credibility -- much like social media did before it.

"At some point, someone has to do the reporting," Karolian said. "Without original journalism, none of these AI platforms would have anything to summarize."
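In practice, the crawler blocking publishers weigh here is usually implemented through a site's robots.txt file, which lists per-crawler access rules. A minimal sketch using Python's standard-library parser shows how such a policy reads; the policy below is an illustrative assumption, not any publisher's actual file, though GPTBot, CCBot, and Google-Extended are real AI-crawler user-agent tokens (used by OpenAI, Common Crawl, and Google respectively):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: refuse known AI-training crawlers,
# allow everyone else (e.g. ordinary search engine bots).
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# AI crawlers are refused; a regular search crawler is not.
print(parser.can_fetch("GPTBot", "/articles/some-story"))     # False
print(parser.can_fetch("Googlebot", "/articles/some-story"))  # True
```

Note that robots.txt is purely advisory: a crawler that ignores it is not technically prevented from fetching pages, which is part of why licensing deals and lawsuits remain central to the dispute.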
Perhaps with this in mind, Google is already developing partnerships with news organizations to feed its generative AI features, suggesting potential paths forward. "I think the platforms will realize how much they need the press," predicted Wihbey -- though whether that realization comes soon enough to save struggling newsrooms remains an open question.


Euronews
EU AI Act doesn't do enough to protect artists' copyright, groups say
As the European Artificial Intelligence Act (AI Act) comes into force, groups representing artists say there are still many loopholes that need to be fixed for them to thrive in a creative world increasingly dominated by AI.

The AI Act, celebrated as the first comprehensive legislation to regulate AI globally, is riddled with problems, these organisations say. Groups like the European Composer and Songwriter Alliance (ECSA) and the European Grouping of Societies of Authors and Composers (GESAC) argue that it fails to protect creators whose works are used to train generative AI models. Without a clear way to opt out or get paid when tech companies use their music, books, movies, and other art to train their AI models, experts say that their work is continually at risk.

'The work of our members should not be used without transparency, consent, and remuneration, and we see that the implementation of the AI Act does not give us [that],' Marc du Moulin, ECSA's secretary general, told Euronews Next.

'Putting the cart before the horse'

The purpose of the AI Act is to make sure AI stays 'safe, transparent, traceable, non-discriminatory and environmentally friendly,' the European Commission, the European Union's executive body, says in an explainer on the law.

The law rates AI systems based on four levels of risk: minimal, limited, high, or unacceptable. Systems in the unacceptable range are already banned, for example AIs that are manipulative or that conduct social scoring, where they rank individuals based on behaviour or economic status. Most generative AI falls into the minimal risk category, the Commission says. The owners of those technologies still have some requirements, like publishing summaries of the copyrighted data used to train their AIs.

Under the EU's copyright laws, companies are allowed to use copyrighted materials for text and data mining, as they do in AI training, unless a creator has 'reserved their rights,' du Moulin said.
Du Moulin said it's unclear how an artist can go about opting out of their work being used by AI companies. 'This whole conversation is putting the cart before the horse. You don't know how to opt out, but your work is already being used,' he said.

The EU's Code of Practice for General-Purpose AI (GPAI), a voluntary agreement for AI companies, asks providers to commit to a copyright policy, put in place safeguards to avoid any infringements of rights, and designate a place to receive and process complaints. Signatories so far include major tech and AI companies such as Amazon, Google, Microsoft, and OpenAI.

AI providers have to respect copyright laws, the Commission says

Du Moulin added that the AI Act's transparency requirements still do not make clear to artists who has already used their material and when, making it difficult to claim any payment for work that's already been scraped to train AI models. 'Even if the AI Act has some good legal implications, it only works for the future -- it will not be retroactive,' du Moulin said. 'So everything which has been scraped already … it's a free lunch for generative AI providers who did not pay anything'.

Adriana Moscono, GESAC's general manager, said some of her members tried opting out by sending letters and emails to individual AI companies to get a license for their content, but were not successful. 'There was no answer,' Moscono told Euronews Next. 'There was absolute denial of the recognition of … the need to respect copyright and to get a license. So please, European Commission, encourage licensing'.

Thomas Regnier, a Commission spokesperson, said in a statement to Euronews Next that AI providers have to respect rights holders when they carry out text and data mining, and that if there have been infringements, they can settle it privately. The AI Act 'in no way affects existing EU copyright laws,' Regnier continued.
Mandate licence negotiations, groups ask

Du Moulin and Moscono are asking the Commission to urgently clarify the rules around opting out and copyright protection in the law. 'The code of practice, the template and the guidelines, they don't provide us any capacity to improve our situation,' Moscono said. 'They're not guaranteeing … a proper application of the AI Act'.

The advocates said the Commission could also mandate that AI companies negotiate blanket or collective licences with the respective artist groups.

Germany's Society for Musical Performing and Mechanical Reproduction Rights (GEMA) has filed two copyright lawsuits against AI companies: OpenAI, the parent of ChatGPT, and Suno AI, an AI music generation app. While not directly related to the AI Act, du Moulin says the verdicts could determine to what extent AI companies can be bound by copyright laws.

The Commission and the European Court of Justice, the EU's top court, have also signalled that they will review the text and data mining exemption in the copyright legislation issued in 2019, du Moulin said.

New AI companies must comply with the AI Act's regulations by 2026. That deadline extends to 2027 for companies already operating in the EU.


Sustainability Times
'AI Will Change Everything About Nuclear' as US Lab Partners With Amazon Cloud to Build the First Smart Reactors in American History
IN A NUTSHELL
🔧 Idaho National Laboratory partners with Amazon Web Services to develop AI-powered digital twins for nuclear reactors.
💡 The collaboration aims to modernize the U.S. nuclear sector, making reactors autonomous and efficient.
🌐 The initiative is part of a national push to integrate artificial intelligence into energy infrastructure.
🔍 Focus on safety, cost reduction, and sustainability in nuclear energy development.

The United States is taking a bold step in nuclear energy innovation, leveraging the power of artificial intelligence (AI) to transform how nuclear reactors are designed and operated. The Idaho National Laboratory (INL) has partnered with Amazon Web Services (AWS) to develop autonomous nuclear systems. This collaboration aims to create digital twins of nuclear reactors using AWS's advanced cloud technology. The move is part of a larger effort to modernize the nuclear energy sector, which has historically faced challenges such as high costs and regulatory hurdles.

Digital Twins: A Groundbreaking Approach

The concept of digital twins is at the heart of this initiative. Digital twins are virtual replicas of physical systems that enable detailed modeling and simulation. By utilizing AWS's cloud infrastructure, INL aims to create digital twins of small modular reactors (SMRs). These reactors, with capacities ranging from 20 to 300 megawatts, are poised to benefit from AI-driven efficiencies. John Wagner, director of INL, highlighted the significance of the collaboration: 'Our collaboration with Amazon Web Services marks a significant leap forward in integrating advanced AI technologies into our nuclear energy research and development initiatives.'
The partnership links the nation's nuclear energy laboratory with AWS to accelerate nuclear energy deployment. By using real-time data, these digital twins will enhance modeling capabilities, facilitate simulations, and eventually allow for safe autonomous operations. This initiative is expected to revolutionize how nuclear plants are built and operated, offering potential cost reductions and improved safety.

Harnessing the Power of Machine Learning

As part of INL's broader vision, the integration of machine learning with nuclear technology aims to create an AI-nuclear ecosystem. This ecosystem will connect Department of Energy (DOE) labs, tech companies, and energy developers. The ultimate goal is to develop nuclear reactors that are not only faster to construct but also safer and more intelligent in operation. The INL-AWS partnership follows a similar collaboration between Westinghouse and Google Cloud, highlighting the growing importance of AI in the nuclear sector. By combining AI platforms with proprietary nuclear data, these partnerships aim to accelerate the development of advanced nuclear technologies.

In May 2025, President Donald Trump signed executive orders to streamline reactor permitting and expand domestic nuclear fuel production. These efforts are part of a broader strategy to modernize the U.S. nuclear energy infrastructure and support increasing AI-driven power demands.

A National Push for AI-Driven Nuclear Power

The U.S. government has recognized the critical role of nuclear energy in maintaining technological competitiveness and supporting future data center growth.
The national AI Action Plan released in July 2025 identified reliable, dispatchable energy, including nuclear power, as an essential component of this strategy. The partnership between INL and AWS reflects this national push. By leveraging customized chips like Inferentia and Trainium, along with tools such as Amazon SageMaker, the collaboration aims to drive the adoption of AI in nuclear applications. Chris Ritter, division director of Scientific Computing and AI at INL, emphasized the importance of the partnership in accessing AI models and specialized cloud services. This initiative is not just about technological advancement; it is also about redefining the future of energy production. By embracing AI, the U.S. is positioning itself at the forefront of global nuclear innovation.

The Road Ahead for AI and Nuclear Energy

While the potential benefits of AI-driven nuclear energy are immense, the path forward is not without challenges. Integrating AI into nuclear systems requires careful consideration of safety protocols, regulatory compliance, and public acceptance. However, the collaboration between INL and AWS is a promising step toward overcoming these hurdles. As the U.S. continues to invest in AI-driven nuclear technologies, the focus will be on creating a sustainable and secure energy future. The development of autonomous reactors and digital twins represents a significant shift in how nuclear energy is perceived and utilized. The question remains: how will this transformation in nuclear energy impact global energy dynamics, and what role will AI play in shaping the future of sustainable power?