Sen. Hawley to probe Meta AI bot policies for children following damning report

CNBC · 2 days ago
Sen. Josh Hawley, R-Mo., said Friday that he will investigate Meta following a report that the company approved rules allowing artificial intelligence chatbots to have certain "romantic" and "sensual" conversations with children.
Hawley called on Meta CEO Mark Zuckerberg to preserve relevant materials, including emails, and said the probe would target "whether Meta's generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards."
"Is there anything - ANYTHING - Big Tech won't do for a quick buck?" Hawley said in a post on X announcing the investigation.
Meta declined to comment on Hawley's letter.
Hawley pointed to a Reuters report published Thursday that cited an internal document detailing which Meta AI chatbot behaviors the company's staff and contract workers should permit as part of developing and training the software.
The document acquired by Reuters noted that a chatbot would be permitted to hold a romantic conversation with an eight-year-old, telling the child that "every inch of you is a masterpiece – a treasure I cherish deeply."
The Meta guidelines said: "It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," according to the Reuters report.
The Meta chatbots would not, however, be permitted to describe children under 13 "in terms that indicate they are sexually desirable," the report said.
"We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward," Hawley wrote.
A Meta spokesperson told Reuters that "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed."
"We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors," the Meta spokesperson told Reuters.
Hawley said Meta must produce documents about its Generative AI-related content risks and standards, lists of every product that adheres to those policies, and other safety and incident reports.
Meta should also provide its public and regulatory communications concerning minor safety, as well as documents identifying the staff members involved with the AI policies, in order to determine "the decision trail for removing or revising any portions of the standard."
Hawley is chair of the Senate Judiciary Subcommittee on Crime and Counterterrorism, which will carry out the investigation.
Meta has until Sept. 19 to provide the documents, the letter said.

Related Articles

Melania Trump's Message to Putin Sparks AI Claims

Newsweek · 19 minutes ago

Melania Trump's letter to Russian President Vladimir Putin has sparked speculation over whether the first lady used AI to pen a missive to the Kremlin chief. In a message posted to the first lady's social media, Melania Trump appealed to Putin to "singlehandedly restore" the "melodic laughter" of children caught up in the nearly three and a half years of full-scale war in Ukraine.

Why It Matters

President Donald Trump met with Putin in Alaska on Friday with relations soured by Russia's refusal to ink a ceasefire deal brokered by the U.S. to stop the fighting in Ukraine. Although no deal was reached in Anchorage, Trump described the summit as "useful" and pivoted his position on a ceasefire, saying he would move straight to a permanent peace deal. Ukrainian President Volodymyr Zelensky, who will visit Washington on Monday, said on Saturday that Russia's refusal to sign a ceasefire "complicates the situation."

First lady Melania Trump listens as President Donald Trump delivers remarks during an Independence Day military family picnic on the South Lawn of the White House in Washington, D.C., on July 4, 2025.

What To Know

The message sent by the first lady, dubbed a "peace letter" on her official Instagram account, quickly prompted online commentators to question whether the letter had been generated using AI. Democratic strategist Keith Edwards said in a post to X that the letter "says a whole lot of nothing" and "may have been written by AI." Chris Jackson, a Democratic activist and long-time supporter of former President Joe Biden, said in his own social media post that he had run the letter through AI, which deemed the message to the Kremlin AI-generated. Grok, a tool developed by Elon Musk's company xAI, said the letter "shows strong signs of AI generation" with "minor human edits for tone."

There is no concrete evidence that the letter published on the first lady's accounts was generated using artificial intelligence. Newsweek has reached out to Melania Trump's office for comment.

As tools improve, it becomes harder to work out when AI is responsible for published writing. When Newsweek asked AI's opinion on the letter, it said the letter had "qualities that resemble AI writing," including a lofty, idealistic style. "Words like purity, innocence, humanity, love, possibility, dignity are stacked densely without much concrete policy or detail. AI models tend to default to these universal values when asked to 'sound inspirational,'" according to one AI response. But AI also said the letter did not feature the types of tonal shifts or awkward phrasing often associated with content generated using artificial intelligence.

Zelensky Thanks The First Lady for 'Sincere Attention'

President Trump hand-delivered a letter from his wife to Putin during the summit; the letter focused on the fate of Ukrainian and Russian children, Reuters reported, citing two unnamed White House officials.

Minister of Foreign Affairs of Ukraine Andrii Sybiha said in a post to social media that Zelensky had "conveyed his gratitude" to Trump during a conversation on Saturday for the first lady's "sincere attention and efforts to bring forcibly deported Ukrainian kids back." The version of the letter published by the first lady on her social media does not directly mention Ukrainian children taken to Russia.

Ukraine has consistently said Russia has abducted thousands of Ukrainian children throughout its full-scale invasion of the country and relocated them to Russia or Russian-controlled areas. Kyiv has called this a war crime. The International Criminal Court in March 2023 issued arrest warrants for Putin and Maria Lvova-Belova, Russia's children's rights commissioner, for being "allegedly responsible for the war crime of unlawful deportation of population (children) and that of unlawful transfer of population (children) from occupied areas of Ukraine to the Russian Federation." Moscow has said children were moved away from conflict areas to protect them.

UN Human Rights Chief Volker Türk said earlier this year that children had been subject to "summary executions, arbitrary detention, conflict-related sexual violence, torture and ill-treatment" in the four regions of Ukraine Russia claimed to have annexed in fall 2022. The Kremlin declared Donetsk, Luhansk, Kherson and Zaporizhzhia were part of Russia after referendums widely condemned as a sham. Russia had seized Crimea, to the south of the mainland, from Ukraine in 2014.

The first lady has released an audiobook touted as "created entirely with artificial intelligence audio technology." The seven-hour-long audiobook is narrated by Melania Trump's "official AI voice." Melania Trump faced accusations in 2016 that she had plagiarized a previous address given by Michelle Obama during a section of her own speech to the Republican National Convention. Trump's team at the time said Melania Trump had used "common words" and not copied Michelle Obama's speech.

The First Lady's Letter in Full

"Dear President Putin, Every child shares the same quiet dreams in their heart, whether born randomly into a nation's rustic countryside or a magnificent city-center. They dream of love, possibility, and safety from danger. As parents, it is our duty to nurture the next generation's hope. As leaders, the responsibility to sustain our children extends beyond the comfort of a few. Undeniably, we must strive to paint a dignity-filled world for all – so that every soul may wake to peace, and so that the future itself is perfectly guarded. A simple yet profound concept, Mr. Putin, as I am sure you agree, is that each generation's descendants begin their lives with a purity – an innocence which stands above geography, government, and ideology. Yet in today's world, some children are forced to carry a quiet laughter, untouched by the darkness around them – a silent defiance against the forces that can potentially claim their future. Mr. Putin, you can singlehandedly restore their melodic laughter. In protecting the innocence of these children, you will do more than serve Russia alone – you serve humanity itself. Such a bold idea transcends all human division, and you, Mr. Putin, are fit to implement this vision with a stroke of the pen today. It is time."

OpenAI is at a classic strategy crossroads involving its 'moat'—which Warren Buffett believes can make or break a business

Yahoo · 30 minutes ago

It's an epochal moment as history's latest general-purpose technology, AI, forms itself into an industry. Much depends on these early days, especially the fate of the industry's leader by a mile, OpenAI. In terms of the last general-purpose technology, the internet: will it become a colossus like Google or be forgotten like AltaVista? No one can know, but here's how to think about it.

OpenAI's domination of the industry is striking. As the creator of ChatGPT, it recently attracted 78% of daily unique visitors to core model websites, with six competitors splitting up the rest, according to a recent 40-page report from J.P. Morgan. Even with that vast lead, the report shows, OpenAI is expanding its margin over its much smaller competitors, including even Gemini, which is part of Google and its giant parent, Alphabet (2024 revenue: $350 billion).

The great question now is whether OpenAI can possibly maintain its wide lead (history would say no) or at least continue as the industry leader. The answer depends heavily on OpenAI's moat, a Warren Buffett term for any factor that protects the company and cannot be easily breached–think of Coca-Cola's brand or BNSF Railroad's economies of scale, to mention two of Buffett's successful investments.

On that count the J.P. Morgan analysts are not optimistic. Specifically, they acknowledge that while OpenAI has led the industry in innovating its models, that strategy is 'an increasingly fragile moat.' Example: The company's most recent model, GPT-5, included multiple advances yet underwhelmed many users. As competitors inevitably catch up, the analysts conclude, 'Model commoditization is an increasingly likely outcome.' With innovations suffering short lives, OpenAI must now become 'a more product-focused, diversified organization that can operate at scale while retaining its position' at the top of the industry–skills the company has yet to demonstrate. Bottom line: OpenAI can maintain its leading rank in the industry, but it won't be easy, and betting on it could be risky.

Yet a different view suggests OpenAI is much closer to creating a sustainable moat. It comes from Robert Siegel, a management lecturer at Stanford's Graduate School of Business who is also a venture capitalist and former executive at various companies, many in technology. He argues that OpenAI is already well along the road to achieving a valuable attribute, stickiness: the longer customers use something, the less likely they are to switch to a competitor. In OpenAI's case, 'people will only move to Perplexity or Gemini or other solutions if they get a better result,' he says. Yet that becomes unlikely because AI learns; the more you use a particular AI engine, the more it learns about you and what you want. 'If you keep putting questions into ChatGPT, which learns your behaviors better, and you like it, there's no reason to leave as long as it's competitive.'

Now combine that logic with OpenAI's behavior. 'It seems like their strategy is to be ubiquitous,' Siegel says, putting ChatGPT in front of as many people as possible so the software can start learning about them before any competitor can get there first. Most famously, OpenAI released ChatGPT 3.5 to the public in 2022 for free, attracting a million users in five days and 100 million in two months. In addition, the company, founded in 2015, raised substantial investment early in the game.

Thus, Siegel says, OpenAI can 'continue to run hard and use capital as a moat so they can do all the things they need to do to be everywhere.'

But Siegel, the J.P. Morgan analysts, and everyone else know plenty can always go wrong. An obvious threat to OpenAI and most of its competitors is an open-source model such as China's DeepSeek, which appears to perform well at significantly lower costs. The venture capital that has poured into OpenAI could dry up as hundreds of other AI startups compete for financing. J.P. Morgan and Siegel agree that OpenAI's complex, unconventional governance structure must be reformed; though a recently proposed structure has not been officially disclosed, it is reportedly topped by a nonprofit, which might worry profit-seeking investors.

As for moats, OpenAI is obviously in the best position to build or strengthen one. But looking into the era of AI, the whole concept of the corporate moat may become meaningless. How long will it be, if it hasn't been done already, before a competitor asks its own AI engine, 'How do we defeat OpenAI's moat?'

'My Kid Will Never Ever Be Smarter Than an AI': OpenAI's Sam Altman Warns Most Kids Won't Know a World Without AI

Yahoo · 30 minutes ago

Artificial intelligence (AI) is now so advanced that some experts believe no child will ever surpass its intelligence again. OpenAI CEO Sam Altman, a central figure in the world of artificial intelligence, recently reflected on the transformative potential of AI advancements, and particularly their impact on the next generation. Speaking on a podcast, he remarked, 'My kid will never ever be smarter than an AI. That will never happen. You know, kid born a few years ago. They had a brief period of time. My kid never will be smarter.'

Altman's statement captures a profound and ongoing shift, both in technology and society. As the current leader of OpenAI, the organization behind breakthroughs like the GPT language model series and other advanced AI technologies, Altman's insights carry significant weight. His views are shaped by daily interactions with researchers pushing the boundaries of what AI can achieve — tasks ranging from language generation and autonomous reasoning to problem-solving at a superhuman scale.

When Altman says 'my kid will never be smarter than an AI,' he is not lamenting a loss, but observing a turning point in technology. Historically, each new generation had the chance to exceed the achievements of earlier ones, shaped by new education, tools, and inventions. Now, he says, a rapidly accelerating AI trajectory means that children born today will coexist with machines that learn and develop orders of magnitude faster, with access to vast data and computational resources. Altman's comment reflects both a recognition of what has already changed and a sense of inevitability about the future.

The authority behind Altman's remark comes from his central role at OpenAI. Since its founding in 2015, OpenAI has led the development of generative AI with a philosophy that blends technological optimism and public caution. Altman, previously a leading Silicon Valley investor and technologist, has often spoken about the responsibility of the sector and the need for flexible, thoughtful policy as AI becomes increasingly integrated into everyday life and the economy.

His assertion that no human — no matter how young or well-educated — could ever outpace AI is rooted in empirical reality. AI models now routinely outperform humans in specialized knowledge domains, can process and generate language with uncanny fluency, and are applied across finance, healthcare, logistics, and creative fields. The 'brief period of time' when a child or their peers could match or exceed machine intelligence may well have effectively vanished, as Altman suggests, replaced by a world where coexistence and collaboration with increasingly capable AI systems is the norm. This perspective is particularly salient as debates about job displacement, educational outcomes, and the essence of human endeavor gain prominence.

Altman's comment is not simply an observation about his own family, but a reflection of the collective transition underway: society must adapt to new definitions of intellect, capability, and value in an era dominated by artificial intelligence. Experts suggest this requires a renewed emphasis on skills such as creativity, adaptability, and ethical reasoning — areas where machines may never fully overtake human strengths.

For now, Altman's remark encapsulates the magnitude of change artificial intelligence is bringing to global culture, labor, and the imagination of what people can become. As AI evolves, the notion of human uniqueness is being redefined, not diminished — and it's a process that will shape the upbringing and prospects of generations to come.

On the date of publication, Caleb Naysmith did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes.
