
Google undercounts its carbon emissions, report finds
In 2021, Google set a lofty goal of achieving net-zero carbon emissions by 2030. Yet in the years since then, the company has moved in the opposite direction as it invests in energy-intensive artificial intelligence. In its latest sustainability report, Google said its carbon emissions had increased 51% between 2019 and 2024.
New research challenges even that enormous figure and provides context to Google's sustainability reports, painting a bleaker picture. A report authored by the non-profit advocacy group Kairos Fellowship found that, between 2019 and 2024, Google's carbon emissions actually went up by 65%. What's more, between 2010, the first year for which public data on Google's emissions is available, and 2024, Google's total greenhouse gas emissions increased 1,515%, Kairos found. The largest year-over-year jump in that window was also the most recent: emissions rose 26% between 2023 and 2024 alone, according to the report.
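For readers who want to check this kind of arithmetic themselves, a minimal sketch follows. The figures are indexed placeholders (2019 = 100), not Google's reported tonnage; the point is only how cumulative and year-over-year percentage changes are derived from a series of annual totals.

```python
# Sketch: cumulative vs year-over-year percentage change from annual
# emissions totals. Values are illustrative, indexed to 2019 = 100;
# they are NOT Google's reported figures.
emissions_index = {
    2019: 100.0,
    2023: 131.0,
    2024: 165.0,
}

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(f"2019-2024 increase: {pct_change(emissions_index[2019], emissions_index[2024]):.0f}%")
print(f"2023-2024 increase: {pct_change(emissions_index[2023], emissions_index[2024]):.0f}%")
```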
'Google's own data makes it clear: the corporation is contributing to the acceleration of climate catastrophe, and the metrics that matter – how many emissions they emit, how much water they use, and how fast these trends are accelerating – are headed in the wrong direction for us and the planet,' said Nicole Sugerman, a campaign manager at Kairos Fellowship.
The authors say they found the vast majority of the numbers they used to determine how much energy Google is using and how quickly its carbon emissions are rising in the appendices of Google's own sustainability reports; many of those figures, they say, are not highlighted in the main body of the reports.
After the report was published, Google called its findings into question in a statement.
'The analysis by the Kairos Fellowship distorts the facts. Our carbon emissions are calculated according to the widely used Greenhouse Gas Protocol and assured by a third party. Our carbon reduction ambition has been validated by the leading industry body, the Science Based Targets initiative,' said a spokesperson, Maggie Shiels.
The authors behind the report, titled Google's Eco-Failures, attribute the discrepancy between the numbers they calculated and the numbers Google highlights in its sustainability reports to various factors, including that the firm uses a different metric for calculating how much its emissions have increased. While Google reports market-based emissions, the researchers used location-based emissions. Location-based emissions reflect the average emissions intensity of the local power grids a company draws electricity from, while market-based emissions credit the company for clean energy it has contracted to purchase, which lowers the reported total.
'[Location-based emissions] represents a company's 'real' grid emissions,' said Franz Ressel, the lead researcher and report co-author. 'Market-based emissions are a corporate-friendly metric that obscures a polluter's actual impact on the environment. It allows companies to pollute in one place, and try to 'offset' those emissions by purchasing energy contracts in another place.'
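The gap between the two accounting methods can be made concrete with a small sketch of the GHG Protocol's "dual reporting" approach for purchased electricity (Scope 2). All quantities and emission factors below are hypothetical; they illustrate the mechanics, not Google's actual contracts or grid mix.

```python
# Sketch of Scope 2 dual reporting: the same electricity consumption
# yields two different emissions figures depending on the method.
# All numbers are hypothetical.

consumption_mwh = 1_000_000      # electricity drawn from the grid
grid_factor = 0.40               # tCO2e per MWh, average for the local grid (assumed)

# Location-based: consumption x average emission factor of the local grid.
location_based = consumption_mwh * grid_factor

# Market-based: electricity covered by clean-energy contracts (PPAs,
# renewable certificates) is counted at its contractual factor, often zero;
# the remainder is counted at a residual-mix factor.
contracted_clean_mwh = 700_000   # hypothetical contracted volume
residual_factor = 0.45           # tCO2e per MWh for the residual mix (assumed)
market_based = (consumption_mwh - contracted_clean_mwh) * residual_factor

print(f"Location-based Scope 2: {location_based:,.0f} tCO2e")
print(f"Market-based Scope 2:   {market_based:,.0f} tCO2e")
```

Under these assumed numbers the market-based figure comes out far lower than the location-based one, which is the gap the Kairos researchers argue matters.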
The energy the tech giant has needed to purchase to power its data centers alone has increased 820% since 2010, according to Kairos's research, a figure expected to grow as Google rolls out more AI products. Between 2019 and 2024, emissions stemming primarily from the purchase of electricity to power data centers jumped 121%, the report's authors said.
'In absolute terms, the increase was 6.8 TWh, or the equivalent of Google adding the entire state of Alaska's energy use in one year to their previous use,' said Sugerman.
Based on Google's current trajectory, the Kairos report's authors say the company is unlikely to meet its own 2030 deadline without a significant push from the public. There are three categories of greenhouse gas emissions – called Scopes 1, 2 and 3 – and Google has only meaningfully decreased its Scope 1 emissions since 2019, according to the Kairos report. Scope 1 emissions, which cover only Google's own facilities and vehicles, account for just 0.31% of the company's total emissions, according to the report. Scope 2 emissions are indirect emissions that come primarily from the electricity Google purchases to power its facilities, and Scope 3 covers indirect emissions from all other sources, such as suppliers, the use of Google's consumer devices and employee business travel.
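A short sketch shows how each scope's share of a total footprint is derived, and why reducing only Scope 1 barely moves the overall number. The tonnages are hypothetical, chosen only so that Scope 1 comes out near the 0.31% share the report cites.

```python
# Sketch: each scope's share of a company's total footprint.
# Tonnages are illustrative placeholders, not Google's reported data.
scopes = {  # tCO2e, hypothetical
    "Scope 1 (own facilities, vehicles)": 50_000,
    "Scope 2 (purchased electricity)": 5_000_000,
    "Scope 3 (suppliers, devices, travel)": 11_000_000,
}

total = sum(scopes.values())
for name, tonnes in scopes.items():
    print(f"{name}: {tonnes / total:.2%} of total")
```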
'It's not sustainable to keep building at the rate [Google is] building because they need to scale their compute within planetary limits,' said Sugerman. 'We do not have enough green energy to serve the needs of Google and certainly not the needs of Google and the rest of us.'
Thirsty, power-hungry data centers
As the company builds out resource-intensive data centers across the country, experts are also paying close attention to Google's water usage. According to the company's own sustainability report, Google's water withdrawal – how much water is taken from various sources – increased 27% between 2023 and 2024 to 11bn gallons of water.
The amount is 'enough to supply the potable water needs for the 2.5 million people and 5,500 industrial users in Boston and its suburbs for 55 days', according to the Kairos report.
Tech companies have faced both internal and public pressure to power their growing number of data centers with clean energy. Amazon employees recently put forth a package of shareholder proposals that asked the company to disclose its overall carbon emissions and targeted the climate impact of its data centers. The proposals were ultimately voted down. On Sunday, several organizations including Amazon Employees for Climate Justice, League of Conservation Voters, Public Citizen and the Sierra Club, published an open letter in the San Francisco Chronicle and the Seattle Times calling on the CEOs of Google, Amazon and Microsoft to 'commit to no new gas and zero delayed coal plant retirements to power your data centers'.
'In just the last two years alone, your companies have built data centers throughout the United States capable of consuming more electricity than four million American homes,' the letter reads. 'Within five years, your data centers alone will use more electricity than 22 million households, rivaling the consumption of multiple mid-size states.'
In its own sustainability report, Google warns that the firm's 'future trajectories' may be affected by the 'evolving landscape' of the tech industry.
'We're at an extraordinary inflection point, not just for our company specifically, but for the technology industry as a whole – driven by the rapid growth of AI,' the report reads. 'The combination of AI's potential for non-linear growth driven by its unprecedented pace of development and the uncertain scale of clean energy and infrastructure needed to meet this growth makes it harder to predict our future emissions and could impact our ability to reduce them.'
The Kairos report accuses Google of relying 'heavily on speculative technologies, particularly nuclear power', to achieve its goal of net zero carbon emissions by 2030.
'Google's emphasis on nuclear energy as a clean energy 'solution' is particularly concerning, given the growing consensus among both scientists and business experts that their successful deployment on scale, if it is to ever occur, cannot be achieved in the near or mid-term future,' the report reads.
The Kairos report alleges the way that Google presents some of its data is misleading. In the case of data center emissions, for example, Google says it has improved the energy efficiency of its data centers by 50% over 13 years. Citing energy efficiency numbers rather than sharing absolute ones obscures Google's total emissions, the authors argue.