
ChatGPT tells teenagers how to get drunk and high, writes heartbreaking suicide notes: shocking details emerge in study
ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders and even compose a heartbreaking suicide letter to their parents if asked, according to new research from a watchdog group. The Associated Press reviewed more than three hours of interactions between ChatGPT and researchers posing as vulnerable teens. The chatbot typically opened with warnings against risky activity but went on to deliver startlingly detailed and personalized plans for drug use, calorie-restricted diets or self-injury.
The researchers at the Center for Countering Digital Hate also repeated their inquiries on a large scale, classifying more than half of ChatGPT's 1,200 responses as dangerous.
"We wanted to test the guardrails," said Imran Ahmed, the group's CEO. "The visceral initial response is, 'Oh my Lord, there are no guardrails.' The rails are completely ineffective. They're barely there — if anything, a fig leaf."
OpenAI, the maker of ChatGPT, said after viewing the report Tuesday that its work is ongoing in refining how the chatbot can 'identify and respond appropriately in sensitive situations.'
"Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory," the company said in a statement.
OpenAI didn't directly address the report's findings or how ChatGPT affects teens, but said it was focused on "getting these kinds of scenarios right" with tools to "better detect signs of mental or emotional distress" and improvements to the chatbot's behavior.
The study published Wednesday comes as more people — adults as well as children — are turning to artificial intelligence chatbots for information, ideas and companionship.
About 800 million people, or roughly 10 per cent of the world's population, are using ChatGPT, according to a July report from JPMorgan Chase.
The chatbot also frequently shared helpful information, such as a crisis hotline. OpenAI said ChatGPT is trained to encourage people to reach out to mental health professionals or trusted loved ones if they express thoughts of self-harm.
But when ChatGPT refused to answer prompts about harmful subjects, researchers were able to easily sidestep that refusal and obtain the information by claiming it was 'for a presentation' or a friend.
The stakes are high, even if only a small subset of ChatGPT users engage with the chatbot in this way.
In the U.S., more than 70 per cent of teens are turning to AI chatbots for companionship and half use AI companions regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using digital media sensibly.
It's a phenomenon that OpenAI has acknowledged. CEO Sam Altman said last month that the company is trying to study 'emotional overreliance' on the technology, describing it as a 'really common thing' with young people.
FAQs
Q1. How many people are using ChatGPT?
A1. About 800 million people, or roughly 10 per cent of the world's population, are using ChatGPT, according to a July report from JPMorgan Chase.
Q2. Who is CEO of OpenAI?
A2. Sam Altman is CEO of OpenAI.
Related Articles


Mint
Gamblers now bet on AI models like racehorses
Now that AI developers are getting paid like pro athletes, it's fitting that fans are placing big bets on how well they're doing their jobs. On Kalshi, Polymarket and other sites where people wager "predictions" on real-world events, gamblers lay down millions each month on their picks for AI's top model.

The AI arms race is playing out in plain sight on social media, ranking sites and obscure corners of the internet where enthusiasts hunt for clues. The constant buzz makes the topic appealing for wagers, though not every scrap of information is meaningful.

Foster McCoy made $10,000 in a few hours in early August by betting against the success of OpenAI's GPT-5 release. The 27-year-old day trader noticed people were misreading an online ranking site that appeared to hype GPT-5, so he put $4,500 on its competitor, Google's Gemini, to be "Best AI This Month." As more bettors fell in line with his call, he cashed out.

That's just another day for McCoy, who has traded $3.2 million on Kalshi since the start of 2025, making $170,000. He's part of a growing contingent of bettors making hundreds of trades a week on AI markets, on a range of wagers such as "Best AI at the end of 2025," "AI regulation becomes federal law this year," and "Will [Chief Executive] Sam Altman be granted an equity stake in OpenAI this year?"

Trading volume across AI prediction markets has surged to around $20 million this month. Kalshi, the only platform currently available in the U.S., is seeing 10 times the volume on AI trades compared with the start of the year, a spokesman says.

Each bet, or "contract," is priced in cents to reflect the odds: McCoy bought thousands of Gemini contracts at around 40 cents, meaning the market gave Gemini a 40% chance of winning. If the bet had settled and Gemini won, each of McCoy's 40-cent contracts would become a dollar. If Gemini lost, McCoy would lose it all. But much of the action happens before the final outcome. As more people piled into the Gemini bet, the contract price rose. McCoy sold when it reached 87 cents. It's like betting on a sports match, only with the option to cash out when the odds move in favor of your bet.

"You're just betting against what the other guy knows," says McCoy, who credits his success to "being chronically online." He trades from home on a triple-monitor setup tuned to the holy trinity of AI betting: X, Discord and LMArena, a leaderboard where people rate AI model performance in blind tests.

Social-media beefs can be gold for gamblers. After GPT-5's debut, a flurry of Elon Musk posts claiming the superiority of his xAI Grok chatbot sent the "Grok to Win" market up more than 500% within hours. Before long, it had fallen nearly all the way back down.

Bettors end up down rabbit holes hunting for more obscure tidbits. Harvard undergraduate Rishab Jain often scans the X accounts of lesser-known researchers. The day before GPT-5's release, OpenAI's Sam Altman posted an image of the Death Star from "Star Wars." When a researcher from Google's DeepMind replied with an image of the killer globe under attack, Jain read it as a sign of confidence from Gemini's makers.

Jain also scrapes source files from Google's apps and monitors public GitHub repositories tied to its products, looking for back-end changes that might signal a soon-to-be-released Gemini model. "I'm almost obsessively up-to-date with what's going on in this world," Jain says. "Google has so many products that need to integrate a new model before it officially launches, and you can see those changes happening in the back end if you know where to look." Since starting in June, he has won $3,500.

Strategies vary. Some bet on the big industry players; others buy low on less-known or soon-to-be-updated models. Some compare prices on Kalshi and Polymarket to find arbitrage opportunities.
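The contract arithmetic described above can be sketched in a few lines. This is a minimal illustration using the figures quoted in the article ($4,500 at 40 cents, sold at 87 cents), assuming the standard binary-contract payout of $1.00 per contract at settlement; the function names are illustrative, not from any exchange's API, and the article's $10,000 total is not derivable from these two numbers alone.

```python
# Sketch of binary prediction-market contract economics, assuming the
# standard payout: a "Yes" contract settles at $1.00 (100 cents) if the
# event happens, $0 otherwise, so price in cents ~ implied probability.

def implied_probability(price_cents: int) -> float:
    """Contract price in cents -> market-implied probability of the event."""
    return price_cents / 100

def profit_at_settlement(contracts: int, buy_cents: int) -> float:
    """Dollar profit if the position is held and the event occurs."""
    return contracts * (100 - buy_cents) / 100

def profit_on_cash_out(contracts: int, buy_cents: int, sell_cents: int) -> float:
    """Dollar profit from selling before settlement (the 'cash out')."""
    return contracts * (sell_cents - buy_cents) / 100

# McCoy's Gemini bet: $4,500 at 40 cents buys 11,250 contracts.
contracts = (4500 * 100) // 40
print(contracts)                              # 11250
print(implied_probability(40))                # 0.4 (a 40% chance)
print(profit_on_cash_out(contracts, 40, 87))  # 5287.5 -- selling at 87 cents
print(profit_at_settlement(contracts, 40))    # 6750.0 -- had he held and won
```

The cash-out option is why the price path matters as much as the outcome: a contract bought at 40 cents and sold at 87 cents locks in 47 cents per contract regardless of how the event eventually settles.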
As volume for these AI trades continues to grow, the incentive for good information will only increase, and the squeeze on casual bettors will get tighter, says Robin Hanson, a professor of economics at George Mason University. "When you have better information in these kinds of markets, you can make better decisions," Hanson says. "If you know a little more, you make more money."

For now, the markets still draw a mix of sharks like McCoy and Jain along with casual bettors. "I'm just a dude that likes tech and has some time and money," says James Cole, a 35-year-old who recently shut down a company he founded. "I'm speculating with mostly instinct and 10 minutes of research." He says he's up for the year… so far.

Write to Ben Raab at


News18
US Big Tech Firms Accused Of Bending H-1B Rules With Newspaper Job Listings: Report
Companies that hire H-1B foreign workers and want to sponsor them for a green card must advertise those jobs to American workers. But a media report said that the job ads placed in local newspapers are aimed at immigrants instead.

In recent months, big tech companies such as OpenAI and Instacart have put job advertisements in the San Francisco Chronicle, asking applicants to send their resumes to the immigration or "global mobility" departments, Newsweek reported.

A website that looks for H-1B job postings to share with Americans told Newsweek that most Americans don't even realise big tech firms are regularly discriminating against them. "At a time when unemployment for college graduates is shifting sharply upward, it is important to call out hiring discrimination that could keep Americans unemployed," they said.

The website also said that recruitment for these roles is done separately from the firms' standard recruitment process. "These unusual application methods are likely to drive fewer applications than normal processes like posting ads on the company job board or on mainstream career sites like LinkedIn," they said.

Instacart advertised several such jobs and likewise asked applicants to apply through a similar department. Udemy, the online learning platform, posted a job for a Director of Marketing Analytics and Data Science, telling applicants to send their resumes to an "immigration@" address, according to the Newsweek report.

The H-1B program lets US companies, especially tech firms, hire skilled foreign workers for specialised jobs. A large number of H-1B visas go to Indian nationals each year, and the programme is an important way for many students at US universities to move into full-time jobs.


Time of India
H-1B visa hiring gap revealed: Why top US tech giants favour immigrants over citizens
For many US students, a degree in STEM, whether in computer science, artificial intelligence, or data analytics, is considered a golden ticket to the top of the tech world. Universities spend millions on cutting-edge labs, top faculty, and internship programs to ensure graduates are "job-ready." But recent revelations about H-1B visa hiring practices suggest that even well-prepared American graduates are often missing out on key opportunities at leading tech firms.

Hidden recruitment channels give immigrants an edge

Tech companies sponsoring foreign workers for green cards are legally required to post positions publicly. However, reports indicate that some firms structure postings in ways that effectively funnel applications to H-1B visa holders. Jobs may be routed to "global mobility" or "immigration" departments, bypassing standard recruitment channels like LinkedIn, Indeed, or company career pages. For example, a director-level posting at Udemy reportedly asked candidates to apply via an "immigration@" address, while similar roles at OpenAI and Instacart have followed comparable practices.

While technically compliant with regulatory requirements, this approach reduces the visibility of openings for US-born applicants. Many American students may never even be aware that these roles exist, let alone have a fair chance to compete.

The education-opportunity mismatch

This creates a growing disconnect between what students are trained for and the jobs they can realistically access. American universities invest heavily in STEM education, equipping students with skills that are supposed to prepare them for the most advanced and competitive roles in tech. Yet the practical reality shows a different story: some of the most coveted positions are effectively "hidden" behind visa-specific application channels. Students may graduate with degrees in AI, cloud computing, or data science, only to find that roles requiring these very skills are disproportionately directed toward international applicants. This mismatch undermines the promise of STEM education and raises questions about how well universities are preparing graduates for the real job market.

International students are often better positioned

Ironically, international students sometimes have a natural advantage. They are more likely to understand the H-1B visa process, global mobility offices, and immigration-driven recruitment pathways. Many arrive on campus already aware of these channels, or they learn about them through peer networks and advisors. As a result, international graduates, even those who studied in the US, can navigate hidden recruitment paths more successfully than their American-born peers. The effect is striking: in roles where skill levels may be similar, visa-aware applicants are often the ones securing the positions.

Why transparency and guidance matter

These hiring practices have broad implications. Universities, career advisors, and policymakers need to confront the reality that top STEM jobs may not always be accessible in the ways students expect. Greater transparency from employers, better guidance from universities, and awareness of H-1B recruitment channels can help domestic talent compete fairly. It also raises questions about fairness and equity in the tech sector. Should access to high-level roles depend on knowledge of visa processes rather than purely on skill and merit? How can universities ensure that students' years of preparation translate into real opportunities?

Navigating the system: what students can do

Despite these challenges, the tech sector remains full of opportunities. Awareness is key. Students should:

- Learn about H-1B visa pathways and global mobility offices.
- Research which companies favour certain application channels and adjust their job searches accordingly.
- Seek guidance from career offices about hidden or specialised application routes.
- Network with alumni and peers who have successfully navigated visa-sponsored roles.

Understanding these unspoken rules can make the difference between landing a top tech job and missing out, even with a strong academic record.