
Stop Using ChatGPT for These 11 Things Immediately
So I'm a fan -- but I also know its limitations. You should, too, whether you're on a roll with it or just getting ready to take the plunge. It's fun for trying out new recipes, learning a foreign language or planning a vacation, and it's getting high marks for writing software code. Still, you don't want to give ChatGPT carte blanche in everything you do. It's not good at everything. In fact, it can be downright sketchy at a lot of things.
It sometimes hallucinates information that it passes off as fact, it may not always have up-to-date information, and it's incredibly confident, even when it's straight up wrong. (The same can be said about other generative AI tools, too, of course.)
That matters the higher the stakes get, like when taxes, medical bills, court dates or bank balances enter the chat.
If you're unsure about when turning to ChatGPT might be risky, here are 11 scenarios where you should think seriously about putting down the AI and choosing another option. Don't use ChatGPT for any of the following.
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
1. Diagnosing your aches, pains and other health issues
I've definitely fed ChatGPT my symptoms out of curiosity, but the answers that come back can read like your worst nightmare. As you pore over potential diagnoses, you could swing from dehydration and the flu to cancer. I have a lump on my chest, and when I entered that information into ChatGPT, lo and behold, it told me I might have cancer. Awesome! In fact, I have a lipoma, a noncancerous growth that occurs in roughly 1 in every 1,000 people -- as my licensed doctor told me.
I'm not saying there are no good uses of ChatGPT for health: It can help you draft questions for your next appointment, translate medical jargon and organize a symptom timeline so you walk in better prepared. That could help make doctor visits less overwhelming.
However, AI can't order labs or examine you, and it definitely doesn't carry malpractice insurance. Know its limits.
2. Handling your mental health
ChatGPT can offer grounding techniques, sure, but it can't pick up the phone when you're in real trouble with your mental health. I know some people use ChatGPT as a substitute therapist -- CNET's Corin Cesaric found it mildly helpful for working through grief, as long as she kept its limits front of mind. But as someone who has a very real, very human therapist, I can tell you that ChatGPT is still really only a pale imitation at best, and incredibly risky at worst.
It doesn't have lived experience, can't read your body language or tone and has zero capacity for genuine empathy -- it can only simulate it. A licensed therapist operates under legal mandates and professional codes that protect you from harm. ChatGPT doesn't. Its advice can misfire, overlook red flags or unintentionally reinforce biases baked into its training data. Leave the deeper work, the hard, messy, human work, to an actual human who's trained to handle it.
If you or someone you love is in crisis, please dial 988 in the US, or your local hotline.
3. Making immediate safety decisions
If your carbon-monoxide alarm starts chirping, please don't open ChatGPT and ask it if you're in real danger. Go outside first and ask questions later. Large language models can't smell gas, detect smoke or dispatch an emergency crew, and in a fast-moving crisis, every second you spend typing is a second you're not evacuating or dialing 911. ChatGPT can only work with the scraps of info you feed it, and in an emergency, that may be too little, too late. So treat your chatbot as a post-incident explainer, never a first responder.
4. Getting personalized financial or tax planning
ChatGPT can explain what an ETF is, but it doesn't know your debt-to-income ratio, state tax bracket, filing status, deductions, long-term goals or appetite for risk. Because its training data may stop short of the current tax year and the latest rate changes, its guidance may well be stale the moment you hit enter.
I have friends who dump their 1099 totals into ChatGPT for a DIY return. The chatbot can't replace a CPA who'll catch a hidden deduction worth a few hundred dollars or flag a mistake that could cost you thousands. When real money, filing deadlines, and IRS penalties are on the line, call a professional, not AI.
Also, be aware that anything you share with an AI chatbot will probably become part of its training data, and that includes your income, your Social Security number and your bank routing information.
5. Dealing with confidential or regulated data
As a tech journalist, I see embargoes land in my inbox every day, but I've never considered tossing any of those press releases into ChatGPT to get a summary or further explanation. That's because if I did, the text would leave my control and land on a third-party server, outside the guardrails of my nondisclosure agreement.
The same risk applies to client contracts, medical charts or anything covered by the California Consumer Privacy Act, HIPAA, the GDPR or plain old trade-secret law. It also applies to your income taxes, birth certificate, driver's license and passport. Once sensitive information is in the prompt window, you can't guarantee where it's stored, who can review it internally or whether it might be used to train future models. ChatGPT also isn't immune to hackers and security threats. If you wouldn't paste it into a public Slack channel, don't paste it into ChatGPT.
6. Doing anything illegal
This is self-explanatory.
7. Cheating on schoolwork
I'd be lying if I said I never cheated on my exams. In high school, I used my first-generation iPod Touch to sneak a peek at a few cumbersome equations I had difficulty memorizing in AP calculus, a stunt I'm not particularly proud of. But with AI, the scale of modern cheating makes that look remarkably tame.
Turnitin and similar detectors are getting better at spotting AI-generated prose every semester, and professors can already hear "ChatGPT voice" a mile away (thanks for ruining my beloved em dash). Suspension, expulsion and getting your license revoked are real risks. It's best to use ChatGPT as a study buddy, not a ghostwriter. You're also just cheating yourself out of an education if you have ChatGPT do the work for you.
8. Monitoring up-to-date information and breaking news
Since OpenAI rolled out ChatGPT Search in late 2024 (and opened it to everyone in February 2025), the chatbot can fetch fresh web pages, stock quotes, gas prices, sports scores and other real-time numbers the moment you ask, complete with clickable citations so you can verify the source.
However, it won't stream continual updates on its own. Every refresh needs a new prompt, so when speed is critical, live data feeds, official press releases, news sites, push alerts and streaming coverage are still your best bet.
9. Gambling
I've actually had luck using ChatGPT to hit a three-way parlay during the NCAA men's basketball championship, but I'd never recommend it to anyone. I've seen ChatGPT hallucinate player statistics, misreport injuries and get win-loss records wrong. I only cashed out because I double-checked every claim against real-time odds, and even then I got lucky.
ChatGPT can't see tomorrow's box score, so don't rely on it solely to get you that win.
10. Drafting a will or other legally binding contract
As I've mentioned several times now, ChatGPT is great for breaking down basic concepts. If you want to know more about a revocable living trust, ask away, but the moment you ask it to draft actual legal text, you're rolling the dice.
Estate and family-law rules vary by state, and sometimes even by county, so skipping a required witness signature or omitting the notarization clause can get your whole document tossed. Let ChatGPT help you build a checklist of questions for your lawyer, and then pay that lawyer to turn that checklist into a document that stands up in court.
11. Making art
This isn't an objective truth, just my own opinion, but I don't believe AI should be used to create art. I'm not anti-artificial intelligence by any means. I use ChatGPT for brainstorming new ideas and for help with my headlines, but that's supplementation, not substitution. By all means, use ChatGPT, but please don't use it to make art that you then pass off as your own. It's kind of gross.
