
If You're Using ChatGPT for Any of These 11 Things, Stop Immediately
I use ChatGPT every day. I've written extensively about the AI chatbot, including how to create good prompts, why you should be using ChatGPT's voice mode more often and how I almost won my NCAA bracket thanks to ChatGPT.
So I'm a fan -- but I also know its limitations. You should, too, whether you're on a roll with it or just getting ready to take the plunge. It's fun for trying out new recipes, learning a foreign language or planning a vacation, and it's getting high marks for writing software code. Still, you don't want to give ChatGPT carte blanche in everything you do. It's not good at everything. In fact, it can be downright sketchy at a lot of things.
It sometimes hallucinates information that it passes off as fact, it may not always have up-to-date information, and it's incredibly confident, even when it's straight-up wrong. (The same can be said of other generative AI tools, of course.)
That matters the higher the stakes get, like when taxes, medical bills, court dates or bank balances enter the chat.
If you're unsure about when turning to ChatGPT might be risky, here are 11 scenarios where you should think seriously about putting down the AI and choosing another option. Don't use ChatGPT for any of the following.
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
1. Diagnosing your aches, pains and other health issues
I've definitely fed ChatGPT my symptoms out of curiosity, but the answers that come back can read like your worst nightmare. As you pore over potential diagnoses, you could swing from dehydration and the flu to cancer. I have a lump on my chest, and when I entered that information into ChatGPT, lo and behold, it told me I might have cancer. Awesome! In fact, I have a lipoma, which is not cancerous and occurs in about 1 in every 1,000 people, as my licensed doctor confirmed.
I'm not saying there are no good uses of ChatGPT for health: It can help you draft questions for your next appointment, translate medical jargon and organize a symptom timeline so you walk in better prepared. That could help make doctor visits less overwhelming.
However, AI can't order labs or examine you, and it definitely doesn't carry malpractice insurance. Know its limits.
2. Handling your mental health
ChatGPT can offer grounding techniques, sure, but it can't pick up the phone when you're in real trouble with your mental health. I know some people use ChatGPT as a substitute therapist -- CNET's Corin Cesaric found it mildly helpful for working through grief, as long as she kept its limits front of mind. But as someone who has a very real, very human therapist, I can tell you that ChatGPT is a pale imitation at best and incredibly risky at worst.
It doesn't have lived experience, can't read your body language or tone and has zero capacity for genuine empathy -- it can only simulate it. A licensed therapist operates under legal mandates and professional codes that protect you from harm. ChatGPT doesn't. Its advice can misfire, overlook red flags or unintentionally reinforce biases baked into its training data. Leave the deeper work, the hard, messy, human work, to an actual human who's trained to handle it.
If you or someone you love is in crisis, please dial 988 in the US or call your local hotline.
3. Making immediate safety decisions
If your carbon-monoxide alarm starts chirping, please don't open ChatGPT and ask it if you're in real danger. I'd go outside first and ask questions later. Large language models can't smell gas, detect smoke or dispatch an emergency crew, and in a fast-moving crisis, every second you spend typing is a second you're not evacuating or dialing 911. ChatGPT can only work with the scraps of information you feed it, and in an emergency, that may be too little, too late. So treat your chatbot as a post-incident explainer, never a first responder.
4. Getting personalized financial or tax planning
ChatGPT can explain what an ETF is, but it doesn't know your debt-to-income ratio, state tax bracket, filing status, deductions, long-term goals or appetite for risk. Because its training data may stop short of the current tax year and the latest rate changes, its guidance may well be stale by the time you hit enter.
I have friends who dump their 1099 totals into ChatGPT for a DIY return. The chatbot can't replace a CPA who'll catch a hidden deduction worth a few hundred dollars or flag a mistake that could cost you thousands. When real money, filing deadlines and IRS penalties are on the line, call a professional, not AI.
Also, be aware that anything you share with an AI chatbot may become part of its training data, and that includes your income, your Social Security number and your bank routing information.
5. Dealing with confidential or regulated data
As a tech journalist, I see embargoes land in my inbox every day, but I've never thought about tossing any of those press releases into ChatGPT to get a summary or further explanation. That's because if I did, that text would leave my control and land on a third-party server outside the guardrails of my nondisclosure agreement.
The same risk applies to client contracts, medical charts or anything covered by the California Consumer Privacy Act, HIPAA, the GDPR or plain old trade-secret law. It also applies to your income taxes, birth certificate, driver's license and passport. Once sensitive information is in the prompt window, you can't guarantee where it's stored, who can review it internally or whether it might be used to train future models. ChatGPT also isn't immune to hackers and security threats. If you wouldn't paste it into a public Slack channel, don't paste it into ChatGPT.
6. Doing anything illegal
This is self-explanatory.
7. Cheating on schoolwork
I'd be lying if I said I never cheated on my exams. In high school, I used my first-generation iPod Touch to sneak a peek at a few cumbersome equations I had difficulty memorizing in AP calculus, a stunt I'm not particularly proud of. But with AI, the scale of modern cheating makes that look remarkably tame.
Turnitin and similar detectors are getting better at spotting AI-generated prose every semester, and professors can already hear "ChatGPT voice" a mile away (thanks for ruining my beloved em dash). Suspension, expulsion and getting your license revoked are real risks. It's best to use ChatGPT as a study buddy, not a ghostwriter. You're also just cheating yourself out of an education if you have ChatGPT do the work for you.
8. Monitoring up-to-date information and breaking news
Since OpenAI rolled out ChatGPT Search in late 2024 (and opened it to everyone in February 2025), the chatbot can fetch fresh web pages, stock quotes, gas prices, sports scores and other real-time numbers the moment you ask, complete with clickable citations so you can verify the source.
However, it won't stream continual updates on its own. Every refresh needs a new prompt, so when speed is critical, live data feeds, official press releases, news sites, push alerts and streaming coverage are still your best bet.
9. Gambling
I've actually had luck with ChatGPT, hitting a three-way parlay during the NCAA men's basketball championship, but I'd never recommend it to anyone. I've seen ChatGPT hallucinate, serving up incorrect player statistics, injury reports and win-loss records. I only cashed out because I double-checked every claim against real-time odds, and even then I got lucky.
ChatGPT can't see tomorrow's box score, so don't rely on it solely to get you that win.
10. Drafting a will or other legally binding contract
As I've mentioned several times now, ChatGPT is great for breaking down basic concepts. If you want to know more about a revocable living trust, ask away, but the moment you ask it to draft actual legal text, you're rolling the dice.
Estate and family-law rules vary by state, and sometimes even by county, so skipping a required witness signature or omitting the notarization clause can get your whole document tossed. Let ChatGPT help you build a checklist of questions for your lawyer, and then pay that lawyer to turn that checklist into a document that stands up in court.
11. Making art
This isn't an objective truth, just my own opinion, but I don't believe AI should be used to create art. I'm not anti-artificial intelligence by any means. I use ChatGPT to brainstorm new ideas and to help with my headlines, but that's supplementation, not substitution. By all means, use ChatGPT, but please don't use it to make art that you then pass off as your own. It's kind of gross.
