Latest news with #limitations


CNET
21 hours ago
- Health
Stop Using ChatGPT for These 11 Things Immediately
There are a lot of good reasons to use ChatGPT. I've written extensively about the AI chatbot, including how to create good prompts, why you should be using ChatGPT's voice mode more often and how I almost won my NCAA bracket thanks to ChatGPT. So I'm a fan -- but I also know its limitations. You should, too, whether you're on a roll with it or just getting ready to take the plunge. It's fun for trying out new recipes, learning a foreign language or planning a vacation, and it's getting high marks for writing software code.

Still, you don't want to give ChatGPT carte blanche in everything you do. It's not good at everything. In fact, it can be downright sketchy at a lot of things. It sometimes hallucinates information that it passes off as fact, it may not always have up-to-date information, and it's incredibly confident, even when it's straight-up wrong. (The same can be said about other generative AI tools, too, of course.) That matters the higher the stakes get, like when taxes, medical bills, court dates or bank balances enter the chat.

If you're unsure about when turning to ChatGPT might be risky, here are 11 scenarios where you should think seriously about putting down the AI and choosing another option. Don't use ChatGPT for any of the following.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

1. Diagnosing your aches, pains and other health issues

I've definitely fed ChatGPT my symptoms out of curiosity, but the answers that come back can read like your worst nightmare. As you pore through potential diagnoses, you could swing from dehydration and the flu to cancer. I have a lump on my chest, and I entered that information into ChatGPT. Lo and behold, it told me I might have cancer. Awesome! In fact, I have a lipoma, which is not cancerous and occurs in 1 in every 1,000 people -- which my licensed doctor told me.

I'm not saying there are no good uses of ChatGPT for health: It can help you draft questions for your next appointment, translate medical jargon and organize a symptom timeline so you walk in better prepared. That could help make doctor visits less overwhelming. However, AI can't order labs or examine you, and it definitely doesn't carry malpractice insurance. Know its limits.

2. Handling your mental health

ChatGPT can offer grounding techniques, sure, but it can't pick up the phone when you're in real trouble with your mental health. I know some people use ChatGPT as a substitute therapist -- CNET's Corin Cesaric found it mildly helpful for working through grief, as long as she kept its limits front of mind. But as someone who has a very real, very human therapist, I can tell you that ChatGPT is still only a pale imitation at best, and incredibly risky at worst.

It doesn't have lived experience, can't read your body language or tone and has zero capacity for genuine empathy -- it can only simulate it. A licensed therapist operates under legal mandates and professional codes that protect you from harm. ChatGPT doesn't. Its advice can misfire, overlook red flags or unintentionally reinforce biases baked into its training data. Leave the deeper work, the hard, messy, human work, to an actual human who's trained to handle it. If you or someone you love is in crisis, please dial 988 in the US, or your local hotline.
3. Making immediate safety decisions

If your carbon-monoxide alarm starts chirping, please don't open ChatGPT and ask it if you're in real danger. I'd go outside first and ask questions later. Large language models can't smell gas, detect smoke or dispatch an emergency crew, and in a fast-moving crisis, every second you spend typing is a second you're not evacuating or dialing 911. ChatGPT can only work with the scraps of info you feed it, and in an emergency, that may be too little, too late. So treat your chatbot as a post-incident explainer, never a first responder.

4. Getting personalized financial or tax planning

ChatGPT can explain what an ETF is, but it doesn't know your debt-to-income ratio, state tax bracket, filing status, deductions, long-term goals or appetite for risk. Because its training data may stop short of the current tax year, and of the latest rate hikes, its guidance may well be stale when you hit enter.

I have friends who dump their 1099 totals into ChatGPT for a DIY return. The chatbot can't replace a CPA who'll catch a hidden deduction worth a few hundred dollars or flag a mistake that could cost you thousands. When real money, filing deadlines and IRS penalties are on the line, call a professional, not AI. Also, be aware that anything you share with an AI chatbot will probably become part of its training data, and that includes your income, your Social Security number and your bank routing information.

5. Dealing with confidential or regulated data

As a tech journalist, I see embargoes land in my inbox every day, but I've never thought about tossing any of these press releases into ChatGPT to get a summary or further explanation. That's because if I did, that text would leave my control and land on a third-party server outside the guardrails of my nondisclosure agreement.

The same risk applies to client contracts, medical charts or anything covered by the California Consumer Privacy Act, HIPAA, the GDPR or plain old trade-secret law. It also applies to your income taxes, birth certificate, driver's license and passport. Once sensitive information is in the prompt window, you can't guarantee where it's stored, who can review it internally or whether it might be used to train future models. ChatGPT also isn't immune to hackers and security threats. If you wouldn't paste it into a public Slack channel, don't paste it into ChatGPT.

6. Doing anything illegal

This is self-explanatory.

7. Cheating on schoolwork

I'd be lying if I said I never cheated on my exams. In high school, I used my first-generation iPod Touch to sneak a peek at a few cumbersome equations I had difficulty memorizing in AP calculus, a stunt I'm not particularly proud of. But with AI, the scale of modern cheating makes that look remarkably tame.

Turnitin and similar detectors are getting better at spotting AI-generated prose every semester, and professors can already hear "ChatGPT voice" a mile away (thanks for ruining my beloved em dash). Suspension, expulsion and getting your license revoked are real risks. It's best to use ChatGPT as a study buddy, not a ghostwriter. You're also just cheating yourself out of an education if you have ChatGPT do the work for you.

8. Monitoring up-to-date information and breaking news

Since OpenAI rolled out ChatGPT Search in late 2024 (and opened it to everyone in February 2025), the chatbot can fetch fresh web pages, stock quotes, gas prices, sports scores and other real-time numbers the moment you ask, complete with clickable citations so you can verify the source.
However, it won't stream continual updates on its own. Every refresh needs a new prompt, so when speed is critical, live data feeds, official press releases, news sites, push alerts and streaming coverage are still your best bet.

9. Gambling

I've actually had luck with ChatGPT, hitting a three-way parlay during the NCAA men's basketball championship, but I'd never recommend it to anyone. I've seen ChatGPT hallucinate and provide incorrect information on player statistics, injury reports and win-loss records. I only cashed out because I double-checked every claim against real-time odds, and even then I got lucky. ChatGPT can't see tomorrow's box score, so don't rely on it alone to get you that win.

10. Drafting a will or other legally binding contract

As I've mentioned several times now, ChatGPT is great for breaking down basic concepts. If you want to know more about a revocable living trust, ask away. But the moment you ask it to draft actual legal text, you're rolling the dice. Estate and family-law rules vary by state, and sometimes even by county, so skipping a required witness signature or omitting a notarization clause can get your whole document tossed. Let ChatGPT help you build a checklist of questions for your lawyer, then pay that lawyer to turn that checklist into a document that stands up in court.

11. Making art

This isn't an objective truth, just my own opinion, but I don't believe AI should be used to create art. I'm not anti-artificial intelligence by any means. I use ChatGPT for brainstorming new ideas and for help with my headlines, but that's supplementation, not substitution. By all means, use ChatGPT, but please don't use it to make art that you then pass off as your own. It's kind of gross.


Forbes
16-06-2025
- Science
Beyond The Hype: What Apple's AI Warning Means For Business Leaders
A new Apple research paper reveals hidden flaws in today's most advanced AI models, showing they may fail completely when faced with complex tasks.

A groundbreaking Apple research paper has sent shockwaves through the AI community, revealing serious limitations in today's most advanced models, flaws that have gone undetected until now. The paper, "The Illusion of Thinking," shows that the "chain-of-thought" reasoning applied by advanced models like GPT-4, DeepSeek and Claude Sonnet suffers from "complete accuracy collapse" when tasks become too complex. And the most worrying aspect seems to be that once tasks are complicated enough, throwing more processing power, tokens or data at them does little to help. This has obvious implications for big-picture ideas that we've become accustomed to hearing, such as AI solving huge challenges like climate change, energy shortages or global poverty.

Large Reasoning Models, or LRMs, are the problem-solving engines powering agentic AI. Some consider them to be a step on the path towards artificial general intelligence, AI that can apply its learning to any task, just like humans can. Huge amounts of investment have been made in developing them, as they are considered the most advanced and useful AI models available today. But does this mean billions of dollars' worth of investment have been poured into what is essentially a technological dead end? I don't think so. But I do believe there are important lessons to be learned for businesses and organizations looking to unlock the true potential of AI, so let's take a closer look.

The headline premise of the report is that AI "thinking" may just be an illusion rather than a true, functioning mirror of the objective reasoning humans use to solve problems in the real world. This is supported by findings of "accuracy collapse," which show that while LRMs excel at managing low-complexity tasks, as complexity increases, they eventually reach a point where they fail completely. Perhaps most unexpectedly, the models appear to throw in the towel, using fewer tokens and putting in less effort once the task becomes too complex. And even if they are explicitly told how to solve the problem, they will often fail to do so, casting doubt on our ability to train them to move past this behavior.

These are important findings because, in business AI, the belief has often been that bigger is better, meaning bigger data, bigger algorithms and more tokens. Apple's findings suggest that beyond a certain point, these benefits of scale dissipate and eventually break down. The implication is that usefulness also diminishes when AI is asked to perform tasks that are too complex, such as formulating broad, high-level strategies in chaotic real-world scenarios or handling complex legal reasoning.

Rather than an insurmountable obstacle, I see this as a signpost that generative language AI shouldn't be treated as a magic bullet to solve all problems. For me, there are three key lessons here.

Firstly, focusing the attention of AI on structured, low-to-mid-complexity tasks is more likely to hit the sweet spot. For example, a law firm shouldn't expect it to simply produce a winning case strategy. The problem is too complex and open-ended and will inevitably lead to generic, useless output once the model reaches a point where it can no longer reason effectively. The firm can, however, use it to extract relevant points from contracts, create summaries of relevant prior case law and flag risks.
Secondly, it emphasizes the importance of the human-in-the-loop, the vital element of human oversight that's needed to ensure AI is used responsibly and accountably. Thirdly, when "accuracy collapse" is a danger, learning to recognize the signs, such as a drop in token use as the model gives up its attempts at reasoning, is critical to mitigating its impact. Playing to the strengths of AI while cushioning against the impact of its weaknesses is the name of the game.

In my opinion, Apple's research doesn't herald a "dead end" or end-of-the-road scenario for AI. Instead, it should be used by businesses to help them focus on areas where they are likely to succeed and to understand where they should build resilience against AI failure. Understanding the limitations of AI shouldn't stop us from benefiting from it, but it helps us avoid situations where serious harm or damage could be caused by reasoning collapse, or just wasted time and money.

Agentic AI has the potential to help in this regard, with its ability to deploy various tools to bridge the gaps in situations where reasoning alone is insufficient. Similarly, the concept of explainable AI is important because designing systems to be transparent means that when a collapse does occur, we will have a better understanding of what went wrong.

Certainly, no one should expect AI to always work perfectly and produce the best solution to every possible problem. However, the more we understand it, the more we can leverage its strengths and the more likely we are to create genuine value.
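
The third lesson, watching for a drop in token use as an early warning sign, lends itself to a simple monitoring heuristic. The sketch below is an illustrative assumption rather than anything taken from Apple's paper or the article: it presumes you already log, for each task, a rough complexity score, the model's output-token count and whether the result passed review, and it flags cases where harder tasks receive noticeably less effort and produce worse outcomes.

```python
# Minimal sketch (hypothetical, not from the Apple paper): flag a possible
# "accuracy collapse" by checking whether token usage shrinks and accuracy
# falls as task complexity grows. Record format and thresholds are assumptions.

from dataclasses import dataclass
from statistics import mean
from typing import List, Optional


@dataclass
class RunRecord:
    complexity: int      # rough difficulty score you assign (e.g. constraint count)
    output_tokens: int   # tokens the model spent on its answer/reasoning
    correct: bool        # did a human or test harness accept the result?


def collapse_warning(runs: List[RunRecord],
                     token_drop_ratio: float = 0.5,
                     min_accuracy: float = 0.5) -> Optional[str]:
    """Return a warning if harder tasks get less effort and worse results."""
    if len(runs) < 4:
        return None  # too little data to draw any conclusion

    runs = sorted(runs, key=lambda r: r.complexity)
    half = len(runs) // 2
    easy, hard = runs[:half], runs[half:]

    easy_tokens = mean(r.output_tokens for r in easy)
    hard_tokens = mean(r.output_tokens for r in hard)
    hard_accuracy = mean(1.0 if r.correct else 0.0 for r in hard)

    # The signature described in the article: effort (tokens) drops on harder
    # tasks while accuracy falls below an acceptable floor.
    if hard_tokens < token_drop_ratio * easy_tokens and hard_accuracy < min_accuracy:
        return (f"Possible accuracy collapse: average tokens fell from {easy_tokens:.0f} "
                f"to {hard_tokens:.0f} while hard-task accuracy is {hard_accuracy:.0%}. "
                "Consider routing these tasks to a human reviewer or a simpler workflow.")
    return None


if __name__ == "__main__":
    sample = [
        RunRecord(complexity=2, output_tokens=900, correct=True),
        RunRecord(complexity=3, output_tokens=1100, correct=True),
        RunRecord(complexity=8, output_tokens=400, correct=False),
        RunRecord(complexity=9, output_tokens=350, correct=False),
    ]
    print(collapse_warning(sample) or "No collapse signature detected.")
```

Splitting the runs into an "easy" and a "hard" half is deliberately crude; in practice you would bin by complexity and track the trend over time, but the signal being watched is the same one the article describes.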