
Most ChatGPT users make this one common mistake — try these 3 tips for better results
When it comes to prompting, many users (casual and expert alike) write far more than is needed and start with the wrong words, over-explaining to an already capable chatbot or adding details that simply do not belong.
Like most users, when I first started using ChatGPT I added extra details and used complete sentences with proper grammar, just to make sure the chatbot understood me. That's a rookie move, because chatbots are generally very good at picking up context. Here's a little-known secret: an overly wordy prompt often causes the chatbot to miss the mark, which is why the short prompt is my go-to and works so well. Less is more when instructing your chatbot, with short, 5-to-7-word "command prompts" working best most of the time.
Wordy prompt: "Can you help me by writing an article about the best budget security cameras under $100, including some models that have night vision and work well for outdoor use?"
Concise prompt: "Compare budget security cameras under $100."
The result? Bloated, generic answers that often miss the point entirely. In my own testing, short prompts consistently give sharper, more focused answers.
Instead of a long-winded request, the best way to start is with a clear action word. Among other power verbs, a few of my favorites are Compare, List, Define, Summarize and Explain.
This tells ChatGPT exactly what you want, which eliminates the guesswork (and potential for error).
Prompt 1: "Can you help me compare the best summer side dishes for a backyard barbecue?"
Prompt 2: "Compare easy side dishes for a backyard barbecue."
You can see that the shorter prompt with a clear action produces better, more focused results. Users have grown accustomed to chatting with their chatbot, which can make their prompts overly verbose.
But for many uses, that is just not necessary.
Rather than stuffing everything into one giant prompt, I build the answer step-by-step.
First prompt: "List 3 popular types of running shoes."
Follow-up: "Now include only brands under $150."
Next: "Summarize their best features."
This makes the conversation flow naturally — and makes it easier to refine on the fly.
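If you use ChatGPT through an API rather than the web interface, the same step-by-step chaining works by carrying the message history forward between turns. Here's a minimal Python sketch of that idea; `fake_model` is a made-up stand-in for illustration, and you'd swap it for a real call to your provider's chat endpoint using the same `messages` list.

```python
# A minimal sketch of prompt chaining: each short follow-up is appended to a
# shared message history, so the model keeps context between steps.

def fake_model(messages):
    # Placeholder for a real chat API call; it just echoes the latest prompt
    # and how many turns of context it received.
    return f"reply to '{messages[-1]['content']}' (context: {len(messages)} messages)"

def chain(prompts, model=fake_model):
    """Send short prompts one at a time, carrying the history forward."""
    history = []
    replies = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = model(history)
        history.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies

steps = [
    "List 3 popular types of running shoes.",
    "Now include only brands under $150.",
    "Summarize their best features.",
]
for reply in chain(steps):
    print(reply)
```

Because the history grows with every exchange, each short follow-up arrives with the full context of the earlier answers, which is exactly what makes refining on the fly possible.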
Avoid the common mistake of stuffing too much information into a prompt with this simple fix.
Instead of this prompt: "Explain quantum computing like I'm 12, using metaphors but not oversimplifying, and include real-world applications."
Try this prompt chain: "Define quantum computing in one sentence." "Now add a metaphor." "Give 3 use cases."
By chaining simple prompts, I now get clearer, more accurate results — and avoid the dreaded 'As an AI, I can't...' replies.
Of course, there are times when long prompts are worth it, especially for creative tasks like storytelling, role-play or writing in a specific voice.
In these cases, the extra detail helps the AI adopt the right tone, perspective or persona.
Since adopting these 3 simple rules, I've saved time on excess prompt rewrites and have gotten higher-quality responses with far fewer frustrations.
There's nothing worse than when the response is incomplete or the AI gives off-topic replies. If you've ever gotten a response that refers to an earlier conversation and has nothing to do with the current one, you know what I'm talking about. This shorter prompt trick will fix that, too.
Long story short: prompt by cutting to the chase. The chatbot will handle your requests better if you start simple and build from there. Then sit back and watch your results improve.
What's the shortest prompt that ever gave you genius results? Share in the comments — I'd love to try it!


