
The one word you should never say to your AI chatbot
Getting the best response from your favorite chatbot, whether it's ChatGPT, Gemini, Claude or one of the hundreds of other bots out there, doesn't always mean using power words or prompts that work like magic. Sometimes, it's about which word you eliminate.

By cutting one simple word, you can dramatically increase your chances of getting better results. After testing hundreds of prompts, I've found that avoiding a single term consistently delivers sharper, more useful responses. That term is 'can': users should avoid starting their prompts with it.
Phrases like 'can you' or 'could you' introduce uncertainty from the start, so the AI interprets your request as a polite question or suggestion rather than a clear instruction.
Here are a few examples of what users commonly type: 'Can you rewrite this to sound more professional?' and 'Can you summarize this report and give me the top 5 bullet points?' The problem is that the chatbot may interpret 'can' literally. Instead of doing the task, it may answer as if you asked whether it is able to, explaining its capabilities before it ever gets to the work.
Drop "can" and command directly.
For example, prompting 'Can you list the current marketing trends?' won't get results as good as 'Create a list of current marketing trends.'
Leading with a strong verb ('Create,' 'Summarize,' 'List') helps the AI understand exactly what you want, resulting in faster, more accurate answers. It also saves you time; the AI won't waste words explaining whether it can do something, it will simply do it. The more direct your prompt, the better the result.
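The same principle applies if you're calling a model through an API instead of a chat window. Here's a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment (the model name is just a placeholder; any chat API works the same way), that sends both versions of the marketing-trends prompt so you can compare the answers side by side.

```python
# Compare a "soft" prompt vs. a direct command through a chat API.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

prompts = [
    "Can you list the current marketing trends?",  # soft opener
    "Create a list of current marketing trends.",  # direct command
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use whichever chat model you have
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(response.choices[0].message.content)
    print("-" * 40)
```

Run it and you'll typically see the direct command jump straight into the list, while the polite version is more likely to spend its opening words acknowledging the request.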
Pro tip: For complex tasks, add context, but always lead with a command. This is a great place for the 3-word-rule. Example: 'Act as a career coach. Draft a step-by-step plan to transition into AI engineering.'
Avoid these "soft" openers that invite uncertainty:
'Try to…' suggests tentativeness and may confuse the chatbot.
'Please…' (at the start) distracts from the command.
'Maybe do…' / 'Could you…' invites refusal and can lead to hallucinations.
You'd be surprised how many people open their prompts this way every day, particularly users who aren't quite sure what chatbots are capable of yet. I'm guilty of it when I try a new chatbot or am in a hurry. But it adds fluff without value.
For better results, use short, direct prompts that tell the chatbot exactly what you want. Some good examples of solid prompts:
Write a headline for this article
Summarize this report in 3 bullet points
Draft an email to my boss about project delays
List 5 ideas for summer side dishes
Explain this concept in simple terms
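If you build prompts in code rather than typing them by hand, you can enforce this habit automatically. Below is a small, hypothetical helper (the names are illustrative, not from any library) that strips the soft openers listed earlier and leaves a direct command:

```python
import re

# Soft openers the article recommends dropping from the start of a prompt.
SOFT_OPENERS = re.compile(
    r"^\s*(can you|could you|please|try to|maybe do)\s+",
    re.IGNORECASE,
)

def harden_prompt(prompt: str) -> str:
    """Strip a soft opener, tidy the punctuation, and return a direct command."""
    command = SOFT_OPENERS.sub("", prompt).strip().rstrip("?.") + "."
    return command[:1].upper() + command[1:]

print(harden_prompt("Can you summarize this report and give me the top 5 bullet points?"))
# Prints: Summarize this report and give me the top 5 bullet points.
```

It's a blunt transformation, but it captures the habit: lead with the verb and state exactly what you want.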
Skip soft openers and get straight to the action. The more direct your prompt, the clearer and more useful your AI results will be.
When you treat your chatbot like a skilled, capable assistant (not a hesitant intern on their first day), you'll get confident, precise outputs that accomplish exactly what you hoped, or at least come far closer to it. Start your prompts with verbs, ditch 'can' and similar polite fillers, and you'll see better results.

Related Articles


Newsweek
Higher Ed's AI Panic Is Missing the Point
Senate Republicans are pushing a provision in a major tax-and-spending bill that would bar states from regulating AI for 10 years to avoid conflicting local rules. Colleges and universities are facing a parallel crisis: uncertainty over whether and how students should be allowed to use generative AI. In the absence of clear institutional guidance, individual instructors are left to make their own calls, leading to inconsistent expectations and student confusion.

This policy vacuum has triggered a reactive response across higher education. Institutions are rolling out detection software, cracking down on AI use in syllabi, and encouraging faculty to read student work like forensic linguists. But the reality is that we cannot reliably detect AI writing. And if we're being honest, we never could detect effort, authorship, or intent with any precision in the first place.

That's why I've stopped trying. In my classroom, it doesn't matter whether a student used ChatGPT, the campus library, or help from a roommate. My policy is simple: You, the author, are responsible for everything you submit.

Google Gemini, OpenAI ChatGPT, and Microsoft Copilot app icons are displayed on a screen. Getty Images

That's not the same as insisting on authorial originality, some imagined notion that students should produce prose entirely on their own, in a vacuum, untouched by outside influence. Instead, I teach authorial responsibility. You are responsible for ensuring that your work isn't plagiarized, for knowing what your sources are, and for the quality, accuracy, and ethics of the writing you turn in, no matter what tools you used to produce it.

This distinction is more important than ever in a world where large language models are readily accessible. We conflate linguistic polish with effort, or prose fluency with moral character. But as Adam Grant argued last year in The New York Times, we cannot grade effort; we can only grade outcome. This has always been true, but AI has made it undeniable.

Instructors might believe they can tell when a student has put in "genuine effort," but those assumptions are often shaped by bias. Does a clean, structured paragraph indicate hard work? Or just access to better training, tutoring, or now, machine assistance? Does a clumsy but heartfelt draft reflect authenticity? Or limited exposure to academic writing? Our ability to detect effort has always been flawed. Now, it's virtually meaningless.

That's why it doesn't matter if students use AI. What matters is whether they can demonstrate understanding, communicate effectively, and meet the goals of the assignment. If your grading depends on proving whether a sentence came from a chatbot or a person, then you don't know what the target learning outcome was in the first place. And if our assessments are built on presumed authorship, they're no longer evaluating learning. They're evaluating identity.

There are already cracks in the AI-detection fantasy. Tools like GPTZero and Turnitin's AI checker routinely wrongly accuse multilingual students, disabled students, and those who write in non-standard dialects. In these systems, the less a student "sounds like a college student," the more likely they are to be accused of cheating. Meanwhile, many students, especially those who are first-generation, disabled, or from under-resourced schools, use AI tools to fill in gaps that the institution itself has failed to address. What looks like dishonesty is often an attempt to catch up.

Insisting on originality as a condition of academic integrity also ignores how students actually write. The myth of the lone writer drafting in isolation has always been a fiction. Students draw from templates, search engines, notes from peers, and yes, now from generative AI. If we treat all of these as violations, we risk criminalizing the ordinary practices of learning.

This requires a shift in mindset that embraces writing as a process rather than a product. It means designing assignments that can withstand AI involvement by asking students to revise, explain, synthesize, and critique. Whether a sentence was AI-generated matters far less than whether the student can engage with what it says, revise it, and place it in context. We should be teaching students how to write with AI, not how to hide from it.

I'm not arguing for a free-for-all. I'm arguing for transparency, accountability, and educational clarity. In my courses, I don't treat AI use as taboo technology. I treat it as a new literacy. Students learn to engage critically with AI by revising in response to its suggestions, critiquing its assumptions, and making conscious choices about what to accept and what to reject. In other words, they take responsibility.

We cannot force students to write "original" prose without any external help. But we can teach them to be responsible authors who understand the tools they use and the ideas they put into the world. That, to me, is a far more honest and useful version of academic integrity.

Annie K. Lamar is an assistant professor of computational classics and, by courtesy, of linguistics at the University of California, Santa Barbara. She specializes in low-resource computational linguistics and machine learning. At UC Santa Barbara, she is the director of the Low-Resource Language (LOREL) Lab. Lamar holds a PhD in classics from Stanford University and an MA in education from the Stanford Graduate School of Education. Lamar is also a Public Voices fellow of The OpEd Project.

The views expressed in this article are the writer's own.


Android Authority
How much do you trust Google with the data you provide it?
🗣️ This is an open thread. We want to hear from you! Share your thoughts in the comments and vote in the poll below — your take might be featured in a future roundup.

Google is a technology heavyweight — that's just a fact. Despite this, it still needs our data for many of its services to function effectively. Whether we're offering up our Gemini prompts, Google Photos, or Maps reviews, we're contributing to the company's growth little by little and the steady improvements of the features it provides. But are we silly to trust it with the amount of data we give it?

A recent article by my colleague Adamya Sharma triggered this thought. A recently issued and vaguely worded email explained that a forthcoming July update to Gemini will 'help you use Phone, Messages, WhatsApp, and Utilities on your phone, whether your Gemini Apps Activity is on or off.' That's worrying, isn't it? Well, it turns out it was just a poorly drafted email (perhaps Google should've used Gemini to write it). In a statement, Google clarified that there's nothing to worry about, and that 'users can turn off Gemini's connection to apps at any time' through Gemini's apps portal. Users still have control over what data Gemini has access to.

Still, given the noise surrounding this email, it's clear that users are in two minds about Google's (and its industry contemporaries') data privacy practices. With the importance of user data growing each day, I want to know where you stand. Here are the questions:

Why don't you (or why do you) trust Google with your data?
Are you comfortable giving Gemini and other AI tools more access to your data?
Would you trade your data for more, freely available, or better-integrated AI features?
How frequently do you review your Google account's privacy and security settings?
Do you read data and privacy-related terms and conditions before signing up for a service or enabling a feature?

Be sure to vote in the poll below, too! Do you trust Google with your data?

👇Sound off in the comments with your views about Google and your data.


Android Authority
This hidden Gemini trick has completely changed how I use Google Calendar
Keeping up with Gemini's ever-growing list of capabilities is an impossible task. Between video generation tools like Veo 3, new features for Gemini Live, AI-powered web browsers, and more, Google's doing so much with Gemini and so quickly that knowing what Gemini can and can't do isn't as easy as it should be. Not only does this make Gemini feel overwhelming at times, but it can also cause many features to be easily overlooked, especially the less flashy ones. For example, having Gemini look at my phone screen to add events to my calendar.

Is this a new feature? No. Is it as exciting or eye-catching as some of Gemini's other capabilities? Not at all. But it's something I've been using regularly over the last couple of months, and it's quickly become one of my favorite ways to interact with Google's AI assistant.

Adding events to my calendar has never been easier

This feature hinges on Gemini's 'Ask about screen' mode, which enables the AI to view the content of your phone's display and answer questions/perform commands based on what it sees. 'Ask about screen' has been available since last August, though, because of the reasons outlined above, I've hardly used it until recently — either forgetting the capability existed or not knowing what to use it for. 'Ask about screen' is featured prominently each time you summon Gemini, though I've never been sure what to do with it. However, while looking at an order confirmation email for a movie ticket, I had an idea. I prompted Gemini, asked it to look at my screen, and simply typed 'add this to my calendar.' Just like that, Gemini added an entry to my calendar with the movie name, date, starting time, and location.

Now, could I have manually created a calendar entry? Sure. Making a Google Calendar event isn't a complicated task, and it would have taken just a few seconds to do so. But there's no doubt that having Gemini do it for me is easier. I don't have to leave my email app, open Google Calendar, tap the plus icon, tap Event, and then fill out all the event information. Instead, I just hold down my phone's power button, ask Gemini to look at the screen, and tell it to add the movie to my calendar. That's all there is to it.

I've now done this for numerous movie showtimes, plus a couple of upcoming concerts, too. And every single time, it feels a little bit like magic. It's far from the most technically impressive task Gemini is capable of, but it's something that makes my life easier and saves me time. And I love it.

Google, this is what I want from Gemini

While there's not much else to say about this specific feature, it does make me wish that Google would focus on this type of thing more. Google announced a mountain of Gemini features at May's Google I/O keynote, and most of it was utterly depressing. Google spent so much stage time talking about using Gemini to write emails that sound like you, creating AI-generated videos, and even buying clothes for you. Maybe you feel differently, but I'm not sure who these features are helping. Do you really want artificial intelligence to write emails on your behalf to friends/family? Do you think the best use of Gemini is to create AI video slop? Does anyone actually, truthfully trust their AI assistant to buy stuff for them?

I understand why Google spends so much time developing and marketing these types of features. An AI that can generate a video clip with audio from a simple text prompt is, admittedly, a whole lot flashier than having it look at your phone screen to add an event to your calendar. But only one of those features is something I'll ever use in my day-to-day, and it's not the video generation stuff.

It's unlikely, but I wish Google would devote more of its time and resources to smaller, more practical AI features, such as the calendar example. Features that may not be as exciting to talk about during a developer keynote, but ones that perform mundane tasks faster, so I can spend less time on my phone. That's what I want most out of Gemini.