The ‘3-word rule' that makes ChatGPT give expert-level responses

Tom's Guide | 27-05-2025

ChatGPT can do a lot. It can be used to write essays, summarize meetings and even act as your therapist in a pinch.
But if you're still struggling to get personalized responses that go beyond the bland or overly basic, the problem might not be the AI. It could be your prompt.
That's where the '3-word rule' comes in — a super-simple prompt tweak that can take your chatbot from surface-level answers to something that feels custom, thoughtful and expert-level.
No, the three little words are not 'I love you' — but it doesn't hurt to be polite.
The concept is simple: Add a short, three-word directive to your prompt that tells ChatGPT how to respond. These three words can instantly shape the tone, voice and depth of the output. You're not rewriting your whole question. You're just giving the AI a lens through which to answer.
Here are a few examples that work surprisingly well:
'Like a lawyer' — for structured, detailed and logical responses
'Be a teacher' — for simplified, clear and educational explanations
'Act like Hemingway' — for punchy, minimalist writing with impact
It's kind of like casting the AI in a role, and then you're directing the performance with the specifics in your prompts.
ChatGPT has been trained on a massive variety of writing styles and professional voices. When you give it a short cue — something like 'like a professor' — it taps into that persona and shifts its tone accordingly.
That one tweak can turn a vague summary into something polished, insightful and audience-appropriate.
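If you use ChatGPT through the API rather than the chat window, the same cue works as a one-line system instruction. Below is a minimal sketch, assuming the official OpenAI Python SDK; the model name, role cue and question are illustrative placeholders rather than anything prescribed here.

```python
# Minimal sketch: prepending a three-word role cue to an API call.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment; the model, cue and question are placeholders.
from openai import OpenAI

client = OpenAI()

role_cue = "Like a lawyer"  # the three-word directive
question = "Summarize the risks of signing a contract with a non-compete clause."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; use whichever chat model you have access to
    messages=[
        # The cue becomes a short system instruction that sets the lens for the answer.
        {"role": "system", "content": f"Respond {role_cue.lower()}: structured, detailed and logical."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

Changing only the role_cue string (to 'Be a teacher' or 'Act like Hemingway', for example) is usually enough to shift the tone and depth of the reply.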
The 3-word rule is a lightweight form of prompt engineering that makes a big difference; no jargon or prompt-hacking required.
Need a few starter ideas? Role cues like the ones above work across writing, research, translation and even creative tasks.
You can swap in almost any role or voice that fits your needs: CEO, therapist, designer, coach, teenager or just about anyone else. The more specific, the better.
Each tweak helps ChatGPT interpret your request through a more useful lens, and the difference in quality is often immediate.
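To see how much the lens matters, you can run one question through several cues and compare the replies side by side. Again, this is only a sketch under the same assumptions as above (OpenAI Python SDK, placeholder model name and question).

```python
# Compare how the same question reads through different three-word lenses.
from openai import OpenAI

client = OpenAI()

question = "Explain why my website loads slowly on mobile."
role_cues = ["Like a lawyer", "Be a teacher", "Act like Hemingway"]

for cue in role_cues:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": f"Respond in this style: {cue}."},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {cue} ---")
    # Print only the start of each answer to keep the comparison readable.
    print(reply.choices[0].message.content[:400])
    print()
```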
The 3-word rule isn't magic, but it's close. It's one of the easiest ways to upgrade your prompts and get smarter, more human-like responses from AI.
Whether you're writing emails, building lesson plans or brainstorming your next novel, three extra words could make all the difference.


Related Articles

OpenAI and Anthropic are getting cozy with government. What could possibly go wrong?

Fast Company | 8 minutes ago

While the world and private enterprise are adopting AI rapidly in their workflows, government isn't far behind. The U.K. government has said early trials of AI-powered productivity tools can shave two weeks of labor off a year's work, and AI companies are adapting to that need. More than 1,700 AI use cases have been recorded in the U.S. government, long before Elon Musk's DOGE entered the equation and accelerated AI adoption throughout the public sector. Federal policies introduced in April on AI adoption and procurement have pushed this trend further. It's unsurprising that big tech companies are rolling out their own specialist models to meet that demand.

Anthropic, the maker of the Claude chatbot, announced last week a series of models tailored for use by government employees. These include features such as the ability to handle classified materials and understand some of the bureaucratic language that plagues official documents. Anthropic has said its models are already deployed by agencies 'at the highest level of U.S. national security, and access to these models is limited to those who operate in such classified environments.' The announcement follows a similar one by OpenAI, the maker of ChatGPT, which released its own government-tailored AI models in January to 'streamline government agencies' access to OpenAI's frontier models.'

But AI experts worry about governments becoming overly reliant on AI models, which can hallucinate information, inherit biases that discriminate against certain groups at scale, or steer policy in misguided directions. They also express concern over governments being locked into specific providers, who may later increase prices that taxpayers would be left to fund.

'I worry about governments using this kind of technology and relying on tech companies, and in particular, tech companies who have proven to be quite untrustworthy,' says Carissa Véliz, who researches AI ethics at the University of Oxford. She points out that the generative AI revolution so far, sparked by the November 2022 release of ChatGPT, has seen governments scrambling to retrofit rules and regulations in areas such as copyright to accommodate tech companies after they've bent those rules. 'It just shows a power relationship there that doesn't look good for government,' says Véliz. 'Government is supposed to be the legislator, the one making the rules and enforcing the rules.'

Beyond those moral concerns, she also worries about the financial stakes involved. 'There's just a sheer dependency on a company that has financial interests, that is based in a different country, in a situation in which geopolitics is getting quite complicated,' says Véliz, explaining why countries outside the United States might hesitate to sign on to use ClaudeGov or ChatGPT Gov. It's the same argument the U.S. uses about overreliance on TikTok, which has Chinese ties, amid fears that figures like Donald Trump could pressure U.S.-based firms to act in politically motivated ways.

OpenAI didn't respond to Fast Company's request for comment. A spokesperson for Anthropic says the company is committed to transparency, citing published work on model risks, a detailed system card, and collaborations with the U.S. and U.K. governments to test AI systems. Some fear that AI companies are securing 'those big DoD bucks,' as programmer Ashe Dryden put it on Mastodon, and could perpetuate that revenue by fostering dependency on their specific models.
The rollout of these models reflects broader shifts in the tech landscape that increasingly tie government, national security and technology together. For example, defense tech firm Anduril recently raised $5 billion in a new funding round that values the company at over $30 billion. Others have argued that the release of these government-specific models by AI companies 'isn't [about] national security. This is narrative laundering,' as one LinkedIn commenter put it. The idea is that these moves echo the norms already set by big government rather than challenging them, potentially reinforcing existing issues. 'I've always been a sceptic of a single supplier for IT services, and this is no exception,' says Andres Guadamuz, an AI researcher at the University of Sussex. Guadamuz believes the development of government-specific AI models is still in its early phase, and urges decisionmakers to pause before signing deals. 'Governments should keep their options open,' he says. 'Particularly with a crowded AI market, large entities such as the government can have a better negotiating position.'

iOS 26 — 3 Things To Know About Apple's Radical New iPhone Update

Forbes | 8 minutes ago

At its Worldwide Developers Conference on June 9, Apple officially unveiled iOS 26, its biggest iPhone software update in over a decade. As well as coming with a new naming system, the iOS 26 upgrade includes a bunch of new features — many of which are powered by its AI, Apple Intelligence — as well as a new design language dubbed Liquid Glass. So far, Apple's iOS 26 and Liquid Glass have been met with a mixed reception, and some users aren't that keen on the look and feel. It's possible Apple will dial back the look of the operating system before launch, but make no mistake, iOS 26 is coming and the updated software will launch this fall, along with Apple's new iPhones. So, when it does arrive, what can you expect for security and privacy — and which devices can run the iPhone software?

Apple has been criticised for being slow to launch AI, but it is ramping up features in iOS 26. For example, building on Apple Intelligence, visual intelligence extends to a user's iPhone screen so they can search and take action on anything they're viewing across apps, according to Apple's press release announcing iOS 26. Users can ask ChatGPT questions about what they're looking at onscreen to learn more, as well as search Google, Etsy, or other supported apps to find similar images and products. Visual intelligence also recognises when a user is looking at an event and suggests adding it to their calendar, prepopulating key details such as date, time and location. Apple Intelligence can also automatically identify and summarise order tracking details from emails sent from merchants and delivery carriers.

Additionally, a new Foundation Models framework opens up access for any app to tap directly into the on-device foundation model at the core of Apple Intelligence, according to Apple. This gives developers 'direct access to powerful intelligence that's fast, built with privacy at its core, and available offline, using AI inference that is free of cost,' Apple said. ChatGPT-related features are available due to Apple's partnership with OpenAI. Apple processes data on the device or, for more complex queries, via its encrypted Private Cloud Compute service. However, if you choose to send data to ChatGPT when using Siri, OpenAI's privacy policy applies.

AI is pretty cool, but keep in mind it requires a lot of data to operate. Like all new apps and products currently, the new iOS 26 update seems to be heavily connected with AI, says Jake Moore, global cybersecurity advisor at ESET. 'But as we are still in such a relatively early stage of the technology, it would be wise to remain cautious of the amount of data you enter into it and hold off from using it if it seems gimmicky,' he advises. Meanwhile, opening AI up to developers 'could mean we are entering a new world of data entry where users don't really know the full extent of what is happening with their information,' Moore warns.

Apple's iOS 26 will also introduce call screening, building on the very useful iOS 18 feature Live Voicemail. In Messages, you can screen texts from unknown senders, too. 'Messages from unknown senders will appear in a dedicated folder where users can then mark the number as known, ask for more information, or delete,' the iPhone maker said. These messages will remain silenced until a user accepts them.
It is 'hugely impressive' that Apple will now screen messages from unknown senders, says Moore. Additionally, it's a step in the right direction that messages from unknown senders will now appear in a dedicated folder, giving you more control, he says.

Apple's iOS 26 software features will be available this fall as a free software update for iPhone 11 and later, according to Apple. This means Apple has dropped support for the iPhone XS, the iPhone XS Max and the iPhone XR. At the same time, Apple Intelligence features are only available on the newer iPhone 16 models, iPhone 15 Pro, iPhone 15 Pro Max, iPad mini (A17 Pro), and iPad and Mac models with M1 and later that have Apple Intelligence enabled.

If your device can't be upgraded, it 'may sadly mean the beginning of the end,' says Moore. 'Although frustrating, security patches are not available on every device forever and therefore each iPhone essentially has a shelf life,' he says. Without ongoing updates, those devices will potentially become a risk if vulnerabilities are present but not patched, Moore says. Meanwhile, many newer apps will not even support older models, as they won't be designed to incorporate them due to the extra amount of work required to keep all apps up to date, he warns.

Apple's iOS 26 also comes with some more subtle privacy and security enhancements. For example, parents can now create or move kids into Child Accounts even more easily, taking advantage of a wide set of parental controls designed to keep kids safe. Meanwhile, browsing in Safari gets more private, with advanced fingerprinting protection extending to all browsing by default, according to Apple.

It's a big update and I'm looking forward to applying iOS 26 on my iPhone. Developers can start tweaking now, and the iOS 26 public beta will be available through the Apple Beta Software Program next month. Apple's iOS 26 is arriving in just a couple of months, and it's possible the iPhone maker will introduce multiple changes before then. Teething issues and bugs will happen, but it's likely Apple will support iOS 18 with important security updates as it launches iOS 26.

ChatGPT user creeped out after AI shares others' info — in yet another eerie episode: ‘Very scary'

New York Post | 8 minutes ago

AI, yi, yi. With artificial intelligence becoming omnipresent in every sector of life, privacy has become a growing concern among users, who wonder where the details they share with the machines are winding up. One woman, who recently used ChatGPT to make a grocery list, was shocked to see the bot get its wires crossed — delivering a message she thinks she wasn't meant to see.

'I'm having a really very scary and concerning moment with ChatGPT right now,' the TikTok user Liz — who goes by @wishmeluckliz — confessed in a viral video detailing the eerie-sounding episode. Liz claimed that 'somebody else's conversation' infiltrated her thread — and that even the trendy tool told her that this is what had transpired, even though skeptics believe it could be a creepy coincidence. The Post has reached out to ChatGPT's parent company OpenAI for comment.

According to the clip, the cybernetic eavesdropping occurred while the content creator was using the AI's voice mode — wherein users can converse with the service — to help facilitate food shopping. However, after rattling off her list of needs, Liz forgot to turn the recorder off and left it running even though she remained silent for a 'long time' afterward, per the clip.

Despite the lack of input, the chatbot replied with a seemingly unrelated message that was so jarring that Liz had to double-check via the transcription to make sure she wasn't imagining it. The message read, per a screenshot: 'Hello, Lindsey and Robert, it seems like you're introducing a presentation or a symposium. Is there something specific you'd like assistance with regarding the content or perhaps help with structuring your talk or slides? Let me know how I can assist.'

Liz found the reply bizarre given that she 'never said anything leading up to this.' After retracing the transcript, she realized that the bot had somehow recorded her saying that she was a woman named Lindsey May, who claimed to be the Vice President of Google, and was giving a symposium with a man named Robert.

Confused, she broached the issue with ChatGPT in voice mode, saying, 'I was just randomly sitting here planning groceries, and you asked if Lindsey and Robert needed help with their symposium. I'm not Lindsey and Robert. Am I getting my wires crossed with another account right now?' The bot replied, 'It seems like I mistakenly mixed up the context from a different conversation or account. You're not Lindsey and Robert and that message was meant for someone else.' 'Thanks for pointing that out and I apologize for the confusion,' it added, seemingly confessing to leaking someone else's private information.

Shaken by the apparent admission, Liz said that she hoped she was 'overreacting and that there's a simple explanation for this.' While some TikTok viewers shared her concern about a potential privacy breach, techsperts believe that the bot could've been hallucinating based on patterns in its training data, which is based in part on user input. 'This is spooky — but not unheard of,' assured one AI expert and programmer. 'When you leave voice mode on but don't speak, the model will attempt to extract language from the audio — in the absence of spoken word it will hallucinate.'
They added, 'It also isn't crossing wires, but is oriented towards hallucinating in agreement, so you suggested that wires got crossed and it agreed with you in an attempt to successfully "answer your query."' On Reddit, AI aficionados cited multiple instances where the bot would reply unprompted. 'Why does it keep transcribing "Thank you for watching!" when I use the voice recorder but am not saying anything?' said one.

While seemingly harmless in these cases, hallucinating AI chatbots can offer dangerous disinformation to humans. Google's AI Overviews, designed to give quick answers to search queries, has been guilty of multiple slips of the technological tongue, including one instance where it advised adding glue to pizza sauce to help cheese stick better. In another case, the AI bot billed a fake phrase — 'You can't lick a badger twice' — as a legitimate idiom.
