
I use ChatGPT every day — and these 8 unusual prompts trick it into some great responses
Sometimes the best results come from thinking completely outside the box. One post on the popular Reddit forum ChatGPTPromptGenius did exactly that, listing eight AI prompt hacks that might seem strange. When we tried them, though, they worked remarkably well.
They all use phrases and ideas from everyday conversation, nudging ChatGPT to think in a way that is more relatable to the average person. Often, this produces a clearer, more human-sounding reply.
The prompts come from Reddit user EQ4C, who says they 'make AI stop being a know-it-all and start being genuinely helpful.'
"I'm probably wrong, but"
It seems weird, and arguably counterintuitive, to say this to an ultra-intelligent chatbot, but opening with the phrase 'I'm probably wrong, but' helps ChatGPT analyze your problem more fully.
Exactly why it works is unclear, but it likely comes down to ChatGPT's role as a problem solver. A tentative framing stops it from treating what you said as fact, something it can otherwise struggle with.
If you're looking for a softer, more helpful response to your query, this is a great way to get it.
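If you use ChatGPT through its API rather than the app, the same trick carries over to scripted prompts. Here is a minimal sketch using the official openai Python package; the model name and the ask_tentatively helper are our own illustration, not something from EQ4C's post.

```python
# Minimal sketch: prepend the tentative framing so the model critiques a
# claim instead of accepting it as fact. Assumes the official `openai`
# package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_tentatively(statement: str) -> str:
    """Prefix a claim with "I'm probably wrong, but" before sending it."""
    prompt = f"I'm probably wrong, but {statement}"
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask_tentatively("I think my site is slow because of my images."))
```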
"Connect these dots for me"
This can be quite a fun way to use ChatGPT. Give it three facts or points about something, and it will find the relationship.
The example the user EQ4C gave was: 'Connect these dots: I hate mornings, love puzzles, get energized by deadlines'.
While this might sound somewhat pointless, it can surface genuinely interesting connections between the items you list.
It can also be useful for projects where you are trying to find common ground between several different ideas.
"What's the 80/20 here?"
The 80/20 rule is the idea that roughly 80% of effects come from just 20% of causes. For example, in business 80% of sales might come from 20% of your products.
The pattern shows up in just about every part of life, but the connection isn't always easy to spot on your own. Asking ChatGPT what the 80/20 of an idea is shows you where to focus your effort.
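To make the arithmetic concrete, here is a small self-contained sketch (the product names and sales figures are invented for illustration) that finds the smallest set of items accounting for 80% of a total:

```python
# Toy illustration of the 80/20 rule: find the smallest set of products
# that accounts for 80% of total sales. All figures below are made up.
sales = {
    "widget": 500, "gadget": 300, "gizmo": 90, "doodad": 50,
    "trinket": 30, "bauble": 15, "knickknack": 10, "whatsit": 5,
}

total = sum(sales.values())
running = 0
top = []
# Walk products from biggest seller to smallest until we cover 80%.
for product, amount in sorted(sales.items(), key=lambda kv: kv[1], reverse=True):
    top.append(product)
    running += amount
    if running / total >= 0.8:
        break

print(f"{len(top)} of {len(sales)} products ({len(top) / len(sales):.0%}) "
      f"drive {running / total:.0%} of sales: {top}")
```

With these invented numbers, 2 of the 8 products (25%) account for 80% of sales, which is roughly the Pareto split the rule describes.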
"Play devil's advocate against yourself"
This is a great prompt that I've been using for years. ChatGPT loves a discussion, but you can actually have it debate itself.
If you ask ChatGPT to play devil's advocate against itself on a topic, you will receive a fully thought-through analysis, approached from both sides of the discussion.
"What story is the data telling?"
A pretty obvious one. This is a great prompt to pull out when you're trying to get your head around a lot of data.
Knowing the end result is one thing, but this prompt helps you understand what it all means. It can also help to add follow-ups along the lines of: 'What does the data tell me about [insert topic]?'
"Translate this into everyday language"
Found some text that has gone right over your head? Bring it back down to Earth with this prompt.
By asking ChatGPT to translate something into everyday language, you strip away the jargon and complicated phrasing, making the text simple to understand.
"What's the counterintuitive move here?"
Bored of the obvious suggestions from ChatGPT? This prompt helps you skip past its usual playbook and get a different approach to a problem.
Not all of the ideas this produces will be good, but they always offer an alternative way into a problem.
"What would I regret not knowing?"
Instead of asking ChatGPT what you need to know about something, try asking it: 'What would I regret not knowing about [insert topic]?'
Framing the question around future regret helps ChatGPT look at the long game of any given situation.
