7 GPT-5 prompts that finally made the new model click for me

Tom's Guide · 19 hours ago
GPT-5 is finally here, and depending on who you ask, it's either a massive leap forward or a disappointing model that doesn't deserve the hype.
If you've been using ChatGPT-5 this past week, you've probably noticed that the new model feels different from GPT-4o. It's faster, deeper and more intuitive, but also more likely to make assumptions, overanalyze or completely shift the tone of your request.
Whether you're loving it or low-key missing GPT-4o, the way to get better results with GPT-5 is understanding how it thinks and prompting accordingly. These 7 prompts are designed to show off what GPT-5 does well and help you sidestep some of the growing pains. Think of them as a warm-up round for your new AI co-pilot.
Prompt: "Think step-by-step: Help me create a weekly schedule that balances work, family time and exercise. Keep it realistic for a parent with young kids."
My life is pretty hectic from morning until night, which is why this prompt works for me. If you have a similarly chaotic family life or an incredibly busy work schedule, this type of prompt is where GPT-5 shines. When you ask it to think step-by-step, you unlock the deeper reasoning needed for realistic, multi-variable planning.
Prompt: "Organize my messy Google Drive into clear folders and subfolders. I'll paste a list of file names."
GPT-5 can recognize patterns, which makes it a pro at organization, categorization and decluttering. It can spot naming conventions and redundant versions, and it can suggest file clean-up strategies (like archiving old drafts or grouping by year or project). You can even ask it to output the structure in a ready-to-copy format, which is perfect if you're rebuilding your folder tree manually or importing it into a tool.
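If you'd rather script this than paste file names into the chat window, here's a rough sketch of sending the same prompt through OpenAI's Python SDK. Treat it as an illustration rather than a recipe from the article: the file names are made up, and the "gpt-5" model string is an assumption you should check against the models available on your account.

# Hypothetical sketch: run the folder-organization prompt via the OpenAI Python SDK.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

file_names = [  # placeholder file names for illustration
    "budget_v2_FINAL.xlsx",
    "budget_final_FINAL2.xlsx",
    "vacation_photos_2023.zip",
    "tax_return_2022.pdf",
]

prompt = (
    "Organize my messy Google Drive into clear folders and subfolders. "
    "Output the structure as an indented tree I can copy. File names:\n"
    + "\n".join(file_names)
)

response = client.chat.completions.create(
    model="gpt-5",  # assumption: swap in a model your account actually offers
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # the ready-to-copy folder tree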
If digital clutter overwhelms you like it does for me, this prompt turns GPT-5 into a surprisingly sharp personal assistant.
Prompt: "Plan a Sunday family day that works for kids ages 4, 8, and 10, includes one indoor and one outdoor activity and stays under $50."
Unlike a time-strapped human, GPT-5 can multitask, juggling everything at once with ease. Prompts that give specifics are no problem for this AI, so go ahead and let it take the legwork out of planning your social life and weekend activities.
Prompt: "Turn my grocery list into a healthy meal plan for the week, and rearrange it so the meals with fresh produce are earlier in the week."
OpenAI upgraded several aspects of ChatGPT with this model, including how it handles health and wellness questions. While AI should never take the place of a human doctor, this type of prompt can help you better balance your health and lifestyle to stay on track where it matters most.
Prompt: "I want to finish the summer off with another book. Recommend 5 books based on my favorite shows: [list 3–4 shows you love]."
As someone who always has at least two books going at once and regularly binges shows, this prompt is one of my favorites. It combines everything I love about entertainment while also channeling GPT-5's ability to brainstorm. Prompts that combine your favorite things lean into the model's upgraded cultural matching, connecting your media tastes across formats.
Prompt: "Give me 5 conversation starters for meeting new people at my kid's soccer game that aren't small talk about the weather."
GPT-5 has a knack for generating natural, situationally relevant ideas and dialogue. Don't be afraid to turn to this advanced chatbot next time you're at a loss for words when socializing.
This type of prompt works best in the ChatGPT app when you're on the go and need immediate advice.
Prompt:"Explain the latest pop culture trend everyone's talking about as if I'm a busy parent who hasn't been on social media in a month."
The only thing that seems to move faster than the evolution of AI is the next pop culture trend. Stay in the know (and maybe use this in combination with the previous prompt) by asking GPT-5 to summarize a trend and adjust its tone for a specific audience.
Whether you're trying to come up with your next TikTok post or connect with your teen, this prompt is a good one to have on hand.
Whether you're still adjusting to GPT-5 or you've already made it your go-to assistant, one thing's clear: this model thrives when you give it structure, specificity and a little direction.
These prompts are so much more than productivity hacks because they actually help you build a better understanding of how to use this upgraded model.
GPT-5 may not be perfect, but when you learn how to prompt it well, it's surprisingly great at helping you live, work and do just about everything else a little smarter (and easier).

Related Articles

4 Things Schools Need To Consider When Designing AI Policies
Forbes · 3 minutes ago

Artificial intelligence has moved from Silicon Valley boardrooms into homes and classrooms across America. A recent Pew Research Center study reveals that 26% of American teenagers now utilize AI tools for schoolwork—twice the number from two years prior. Many schools are rushing to establish AI policies. The result? Some are creating more confusion than clarity by focusing solely on preventing cheating while ignoring the broader educational opportunities AI presents. The challenge shouldn't be about whether to allow AI in schools—it should be about how to design policies that strike a balance between academic integrity and practical preparation for an AI-driven future. Here are four essential considerations for effective school AI policies.

1. Address Teacher AI Use, Not Just Student Restrictions

The most significant oversight in current AI policies? They focus almost exclusively on what students can't do while completely ignoring teacher usage. This creates confusion and sends mixed messages to students and families. Most policies spend paragraphs outlining student restrictions, but fail to answer basic questions about educator usage: Can teachers use AI to create lesson plans? Are educators allowed to use AI for generating quiz questions or providing initial feedback on essays? What disclosure requirements exist when teachers use AI-generated content? When schools prohibit students from using AI while allowing teachers unrestricted access, the message becomes hypocritical. Students notice when their teacher presents an AI-generated quiz while simultaneously forbidding them from using AI for research. Parents wonder why their children face strict restrictions while educators operate without clear guidelines. If students are required to disclose AI usage in assignments, teachers should identify when they've used AI for lesson materials. This consistency builds trust and models responsible AI integration.

2. Include Students in AI Policy Development

Most AI policies are written by administrators who haven't used ChatGPT for homework or witnessed peer collaboration with AI tools. This top-down approach creates rules that students either ignore or circumvent entirely. When we built AI guidelines for WITY, our AI teen entrepreneurship platform at WIT - Whatever It Takes, we worked directly with students. The result? Policies that teens understand and respect because they helped create them. Students bring critical information about real-world AI use that administrators often miss. They are aware of which platforms their classmates use, how AI supports various subjects, and where current rules create confusion. When students participate in policy creation, compliance increases significantly because the rules feel collaborative rather than punitive.

3. Balance AI Guardrails With Innovation Opportunities

Many AI policies resemble legal warnings more than educational frameworks. Fear-based language teaches students to view AI as a threat rather than a powerful tool requiring responsible use. Effective policies reframe restrictions as learning opportunities. Instead of "AI cannot write your essays," try "AI can help you brainstorm and organize ideas, but your analysis and voice should drive the final work." Schools that blanket-ban AI usage miss opportunities to prepare students for careers where AI literacy will be essential. AI access can also vary dramatically among students. While some students have premium ChatGPT subscriptions and access to the latest tools, others may rely solely on free versions or school-provided resources. Without addressing this gap, AI policies can inadvertently increase educational inequality.

4. Build AI Literacy Into Curriculum and Family Communication

In an AI-driven economy, rules alone don't prepare students for a future where AI literacy is necessary. Schools must teach students to think critically about AI outputs, understand the bias in AI systems, and recognize the appropriate applications of AI across different contexts. Parents often feel excluded from AI conversations at school, creating confusion about expectations. This is why schools should explain their AI policies in plain language, provide examples of responsible use, and offer resources for parents who want to support responsible AI use at home. When families understand the educational rationale behind AI integration—including teacher usage and transparency requirements—they become partners in developing responsible use habits rather than obstacles to overcome. AI technology changes rapidly, making static policies obsolete within months. Schools should schedule annual policy reviews that include feedback from students, teachers, and parents about both student and teacher AI usage.

AI Policy Assessment Checklist

School leaders should evaluate their current policies against these seven criteria:
Teacher Guidelines: Do policies clearly state when and how teachers can use AI? Are disclosure requirements consistent between students and educators?
Student Input: Have students participated in creating these policies? Do rules reflect actual AI usage patterns among teens?
Equity Access: Can all students access the same AI tools, or do policies create advantages for families with premium subscriptions?
Family Communication: Can parents easily understand the policies? Are expectations clear for home use? Are there opportunities for workshops for parents?
Innovation Balance: Do policies encourage responsible experimentation or only focus on restrictions? Is the school policy focused on preparing students for the AI-driven workforce?
Regular Updates: Is there a scheduled review process as AI technology evolves? Does the school welcome feedback from students, teachers and parents?
Skills Development: Do policies include plans for teaching AI literacy alongside restrictions? Who is teaching this class or workshop?

Moving Forward: AI Leadership

The most effective approach treats students as partners, not adversaries. When teens help create the rules they'll follow, when teachers model responsible usage, and when families understand the educational reasoning behind policies, AI becomes a learning tool rather than a source of conflict. Schools that embrace this collaborative approach will produce graduates who understand how to use AI ethically and effectively—exactly the capabilities tomorrow's economy demands.

White House AI czar David Sacks says 'AI psychosis' is similar to the 'moral panic' of social media's early days
Business Insider · 3 minutes ago

AI can create a diet plan, organize a calendar, and provide answers to an endless variety of burning questions. Can it also cause a psychiatric breakdown? David Sacks, the White House official spearheading America's AI policies, doesn't think so.

President Donald Trump's AI and crypto czar discussed "AI psychosis" during an episode of the "All-In Podcast" published Friday. While most people engage with chatbots without a problem, a small number of users say the bots have encouraged delusions and other concerning behavior. For some, ChatGPT serves as an alternative to professional therapists. A psychiatrist earlier told Business Insider that some of his patients exhibiting what's been described as "AI psychosis," a nonclinical term, used the technology before experiencing mental health issues, "but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities."

During the podcast, Sacks doubted the whole concept of "AI psychosis." "I mean, what are we talking about here? People doing too much research?" he asked. "This feels like the moral panic that was created over social media, but updated for AI." Sacks then referred to a recent article featuring a psychiatrist who said they didn't believe using a chatbot inherently induces "AI psychosis" if there aren't other risk factors — including social and genetic — involved. "In other words, this is just a manifestation or outlet for pre-existing problems," Sacks said. "I think it's fair to say we're in the midst of a mental health crisis in this country." Sacks attributed the crisis instead to the COVID-19 pandemic and related lockdowns. "That's what seems to have triggered a lot of these mental health declines," he said.

After several reports of users suffering mental breaks while using ChatGPT, OpenAI CEO Sam Altman addressed the issue on X after the company rolled out the highly anticipated GPT-5. "People have used technology, including AI, in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," Altman wrote. "Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot." Earlier this month, OpenAI introduced safeguards in ChatGPT, including a prompt encouraging users to take breaks after long conversations with the chatbot. The update will also change how the chatbot responds to users asking about personal challenges.

I'm a software engineer who spent nearly 7 years at Twitter before joining OpenAI. Here's how I got hired and why I love it.
Yahoo · 18 minutes ago

Jigar Bhati transitioned from Twitter to OpenAI for a startup-like environment. He said his work experience in infrastructure and scaling helped him get hired at OpenAI. Bhati said OpenAI still feels like a startup with a focus on productivity and collaboration.

This is an as-told-to essay based on a conversation with Jigar Bhati, a member of technical staff at OpenAI. He's worked at the company since 2023 and previously worked as a software engineer at Twitter. This story has been edited for length and clarity.

I was working at Twitter for almost seven years and was looking for a change in general. Twitter was a big tech company, and it was handling millions of users already. I wanted to join a company that was a bit smaller in scale and work in a startup-like environment. When ChatGPT launched, software engineers were in awe. So I was already using it before joining OpenAI. I was fascinated by the product itself. There was definitely that connection with the product that, as a software engineer, you often don't feel. I could also see there was a huge opportunity and huge impact to be created by joining OpenAI. And if I look back at the past two years of my journey, I think that's mostly true. It's been exciting seeing the user base grow over two years. I got to work on and lead some critical projects, which was really great for my own professional development. And you're always working with smart people. OpenAI hires probably the best talent out there. You're contributing while also learning from different people, so there's exponential growth in your professional career as well.

Building up the right work experience is key

My career at Twitter and before that had been focused on infrastructure capabilities and handling large-scale distributed systems, which was something the team was looking for when I interviewed with OpenAI. When I joined OpenAI, there were a lot of challenges with respect to scaling infrastructure. So my expertise has helped steer some of those challenges in the right direction. Having that relevant experience helps when you're interviewing for any team. So one of the best things is to find the right team that has the closest match for you. Having public references, for me, helped. I had some discussions and conference talks, and met a lot of people as part of that conference networking. That also helps you stay on top of state-of-the-art technological advancements, which helps in whatever work stream you're working on. I think OpenAI is probably one of the fastest-growing companies in the world, so you really need to move fast as part of that environment. You're shipping, or delivering, daily or weekly. And as part of that, I think you really need to have that cultural fit with the company. When you're interviewing, I think you need to be passionate about working at OpenAI and solving hard problems. In general for professionals, it's important to have a good career trajectory so that they can showcase how they have solved real, hard problems at a large scale. I think that goes a long way. Starting your career at a startup or a larger tech company are both fine. It's up to the student's interests and cultural fit as well. OpenAI is not just a place for experienced engineers. OpenAI also hires new grads and interns. I've definitely seen people entering as part of that pipeline. And it's amazing to see how they enjoy working at OpenAI and how they come up with new ideas, new capabilities and new suggestions.

But whichever place you end up in, I think it's important to have good growth prospects professionally and also for you to ship products. At any company you can create impact for the company and yourself, and be able to have that career trajectory.

OpenAI still feels like a startup

One of the most exciting things is that I think OpenAI still operates like it operated two years ago. It still feels like a startup, even though we may have scaled the number of engineers, the number of products and the size of the user base. It's still very much a startup environment, and there's a big push for productivity and collaboration. The velocity and the productivity you get working at OpenAI are definitely much higher than at some of the other companies I've worked at. That makes things really exciting because you get to ship products on a continuous basis, and you get into that habit of shipping daily rather than weekly or monthly or yearly. It feels great to be working in AI right now. In addition to having a connection with the product, it makes things very interesting when you are able to work from within the company, shaping the direction of a product that will be used by the entire world. With GPT-5, for instance, we had early access to it and could steer the direction of its launch. Every engineer is empowered to provide feedback that will help improve the model, add new capabilities, and make it more usable and user-friendly for all 700 million users out there. I think that's pretty amazing and speaks to the kind of impact you can make as part of the company.

Read the original article on Business Insider
