
Is ChatGPT hurting our critical thinking skills?
An MIT study finds ChatGPT may be hurting critical thinking skills. How do you use AI tools while protecting your brain?
Are AI chatbots dulling our brains? A new MIT study suggests critical thinking skills are at risk from tools like ChatGPT. What does the science say happens to brains that rely on AI? And how can you use AI tools while protecting your ability to think for yourself?

Related Articles


Qatar Tribune
a day ago
Don't blame the Bot: Master your AI prompts for better results
Agencies

If you're using ChatGPT but getting mediocre results, don't blame the chatbot. Instead, try sharpening up your prompts.

Generative AI chatbots such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude have become hugely popular and embedded in daily life for many users. They're powerful tools that can help us with many different tasks. What you shouldn't overlook, however, is that a chatbot's output depends on what you tell it to do, and how. There's a lot you can do to improve the prompt (also known as the request or query) that you type in. Here are some tips for general users on how to get higher-quality chatbot replies, based on advice from the AI model makers.

ChatGPT can't read your mind. You need to give it clear and explicit instructions on what you need it to do. Unlike a standard Google search, you can't just ask for an answer based on a few keywords. And you'll need to do more than just tell it to, say, 'design a logo', because you'll end up with a generic design. Flesh it out with details on the company the logo is for, the industry it will be used in and the design style you're going for. 'Ensure your prompts are clear, specific, and provide enough context for the model to understand what you are asking,' ChatGPT maker OpenAI advises on its help page. 'Avoid ambiguity and be as precise as possible to get accurate and relevant responses.'

Think of using a chatbot like holding a conversation with a friend. You probably wouldn't end the chat after the first answer, so ask follow-up questions or refine your original prompt. OpenAI's advice: 'Adjust the wording, add more context, or simplify the request as needed to improve the results.' You might need an extended back-and-forth to elicit better output. Google advises trying a 'few different approaches' if you don't get what you're looking for the first time. 'Fine-tune your prompts if the results don't meet your expectations or if you believe there's room for improvement,' Google recommends in its prompting guide for Gemini. 'Use follow-up prompts and an iterative process of review and refinement to yield better results.'

When making your request, you can also ask an AI large language model to respond in a specific voice or style. 'Words like formal, informal, friendly, professional, humorous, or serious can help guide the model,' OpenAI writes. You can also tell the chatbot the type of person the response is aimed at. These parameters help determine the chatbot's overall approach to its answer, as well as its tone, vocabulary and level of detail. For example, you could ask ChatGPT to describe quantum physics in the style of a distinguished professor addressing a class of graduate students, or to explain the same topic in the voice of a teacher talking to a group of schoolchildren. There's plenty of debate among AI experts about these methods, though. On the one hand, they can make answers more precise and less generic; on the other, an output that adopts an overly empathetic or authoritative tone raises concerns that the text could come across as manipulative.
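The same advice about specificity and iteration carries over if you reach these chatbots through an API rather than a chat window. Below is a minimal sketch assuming the OpenAI Python SDK; the model name and the example coffee-shop company are placeholders for illustration, and the prompts simply restate the tips above rather than coming from any vendor's guide.

```python
# A minimal sketch, assuming the OpenAI Python SDK ("pip install openai")
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; use whichever model you have access to

# A specific request: it names the (hypothetical) company, industry, style and
# audience, and sets the tone, rather than just saying "design a logo".
messages = [
    {
        "role": "user",
        "content": (
            "Describe a logo concept for 'Riverbend Roasters', a small specialty "
            "coffee shop. Style: minimalist, two colours, hand-drawn feel. "
            "Explain it in a friendly, informal tone for a non-designer."
        ),
    }
]
first = client.chat.completions.create(model=MODEL, messages=messages)
print(first.choices[0].message.content)

# Iterate: keep the earlier turns in the conversation and refine the request,
# instead of starting over with a brand-new prompt.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Simplify it further and suggest two colour palettes."})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```

The point of the second call is the conversation history: the earlier turns stay in the messages list, so the refinement builds on the previous answer rather than starting from scratch.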
Give the chatbot all the background behind your request. Don't just ask: 'Help me plan a weeklong trip to London.' ChatGPT will respond with a generic list of London's greatest hits: historic sites on one day, museums and famous parks on another, trendy neighborhoods and optional excursions to Windsor Castle. It's nothing you couldn't get from a guidebook or travel website, just a little better organized.

But if, say, you're a theatre-loving family, try this: 'Help me plan a weeklong trip to London in July, for a family of four. We don't want too many historic sites, but want to see a lot of West End theatre shows. We don't drink alcohol so we can skip pubs. Can you recommend mid-range budget hotels where we can stay and cheap places to eat for dinner?' This prompt returns a more tailored and detailed answer: a list of four possible hotels within walking distance of the theatre district, a seven-day itinerary with free or low-cost ideas for things to do during the day, suggested shows each evening, and places for an affordable family dinner.

You can also tell any of the chatbots just how extensive you want the answer to be. Sometimes less is more: try nudging the model toward clear and succinct responses by imposing a limit, for example by telling the chatbot to reply in no more than 300 words.
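For API users, the same tailored trip prompt and word limit can be passed along in a similarly minimal sketch, again assuming the OpenAI Python SDK and a placeholder model name.

```python
# A minimal sketch of the context-rich travel prompt above, assuming the OpenAI
# Python SDK and an API key in OPENAI_API_KEY. The word limit is stated in the
# prompt text itself; max_tokens only adds a hard cap on output length.
from openai import OpenAI

client = OpenAI()

trip_prompt = (
    "Help me plan a weeklong trip to London in July, for a family of four. "
    "We don't want too many historic sites, but want to see a lot of West End "
    "theatre shows. We don't drink alcohol so we can skip pubs. Recommend "
    "mid-range budget hotels near the theatre district and cheap places to "
    "eat for dinner. Keep the whole answer under 300 words."
)

reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": trip_prompt}],
    max_tokens=500,  # rough hard cap; the 300-word limit lives in the prompt
)
print(reply.choices[0].message.content)
```

Stating the word limit in the prompt itself usually works better than relying on max_tokens alone, since a hard token cap can simply cut the reply off mid-sentence.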


Al Jazeera
2 days ago
From Gaza to ICE raids, why is US firm Palantir under scrutiny?
NewsFeed: From anti-immigration raids and Gaza kill lists to lucrative government contracts, US data firm Palantir has come under increasing scrutiny. Soraya Lennie breaks it down.

