Humans are just slightly better than a coin toss at spotting AI pics

As AI-generated images improve year after year, a key question is how well human beings can distinguish between real and generated images. And while many of us may think it is fairly easy to spot images produced by AI tools like ChatGPT and Gemini, researchers think otherwise.
According to researchers from the Microsoft AI for Good Lab, the chances of identifying AI-generated images are 'just slightly better than flipping a coin'. The researchers collected data from the online game 'Real or Not Quiz', in which participants were shown a mix of real and AI-generated images and asked to judge each one's authenticity.
The study, which involved the analysis of approximately 287,000 image judgements by over 12,500 people from around the world, found that participants had an overall success rate of just 62 per cent, only modestly better than the 50 per cent expected from a coin flip. The researchers say they used some of the best AI image generators available to create the quiz, and note that the game was not designed to compare the photorealism of the different models.
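For a sense of what '62 per cent' means against a coin flip, here is a minimal back-of-the-envelope check in Python using the headline figures from the study; treating every classification as an independent guess is a simplifying assumption, not something the researchers specify.

```python
# Rough check: is 62% accuracy over ~287,000 classifications
# meaningfully better than a 50/50 coin toss?
# Assumes each classification is an independent guess (a simplification).
import math

n = 287_000      # approximate number of image classifications
observed = 0.62  # reported overall accuracy
chance = 0.50    # accuracy expected from random guessing

# Standard error of a proportion under the coin-toss null hypothesis
se = math.sqrt(chance * (1 - chance) / n)
z = (observed - chance) / se

print(f"standard error: {se:.5f}")  # ~0.00093
print(f"z-score: {z:.0f}")          # ~129
```

Across hundreds of thousands of guesses, a 12-point edge over chance is statistically unmistakable, yet any individual guess still fails roughly 38 per cent of the time, which is the sense in which people are only 'slightly better' than a coin toss.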
As it turns out, people who played the quiz were fairly accurate at telling real human portraits from AI-generated ones, but struggled with natural and urban landscapes: they succeeded around 65 per cent of the time on images of people, but only 59 per cent of the time on nature photos.
Researchers noted that people most often had trouble with 'images without obvious artifacts or stylistic cues'. Accuracy on human portraits was much higher, which the researchers attribute to our brain's specialised ability to recognise faces. These findings are in line with a recent study from the University of Surrey, which discussed how our brains are 'drawn to spot faces everywhere'.
The study also found that AI detection tools are considerably more reliable than humans at identifying AI-generated images, though they too are prone to mistakes. The team emphasised the need for transparency tools such as watermarks and robust AI detection to curb the spread of misinformation, and said it is working on a new AI image detection tool that it claims achieves a success rate of over 95 per cent on both real and generated images.

Related Articles

OpenAI's new study mode in ChatGPT is designed to help you learn, not cheat and get quick answers

India Today

ChatGPT is becoming a go-to tool for many students to complete their projects. But rather than learning, many students are simply using the AI to gather information and copy-paste it into their assignments. Now, in an attempt to help students study and learn as they work, OpenAI has introduced a new Study Mode in ChatGPT. The feature is now live for logged-in users on the Free, Plus, Pro, and Team plans, and it is expected to roll out to ChatGPT Edu users in the coming weeks.

According to OpenAI, Study Mode is designed to help students "build deeper understanding, not just finish something." Instead of offering direct solutions, the tool guides users step by step through interactive prompts, questions, and knowledge checks. 'Instead of doing the work for them, Study Mode encourages students to think critically about their learning,' said Robbie Torney, Senior Director of AI Programmes at Common Sense Media, in a statement shared by OpenAI.

How Study Mode works in ChatGPT

OpenAI says its new Study Mode was developed in collaboration with teachers, scientists, and pedagogy experts to help the AI model reflect teaching practices grounded in learning science. The feature uses custom system instructions that encourage active participation, manage cognitive load, promote self-reflection, foster curiosity, and deliver actionable feedback. Once activated, Study Mode responds to the user's queries according to their objectives and skill level. It also divides lessons into easy-to-follow sections, using Socratic-style questioning, hints, and self-reflection prompts to encourage engagement. OpenAI highlights that the tool uses scaffolded responses, a teaching method that organises information in a structured way, helping learners see how different concepts connect without becoming overwhelmed.

To make it more personalised, Study Mode also adjusts lessons based on the user's prior interactions and understanding of the subject matter. It includes built-in knowledge checks, such as quizzes and open-ended questions, to offer personalised feedback and help students measure their progress over time.

Study Mode key features

Some of the key highlights of Study Mode are:
- Interactive prompts: uses questions and hints to promote active learning rather than delivering answers outright.
- Scaffolded learning: breaks down complex topics into easy-to-digest sections.
- Personalised support: adjusts responses to each student's needs and learning level.
- Knowledge checks: incorporates quizzes and open-ended questions to track progress.
- Conversation flexibility: students can switch Study Mode on and off at any time in a conversation.

OpenAI believes these features will not only make learning more engaging for students but also reduce the temptation to rely on ChatGPT purely for quick answers.

Study Mode limitations

OpenAI acknowledges that Study Mode is still in its early stages. Since it currently relies on custom system instructions, students may experience inconsistent behaviour and occasional mistakes. The company plans to integrate these behaviours directly into its core AI models once it has gathered enough feedback.
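OpenAI has not published Study Mode's actual system instructions, but the mechanism it describes, custom system instructions layered on top of a chat model, can be sketched with the OpenAI Python SDK. The prompt wording and model name below are illustrative assumptions, not OpenAI's implementation.

```python
# Illustrative sketch of Socratic-style tutoring via a custom system
# instruction, the general mechanism OpenAI says Study Mode relies on.
# The prompt text and model name are assumptions, not OpenAI's own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_TUTOR = (
    "You are a study tutor. Never give the final answer directly. "
    "Guide the student step by step with questions and hints, break "
    "the topic into small sections, and end each reply with a short "
    "knowledge check."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": SOCRATIC_TUTOR},
        {"role": "user", "content": "Solve 2x + 6 = 14 for me."},
    ],
)
print(response.choices[0].message.content)
```

Because the behaviour lives in a prompt rather than in the model's weights, it can drift or be overridden, which is consistent with OpenAI's caveat about inconsistent behaviour in this early version.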

Beyond ChatGPT – Some incredible AI tools that you might not have heard of

Hindustan Times

While it does feel like ChatGPT is the new Google, and some of us now turn to ChatGPT for every question that pops into our heads instead of running a regular Google search, there are many more interesting AI tools out there that fly a bit more under the radar and may intrigue you just as much, if not more. Go ahead, try these tools today, and see which one suits your tastes and preferences.

Grok

Grok, the AI chatbot developed by Elon Musk's xAI, has recently gained traction, especially after going viral with the Ghibli-style pictures trend. While earlier Grok models were open-source, the latest version, Grok-3, remains proprietary and serves as xAI's flagship model. Integrated directly into X (formerly Twitter), Grok-3 was trained using ten times more computing power than its predecessor, Grok-2. xAI offers both Grok-3, which emphasises accuracy, and Grok-3 Mini, optimised for speed, though most users may not notice a significant difference in day-to-day use. Founded in 2023, xAI aims to "understand the true nature of the universe", a mission Elon Musk set forth after parting ways with OpenAI. Musk's departure stemmed from disagreements over OpenAI's growing emphasis on political correctness, its shift away from a non-profit model, and what he perceived as increasing opacity. Grok's interface sets it apart with features like 'Deep Search', 'Deeper Search', and various 'Think Modes'. It supports a wide range of tasks, including research, image generation, how-to queries, data analysis, and code creation. According to a recent Counterpoint report, Grok-3 was trained on a staggering 200,000 Nvidia H100 GPUs, showcasing its scale, while rivals like DeepSeek R1 achieve similar performance with just 2,000 Nvidia H800 GPUs, highlighting efficiency over brute force.

Manus

Manus is one of the earliest full-fledged AI agents designed to perform a wide range of tasks with minimal human input. Unlike traditional chatbots that rely on continuous prompting, Manus is built to function largely autonomously. Its advertised capabilities include trip planning, schedule management, stock analysis, video course creation, comparative analysis, and podcast editing, among others. Developed in China, Manus has sparked conversations about the country's rapid strides in AI innovation. However, the platform is not without its limitations; users have reported occasional inaccuracies and execution issues. Access was once limited by a long waitlist but has recently opened up as the company has expanded capacity. There is a free plan that gives you a decent level of access, although the basic paid plan costs around $20 a month. Of all the AI tools I have used recently, Manus is one of the most fascinating. I built an application in Python to calculate USD-to-INR currency conversion rates within minutes; it even deployed the app to a public website for me, and I didn't have to write a line of code. This might be paving the way for AGI, or Artificial General Intelligence.

Mistral AI

Mistral AI, the French company behind Le Chat, a local alternative to ChatGPT and Grok, positions itself as the world's greenest and leading independent AI lab. It remains the most prominent European contender in the generative AI space. Mistral offers both free and premium models with tiered pricing and recently partnered with Microsoft to distribute its models via the Azure platform. A notable feature in Le Chat is the ability to connect your Google account, allowing the AI to summarise your emails and meetings. The company asserts that this personal data will not be used for training purposes. Additionally, Le Chat supports the creation of custom AI agents to help automate a variety of tasks.

Claude AI

Claude, developed by Anthropic, is a key competitor to OpenAI's ChatGPT series, with a strong emphasis on AI safety and research. Anthropic was founded by former OpenAI members who left the company over concerns about the direction and safety of AI development. While the free version of Claude offers capabilities comparable to ChatGPT's free tier, the Pro version offers more usage and access to more Claude models. Claude seems to be a favourite amongst developers, as it offers code generation, GitHub support, and help with refactoring.

Qwen

Qwen is Alibaba Cloud's large language model and a growing competitor to ChatGPT. While several of its earlier models have been open-sourced, the most advanced versions remain proprietary, a strategy that's increasingly common among leading AI labs balancing openness with commercial advantage.

LM Studio

LM Studio is a free, user-friendly app that lets anyone run AI chatbots on their own computer without needing the internet. You can try popular open-source models like Mistral or Llama securely and privately. It's great for exploring AI locally, whether you're curious about chatbots or just want to use AI without sending data to the cloud.

Go ahead, try the above tools today, and see which one suits your tastes and preferences. There is a platter of LLMs to choose from these days!

Authored by: Varun Krishnan
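LM Studio's local-first approach is easy to try from code as well. Below is a minimal sketch assuming LM Studio's built-in OpenAI-compatible server is running at its default address (http://localhost:1234/v1) with a model already loaded; the model name and client setup are illustrative assumptions about your local configuration.

```python
# Minimal sketch: chat with a model served locally by LM Studio.
# Assumes LM Studio's OpenAI-compatible local server is enabled at its
# default address, and a model (e.g. a Mistral or Llama variant) is loaded.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",                  # placeholder; no real key needed locally
)

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # illustrative; use the name of your loaded model
    messages=[{"role": "user", "content": "Summarise what an LLM is in one sentence."}],
)
print(response.choices[0].message.content)
```

Nothing in this exchange leaves your machine, which is the whole appeal of running models locally.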

Google Glass 2.0 is here, and this time, AI makes it worth wearing

Hindustan Times

It is 2012, and Google has come up with a weird yet interesting spectacle-like wearable that looks a bit like safety glasses for a factory floor. Many were confused about why I was so excited about it then. I took many pictures with it and proclaimed to anyone who would listen that this was the future of the wearables industry, and that Google was underrated for its visionary work. To top it off, the reveal was nothing short of spectacular: Google unveiled Google Glass 1.0 with skydivers live-streaming themselves as they parachuted over the San Francisco skyline.

But this turned out to be a classic case of the product being ready while the market was not. Few would have realised at the time that it was a case of premature execution. Google quietly pivoted to making Glass available to enterprises only. The enterprise edition, launched in 2017, found only a few takers in the medical and industrial fields, and Google finally stopped selling and supporting it in 2023.

Meanwhile, in 2021, another quiet development was taking place. Meta, in partnership with EssilorLuxottica, had developed a wearable whose form factor looked eerily similar to Google Glass. Meta upgraded the product in 2023 and again in 2024. Cut to today, and it has become common to see people in Silicon Valley clicking away on chunky Ray-Ban Meta glasses. That must have made Google feel vindicated about its product and choice of form factor back in 2012.

Google made some significant announcements about Android XR glasses at Google I/O in May 2025. To be sure, Google isn't doing this alone this time around: it is partnering with Samsung, Qualcomm, and Xreal. A number of glasses-like products appear to be in various stages of development, including one with Samsung (Project Moohan) and an in-house Google glass; the one with Xreal is more developer-focused. One thing is certain, though: all of them will run Android XR, Google's operating system for headsets and glasses.

The most exciting thing about Google Glass 2.0 is, obviously, the Gemini AI integration. You can hold live bilingual conversations, ask Gemini questions about anything you see, and, of course, ask for directions. Directions can be shown in two ways, either projected in front of you or on the embedded in-lens display, depending on where you are looking. Hands-free photo, video, and note capture is a given, as it is with the Meta glasses.

One of the most important differences from the Meta glasses is the in-lens display, which Google says aims not to escape reality but to enhance it. You can, for instance, see directions overlaid on the path you are walking, and from the videos Google has shared, the effect looks vivid and useful rather than intrusive. Another difference is that the picture you just took appears on the lens, which in itself feels like a step ahead of the Meta glasses. Although it is currently a prototype, the Google glasses seem to be lighter than the Meta glasses, according to feedback from those who tried them at Google I/O. The product isn't on sale yet, and the price point is still unclear.

But Google has done well this time by partnering with luxury eyewear firms like Gentle Monster and the direct-to-consumer vendor Warby Parker. It can be argued that the Ray-Ban Metas wouldn't have been as successful as they are had they not been Ray-Bans. Back in 2012, Google marketed the product as an early-adopter item, but concerns about its functionality, high price, safety, and privacy ultimately led to its untimely demise. Today, society and technology have evolved, and the bar for many of those parameters looks very different. So, with some refinements from Google, it does look like the world may be ready for Glass (and Android XR-based glasses and headsets) and may lap it up given the right price point. I still have the old Google Glass. I am definitely getting the 2.0 version as well.

Authored by: Varun Krishnan
