ChatGPT, brain rot and who should use AI and who should not


India Today · 5 hours ago

There was a time when almost everyone had a few phone numbers stored in the back of their mind. We would just pick up our old Nokia, or a cordless, and dial a number. Nowadays, most people remember just one phone number: their own. And in some cases, not even that. It is the same with birthdates, trivia like who the prime minister of Finland is, or the exact route to that famous bakery in a corner of the city.

Humans are no longer memory machines, something which often leads to hilarious videos on social media. Young students are asked on camera to name the first prime minister of India and all of them look bewildered. Maybe Gandhi, some of them gingerly say. We all laugh a good bit at their expense.

But it's not the fault of the kids. It's a different world. The idea of memorising stuff is a 20th-century concept. Memory has lost its value because now we can recall anything and everything with the help of Google. We can store information outside our brains, in our phones, and access it anytime we want. Because memory has lost its value, we have also lost our ability to memorise things. Is it good? Is it bad? That is not what this piece is about. Instead, it is about what we are going to lose next.
Next, say in 10 to 15 years, we may end up losing our ability to think and analyse, just the way we have lost the ability to memorise. And that would be because of ChatGPT and its ilk.

So far, we had only suspected something like this. Now, research is beginning to trace it in graphs and charts. Around a week ago, researchers at the MIT Media Lab ran experiments on what happens inside the brains of people when they use ChatGPT. As part of the experiment, the researchers divided 54 people into three groups: people using only their brain to work, people using their brain and Google Search, and people using their brain and ChatGPT. The work was writing an essay, and as the participants went about doing it, their brains were scanned using EEG.

The findings were clear. 'EEG revealed significant differences in brain connectivity,' wrote the MIT Media Lab researchers. 'Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity.'

The research was carried out across four months, and in the last phase, participants who were part of the brain-only group were asked to also use ChatGPT, whereas the ChatGPT group was told not to use it at all. 'Over four months, LLM (ChatGPT) users consistently underperformed at neural, linguistic, and behavioural levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning,' the researchers wrote.

What is the big takeaway? Quite simple. Like anything cerebral (it is well established, for example, that reading changes and rewires the brain), the use of something like ChatGPT impacts our brain in some fundamental ways. The brain, just like a muscle, can atrophy when not used.
And we have started seeing signs in labs that when people rely too much on AI tools like ChatGPT to do their thinking, writing and analysing, our brains may lose some of this functionality.

Of course, there could be another side to the story too. If in some areas the mind is getting a break, it is possible that in other areas neurons might light up more frequently. If we lose our ability to analyse an Excel sheet with just a quick glance, maybe we will gain the ability to spot bigger ideas faster after looking at ChatGPT's analysis of 10 financial statements.

But I am not certain. On the whole, and if we include everyone, the abundance of information that tools like Google and Wikipedia have brought has not resulted in smarter or savant-like people. There is a crude joke that often does the rounds on the internet: we believed that earlier people were stupid because they did not have access to information. Oh, how naive we were.

It is possible that, at least on the human mind, the impact of tools like ChatGPT may not end up being a net positive. And that brings me to my next question: who should and who should not use ChatGPT? The current AI tools are undoubtedly powerful. They have the potential to crash through all the gate-keeping that happens within the world. They can make everyone feel superhuman.

When this much power is available, it would be a waste not to use it. So everyone should use AI tools like ChatGPT. But I do feel that there has to be a way to go about it. If we don't want AI to wreck our minds, we will have to be smart about how we use it. In formative years (in schools and colleges, or at work when you are learning the ropes of the trade), it would be unwise to lean on ChatGPT and similar tools. The idea is that you should use ChatGPT like a bicycle, which makes you more efficient and faster, instead of as a crutch.
The idea is that before you use ChatGPT, you should already have a brain that has figured out a way to learn and connect the dots.

This is probably the reason why, again and again in recent months, top AI experts have highlighted that the use of AI tools must be accompanied by an emphasis on learning the basics. DeepMind CEO Demis Hassabis put it best last month while speaking at Cambridge. Answering a question about how students should deal with AI, he said, 'It's important to use the time you have as an undergraduate to understand yourself better and learn how to learn.'

In other words, Hassabis believes that before you jump onto ChatGPT or other AI tools, you should first have the fundamental ability to analyse, adapt and learn quickly without them. This, I think, is going to be key to using AI tools in a better way in the future. Or else, they may end up rotting our brains, similar to what the information overload from Instagram and Google has already done to our memory and attention spans.

(Javed Anwer is Technology Editor, India Today Group Digital. Latent Space is a weekly column on tech, the world, and everything in between. The name comes from the science of AI and, to reflect it, Latent Space functions in the same way: by simplifying the world of tech and giving it context.)

(Views expressed in this opinion piece are those of the author.)


Related Articles

Intellipaat launches India's First DevSecOps Program with Generative & Agentic AI

Business Standard

an hour ago


PRNewswire, Bangalore (Karnataka) [India], June 23: Intellipaat, a global leader in professional upskilling, has launched a groundbreaking transformation of its flagship DevOps program, becoming the first in India to integrate Agentic AI into a structured DevOps curriculum. The program now also includes advanced modules on DevSecOps and Generative AI, aiming to equip professionals for the next decade of intelligent automation and secure-by-design infrastructure.

With more than 8,000 professionals trained over the past decade and corporate skilling delivered to industry leaders like Societe Generale, Wipro, TCS, and HCL, Intellipaat has long been a trusted name in DevOps education. This latest evolution addresses a rising demand in job descriptions for professionals skilled in AI-powered DevOps, security automation, and self-healing infrastructure systems.

"DevOps is evolving -- it's no longer just about CI/CD and scripting. The future lies in intelligent, autonomous systems that are secure by default. This enhancement helps professionals stay ahead of that shift," said Diwakar Chittora, Founder & CEO of Intellipaat.

Why This Matters Now

From startups to Fortune 500 companies, businesses are rapidly transitioning to AI-integrated operations. Security breaches, complex infrastructure, and the speed of change demand DevOps professionals who can:

* Embed security at every phase of the development lifecycle
* Use LLMs like ChatGPT and GitHub Copilot for infrastructure automation
* Work with Agentic AI that can observe, reason, and act, reducing response time, improving uptime, and ensuring compliance

A scan of current job listings shows that DevSecOps, AI for Ops, and autonomous incident management are no longer emerging skills; they are expected.
What's New in the Curriculum

DevSecOps: Hands-on training with DevSecOps tools such as:

* Gitleaks, DefectDojo, Software Composition Analysis
* Open Policy Agent (OPA), AWS Secrets Manager, and Vault

Generative AI for DevOps: Learners use GenAI to:

* Generate Infrastructure-as-Code (IaC)
* Automate CI/CD pipeline creation and documentation
* Query cloud platforms and monitoring tools using natural language

Agentic AI in DevOps: Explore how agentic frameworks and tools like LangChain, ReAct, and OpenDevin can manage infrastructure, auto-resolve incidents, and deploy environments, all with minimal human input.

Career Transitions That Inspire

Thousands of Intellipaat learners have successfully transitioned into high-demand roles in Cloud and DevOps. Among them are freshers who landed their first DevOps jobs right after completing the program, including learners who secured a DevOps role within just three months of course completion.

About Intellipaat

Intellipaat is a trusted global provider of industry-aligned professional education in DevOps, cloud computing, data science, cybersecurity, and AI/ML. With a community of over 2 million learners, Intellipaat collaborates with top universities and global enterprises to deliver outcome-driven learning for tomorrow's workforce.

Media Contact: deepak@

Elon Musk Vows to 'Rewrite Human Knowledge' Using Grok AI, Slams Existing AI Data as 'Garbage'

Hans India

an hour ago


Billionaire entrepreneur Elon Musk is setting his sights on an ambitious new goal for his AI company, xAI: rebuilding the entire corpus of human knowledge using the latest version of its AI chatbot, Grok. In a series of posts on X (formerly Twitter), Musk criticized current AI models for being trained on what he called 'garbage' data and unveiled his plan to retrain Grok using a revised dataset. 'We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors,' Musk shared. 'Then retrain on that. Far too much garbage in any foundation model trained on uncorrected data.'

Musk's goal is not just about refining Grok's capabilities; he wants to reshape how AI models are built, trained, and aligned with truth. Launched earlier this year, Grok 3 was introduced as the 'smartest AI on Earth', boasting performance ten times stronger than its predecessor. The model is accessible via xAI's platforms, the Grok app, and to X Premium Plus subscribers.

One of the more controversial elements of Musk's announcement involves his call for user input. In an appeal to the X community, he invited followers to contribute 'divisive facts' to help train Grok: facts that may be politically incorrect but, as Musk emphasized, are 'nonetheless factually true.'

Musk founded xAI in 2023 to challenge established AI giants like OpenAI. He has often accused leading models, including ChatGPT, of harboring 'woke biases' and distorting facts to fit certain ideological perspectives. With Grok, Musk wants to break away from that mold and create an AI assistant grounded in what he considers cleaner, more accurate information.

At the core of Grok's development is xAI's Colossus supercomputer, a powerful system built in less than nine months using more than 100,000 hours of Nvidia GPU processing.
Grok 3 uses synthetic data, reinforcement learning, and logic-driven techniques to minimize hallucinations, a common flaw where AI chatbots fabricate responses. Now, as Musk and his team prepare to roll out Grok 3.5 (or Grok 4) by the end of 2025, the focus is shifting toward using advanced reasoning and curated content to create a more reliable foundation for machine learning.

With this bold move, Musk is not just tweaking another chatbot. He is trying to challenge the entire approach the tech industry has taken toward artificial intelligence, and possibly redefine what AI knows as 'truth'.

Gen AI in cybersecurity: Will help defenders with better counter measures; India ahead of other nations

Times of India

an hour ago


Generative artificial intelligence, while being increasingly exploited by cyber criminals to fuel their attacks, is also empowering defenders with faster and smarter responses to online threats, according to Heather Adkins, global VP of engineering at Google Security. Adkins, who has spent more than 20 years at Google, said generative AI will give 'defenders' a 'leg up' over threat actors.

'We will be able to leverage Gen AI to protect infrastructure in new ways that we've never thought of before and also at a speed that we've never been able to achieve before,' she said, as quoted by TNN. She said that the same technology being used to plan sophisticated cyberattacks can also help strengthen defence systems.

Talking about cyberattacks in India, the Google security VP pointed out that the government is 'very engaged' and has been ahead of many other nations in tackling these threats. 'It's a hot topic. They've done a very good job in getting involved quickly and partnering with companies. The workforce here and education levels in India are pretty high. There are parts of the world I go to where they're just now starting to think about cybersecurity and they're much further behind India.' Google Security now plans to set up an engineering centre in India.

She further warned of the growing threat posed by state-sponsored cyberattacks, particularly as geopolitical tensions continue to rise. 'It's a question of who has more time. And, if you think about a well-funded nation state, maybe they'll create a project, put 100 people on it, and they just work on that project throughout the day... So, they often know more because they have more time, not because they're smarter. I would say they're more likely to be successful.'

Adkins highlighted the need to educate users alongside building tools, stating that digital instincts must be developed to spot malicious content online.
'Unlike the physical world, where you have instincts and senses to identify something dangerous, the online world does not have a parallel. We have to build that,' she said.

Despite the rising tempo and complexity of attacks, Adkins believes the cybersecurity landscape is in a better place today. 'There's no doubt that we're seeing an increase in the tempo and sophistication of attacks. But today, more than ever before, enterprises have better tools.' Cybersecurity looked 'primitive' 23 years ago, while now most solutions have security built into them, she added.
