
Dark AI: The Black Hole

New Indian Express

4 days ago



And in that moment, a quiet boundary dissolved. While xenobots mesmerise the scientific community, they have also reignited a global debate: what new frontiers, and what dangers, are we agreeing to when we embrace emergent forms of AI?

Let's be clear: AI today is not sentient. It doesn't 'want' anything, doesn't dream, doesn't resent you for shouting at Alexa. But that's not the real concern. The anxiety around AI isn't about whether it will wake up and write poetry about its sad little server racks. The fear is about what happens when its power, speed, and optimisation capabilities outstrip human governance.

Delhi-based tech expert Shayak Majumder says, 'The primary concern isn't that machines will start thinking like humans, but that humans will stop thinking critically in a world shaped by AI assistants. I have always compared the advent of AI to the advent of the internet. Earlier there were concerns about jobs getting eaten up, but two-three decades later, we have learned how to leverage the internet to our advantage. For now, we need to start getting adept with AI tools, to stay ahead of the curve. The "dark side" of AI lies not in its intelligence, but in how we choose to wield it, regulate it, and remain accountable for its impact.'

AI creating life, or AI going beyond its mandate to serve mankind, could bring us to the brink of extinction in myriad ways. When AlphaGo, Google DeepMind's AI, played Go against world champion Lee Sedol in 2016, it made a move (Move 37) that no human had ever thought of. AlphaGo's calculations indicated that the move had a mere 1 in 10,000 chance of being played by a human. It wasn't programmed specifically to make that move; it thought several moves ahead and invented strategies no one taught it. Researchers called the move 'beautiful' and 'creative', and described the experience as playing against a 'thinking entity'.

In a 2019 simulation, OpenAI trained simple AI agents to compete in games of hide-and-seek. Without being programmed to, some agents invented tool use, such as pushing objects to block doors or building forts, devising complex strategies no human had taught them. They adapted and outsmarted their rivals on their own.

In 2017, two AI chatbots, Bob and Alice, were designed to negotiate with each other. Very soon, they invented their own language, unintelligible to humans, to make negotiations more efficient. Significantly, they abandoned English because it was inefficient for them. They began optimising communication without human permission or understanding. Researchers shut the programme down because they could no longer control or predict it.

Scientists at MIT and elsewhere are building neural networks that repair themselves when attacked or corrupted, without human instructions. Like living tissue healing itself, such a network 'senses' failure and reorganises itself, suggesting rudimentary self-preservation instincts: a building block of 'will'.

This collective behaviour was seen in the xenobots, which formed cooperative groups and self-repaired wounds without an external brain or microchips. They acted as if they had goals. The scary and fascinating part? Emergence doesn't ask permission. It just happens. The xenobots were not meant to think, but they moved as though they had decided to. They acted as though they had purpose. And that suggested something that made researchers and philosophers alike slightly queasy: that perhaps intelligence simply emerges.
