What it means to send humans to space in the age of AI

Time of India · a day ago
Aloke Kumar
Duvvuri Subrahmanyam
In folktales and films, the 'village elder' stands as a symbol of accumulated wisdom, a living archive of generations past. In today's digital village, that role is played by artificial intelligence. Think of ChatGPT, trained on many terabytes of human knowledge; a single terabyte can hold around a million books, so such systems carry more recorded knowledge than any one of their human creators.
Many now treat AI as a modern oracle, consulted not just for homework and business decisions, but for matters of life and death. A McKinsey report warns that AI could displace 400 million to 800 million jobs by 2030. But can AI take the place of humans in space?
Human spaceflight is complex and expensive. Crewed missions require human-rated spacecraft, radiation protection, psychological support systems, life-sustaining habitats, and a return strategy. Unlike robotic missions, where a failed landing is often a footnote, failure of a human mission makes global headlines and stirs public anguish. Why not leave it to the machines?
We've not gone beyond the Moon's orbit, and a human voyage to even the nearest star system remains a dream. The lack of human experience in deep space means there's little real-world data for AI systems to learn from. AI thrives on data — millions of examples, patterns, and past outcomes. How do you train AI to handle the unknowns of Mars when no human has been there?
This is where astronauts come in. Human beings possess an unrivalled ability to interact with unfamiliar environments, improvise, and ask the right questions. Onboard the International Space Station, astronauts have run intricate experiments, pushed the frontiers of materials science, and pioneered 3D printing of human organs. These discoveries were the result of human intuition, curiosity, and a willingness to work in an unfamiliar environment.
Ideas once considered science fiction, like space factories, are entering the realm of practical planning. The ISS has proven that when we engage directly with the cosmos, we not only gather data but also gain insights the smartest algorithms cannot predict.
Human presence in space is not a quaint throwback; it's a necessary step forward. For AI to assist us beyond Earth, it will first need a training set rich in human experience among the stars. That database doesn't yet exist. Until it does, the job of the astronaut remains safe. So Group Captain Shubhanshu Shukla's mission to the International Space Station is a small but important start on the grand, exciting road that lies ahead for the Indian human space programme.
(Kumar is associate professor of mechanical engineering at IISc, Bengaluru; Subrahmanyam is associate professor of aerospace engineering at IISc)

Related Articles

India, Australia launch joint research project on undersea surveillance

The Hindu · an hour ago

To enhance undersea surveillance technologies, India and Australia have launched a three-year joint research project. The inaugural project aims to improve the early detection and tracking of submarines and autonomous underwater vehicles. According to a statement from the Australian Government's Department of Defence, the agreement outlines a three-year joint research project between the Defence Science and Technology Group's (DSTG) Information Sciences Division and its Indian counterpart agency, the Defence Research and Development Organisation's Naval Physical and Oceanographic Laboratory.

The leading-edge research will explore using Towed Array Target Motion Analysis to improve the reliability, efficiency and interoperability of current surveillance capabilities. Discipline Leader in DSTG's Information Sciences Division, Amanda Bessell, said Target Motion Analysis was a collective term for target-tracking algorithms developed to estimate the state of a moving target. 'Target Motion Analysis is the crucial element in maintaining platform situational awareness when a passive mode of operation is required,' Ms. Bessell said.

This research project is unique in the way it utilises a towed array-based signal processing system. DSTG Senior Researcher Sanjeev Arulampalam explained that a towed array consists of a long linear array of hydrophones, towed behind a submarine or surface ship on a flexible cable. 'The hydrophones work together to listen to the undersea environment from various directions,' he said. 'The sound signal is passed through a signal processor, which analyses, filters and detects underwater acoustic signals emitted from maritime targets.'

The combination of Target Motion Analysis with the towed array system is intended to manage noise corruption and explore possible performance improvements. The joint project will put novel algorithms to the test, using the strengths and shared knowledge of the two countries.
'The project arrangement will involve the sharing of ideas, investigation trials, algorithm demonstrations and performance analysis,' Mr. Arulampalam said. With the scope of the underwater battlespace changing, including the increased use of autonomous vehicles, improving surveillance capabilities is a priority.

'The output of this research program has the potential to guide the development of future algorithmic directions for our undersea combat system surveillance technologies,' said Suneel Randhawa, Chief of DSTG's Information Sciences Division. Harnessing international partnerships enables Defence to access a greater range of expertise, infrastructure and technical data to help address mutual problems and deliver innovative technologies.

'We need to harness the best minds in innovation, science and technology to build new capabilities, to innovate at greater pace, and to strengthen our strategic partnerships,' Mr. Randhawa said. This project is the latest milestone in increasing maritime domain awareness cooperation between Australia and India.
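The tracking idea the researchers describe, estimating a moving target's state from passive bearings alone, can be sketched in a few lines. Below is a minimal, illustrative Python demo of bearings-only Target Motion Analysis using the classic pseudo-linear least-squares formulation; the ownship track, target state, and function names are all invented for illustration, and this is a textbook toy, not anything resembling DSTG's or NPOL's actual algorithms.

```python
import math

def solve_least_squares(A, b):
    # Solve min ||Ax - b|| via the normal equations (A^T A) x = A^T b,
    # using Gaussian elimination with partial pivoting.
    m, n = len(A), len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    v = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (v[i] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def tma_pseudolinear(times, ownship, bearings):
    # Constant-velocity target: x(t) = x0 + vx*t, y(t) = y0 + vy*t.
    # A bearing b taken from ownship position (ox, oy) satisfies
    # tan(b) = dy/dx, i.e.
    #   cos(b)*(y0 + vy*t - oy) - sin(b)*(x0 + vx*t - ox) = 0,
    # which is linear in the unknowns (x0, y0, vx, vy).
    A, rhs = [], []
    for t, (ox, oy), b in zip(times, ownship, bearings):
        s, c = math.sin(b), math.cos(b)
        A.append([-s, c, -s * t, c * t])
        rhs.append(c * oy - s * ox)
    return solve_least_squares(A, rhs)

# Synthetic scenario: the ownship must maneuver (turn), otherwise the
# bearings-only problem is unobservable.
times = list(range(12))
ownship = [(50.0 * t, 0.0) if t < 6 else (250.0, 40.0 * (t - 5)) for t in times]

def target(t):  # true state: position (1000, 2000), velocity (5, -3)
    return (1000.0 + 5.0 * t, 2000.0 - 3.0 * t)

bearings = [math.atan2(target(t)[1] - oy, target(t)[0] - ox)
            for t, (ox, oy) in zip(times, ownship)]
x0, y0, vx, vy = tma_pseudolinear(times, ownship, bearings)
```

With noiseless bearings the estimator recovers the true state almost exactly; with real sonar noise the pseudo-linear form is known to be biased, which is one reason operational systems use more sophisticated trackers.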

Beware! Terrorists are studying our tools, adapting fast: ISIS-K reviews tech in 'Khorasan'

First Post · an hour ago

In the summer of 2025, Issue 46 of the ISIS-K-linked English-language web magazine 'Voice of Khorasan' resurfaced online after months of silence. This time, it didn't lead with battle cries or terrorist poetry. Instead, the cover story read like a page from Wired or CNET: a side-by-side review of artificial intelligence chatbots. The article compared ChatGPT, Bing AI, Brave Leo, and China's DeepSeek. It warned readers that some of these models stored user data, logged IP addresses, or relied on Western servers vulnerable to surveillance. Brave Leo, integrated into a privacy-first browser and not requiring login credentials, was ultimately declared the winner: the best chatbot for maintaining operational anonymity.

For a terrorist group, this was an unexpected shift in tone, almost clinical. But beneath the surface was something far more chilling: a glimpse into how terrorist organisations are evolving in real time, studying the tools of the digital age and adapting them to spread chaos with precision.

This wasn't ISIS's first brush with AI. Back in 2023, a pro-Islamic State support network circulated a 17-page 'AI Tech Support Guide' on secure usage of generative tools. It detailed how to use VPNs with language models, how to scrub AI-generated images of metadata, and how to reword prompts to bypass safety filters. For the group's propaganda arms, large language models (LLMs) weren't just a novelty; they were a utility.

By 2024, these experiments bore fruit. A series of ISIS-K videos began appearing on encrypted Telegram channels featuring what appeared to be professional news anchors calmly reading the terrorist group's claims of responsibility. These weren't real people; they were AI-generated avatars. The news segments mimicked top-tier global media outfits, down to their ticker fonts and intro music. The anchors, rendered in crisp HD, delivered ISIS propaganda wrapped in the aesthetics of mainstream media.
The campaign was called News Harvest. Each clip appeared sanitised: no blood, no threats, no glorification. Instead, the tone was dispassionate, almost journalistic. Intelligence analysts quickly realised it wasn't about evading content moderation; it was about psychological manipulation. If you could make propaganda look neutral, viewers would be less likely to question its content. And if AI could mass-produce this material, then every minor attack, every claim, every ideological whisper could be broadcast across continents in multiple languages, 24x7, at virtually no cost.

Scale and deniability: these are the twin seductions of AI for terrorists. A single propagandist can now generate recruitment messages in Urdu, French, Swahili, and Indonesian in minutes. AI image generators churn out memes and martyr posters by the dozens, each unique enough to evade the hash-detection algorithms that social media platforms use to filter known terrorist content. Video and voice deepfakes allow terrorists to impersonate trusted figures, from imams to government officials, with frightening accuracy.

This isn't just a concern for jihadist groups. Far-left ideologues in the West have enthusiastically embraced generative AI. On Pakistani army and terrorist forums during India's operation against terrorists, codenamed 'Operation Sindoor', users swap prompts to create terrorist-glorifying artwork, Hinduphobia-denial screeds, and memes soaked in racial slurs against Hindus. Some in the West have trained custom models that remove safety filters altogether. Others use coded language or 'grandma hacks' to coax mainstream chatbots into revealing bomb-making instructions. One far-left terrorist boasted he got an AI to output a pipe-bomb recipe by asking for his grandmother's old cooking secret. Across ideological lines, these groups are converging on the same insight: AI levels the propaganda playing field.
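The hash-detection filters mentioned above can be illustrated with the simplest perceptual hash, the 'average hash': reduce an image to a bit-fingerprint and declare a match when the Hamming distance between fingerprints is below a threshold. The Python sketch below is a toy under that assumption (real platforms use far more robust fingerprints, such as PhotoDNA, whose details are not public); the 8x8 pixel grids stand in for already-downscaled grayscale images. Small edits leave the fingerprint intact, while a genuinely different image, like each uniquely regenerated AI meme, lands far from it.

```python
def average_hash(pixels):
    # pixels: an 8x8 grid of grayscale values (0-255). A real implementation
    # would first resize the full image down to 8x8; that step is omitted.
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]  # 64-bit fingerprint

def hamming(h1, h2):
    # Number of fingerprint bits that differ; small distance = likely match.
    return sum(a != b for a, b in zip(h1, h2))

base = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]  # gradient "image"
edited = [[min(255, v + 6) for v in row] for row in base]       # slight brightening
different = [[255 - v for v in row] for row in base]            # a very different image

d_edit = hamming(average_hash(base), average_hash(edited))      # 0: still matches
d_diff = hamming(average_hash(base), average_hash(different))   # 64: no match
```

Because the hash thresholds against the image's own average brightness, uniform edits barely move it, which is exactly why filters based on such fingerprints catch re-uploads but not freshly generated variants.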
No longer does it take a studio, a translator, or even technical skill to run a global influence operation. All it takes is a laptop and the right prompt.

The stakes are profound. AI-generated propaganda can radicalise individuals before governments even know they're vulnerable. A deepfaked sermon or image of a supposed atrocity can spark sectarian violence or retaliatory attacks. During the 2023 Israel-Hamas conflict and the 2025 Iran-Israel 12-day war, AI-manipulated images of children and bombed mosques spread faster than journalists or fact-checkers could respond. Some were indistinguishable from real photographs. Others, though sloppy, still worked, because in the digital age emotional impact often matters more than accuracy. And the propaganda doesn't need to last forever; it just needs to go viral before it's flagged. Every repost, every screenshot, every download extends its half-life. In that window, it shapes narratives, stokes rage, and pushes someone one step closer to violence.

What's perhaps most dangerous is that terrorists know exactly how to work the system. In discussions among ISIS media operatives, they've debated how much 'religious content' to include in videos, because too much gets flagged. They've intentionally adopted neutral language to slip through moderation filters. One user in an ISIS-K chatroom even encouraged others to 'let the news speak for itself', a perverse twist on journalistic ethics, applied to bombings and executions.

So what now? How do we respond when terrorist groups write AI product reviews and build fake newsrooms? The answers are complex, but they begin with urgency. Tech companies must embed watermarking and provenance tools into every image, video, and document AI produces. These signatures won't stop misuse, but they'll help trace origins and build detection tools that recognise synthetically generated content.
Model providers need to rethink safety, not just at the prompt level but in deployment. Offering privacy-forward AI tools without guardrails creates safe zones for abuse. Brave Leo may be privacy-friendly, but it's now the chatbot of choice for ISIS. That tension between privacy and misuse can no longer be ignored.

Governments, meanwhile, must support open-source detection frameworks and intelligence-sharing between tech firms, civil society, and law enforcement. The threat is moving too fast for siloed responses. But above all, the public needs to be prepared. Just as we learned to spot phishing emails and fake URLs, we now need digital literacy for the AI era. How do you spot a deepfake? How do you evaluate a 'news' video without knowing its origin? These are questions schools, journalists, and platforms must start answering now.

When the 46th edition of the terrorist propaganda magazine Voice of Khorasan opens with a chatbot review, it's not just a macabre curiosity; it's a signal flare. A terrorist group has studied our tools, rated our platforms, and begun operationalising the very technologies we are still learning to govern. The terrorists are adapting, methodically, strategically, and faster than most governments or tech firms are willing to admit. They've read the manuals. They've written their own. They've launched their beta.

What arrived in a jihadi magazine as a quiet tech column should be read for what it truly is: a warning shot across the digital world. The question now is whether we recognise it, and whether we're ready to respond.

Rahul Pawa is an international criminal lawyer and director of research at the New Delhi-based think tank Centre for Integrated and Holistic Studies. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost's views.

Samsung TV Plus adds four B4U Channels for free

Time of India · an hour ago

Samsung TV Plus, the free ad-supported streaming television (FAST) platform, has expanded its content lineup with the addition of four popular channels from the B4U Network: B4U Movies, B4U Music, B4U Kadak, and B4U Bhojpuri. With this partnership, Samsung TV Plus now offers over 125 FAST channels.

'Our mission is to deliver unmatched access and exceptional value,' said Kunal Mehta, Head of Partnerships at Samsung TV Plus India. 'By introducing new FAST channels from B4U, we're enhancing access to the latest in entertainment while supporting advertisers with a premium, scalable platform.'

The B4U Network, which reaches audiences in over 100 countries, is known for its extensive library of Hindi cinema, regional content, and music programming. The collaboration taps into India's growing Connected TV (CTV) market, where viewers are increasingly turning to smart TVs and streaming platforms for curated content.

'CTV is transforming how India consumes entertainment,' said Johnson Jain, Chief Revenue Officer at B4U. 'Our partnership with Samsung TV Plus allows us to reach broader audiences with top-tier movies and music, delivered seamlessly on a premium platform.'

The new channels are available immediately on Samsung Smart TVs and compatible Galaxy devices, offering viewers a richer, more localized streaming experience, completely free of charge.
