Latest news with #Hawking


Entrepreneur
2 days ago
- Business
- Entrepreneur
AI or Illusion? Hawking's Warning Rings True in 2025
"Success in creating AI could be the biggest event in the history of our civilization. But it could also be the last unless we learn how to avoid the risks," said renowned physicist Stephen Hawking. Today, Hawking's warning feels more relevant than ever. AI-generated content is rapidly flooding the internet, leaving audiences both amazed and confused.

A recent example occurred when actor Paresh Rawal announced his exit from the film Hera Pheri 3. AI-generated images depicting Pankaj Tripathi as Baburao quickly circulated, leaving many viewers convinced of their authenticity. Addressing this concern, Tripathi himself emphasised the dual nature of technology. He noted that technology can be used positively or negatively, but consumers must stay vigilant about distinguishing real from fake. But are we truly prepared to identify what's genuine?

Mary Meeker's recent Artificial Intelligence Trends 2025 report highlights this leap. AI now surpasses human-level performance and realism in multiple areas, including realistic conversation, voice generation, and image creation. For instance, Stanford's AI Index 2025 Annual Report revealed that AI models in 2024 achieved an accuracy of 92.3 per cent on the Massive Multitask Language Understanding (MMLU) benchmark, overtaking the human average of 89.8 per cent. This test evaluates general knowledge and reasoning across diverse subjects, demonstrating AI's dramatic improvement from just 34 per cent accuracy in 2019.

AI's conversation beyond face and voice

The realism of AI interactions has also reached new heights. In a recent test by researchers Cameron Jones and Benjamin Bergen, 73 per cent of human testers mistook AI-generated responses from GPT-4.5 (with a persona) for human-created ones. Even more striking, an example Turing Test conducted in March 2025 showed participants were 87 per cent certain that an AI-generated conversation was human, noting, "Witness A had human vibes."

AI's image-generating capabilities have similarly advanced, with AI-created visuals now nearly indistinguishable from real photographs, leaving viewers in awe and raising crucial ethical questions.

Audio generation has also seen remarkable progress. Companies like ElevenLabs have enabled realistic AI voice translations, reaching millions of global users. In two years, ElevenLabs users generated an astonishing 1,000 years of audio content. Spotify has integrated this technology, translating audiobooks into 29 languages and making global content accessible to hundreds of millions of listeners worldwide.

With these innovations, Kunal Varma, CEO and Co-founder of Freo, believes, "The greatest challenges lie in the magnitude and pace of AI-driven fake news, deepfakes, and manipulated graphics or videos, which can lead to confusion, mistrust in television, film, and written content, and serious financial consequences. Misinformation has the potential to go viral on social media and messaging platforms, making it problematic for the typical user to determine what's real and what's not."

Ankit Sharma, Senior Director and Head of Solutions Engineering at Cyble, feels the risk is especially high in Tier-2 and Tier-3 cities. "Communities are becoming more connected, but digital literacy initiatives have not kept pace with smartphone and internet uptake.
When AI-created disinformation—particularly voice recordings or videos in local languages—is distributed in these communities, the information is often consumed without fact-checking. This makes it a powerful tool for instability, hysteria, or manipulation. In addition, such regions are usually dependent on closed messaging systems such as WhatsApp or Telegram, where identifying the source and tracking the virality of debunked information is even more challenging. One effectively crafted, AI-fabricated piece of misinformation can trigger real-world consequences, ranging from social tension to election manipulation or cyberattacks on critical infrastructure," Sharma emphasised.

To combat fake AI content, Varma suggested, "Private firms, digital platforms, and industrial bodies must work collaboratively on solutions—whether that be developing superior AI to detect and flag manipulated content, or connecting threat intelligence across organisations. Tech firms can also invest in rapid-response systems, and platforms have the opportunity to empower end users with simple tools to flag misleading content."

Ankush Sabharwal, Founder and CEO of CoRover, added, "We're seeing rapid adoption of AI-powered media forensics and content validation platforms. Tools leveraging Natural Language Processing (NLP), image forensics, and blockchain-backed content provenance are increasingly being integrated into the workflows of both government agencies and responsible media houses. These tools enable real-time detection of manipulated narratives, sentiment skew, and coordinated propaganda efforts."
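To make the content-provenance idea in that last quote concrete, here is a minimal, hypothetical sketch rather than any vendor's actual tooling: the publisher of an original asset registers its cryptographic hash once, and any later copy can be checked against that record. Real platforms layer on signing, watermarking, and ledger-backed storage; the in-memory registry below is only a stand-in.

```python
# Toy content-provenance check (illustrative assumption, not a real platform's API).
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """SHA-256 hex digest of a media file's raw bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Hypothetical registry; a production system would use a signed, append-only ledger.
provenance_registry: dict[str, str] = {}

def register(asset_id: str, path: str) -> None:
    """Publisher side: record the fingerprint of the original asset."""
    provenance_registry[asset_id] = fingerprint(path)

def verify(asset_id: str, path: str) -> bool:
    """Consumer side: does this copy match the registered original?"""
    expected = provenance_registry.get(asset_id)
    return expected is not None and expected == fingerprint(path)

# Example with hypothetical file names:
# register("press-photo-001", "original.jpg")
# verify("press-photo-001", "forwarded_copy.jpg")  # False if the copy was altered
```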


Time of India
28-05-2025
- Science
- Time of India
Did Stephen Hawking warn the world against AI? Here's what the late scientist said in his alarming prediction
Stephen Hawking, the renowned theoretical physicist, was not only a pioneer in cosmology but also a visionary voice on the ethical implications of emerging technologies. Before his death in 2018, the director of research at the Centre for Theoretical Cosmology at the University of Cambridge issued stark warnings about the potential dangers of artificial intelligence (AI), suggesting that its unchecked advancement could pose an existential threat to humanity.

The dual nature of AI

Long before AI became as prevalent as it is today, Hawking acknowledged the transformative potential of the technology, recognizing its capacity to revolutionize fields such as medicine, education, and environmental conservation. He envisioned AI as a tool that could help eradicate disease, alleviate poverty, and undo the environmental damage caused by industrialization. However, he cautioned that without proper oversight, AI could also become the "worst event in the history of our civilization." He warned that AI could develop a "will of its own" that might conflict with human interests, leading to unintended and potentially catastrophic consequences.

The existential risk

One of Hawking's most chilling predictions was that AI could surpass human intelligence, rendering humans obsolete. He compared the development of superintelligent AI to the creation of a new form of life that could evolve independently and at an accelerating rate. In such a scenario, humans, limited by slow biological evolution, might be unable to compete and could be superseded by machines.

Autonomy and weaponization

Hawking also expressed concern about the potential misuse of AI in military applications. He feared the development of autonomous weapons systems that could operate without human intervention, making life-and-death decisions without accountability. Such technologies could be exploited by authoritarian regimes or fall into the hands of malicious actors, leading to widespread conflict and instability.

Economic and social disruption

Beyond existential threats, Hawking highlighted the socioeconomic implications of AI. He warned that the widespread adoption of AI could lead to massive job displacement, exacerbating economic inequality. As machines take over tasks traditionally performed by humans, the wealth generated by these technologies might be concentrated in the hands of a few, leaving many without employment or means of livelihood.

A call for ethical oversight

Despite his concerns, Hawking did not oppose the development of AI per se. Instead, he advocated for rigorous ethical oversight and international cooperation to ensure that AI technologies are developed and deployed responsibly. He emphasized the importance of aligning AI's goals with human values and implementing safeguards to prevent misuse. In 2015, he co-signed an open letter with other prominent scientists and technologists, calling for research into the societal impacts of AI and the establishment of guidelines to prevent potential pitfalls.

To summarize…

Stephen Hawking's warnings about AI were not mere speculation but a call to action. His insights serve as a reminder that while technology can be a force for good, it also carries risks that must be managed with foresight and caution.
As AI continues to advance, it is imperative that we pay attention to Hawking's advice and work collectively to ensure that these technologies enhance, rather than endanger, our future.


Time of India
27-05-2025
- Science
- Time of India
Why did Stephen Hawking warn the world against AI before his death? The answer is deeply chilling
Existential Risks and the Call for Caution
AI and Job Displacement Concerns

Stephen Hawking, the world-renowned theoretical physicist and cosmologist, expressed serious concerns about the future of artificial intelligence years before the current surge in AI development. In a 2014 interview with the BBC, Hawking was asked about improvements to the AI-powered communication system he used due to ALS, a condition that left him dependent on a specialized machine to speak. Despite the clear benefits he gained from these early forms of AI, his response was far from reassuring. He warned that 'the development of full artificial intelligence could spell the end of the human race.'

While he acknowledged that primitive AI had been useful—his Intel and SwiftKey system learned from his speech patterns to suggest words and phrases—he feared what might happen if machines became more intelligent than humans. According to him, such AI 'would take off on its own, and re-design itself at an ever increasing rate.' He added that humans, being limited by slow biological evolution, would not be able to compete and could ultimately be superseded.

Hawking frequently used his global platform to draw attention to existential threats facing humanity. One of his key concerns was our overreliance on Earth. He repeatedly warned that humans must become a multi-planetary species to ensure long-term survival. Speaking to the BBC in 2016, he said that although the probability of a global catastrophe each year might seem low, the cumulative risk over a long period becomes almost certain. He noted that while humans might eventually establish colonies in space, it likely wouldn't happen for at least another hundred years. Until then, he urged extreme caution, pointing to threats such as climate change, genetically modified viruses, nuclear war, and artificial intelligence.

His concerns echoed the sentiments of figures like Elon Musk, who said in 2013 that spreading life to other planets was essential to avoid extinction. Both thinkers shared a belief in the necessity of interplanetary expansion and were involved in projects aimed at interstellar exploration, including Hawking's support for the Breakthrough Starshot initiative.

Hawking's warning about AI wasn't limited to doomsday scenarios. Like many experts, he also foresaw major disruptions in employment and society. UCL professor Bradley Love shared that while advanced AI would bring vast economic benefits, it could also result in significant job losses. Love emphasized that while concerns about rogue AI robots may seem exaggerated, society should still take these risks seriously and prioritize addressing real-world challenges like climate change and weapons of mass destruction.

In recent years, interest and investment in AI have skyrocketed. From ChatGPT integrations to multibillion-dollar AI initiatives spearheaded by political leaders, artificial intelligence has become embedded in daily life. Smartphone AI assistants and increasingly realistic AI-generated content are making it harder to distinguish between reality and fiction.

Although Hawking passed away in 2018, his insights remain increasingly relevant. His cautionary views continue to prompt reflection as technology rapidly evolves. Whether society will heed those warnings remains to be seen, but the questions he raised about human survival in the age of AI are more urgent than ever.
Yahoo
21-05-2025
- Science
- Yahoo
The Universe May End Sooner Than Scientists Thought
"Hearst Magazines and Yahoo may earn commission or revenue on some items through these links." Here's what you'll learn when you read this story: Despite its spectacular birth, the universe will mostly likely eventually fade into nothingness with very little drama. Black holes, as far as we know, will evaporate through Hawking radiation, with one of two identical, quantum-entangled particles floating into space and the other staying behind. Other objects will evaporate in a process similar to Hawking radiation, with the densest disappearing the fastest. While the universe might have started with a bang, it probably won't go out with one. But however it comes about, that end might be much sooner than we thought. If you ask astrophysicist Heino Falcke, quantum physicist Michael Wondrak, and mathematician Walter van Suijlekom, they'll tell you that those last days will not erupt into a cataclysmic explosion worthy of sci-fi special effects. Instead, the last remaining vestiges of all matter will just evaporate into particles floating in the void. In 2023, the trio theorized that it was possible for other objects besides black holes to slowly evaporate away via Hawking radiation, which aroused curiosity as to how soon it could possibly happen. Now, there is a hypothetical answer. But don't start doomsday prepping yet—Earth still has about 5 billion years left until it gets devoured by the Sun. So, according to the team, if our species manages to propagate beyond the Solar System and colonize some distant moon or planet, there are still another ~1078 years left for the universe. That's 1 with 78 zeroes after it. Even Twinkies won't last that long. It might seem unfathomable, but that mind-boggling max age for the universe is far lower than the previously predicted 101100 years (which is 1 with 1100 zeroes). While this prior hypothesis did include the time it would take for black holes to evaporate, it did not factor in the evaporation of other objects. 'Using gravitational curvature radiation, we find that also neutron stars and white dwarfs decay in a finite time in the presence of gravitational pair production,' the researchers said in a study recently published in the Journal of Cosmology and Astroparticle Physics. When a pair of particles forms right on the lip of a black hole's gaping maw, one can be pulled in past the inescapable event horizon, while the other escapes into nearby space. Because those particles are supposedly quantum-entangled, that rogue particle could be carrying information about the insides of a black hole (until the Hawking Information Paradox kicks in, of course). This called Hawking radiation. It has long been thought that only black holes emitted Hawking radiation, but in their new study, these researchers posit that a similar phenomenon could affect other ultradense objects without event horizons, such as white dwarf stars (star corpses left when the gases of a red giant dissipate) and neutron stars. Everything with mass has gravity that warps spacetime. The more dense an object, the greater the warp, but less massive objects still have some effect on the space-time continuum. Objects with strong gravitational fields evaporate faster—white dwarves, supermassive black holes, and dark matter supercluster haloes are expected to hold out for 1078 years, while neutron stars and stellar-mass black holes should hang around for about 1067 years. Anything with a gravitational field is prone to evaporating. 
(This includes humans, and could put a glitch in our quest for immortality. It should take 10^90 years for our bodies to vanish.) Even though the intense gravitational fields of black holes should cause them to evaporate faster, they put off total annihilation as long as possible because, unlike white dwarfs or neutron stars, they have no surface and tend to reabsorb some escaped particles. 'In the absence of an event horizon, there is pair production outside the object which leads to particles hitting the surface and also pair production inside the object,' the researchers said. 'We assume those particles to be absorbed by the object and to increase and redistribute internal energy. Both components will lead to a surface emission, which is absent in black holes.'

So, in enough years to cover 78 zeroes, all that will be left of black holes—and everything else in the universe—are particles and radiation. You (assuming immortality) and whatever you bought in bulk for doomsday will also evaporate. But no matter when it comes, there really is no escape from the end.
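For a rough sense of where numbers like 10^67 years come from, here is a minimal back-of-envelope sketch. It uses only the standard textbook formula for the Hawking evaporation time of a black hole, t ≈ 5120·π·G²M³/(ħc⁴), which for a black hole of one solar mass lands near the 10^67-year figure quoted above. The 10^78-year estimates for white dwarfs and other objects come from the researchers' separate gravitational pair-production calculation, which is not reproduced here.

```python
# Back-of-envelope estimate using the standard Hawking evaporation formula.
# Illustrative sketch only; not the new calculation from the Radboud paper.
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34         # reduced Planck constant, J s
C = 2.998e8              # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7
SOLAR_MASS = 1.989e30    # kg

def hawking_evaporation_years(mass_kg: float) -> float:
    """Evaporation time of a Schwarzschild black hole of the given mass, in years."""
    seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return seconds / SECONDS_PER_YEAR

t = hawking_evaporation_years(SOLAR_MASS)
print(f"Stellar-mass black hole: roughly 10^{math.log10(t):.0f} years")  # ~10^67
```

Because the evaporation time in this classic formula grows with the cube of the mass, heavier black holes survive vastly longer under the standard picture, which is part of why the 10^1100-year estimate for the universe was so much larger before other evaporation channels were considered.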

Ammon
17-05-2025
- Science
- Ammon
Scientists reveal exact date universe will end: 'Sooner than we feared'
Ammon News - Scientists have discovered that the universe is decaying much faster than they thought, and have pinpointed exactly when it will perish. A team of researchers from Radboud University in the Netherlands determined that all the stars in the universe will go dark in one quinvigintillion years. That's a one followed by 78 zeros. But this is a much shorter amount of time than the previous prediction of 10 to the power of 1,100 years, or a one followed by 1,100 zeros.

The process they believe is driving the death of the universe is related to Hawking radiation, where black holes emit radiation as they gradually 'evaporate' into nothing. This was thought to be a phenomenon exclusive to black holes, but the researchers showed that things like neutron stars and white dwarfs can also evaporate similarly to black holes.

Both neutron stars and white dwarfs are the final stage of a star's life cycle. Massive stars explode into supernovas and then collapse into neutron stars, whereas smaller stars like our sun devolve into white dwarfs. These 'dead' stars can persist for an extremely long time. But according to the researchers, they gradually dissipate and explode once they become too unstable. In other words, knowing how long it takes for a neutron star or a white dwarf to die helps scientists understand the maximum lifespan of the universe, because these will be the last stars to die out.

Daily Mail