
Why did Stephen Hawking warn the world against AI before his death? The answer is deeply chilling
Stephen Hawking, the world-renowned theoretical physicist and cosmologist, expressed serious concerns about the future of artificial intelligence years before the current surge in AI development. In a 2014 interview with the BBC, Hawking was asked about improvements to the AI-powered communication system he used due to ALS, a condition that left him dependent on a specialized machine to speak. Despite the clear benefits he gained from these early forms of AI, his response was far from optimistic.

Hawking warned that 'the development of full artificial intelligence could spell the end of the human race.' While he acknowledged that primitive AI had been useful (his Intel and SwiftKey system learned from his speech patterns to suggest words and phrases), he feared what might happen if machines became more intelligent than humans. According to him, such AI 'would take off on its own, and re-design itself at an ever increasing rate.' He added that humans, being limited by slow biological evolution, would not be able to compete and could ultimately be overtaken.

Existential Risks and the Call for Caution

Hawking frequently used his global platform to draw attention to existential threats facing humanity. One of his key concerns was our overreliance on Earth. He repeatedly warned that humans must become a multi-planetary species to ensure long-term survival. Speaking to the BBC in 2016, he said that although the probability of a global catastrophe each year might seem low, the cumulative risk over a long period becomes almost inevitable.

He noted that while humans might eventually establish colonies in space, it likely wouldn't happen for at least another hundred years. Until then, he urged extreme caution, pointing to threats such as climate change, genetically modified viruses, nuclear war, and artificial intelligence.

These concerns echoed the sentiments of figures like Elon Musk, who said in 2013 that spreading life to other planets was essential to avoid extinction. Both thinkers shared a belief in the necessity of interplanetary expansion and were involved in projects aimed at interstellar exploration, including Hawking's support for the Breakthrough Starshot initiative.

AI and Job Displacement Concerns

Hawking's warning about AI wasn't limited to doomsday scenarios. Like many experts, he also foresaw major disruptions in employment and society. UCL professor Bradley Love noted that while advanced AI would bring vast economic benefits, it could also result in significant job losses. Love emphasized that while concerns about rogue AI robots may seem exaggerated, society should still take these risks seriously and prioritize addressing real-world challenges like climate change and weapons of mass destruction.

In recent years, interest and investment in AI have skyrocketed. From ChatGPT integrations to multibillion-dollar AI initiatives spearheaded by political leaders, artificial intelligence has become embedded in daily life. Smartphone AI assistants and increasingly realistic AI-generated content are making it harder to distinguish between reality and simulation.

Although Hawking passed away in 2018, his insights remain increasingly relevant. His cautionary views continue to prompt reflection as technology rapidly evolves. Whether society will heed those warnings remains to be seen, but the questions he raised about human survival in the age of AI are more urgent than ever.

Related Articles


Time of India - 4 hours ago
Cornelis Networks releases tech to speed up AI datacenter connections
By Stephen Nellis

SAN FRANCISCO: Cornelis Networks on Tuesday released a suite of networking hardware and software aimed at linking together up to half a million artificial intelligence chips.

Cornelis, which was spun out of Intel in 2020 and is still backed by the chipmaker's venture capital fund, is targeting a problem that has bedeviled AI datacenters for much of the past decade: AI computing chips are very fast, but when many of those chips are strung together to work on big computing problems, the network links between the chips are not fast enough to keep the chips supplied with data.

Nvidia took aim at that problem with its $6.9 billion purchase in 2020 of networking chip firm Mellanox, whose gear uses a network protocol called InfiniBand that was created in the 1990s specifically for supercomputers. Networking chip giants such as Broadcom and Cisco Systems are working to solve the same set of technical issues with Ethernet technology, an open standard that has connected most of the internet since the 1980s.

The Cornelis "CN5000" networking chips use OmniPath, a network technology created by Cornelis. The chips will ship to initial customers such as the U.S. Department of Energy in the third quarter of this year, Cornelis CEO Lisa Spelman told Reuters on May 30.

Although Cornelis has backing from Intel, its chips are designed to work with AI computing chips from Nvidia, Advanced Micro Devices or any other maker using open-source software, Spelman said. She said the next version of Cornelis chips, due in 2026, will also be compatible with Ethernet networks, aiming to ease any customer concerns that buying Cornelis chips would lock a data center into its technology.

"There's 45-year-old architecture and a 25-year-old architecture working to solve these problems," Spelman said. "We like to offer a new way and a new path for customers that delivers you both the (computing chip) performance and excellent economic performance as well."
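To make the bottleneck Cornelis is targeting concrete, here is a rough back-of-the-envelope sketch. The model size, link speeds and per-step compute time below are illustrative assumptions, not figures from the article or from any Cornelis, Nvidia or Ethernet product.

```python
# Rough back-of-the-envelope sketch of the interconnect bottleneck described above.
# All figures (model size, link bandwidth, step time) are illustrative assumptions.

def allreduce_seconds(model_bytes: float, link_gbps: float) -> float:
    """Approximate time to synchronize gradients with a ring all-reduce.

    Each chip sends and receives roughly 2x the model size per training step
    (independent of cluster size, ignoring latency and protocol overhead).
    """
    link_bytes_per_s = link_gbps * 1e9 / 8  # convert Gbit/s to bytes/s
    return 2 * model_bytes / link_bytes_per_s

# Hypothetical 70-billion-parameter model with 2-byte (fp16/bf16) gradients.
model_bytes = 70e9 * 2

for gbps in (100, 400, 800):  # assumed per-chip link speeds in Gbit/s
    comm = allreduce_seconds(model_bytes, gbps)
    print(f"{gbps:>4} Gbit/s link: ~{comm:.1f} s of communication per step")

# If the chips themselves finish their math in roughly a second per step,
# anything much above that is time the expensive accelerators sit idle --
# which is why fabrics such as InfiniBand, Ethernet variants and OmniPath
# compete to keep the chips fed with data.
```

Under these assumptions, a slower link leaves the accelerators waiting on the network for most of each step, which is the idle time the competing interconnect vendors are trying to eliminate.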


Time of India - 5 hours ago
Meta's nuclear deal signals AI's growing energy needs
Meta's deal to help revive an Illinois nuclear power plant was one way of signaling that the parent company of Facebook and Instagram is preparing for a future built with artificial intelligence.

Meta's 20-year deal with Constellation Energy follows similar maneuvers from Amazon, Google and Microsoft, but it will take years before nuclear energy can meet the tech industry's insatiable demand for new sources of electricity.

AI uses vast amounts of energy, much of which comes from burning fossil fuels, which causes climate change. The unexpected popularity of generative AI products over the past few years has disrupted many tech companies' carefully laid plans to supply their technology with energy sources that don't contribute to climate change. Even as Meta anticipates more nuclear in the future, its more immediate plans rely on natural gas. Entergy, one of the nation's largest utility providers, has been fast-tracking plans to build gas-fired power plants in Louisiana to prepare for a massive Meta data center complex.

Is the US ready for nuclear-powered AI?

France has touted its ample nuclear power - which produces about 75% of the nation's electricity, the highest level in the world - as a key element in its pitch to be an AI leader. Hosting an AI summit in Paris earlier this year, French President Emmanuel Macron cited President Donald Trump's "drill baby drill" slogan and offered another: "Here there's no need to drill, it's just plug baby plug."

In the US, however, most of the electricity consumed by data centers comes from fossil fuels - burning natural gas and sometimes coal - according to an April report from the International Energy Agency. As AI demand rises, the main source of new supply over the coming years is expected to be gas-fired plants, a cheap and reliable source of power but one that produces planet-warming emissions.

Renewable energy sources such as solar and wind account for about 24% of data center power in the US, while nuclear comprises about 15%, according to the IEA. It will take years before enough climate-friendlier power sources, including nuclear, could start slowing the expansion of fossil fuel power generation. A report released by the US Department of Energy late last year estimated that the electricity needed for data centers in the US tripled over the past decade and is projected to double or triple again by 2028, when it could consume up to 12% of the nation's electricity.

Why does AI need so much energy?

It takes a lot of computing power to make AI chatbots and the systems they're built on, such as Meta's Llama. It starts with a process called training or pretraining - the "P" in ChatGPT - that involves AI systems "learning" from the patterns of huge troves of data. To do that, they need specialized computer chips - usually graphics processors, or GPUs - that can run many calculations at a time on a network of devices in communication with each other.

Once trained, a generative AI tool still needs electricity to do the work, such as when you ask a chatbot to compose a document or generate an image. That process is called inferencing: a trained AI model takes in new information and makes inferences from what it already knows to produce a response. All of that computing takes a lot of electricity and generates a lot of heat.
To keep it cool enough to work properly, data centers need air conditioning. That can require even more electricity, so most data center operators look for other cooling techniques that usually involve pumping in water.
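As a rough illustration of the training-versus-inference distinction described above, here is a minimal toy sketch. It is not Meta's Llama or any production system, just a hypothetical one-parameter model showing why training involves vastly more arithmetic (and therefore energy) than answering a single query.

```python
# Illustrative sketch only: a toy model, not any real AI system.
# "Training" loops over the data many times, updating the model's parameter;
# "inference" runs the trained model once per request.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (x, y) pairs
w = 0.0     # the model's single learnable weight
ops = 0     # rough count of arithmetic operations

# Training (pretraining): repeated passes over the data, nudging the weight
# toward a better fit -- this is where the bulk of the computation happens.
for epoch in range(1000):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # gradient of the squared error
        w -= 0.01 * grad            # gradient-descent update
        ops += 3                    # multiply/accumulate steps, roughly

print(f"trained weight ~{w:.2f} after ~{ops} operations")

# Inference: a single forward pass per request -- far less work per call,
# though at billions of requests the total still adds up.
query = 5.0
print(f"inference for x={query}: prediction {w * query:.2f} in ~1 operation")
```

Real models repeat the same pattern with billions of parameters and trillions of training examples, which is why the training runs, and the data centers that host them, draw so much power.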


Time of India - 7 hours ago
AI 'vibe coding' startups burst onto scene with sky-high valuations
By Anna Tong, Krystal Hu

NEW YORK: Two years after the launch of ChatGPT, return on investment in generative AI has been elusive, but one area stands out: software development. So-called code generation or "code-gen" startups are commanding sky-high valuations as corporate boardrooms look to use AI to aid, and sometimes to replace, expensive human software engineers.

Cursor, a code generation startup based in San Francisco whose tool can suggest and complete lines of code and write whole sections of code autonomously, raised $900 million at a $10 billion valuation in May from a who's who list of tech investors, including Thrive Capital, Andreessen Horowitz and Accel.

Windsurf, a Mountain View-based startup behind the popular AI coding tool Codeium, attracted the attention of ChatGPT maker OpenAI, which is now in talks to acquire the company for $3 billion, sources familiar with the matter told Reuters. Its tool is known for translating plain English commands into code, sometimes called "vibe coding," which allows people with no knowledge of computer languages to write software. OpenAI and Windsurf declined to comment on the acquisition.

"AI has automated all the repetitive, tedious work," said Scott Wu, CEO of code-gen startup Cognition. "The software engineer's role has already changed dramatically. It's not about memorizing esoteric syntax anymore."

Founders of code-gen startups and their investors believe they are in a land grab, with a shrinking window to gain a critical mass of users and establish their AI coding tool as the industry standard. But because most are built on AI foundation models developed elsewhere, such as those from OpenAI, Anthropic or DeepSeek, their costs per query are also growing, and none are yet profitable. They are also at risk of being disrupted by Google, Microsoft and OpenAI, which all announced new code-gen products in May; Anthropic is working on one as well, two sources familiar with the matter told Reuters.

The rapid growth of these startups is coming despite competing on big tech's home turf. Microsoft's GitHub Copilot, launched in 2021 and considered code-gen's dominant player, grew to over $500 million in revenue last year, according to a source familiar with the matter. Microsoft declined to comment on GitHub Copilot's revenue. On Microsoft's earnings call in April, the company said the product has over 15 million users.

LEARN TO CODE?

As AI revolutionizes the industry, many jobs - particularly entry-level coding positions that are more basic and involve repetition - may be eliminated. Signalfire, a VC firm that tracks tech hiring, found that new hires with less than a year of experience fell 24% in 2024, a drop it attributes to tasks once assigned to entry-level software engineers now being handled in part by AI.

Google's CEO said in April that "well over 30%" of Google's code is now AI-generated, and Amazon CEO Andy Jassy said last year the company had saved "the equivalent of 4,500 developer-years" by using AI. Google and Amazon declined to comment. In May, Microsoft CEO Satya Nadella said at a conference that approximately 20 to 30% of the company's code is now AI-generated. The same month, the company announced layoffs of 6,000 workers globally, with over 40% of those being software developers in Microsoft's home state of Washington.

"We're focused on creating AI that empowers developers to be more productive, creative, and save time," a Microsoft spokesperson said. "This means some roles will change with the revolution of AI, but human intelligence remains at the center of the software development life cycle."

MOUNTING LOSSES

Some "vibe-coding" platforms already boast substantial annualized revenues. Cursor, with just 60 employees, went from zero to $100 million in recurring revenue by January 2025, less than two years after its launch. Windsurf, founded in 2021, launched its code generation product in November 2024 and is already bringing in $50 million in annualized revenue, according to a source familiar with the company. But both startups operate with negative gross margins, meaning they spend more than they make, according to four investor sources familiar with their operations.

"The prices people are paying for coding assistants are going to get more expensive," Quinn Slack, CEO of coding startup Sourcegraph, told Reuters. To make the higher cost an easier pill to swallow for customers, Sourcegraph now offers a drop-down menu that lets users choose which models they want to work with - from open-source models such as DeepSeek to the most advanced reasoning models from Anthropic and OpenAI - so they can opt for cheaper models for basic questions.

Both Cursor and Windsurf are led by recent MIT graduates in their twenties, and exemplify the gold rush era of the AI startup scene. "I haven't seen people working this hard since the first Internet boom," said Martin Casado, a general partner at Andreessen Horowitz, an investor in Anysphere, the company behind Cursor.

What's less clear is whether the dozen or so code-gen companies will be able to hang on to their customers as big tech moves in. "In many cases, it's less about who's got the best technology -- it's about who is going to make the best use of that technology, and who's going to be able to sell their products better than others," said Scott Raney, managing director at Redpoint Ventures, whose firm invested in Sourcegraph and Poolside, a software development startup that's building its own AI foundation model.

CUSTOM AI MODELS

Most AI coding startups currently rely on the Claude AI model from Anthropic, which crossed $3 billion in annualized revenue in May, in part due to fees paid by code-gen companies. But some startups are attempting to build their own models. In May, Windsurf announced its first in-house AI models, optimized for software engineering, in a bid to control the user experience. Cursor has also hired a team of researchers to pre-train its own large frontier-level models, which could reduce how much it pays foundation model companies, according to two sources familiar with the matter.

Startups looking to train their own AI coding models face an uphill battle, as it can easily cost millions of dollars to buy or rent the computing capacity needed to train a large language model. Replit earlier dropped plans to train its own model. Poolside, which has raised more than $600 million to build a coding-specific model, has announced a partnership with Amazon Web Services and is testing with customers, but hasn't made any product generally available yet. Another code-gen startup, Magic Dev, which has raised nearly $500 million since 2023, told investors a frontier-level coding model was coming in summer 2024 but has yet to launch a product. Poolside declined to comment. Magic Dev did not respond to a request for comment.