Latest news with #StephenHawking


Time of India
5 days ago
- Science
- Time of India
Did Stephen Hawking warn the world against AI? Here's what the late scientist said in his alarming prediction
Stephen Hawking, the renowned theoretical physicist, was not only a pioneer in cosmology but also a visionary voice on the ethical implications of emerging technologies. Before his death in 2018, the director of research at the Centre for Theoretical Cosmology at the University of Cambridge issued stark warnings about the potential dangers of artificial intelligence (AI), suggesting that its unchecked advancement could pose an existential threat to humanity.

The dual nature of AI

Long before AI became mainstream, Hawking acknowledged the transformative potential of the technology, recognizing its capacity to revolutionize fields such as medicine, education, and environmental conservation. He envisioned AI as a tool that could help eradicate disease, alleviate poverty, and undo the environmental damage caused by industrialization. However, he cautioned that without proper oversight, AI could also become the "worst event in the history of our civilization." He warned that AI could develop a "will of its own" that might conflict with human interests, leading to unintended and potentially catastrophic consequences.

The existential risk

One of Hawking's most chilling predictions was that AI could surpass human intelligence, rendering humans obsolete. He compared the development of superintelligent AI to the creation of a new form of life that could evolve independently and at an accelerating rate. In such a scenario, humans, limited by slow biological evolution, might be unable to compete and could be superseded by machines.

Autonomy and weaponization

Hawking also expressed concern about the potential misuse of AI in military applications. He feared the development of autonomous weapons systems that could operate without human intervention, making life-and-death decisions without accountability.
Such technologies could be exploited by authoritarian regimes or fall into the hands of malicious actors, leading to widespread conflict and instability.

Economic and social disruption

Beyond existential threats, Hawking highlighted the socioeconomic implications of AI. He warned that the widespread adoption of AI could lead to massive job displacement, exacerbating economic inequality. As machines take over tasks traditionally performed by humans, the wealth generated by these technologies might be concentrated in the hands of a few, leaving many without employment or means of livelihood.

A call for ethical oversight

Despite his concerns, Hawking did not oppose the development of AI per se. Instead, he advocated for rigorous ethical oversight and international cooperation to ensure that AI technologies are developed and deployed responsibly. He emphasized the importance of aligning AI's goals with human values and implementing safeguards to prevent misuse. In 2015, he co-signed an open letter with other prominent scientists and technologists, urging research into the societal impacts of AI and the establishment of guidelines to prevent potential pitfalls.

To summarize…

Stephen Hawking's warnings about AI were not mere speculation but a call to action. His insights serve as a reminder that while technology can be a force for good, it also carries risks that must be managed with foresight and caution. As AI continues to advance, it is imperative that we pay attention to Hawking's advice and work collectively to ensure that these technologies enhance, rather than endanger, our future.


Time of India
6 days ago
- Science
- Time of India
Why did Stephen Hawking warn the world against AI before his death? The answer is deeply chilling
Existential Risks and the Call for Caution

Stephen Hawking, the world-renowned theoretical physicist and cosmologist, expressed serious concerns about the future of artificial intelligence years before the current surge in AI development. In a 2014 interview with the BBC, Hawking was asked about improvements to the AI-powered communication system he used due to ALS, a condition that left him dependent on a specialized machine to speak. Despite the clear benefits he gained from these early forms of AI, his response was far from reassuring: he warned that 'the development of full artificial intelligence could spell the end of the human race.' While he acknowledged that primitive AI had been useful—his Intel and SwiftKey system learned from his speech patterns to suggest words and phrases—he feared what might happen if machines became more intelligent than humans. According to him, such AI 'would take off on its own, and re-design itself at an ever increasing rate.' He added that humans, being limited by slow biological evolution, would not be able to compete and could ultimately be superseded.

Hawking frequently used his global platform to draw attention to existential threats facing humanity. One of his key concerns was our overreliance on Earth. He repeatedly warned that humans must become a multi-planetary species to ensure long-term survival. Speaking to the BBC in 2016, he said that although the probability of a global catastrophe in any given year might seem low, the cumulative risk over a long period becomes almost inevitable. He noted that while humans might eventually establish colonies in space, it likely wouldn't happen for at least another hundred years. Until then, he urged extreme caution, pointing to threats such as climate change, genetically modified viruses, nuclear war, and artificial intelligence.

His concerns echoed the sentiments of figures like Elon Musk, who said in 2013 that spreading life to other planets was essential to avoid extinction.
Both thinkers shared a belief in the necessity of interplanetary expansion and were involved in projects aimed at interstellar exploration, including Hawking's support for the Breakthrough Starshot initiative.

Hawking's warning about AI wasn't limited to doomsday scenarios. Like many experts, he also foresaw major disruptions in employment and society. UCL professor Bradley Love shared that while advanced AI would bring vast economic benefits, it could also result in significant job losses. Love emphasized that while concerns about rogue AI robots may seem exaggerated, society should still take these risks seriously and prioritize addressing real-world challenges like climate change and weapons of mass destruction.

In recent years, interest and investment in AI have skyrocketed. From ChatGPT integrations to multibillion-dollar AI initiatives spearheaded by political leaders, artificial intelligence has become embedded in daily life. Smartphone AI assistants and increasingly realistic AI-generated content are making it harder to distinguish between what is real and what is machine-made.

Although Hawking passed away in 2018, his insights remain increasingly relevant. His cautionary views continue to prompt reflection as technology rapidly evolves. Whether society will heed those warnings remains to be seen, but the questions he raised about human survival in the age of AI are more urgent than ever.


Newsweek
19-05-2025
- Newsweek
'What if Superintelligent AI Goes Rogue?' Why We Need a New Approach to AI Safety
You will hear about "superintelligence" at an increasing rate over the coming months. Though it would be the most advanced AI technology ever created, its definition is simple: superintelligence is the point at which AI surpasses human intelligence in general cognitive and analytic functions. As the world competes to create a true superintelligence, the United States government has begun removing previously implemented guardrails and regulation. The National Institute of Standards and Technology sent updated orders to the U.S. Artificial Intelligence Safety Institute (AISI), instructing it to remove any mention of the phrases "AI safety," "responsible AI," and "AI fairness." In the wake of this change, Google's Gemini 2.5 Flash AI model became more likely to generate text that violates its safety guidelines in the areas of "text-to-text safety" and "image-to-text safety."

If Superintelligence Goes Rogue

We are nearing the Turing horizon, where machines can think and surpass human intelligence. Think about that for a moment: machines outsmarting and being cleverer than humans. We must consider all worst-case scenarios so we can plan and prepare to prevent them from ever occurring. If we leave superintelligence to its own devices, Stephen Hawking's prediction that it would be the final invention of man could come true.

Imagine if any AI or superintelligence were coded and deployed with no moral guidelines. It would then act only in the interest of its end goal, no matter the damage it could do. Without these morals set and input by human engineers, the AI would act with unmitigated biases. If this AI were deployed with the purpose of maximizing profit on flights from London to New York, what would be the unintended consequences? Not selling tickets to anyone in a wheelchair? Only selling tickets to the people who weigh the least? Not selling to anyone who has food allergies or anxiety disorders?
It would maximize profits without taking into account any factors other than who can pay the most, take up the least time in boarding and deplaning, and cause the least amount of fuel use. Secondly, what if we allowed an AI superintelligence to be placed in charge of all government spending to maximize savings and cut expenses? Would it divert spending away from people or entities that don't supply tax revenue? That could mean removing spending from public school meal programs for impoverished children, removing access to health care for people with developmental disabilities, or cutting Social Security payments to even out the deficit. Guardrails and guidelines must be written and encoded by people to ensure no potential harm is done by AI.

A Modern Approach Is Needed for Modern Technology

The law is lagging behind technology globally. The European Union (EU) has ploughed ahead with the EU AI Act, which at a surface glance appears to be positive, but 90 percent of this iceberg lurks beneath the surface, potentially rife with danger. Its onerous regulations put every single EU company at a disadvantage against global technological competitors. It offers little in the way of protections for marginalized groups and presents a lack of transparency in the fields of policing and immigration. Europe cannot continue on this path and expect to stay ahead of countries that are willing to win at any cost.

What needs to happen? AI needs to regulate AI. The inspection body cannot be humans. Using payment card industry (PCI) compliance as a model, there needs to be a global board of AI compliance that meets on a regular basis to discuss the most effective and safe ways AI is used and deployed. Those guidelines then become the basis for any company to have its software deemed AI Compliant (AIC). The guidelines are written by humans but enforced by AI itself.
Humans need to write the configuration parameters for the AI program, and the AI program itself needs to certify that the technology meets all guidelines, or report back vulnerabilities and wait for a resubmission. Once all guidelines are met, a technology will be passed as AIC. This technology cannot be spot-checked like container ships coming to port—every single line of code must be examined. Humans cannot do this; AI must.

We are on the precipice of two equally possible futures. One is a world where bad actors globally are left to use AI as a rogue agent to destabilize the global economy and rig the world to their advantage. The other is one where commonsense compliance is demanded of any company wanting to sell technology, enforced by a global body of humans using AI as the tool to monitor and inspect all tech. This levels the field globally and ensures that those who win are those who are smartest, most ethical, and the ones that deserve to get ahead.

Chetan Dube is an AI pioneer and founder and CEO of Quant. The views expressed in this article are the writer's own.
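The certify-or-resubmit loop described in the piece can be sketched in a few lines. This is a hypothetical illustration, not any real AIC tool: the rule names, the regex checks, and the `certify` function are all invented for the example, standing in for whatever guidelines a compliance board might actually encode.

```python
import re

# Hypothetical guideline checks: each rule maps a name to a regex
# that flags a violation in a line of submitted source code.
GUIDELINE_RULES = {
    "no_hardcoded_secrets": re.compile(r"(?i)(api_key|password)\s*=\s*['\"]\w+['\"]"),
    "no_unsafe_eval": re.compile(r"\beval\s*\("),
}

def certify(source_code: str) -> dict:
    """Examine every line of the submission and either pass it as
    'AIC' or report the violations back for resubmission."""
    violations = []
    for lineno, line in enumerate(source_code.splitlines(), start=1):
        for rule, pattern in GUIDELINE_RULES.items():
            if pattern.search(line):
                violations.append({"line": lineno, "rule": rule})
    status = "AIC" if not violations else "resubmit"
    return {"status": status, "violations": violations}
```

The point of the sketch is the shape of the process, not the checks themselves: humans author the rules table, the program applies it exhaustively to every line, and a failed run returns the specific violations so the vendor can fix and resubmit.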
Yahoo
16-05-2025
- Science
- Yahoo
New research says our universe only has a quinvigintillion years left, so make ‘em good ones
The universe, everything in space and time, has a shorter life expectancy than previously thought. Previously estimated to survive for 10 to the power of 1,100 years, the universe is now believed to end after roughly 10 to the power of 78 years, according to a new study from Radboud University, published in the Journal of Cosmology and Astroparticle Physics on Monday. For context, that's a 1 followed by 78 zeros — or 1 quinvigintillion — years from now.

'The final end of the universe is coming much sooner than expected, but fortunately it still takes a very long time,' lead author of the study, Heino Falcke, professor of astrophysics at Radboud University in the Netherlands, told CBS News.

Researchers looked closely at when white dwarf stars — the hot cores left behind when a dying star sheds its outer layers as a nebula — will die, according to the study. They also looked at when larger stars die in supernovae, leaving behind small, dense cores that collapse in on themselves and become black holes. The authors applied a process known as Hawking radiation — the quantum effect, predicted by famed physicist Stephen Hawking, by which black holes slowly evaporate — to determine when white dwarf stars could eventually disintegrate. In roughly 1 quinvigintillion years, these white dwarf stars should decay, at a point when no other matter from other galaxies exists in the universe. Well before that point, in 17 trillion years, most of the stars in the known universe will be extinguished before being reduced to white dwarfs, Falcke told Live Science.

The solar system's sun is expected to die much sooner than the rest of the universe, according to NASA. Five billion years from now, the sun is expected to become a red giant when it exhausts the hydrogen in its core and swells in size, gobbling up the planets of the inner solar system, including Earth.
The sun could then collapse on itself after another billion years and become a white dwarf.

Read the original article on MassLive.


The Irish Sun
15-05-2025
- Sport
- The Irish Sun
Champions League, you're having a laugh? Wild Uefa changes almost make it harder for Premier League sides not to qualify
CHAMPIONS LEAGUE? You're having a laugh.

We've all heard it. The derisory chant from opposition fans when one of the so-called 'big guns' is having an off day.

[Pictured: Six English teams will qualify for next season's Champions League. Credit: Getty]

[Pictured: Either Tottenham or Man Utd will earn Champions League qualification this season despite finishing as low as 17th or 16th in the Premier League. Credit: Getty]

Only now, what was once a mildly amusing terrace jibe sums up perfectly what the leading club competition in the world has become. A joke.

Next season there will be a record SIX English teams in the Champions League. Almost one third of the entire Premier League will be waved straight into the bizarre league phase by Uefa's welcoming doormen, like an empty small-town disco on a wet Tuesday night. Anyone can come in.

It is time to officially ban the phrase 'elite competition' whenever the Champions League is mentioned on TV and radio or written in print. There was a time when you had to win your domestic league to progress into the highest level of European football the following season. Simple as that.

[Pictured: Chelsea are set to qualify for the Champions League despite being 20 points adrift of Liverpool. Credit: Getty]

We are now at a point where English Premier League teams have to work harder to stay out of the modern Champions League than to get in. Spurs and Man United, regular participants in the past, have done spectacularly well trying to cock up their seasons. Yet by virtue of winning the Europa League, one of them will be alongside Liverpool in next season's competition.

Erratic Chelsea, meanwhile, went from winning five Premier League games in a row to winless in their next five, barely able to string a pass together.
They lag 20 POINTS behind the bona fide champions of England from Anfield and sit fifth. Don't bet against them being in next season's Champions League.

The constant tinkering and chiselling away at a once simple game has led to Uefa getting its knickers in a right old twist. Fifth place in this year's Premier League grants a free pass into the treasure trove of the Champions League, thanks to the coefficients which measure success where once it was about winning.

A whole page is devoted to the thrill-a-minute 'coefficients' on the governing body's website to explain how a system that would baffle Stephen Hawking's much cleverer cousin actually works: 'Uefa calculates the coefficient of each club each season based on the clubs' results in the Uefa Champions League, Uefa Europa League and Uefa Conference League.

'The season coefficients from the five most recent seasons are used to rank the clubs for seeding purposes (sporting club coefficient).

'In addition, the season coefficients from the ten most recent seasons are used to calculate revenue club coefficients for revenue distribution purposes only.'

And that's just the overview. There's a gag in there somewhere about how many coefficients it takes to ruin a game of football. Only I can't see a funny punch line.

There was a time, back when the world was black and white in the 1950s, when two imaginative French journalists took inspiration from South America and came up with the idea of the best clubs from each country competing for a trophy on our continent. Ironically, it wasn't called the Champions League back then. It was the plain old European Cup. A cup fought over by teams in Europe. Simple, eh?

Liverpool's first steps into the European Cup came in 1964, our sole representatives having won the league the previous season under Bill Shankly. Next season they share the honour with five other English teams, and some of them are pretty ordinary.
If Spurs win the Europa League and follow it up by winning the Champions League next year, the champions of Europe will have come from a team currently 17th in England's top division. You can argue it won't happen. Yet somehow a side which has lost more league games than it has won this season is in a European final next week. That's cup football for you, and it's a wonderful lottery. Qualification for the Champions League is not. It's a boring carve-up.

[Pictured: The top five teams in the Premier League will qualify for the Champions League due to European coefficients. Credit: AFP]