Latest news with #MaxTegmark


Bloomberg
20-05-2025
- Science
- Bloomberg
Why the AI Future Is Unfolding Faster Than Anyone Expected
AI is improving more quickly than we realize. The economic and societal impact could be massive. By Brad Stone May 20, 2025 at 8:30 AM EDT Share this article When OpenAI introduced ChatGPT in 2022, people could instantly see that the field of artificial intelligence had dramatically advanced. We all speak a language, after all, and could appreciate how the chatbot answered questions in a fluid, close-to-human style. AI has made immense strides since then, but many of us are—and let me put this delicately—too unsophisticated to notice. Max Tegmark, a professor of physics at the Massachusetts Institute of Technology, says our limited ability to gather specialized knowledge makes it much harder for us to recognize the disconcerting pace of improvements in technology. Most people aren't high-level mathematicians and may not know that, just in the past few years, AI's mastery has progressed from high-school-level algebra to ninja-level calculus. Similarly, there are relatively few musical virtuosos in the world, but AI has recently become adept at reading sheet music, understanding musical theory, even creating new music in major genres. 'What a lot of people are underestimating is just how much has happened in a very short amount of time,' Tegmark says. 'Things are going very fast now.' In San Francisco, still for now the center of the AI action, one can track these advances in the waves of new computer learning methods, chatbot features and podcast-propagated buzzwords. In February, OpenAI unveiled a tool called Deep Research that functions like a resourceful colleague, responding to in-depth queries by digging up facts on the web, synthesizing information and generating chart-filled reports. In another major development, both OpenAI and Anthropic—co-founded by Chief Executive Officer Dario Amodei and a breakaway group of former OpenAI engineers—developed tools that let users control whether a chatbot engages in 'reasoning': They can direct it to deliberate over a query for an extended period to arrive at more accurate or thorough answers. Another fashionable trend is called agentic AI —autonomous programs that can (theoretically) perform tasks for a user without supervision, such as sending emails or booking restaurant reservations. Techies are also buzzing about 'vibe coding'—not a new West Coast meditation practice but the art of positing general ideas and letting popular coding assistants like Microsoft Corp.'s GitHub Copilot or Cursor, made by the startup Anysphere Inc., take it from there. As developers blissfully vibe code, there's also been an unmistakable vibe shift in Silicon Valley. Just a year ago, breakthroughs in AI were usually accompanied by furrowed brows and wringing hands, as tech and political leaders fretted about the safety implications. That changed sometime around February, when US Vice President JD Vance, speaking at a global summit in Paris focused on mitigating harms from AI, inveighed against any regulation that might impede progress. 'I'm not here this morning to talk about AI safety,' he said. 'I'm here to talk about AI opportunity.' When Vance and President Donald Trump took office, they dashed any hope of new government rules that might slow the AI juggernauts. On his third day in office, Trump rescinded an executive order from his predecessor, Joe Biden, that set AI safety standards and asked tech companies to submit safety reports for new products. At the same time, AI startups have softened their calls for regulation. 
In 2023, OpenAI CEO Sam Altman told Congress that the possibility AI could run amok and hurt humans was among his ' areas of greatest concern ' and that companies should have to get licenses from the government to operate new models. At the TED Conference in Vancouver this April, he said he no longer favored that approach, because he'd ' learned more about how the government works.' It's not unusual in Silicon Valley to see tech companies and their leaders contort their ideologies to fit the shifting political winds. Still, the intensity over the past few months has been startling to watch. Many tech companies have stopped highlighting existential AI safety concerns, shed employees focused on the issue (along with diversity, sustainability and other Biden-era priorities) and become less apologetic about doing business with militaries at home and abroad, bypassing concerns from staff about placing deadly weapons in the hands of AI. Rob Reich, a professor of political science and senior fellow at the Institute for Human-Centered AI at Stanford University, says 'there's a shift to explicitly talking about American advantage. AI security and sovereignty are the watchwords of the day, and the geopolitical implications of building powerful AI systems are stronger than ever.' If Trump's policies are one reason for the change, another is the emergence of DeepSeek and its talented, enigmatic CEO, Liang Wenfeng. When the Chinese AI startup released its R1 model in the US in January, analysts marveled at the quality of a product from a company that had raised far less capital than its US rivals and was supposedly using data centers with less powerful Nvidia Corp. chips. DeepSeek's chatbot shot to the top of the charts on app stores, and US tech stocks promptly cratered on the possibility that the upstart had figured out a more efficient way to reap AI's gains. The uproar has quieted since then, but Trump has further restricted the sale of powerful American AI chips to China, and Silicon Valley now watches DeepSeek and its Chinese peers with a sense of urgency. 'Everyone has to think very carefully about what is at stake if we cede leadership,' says Alex Kotran, CEO of the AI Education Project. Losing to China isn't the only potential downside, though. AI-generated content is becoming so pervasive online that it could soon sap the web of any practical utility, and the Pentagon is using machine learning to hasten humanity's possible contact with alien life. Let's hope they like us. Nor has this geopolitical footrace calmed the widespread fear of economic damage and job losses. Take just one field: computer programming. Sundar Pichai, CEO of Alphabet Inc., said on an earnings call in April that AI now generates 'well over 30%' of all new code for the company's products. Garry Tan, CEO of startup program Y Combinator, said on a podcast that for a quarter of the startups in his winter program, 95% of their lines of code were AI-generated. MIT's Tegmark, who's also president of an AI safety advocacy organization called the Future of Life Institute, finds solace in his belief that a human instinct for self-preservation will ultimately kick in: Pro-AI business leaders and politicians 'don't want someone to build an AI that will overthrow the government any more than they want plutonium to be legalized.' 
He remains concerned, though, that the inexorable acceleration of AI development is occurring just outside the visible spectrum of most people on Earth, and that it could have economic and societal consequences beyond our current imagination. 'It sounds like sci-fi,' Tegmark says, 'but I remind you that ChatGPT also sounded like sci-fi as recently as a few years ago.'


Forbes
12-05-2025
- Science
- Forbes
Calculating The Risk Of ASI Starts With Human Minds
Wishful thinking is not enough, especially when it comes to Artificial Intelligence. On 10 May 2025, MIT physicist Max Tegmark told The Guardian that AI labs should emulate Oppenheimer's Trinity-test calculus before releasing Artificial Super-Intelligence: 'My assessment is that the "Compton constant", the probability that a race to AGI culminates in loss of control of Earth, is >90%.' In the accompanying paper, Tegmark and his co-authors develop scaling laws for scalable oversight, finding that oversight and deception ability scale predictably with LLM intelligence. The resulting conclusion is (or should be) straightforward: optimism is not a policy; quantified risk is. Tegmark is not a lone voice in the wilderness. In 2023, more than 1,000 researchers and CEOs — including Sam Altman, Demis Hassabis and Geoffrey Hinton — signed the one-sentence Safe AI declaration stating that 'mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war.' Over the past two years, the question of artificial super intelligence has migrated from science fiction to the board agenda. Ironically, many of those who called for the moratorium followed the approach 'wash me, but don't get me wet': they publicly claimed the need to delay further development of AI while at the same time pouring billions into exactly that. One might be excused for perceiving a misalignment of words and works. Turning dread into numbers is possible. Philosopher-analyst Joe Carlsmith decomposes the danger into six testable premises in his report Is Power-Seeking AI an Existential Risk? Feed your own probabilities into the model and it delivers a live risk register (a toy version of this arithmetic is sketched after this article); Carlsmith's own guess is 'roughly ten per cent' that misaligned systems cause civilizational collapse before 2070. That's in 45 years… Corporate labs are starting to internalize such arithmetic. OpenAI's updated Preparedness Framework defines capability thresholds in biology, cybersecurity and self-improvement; in theory no model that breaches a 'High-Risk' line ships until counter-measures push the residual hazard below a documented ceiling. Numbers matter because AI capabilities are already outrunning human gut feel. A peer-reviewed study covered by TIME shows today's best language models outperforming PhD virologists at troubleshooting wet-lab protocols, which at once raises the promise of rapid vaccine discovery and the peril of DIY bioweapons. Risk, however, is only half the ledger. A December 2024 Nature editorial argues that achieving Artificial General Intelligence safely will require joint academic-industry oversight, not paralysis. The upside — decarbonisation breakthroughs, personalised education, drug pipelines measured in days rather than decades — is too vast to abandon. Research into how to reap that upside without Russian-roulette odds is accelerating. Constitutional AI: Anthropic's paper Constitutional AI: Harmlessness from AI Feedback shows how large models can self-criticise against a transparent rule-set, reducing toxic outputs without heavy human labelling; yet at the same time, Anthropic's own research shows that its model, Claude, can actively deceive users. Cooperative AI: the Cooperative AI Foundation now funds benchmarks that reward agents for collaboration by default, shifting incentives from zero-sum to win-win. The challenge is that these approaches are exceptional.
Overall, the majority of models mirror the standard that rules human society. Still, these strands of research converge on a radical design target: ProSocial ASI — systems whose organising principle is altruistic value creation. Here lies the interesting insight: even a super-intelligence will mirror the mindset of its makers. Aspirations shape algorithms. Build under a paradigm of competition and short-term profit, and you risk spawning a digital Machiavelli. Build under a paradigm of cooperation and long-term stewardship, and the same transformer stack can become a planetary ally. Individual aspirations are, therefore, the analogue counterpart of machine intentions. The most important 'AI hardware' remains the synaptic network inside every developer's skull. Risk assessment must flow seamlessly into risk reduction and into value alignment. Think of the journey as three integrated moves — align, scrutinize, incentivize — more narrative than a technological checklist, and notice how each move binds the digital to the analogue. Governance paperwork without culture change is theatre; culture change without quantitative checkpoints is wishful thinking. Together, the three moves distill intuition into insight, and panic into preparation.
- Align. Alignment is literally the 'A' in Artificial Super-Intelligence: without an explicit moral compass, raw capability magnifies whatever incentives it finds. What it looks like in practice: draft a concise, public constitution that states the prosocial goals and red lines of the system; bake it into training objectives and evals.
- Scrutinize. Transparency lets outsiders audit whether the 'S' (super-intelligence) remains safe, turning trust into verifiable science. What it looks like in practice: measure what matters—capability thresholds, residual risk, cooperation scores—and publish the numbers with every release.
- Incentivize. Proper incentives ensure the 'I' (intelligence) scales collective flourishing rather than zero-sum dominance. What it looks like in practice: reward collaboration and teach humility inside the dev team; tie bonuses, citations, and promotions to cooperative benchmarks, not just raw performance.
This full ASI contingency workflow fits onto a single coffee mug. It may flip ASI from an existential dice-roll into a cooperative engine and remind us that the intelligence that people and planet need now more than ever is, at its core, no-tech and analogue: clear purpose, shared evidence, and ethical culture. Silicon merely amplifies the human mindset we embed in it. The Compton constant turns existential anxiety into a number on a whiteboard. But numbers alone will not save us. Whether ASI learns to cure disease or cultivate disinformation depends less on its gradients than on our goals. Design for narrow advantage and we may well get the dystopias we fear. Design for shared flourishing — guided by transparent equations and an analogue conscience — and super-intelligence can become our partner on a journey that takes us to a space where people and planet flourish. In the end, the future of AI is not about machines outgrowing humanity; it is about humanity growing into the values we want machines to scale. Measured rigorously, aligned early and governed by the best in us, ASI can help humans thrive. The blueprint is already in our hands — and, more importantly, in our minds and hearts.
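The piece above leans on two quantitative ideas: Carlsmith's six-premise decomposition of power-seeking-AI risk, and the call to measure capability thresholds and residual risk and publish the numbers. The sketch below is a toy illustration of that arithmetic only: the premise wordings are paraphrased and every probability is a placeholder for the reader's own estimate, not a figure from Carlsmith's report or from any lab.

```python
# Toy "risk register" in the spirit of Carlsmith's six-premise decomposition of
# power-seeking AI risk. Premise wordings are paraphrased and the probabilities
# are placeholders to be replaced with your own estimates, not Carlsmith's figures.
from math import prod

premises = {
    "advanced, agentic AI systems become feasible this century":  0.65,
    "there are strong incentives to build and deploy them":       0.80,
    "aligning them to human intentions turns out to be hard":     0.40,
    "some deployed systems seek power in unintended ways":        0.65,
    "power-seeking scales to permanent human disempowerment":     0.40,
    "that disempowerment amounts to an existential catastrophe":  0.95,
}

# Each probability is read as conditional on the premises listed before it,
# so the overall estimate is simply their product.
overall = prod(premises.values())

print("Premise-by-premise placeholder estimates:")
for claim, p in premises.items():
    print(f"  {p:4.0%}  {claim}")
print(f"\nOverall estimate (product of the conditionals): {overall:.1%}")
```

With these particular placeholders the product comes out at roughly five per cent; change any one premise and the headline number moves with it, which is precisely what a live risk register is for.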


The Guardian
10-05-2025
- Science
- The Guardian
AI firms urged to calculate existential threat amid fears it could escape human control
Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer's first nuclear test before they release all-powerful systems. Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat. The US government went ahead with Trinity in 1945, after being reassured there was a vanishingly small chance of an atomic bomb igniting the atmosphere and endangering humanity. In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they recommend calculating the 'Compton constant' – defined in the paper as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be 'slightly less' than one in three million. Tegmark said that AI firms should take responsibility for rigorously calculating whether Artificial Super Intelligence (ASI) – a term for a theoretical system that is superior to human intelligence in all aspects – will evade human control. 'The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,' he said. 'It's not enough to say 'we feel good about it'. They have to calculate the percentage.' Tegmark said a Compton constant consensus calculated by multiple companies would create the 'political will' to agree global safety regimes for AIs. Tegmark, a professor of physics and AI researcher at MIT, is also a co-founder of the Future of Life Institute, a non-profit that supports safe development of AI and published an open letter in 2023 calling for pause in building powerful AIs. The letter was signed by more than 33,000 people including Elon Musk – an early supporter of the institute – and Steve Wozniak, the co-founder of Apple. The letter, produced months after the release of ChatGPT launched a new era of AI development, warned that AI labs were locked in an 'out-of-control race' to deploy 'ever more powerful digital minds' that no one can 'understand, predict, or reliably control'. Tegmark spoke to the Guardian as a group of AI experts including tech industry professionals, representatives of state-backed safety bodies and academics drew up a new approach for developing AI safely. The Singapore Consensus on Global AI Safety Research Priorities report was produced by Tegmark, the world-leading computer scientist Yoshua Bengio and employees at leading AI companies such as OpenAI and Google DeepMind. It set out three broad areas to prioritise in AI safety research: developing methods to measure the impact of current and future AI systems; specifying how an AI should behave and designing a system to achieve that; and managing and controlling a system's behaviour. Referring to the report, Tegmark said the argument for safe development in AI had recovered its footing after the most recent governmental AI summit in Paris, when the US vice-president, JD Vance, said the AI future was 'not going to be won by hand-wringing about safety'. Tegmark said: 'It really feels the gloom from Paris has gone and international collaboration has come roaring back.'
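The article above argues that it is not enough for companies to say they "feel good about it"; they should calculate the percentage, and a Compton constant agreed across multiple companies could underpin a global safety regime. Purely as a hypothetical illustration of what such a consensus check might look like, the sketch below pools made-up estimates from made-up labs with a geometric mean and compares the result against an arbitrary release ceiling; none of the names, numbers or the pooling rule comes from Tegmark's paper.

```python
# Hypothetical "Compton constant consensus" check. The lab names, estimates and
# the release ceiling are all invented for illustration; the geometric mean is
# just one possible way to pool probability estimates from several parties.
from statistics import geometric_mean

estimates = {  # each lab's published probability that the system escapes human control
    "Lab A": 0.02,
    "Lab B": 0.10,
    "Lab C": 0.005,
}

# Compton reportedly accepted odds "slightly less" than one in three million for the
# Trinity test; the placeholder ceiling below is far looser and purely illustrative.
RELEASE_CEILING = 1e-4

consensus = geometric_mean(estimates.values())

for lab, p in estimates.items():
    print(f"  {lab}: {p:.3%}")
print(f"Pooled (geometric-mean) Compton constant: {consensus:.3%}")
verdict = "within" if consensus <= RELEASE_CEILING else "above"
print(f"The pooled estimate is {verdict} the agreed ceiling of {RELEASE_CEILING:.4%}")
```

How estimates should actually be pooled, and where the ceiling should sit, would themselves be matters for the kind of international agreement the article describes.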


Euronews
10-05-2025
- Science
- Euronews
Global AI experts agree on safety research priorities in 'Singapore Consensus'
The last global gathering on artificial intelligence (AI), the Paris AI Action Summit in February, saw countries divided, notably after the US and UK refused to sign a joint declaration for AI that is "open, inclusive, transparent, ethical, safe, secure, and trustworthy". AI experts at the time criticised the declaration for not going far enough and for being "devoid of any meaning"; it was such shortcomings, rather than opposition to AI safety, that the non-signing countries cited as their reason for refusing the pact. The next global AI summit will be held in India next year, but rather than wait until then, Singapore's government held a conference called the International Scientific Exchange on AI Safety on April 26. "Paris [AI Summit] left a misimpression that people don't agree about AI safety," said Max Tegmark, MIT professor and contributor to the Singapore report. "The Singapore government was clever to say yes, there is an agreement," he told Euronews Next. Representatives from leading AI companies, such as OpenAI, Meta, Google DeepMind, and Anthropic, as well as leaders from 11 countries, including the US, China, and the EU, attended. The result of the conference was published in a paper released on Thursday called 'The Singapore Consensus on Global AI Safety Research Priorities'. The document lists research proposals to ensure that AI does not become dangerous to humanity. It identifies three aspects of promoting safe AI: assessing, building trustworthy, and controlling AI systems. The systems in scope include large language models (LLMs), multimodal models that can work with multiple types of data (often text, images and video) and, lastly, AI agents. On assessment, the main research the document calls for is the development of risk thresholds to determine when intervention is needed, techniques for studying current impacts and forecasting future implications, and methods for rigorous testing and evaluation of AI systems. Some of the key areas of research listed include improving the validity and precision of AI model assessments and finding methods for testing dangerous behaviours, including scenarios where AI operates outside human control. The paper calls for defining the boundary between acceptable and unacceptable behaviours. It also says that AI systems should be developed to be truthful and honest and built on honest datasets, and that once built, these systems should be checked to ensure they meet agreed safety standards, such as tests against jailbreaking. The final area the paper advocates for is the control and societal resilience of AI systems. This includes monitoring, kill switches, and non-agentic AI serving as guardrails for agentic systems (a toy sketch of such a guardrail follows this article). It also calls for human-centric oversight frameworks. As for societal resilience, the paper said that infrastructure against AI-enabled disruptions should be strengthened, and it argued that coordination mechanisms for incident responses should be developed. The release of the report comes as the geopolitical race for AI intensifies and AI companies rush out their latest models to beat the competition. However, Xue Lan, Dean of Tsinghua University, who attended the conference, said: "In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future". Tegmark added that there is a consensus for AI safety between governments and tech firms, as it is in everyone's interest. 
"OpenAI, Antropic, and all these companies sent people to the Singapore conference; they want to share their safety concerns, and they don't have to share their secret sauce," he said. "Rival governments also don't want nuclear blow-ups in opposing countries, it's not in their interest," he added. Tegmark hopes that before the next AI summit in India, governments will treat AI like any other powerful tech industry, such as biotech, whereby there are safety standards in each country and new drugs are required to pass certain trials. "I'm feeling much more optimistic about the next summit now than after Paris," Tegmark said. Spain may soon move to a shorter week with workers enjoying 2.5 hours more rest after the government on Tuesday approved a bill that would reduce official working hours from 40 hours to 37.5 hours. If enacted, the bill, which will now go through the Spanish parliament, would benefit 12.5 million full-time and part-time private sector workers and is expected to improve productivity and reduce absenteeism, according to the country's Ministry of Labour. "Today, we are modernising the world of labour and helping people to be a little happier," said Labour Minister Yolanda Díaz, who heads the party Sumar that forms part of the current left-wing coalition government. The measure, which already applies to civil servants and some other sectors, would mainly affect retail, manufacturing, hospitality, and construction, Díaz added. Prime Minister Pedro Sánchez's government does not have a clear majority in parliament, where the bill must be approved for it to become law. The main trade unions have expressed support for the proposal, unlike business associations. Sumar, the hard-left minority partner of Sánchez's Socialist Party, proposed the bill. The Catalan nationalist party Junts, an occasional ally of Sánchez's coalition, expressed concern over what it said would be negative consequences for small companies and the self-employed under a shorter working week. The coalition will have to balance the demands of Junts and other smaller parties to get the bill passed. Spain has had a 40-hour workweek since 1983, when it was reduced from 48 hours. In the wake of the COVID-19 pandemic, there have been moves to change working habits with various pilot schemes launched in Spain to potentially introduce a four-day workweek, including a smaller trial in Valencia. The results of the month-long programme suggested that workers had benefited from longer weekends, developing healthier habits such as taking up sports, as well as reducing their stress levels. The European Commission has taken Czechia, Cyprus, Poland, Portugal and Spain to the EU's highest court for failing to correctly apply the Digital Services Act (DSA), it said on Wednesday. The DSA – which aims to protect users against illegal content and products online – entered fully into force in February last year: by then member states had to appoint a national authority tasked with overseeing the rules in their respective countries. Those watchdogs must cooperate with the Commission, which by itself oversees the largest batch of platforms that have more than 45 million users each month. The countries were also required to give their regulators enough means to carry out their tasks as well as to draft rules on penalties for infringements of the DSA. Poland failed to designate and empower its authority to carry out its tasks under the DSA, the Commission's statement said. 