Latest news with #Tegmark


Time of India
08-05-2025
- Science
- Time of India
Researchers reboot push for AI safety after Paris summit bust
Experts researching threats stemming from artificial intelligence agreed on key work areas needed to contain dangers like loss of human control or easily accessible bioweapons in a report published on May 8.

Many safety-focused scientists were disappointed by February's Paris AI summit, where the French hosts largely left aside threats to home in on hoped-for economic boons. But "the mood was exactly the opposite of Paris" at a gathering of experts in Singapore in late April, said MIT researcher and conference organiser Max Tegmark, president of the Future of Life Institute that charts existential risks. "A lot of people came up to me and said that they had gotten their mojo back now... there's hope again," he told AFP.

In a report put together at the conference, the experts name three overlapping work areas to focus on faced with ever-more-capable AIs: assessing risk from AI and its applications; developing AI that is safe and trustworthy by design; and monitoring deployed AI -- ready to intervene if alert signals flash.

There is "global convergence around the technical challenges in AI safety", said leading researcher Yoshua Bengio, who helped compile the "Singapore Consensus on Global AI Safety Research Priorities" report. "We have work to do that everybody agrees should be done. The Americans and the Chinese agree," Tegmark added.

The AI safety community can be a gloomy place, with dire predictions of AI escaping human control altogether or proffering step-by-step instructions to build biological weapons -- even as tech giants plough hundreds of billions into building more powerful intelligences.

In "AI 2027", a widely-read scenario recently published online by a small group of researchers, competition between the United States and China drives Washington to cede control over its economy and military to a rogue AI, ultimately resulting in human extinction. Online discussions pore over almost weekly hints that the latest AI models from major companies such as OpenAI or Anthropic could be trying to outwit researchers probing their capabilities and inner workings, which remain largely impenetrable even to their creators.

Next year's governmental AI summit in India is widely expected to echo the optimistic tone of Paris. But Tegmark said that even running in parallel to politicians' quest for economic payoffs, experts' research can influence policy towards enforcing safety on those building and deploying AI.

"The easiest way to get the political will is to do the nerd research. We've never had a nuclear winter. We didn't need to have one in order for (Soviet leader Mikhail) Gorbachev and (US President Ronald) Reagan to take it seriously" -- and agree on nuclear arms restraint, he said.

Researchers' conversations in Singapore were just as impactful as the Paris summit was, "but with the impact going in a very, very different direction," Tegmark said.

The Star
08-05-2025
- Science
- The Star
Researchers reboot push for AI safety after Paris summit bust
PARIS: Experts researching threats stemming from artificial intelligence agreed on key work areas needed to contain dangers like loss of human control or easily accessible bioweapons in a report published on May 8.

Many safety-focused scientists were disappointed by February's Paris AI summit, where the French hosts largely left aside threats to home in on hoped-for economic boons. But "the mood was exactly the opposite of Paris" at a gathering of experts in Singapore in late April, said MIT researcher and conference organiser Max Tegmark, president of the Future of Life Institute that charts existential risks. "A lot of people came up to me and said that they had gotten their mojo back now... there's hope again," he told AFP.

In a report put together at the conference, the experts name three overlapping work areas to focus on faced with ever-more-capable AIs: assessing risk from AI and its applications; developing AI that is safe and trustworthy by design; and monitoring deployed AI – ready to intervene if alert signals flash.

There is "global convergence around the technical challenges in AI safety", said leading researcher Yoshua Bengio, who helped compile the "Singapore Consensus on Global AI Safety Research Priorities" report. "We have work to do that everybody agrees should be done. The Americans and the Chinese agree," Tegmark added.

The AI safety community can be a gloomy place, with dire predictions of AI escaping human control altogether or proffering step-by-step instructions to build biological weapons – even as tech giants plough hundreds of billions into building more powerful intelligences.

In "AI 2027", a widely-read scenario recently published online by a small group of researchers, competition between the United States and China drives Washington to cede control over its economy and military to a rogue AI, ultimately resulting in human extinction. Online discussions pore over almost weekly hints that the latest AI models from major companies such as OpenAI or Anthropic could be trying to outwit researchers probing their capabilities and inner workings, which remain largely impenetrable even to their creators.

Next year's governmental AI summit in India is widely expected to echo the optimistic tone of Paris. But Tegmark said that even running in parallel to politicians' quest for economic payoffs, experts' research can influence policy towards enforcing safety on those building and deploying AI.

"The easiest way to get the political will is to do the nerd research. We've never had a nuclear winter. We didn't need to have one in order for (Soviet leader Mikhail) Gorbachev and (US President Ronald) Reagan to take it seriously" – and agree on nuclear arms restraint, he said.

Researchers' conversations in Singapore were just as impactful as the Paris summit was, "but with the impact going in a very, very different direction," Tegmark said. – AFP


The Guardian
14-02-2025
- The Guardian
I met the ‘godfathers of AI' in Paris – here's what they told me to really worry about
I was a technophile in my early teenage days, sometimes wishing that I had been born in 2090, rather than 1990, so that I could see all the incredible technology of the future. Lately, though, I've become far more sceptical about whether the technology that we interact with most is really serving us – or whether we are serving it.

So when I got an invitation to attend a conference on developing safe and ethical AI in the lead-up to the Paris AI summit, I was fully prepared to hear Maria Ressa, the Filipino journalist and 2021 Nobel peace prize laureate, talk about how big tech has, with impunity, allowed its networks to be flooded with disinformation, hate and manipulation in ways that have had a very real, negative impact on elections. But I wasn't prepared to hear some of the 'godfathers of AI', such as Yoshua Bengio, Geoffrey Hinton, Stuart Russell and Max Tegmark, talk about how things might go much farther off the rails.

At the centre of their concerns was the race towards AGI (artificial general intelligence, though Tegmark believes the 'A' should refer to 'autonomous'), which would mean that for the first time in the history of life on Earth, there would be an entity other than human beings simultaneously possessing high autonomy, high generality and high intelligence, and that might develop objectives that are 'misaligned' with human wellbeing. Perhaps it will come about as the result of a nation state's security strategy, or the search for corporate profits at all costs, or perhaps all on its own.

'It's not today's AI we need to worry about, it's next year's,' Tegmark told me. 'It's like if you were interviewing me in 1942, and you asked me: 'Why aren't people worried about a nuclear arms race?' Except they think they are in an arms race, but it's actually a suicide race.'

It brought to mind Ronald D Moore's 2003 reimagining of Battlestar Galactica, in which a public relations official shows journalists: 'things that look odd, or even antiquated, to modern eyes, like phones with cords, awkward manual valves, computers that barely deserve the name'. 'It was all designed to operate against an enemy that could infiltrate and disrupt all but the most basic computer systems … we were so frightened by our enemies that we literally looked backwards for protection.'

Perhaps we need a new acronym, I thought. Instead of mutually assured destruction, we should be talking about 'self-assured destruction' with an extra emphasis: SAD! An acronym that might even break through to Donald Trump.

The idea that we, on Earth, might lose control of an AGI that then turns on us sounds like science fiction – but is it really so far-fetched considering the exponential growth of AI development? As Bengio pointed out, some of the most advanced AI models have already attempted to deceive human programmers during testing, both in pursuit of their designated objectives and to escape being deleted or replaced with an update.

When breakthroughs in human cloning were within scientists' reach, biologists came together and agreed not to pursue it, says Stuart Russell, who literally wrote the textbook on AI. Similarly, both Tegmark and Russell favour a moratorium on the pursuit of AGI, and a tiered risk approach – stricter than the EU's AI Act – where, just as with the drug approval process, AI systems in the higher-risk tiers would have to demonstrate to a regulator that they don't cross certain red lines, such as being able to copy themselves on to other computers.
But even if the conference seemed weighted towards these future-driven fears, there was a fairly evident split among the leading AI safety and ethics experts from industry, academia and government in attendance. If the 'godfathers' were worried about AGI, a younger and more diverse demographic were pushing to put an equivalent focus on the dangers that AIs already pose to climate and democracy.

We don't have to wait for an AGI to decide, on its own, to flood the world with datacentres to evolve itself more quickly – Microsoft, Meta, Alphabet, OpenAI and their Chinese counterparts are already doing it. Or for an AGI to decide, on its own, to manipulate voters en masse in order to put politicians with a deregulation agenda into office – which, again, Donald Trump and Elon Musk are already pursuing. And even in AI's current, early stages, its energy use is catastrophic: according to Kate Crawford, visiting chair of AI and justice at the École Normale Supérieure, data centres already account for more than 6% of all electricity consumption in the US and China, and demand is only going to keep surging.

'Rather than treating the topics as mutually exclusive, we need policymakers and governments to account for both,' Sacha Alanoca, a PhD researcher in AI governance at Stanford, told me. 'And we should give priority to empirically driven issues like environmental harms, which already have tangible solutions.'

To that end, Sasha Luccioni, AI and climate lead at Hugging Face – a collaborative platform for open source AI models – announced this week that the startup has rolled out an AI energy score, ranking 166 models on their energy consumption when completing different tasks. The startup will also offer a one- to five-star rating system, comparable with the EU's energy label for household appliances, to guide users towards sustainable choices.

'There's the science budget of the world, and there's the money we're spending on AI,' says Russell. 'We could have done something useful, and instead we're pouring resources into this race to go off the edge of a cliff.' He didn't specify the alternatives, but just two months into the year, roughly $1tn in AI investments have been announced, all while the world is still falling far short of what is needed to stay even within 2C of heating, much less 1.5C.

It seems as if we have a shrinking opportunity to lay down the incentives for companies to create the kind of AI that actually benefits our individual and collective lives: sustainable, inclusive, democracy-compatible, controlled. And beyond regulation, 'to make sure there is a culture of participation embedded in AI development in general', as Eloïse Gabadou, a consultant to the OECD on technology and democracy, put it.

At the close of the conference, I said to Russell that we seemed to be using an incredible amount of energy and other natural resources to race headlong into something we probably shouldn't be creating in the first place, and whose relatively benign versions are already, in many ways, misaligned with the kinds of societies that we actually want to live in. 'Yup,' he replied.

Alexander Hurst is a Guardian Europe columnist


The Guardian
10-02-2025
- Entertainment
- The Guardian
‘Engine of inequality': fears over AI's global impact dominate Paris summit
The impact of artificial intelligence on the environment and inequality has dominated the opening exchanges of a global summit in Paris attended by political leaders, tech executives and experts.

Emmanuel Macron's AI envoy, Anne Bouverot, opened the two-day gathering at the Grand Palais in the heart of the French capital with a speech referring to the environmental impact of AI, which requires vast amounts of energy and resource to develop and operate. 'We know that AI can help mitigate climate change, but we also know that its current trajectory is unsustainable,' Bouverot said. Sustainable development of the technology would be on the agenda, she added.

The general secretary of the UNI Global Union, Christy Hoffman, warned that without worker involvement in the use of AI, the technology risked increasing inequality. The UNI represents about 20 million workers worldwide in industries including retail, finance and entertainment. 'Without worker representation, AI-driven productivity gains risk turning the technology into yet another engine of inequality, further straining our democracies,' she told attenders.

On Sunday Macron promoted the event by posting a montage of deepfake images of himself on Instagram, including a video of 'him' dancing in a disco with various 80s hairstyles, in a tongue-in-cheek reference to the technology's capabilities.

Although safety has been downplayed on the conference agenda, some in attendance were concerned about the pace of development. Max Tegmark, the scientist behind a 2023 letter calling for a pause in producing powerful AI systems, cautioned that governments and tech companies were inadvertently re-enacting the ending of the Netflix climate crisis satire Don't Look Up. The film starring Leonardo DiCaprio and Jennifer Lawrence uses a looming comet, and the refusal by the political and media establishment to acknowledge the existential threat, as a metaphor for the climate emergency – with the comet ultimately wiping out the planet.

'I feel like I have been living that movie,' Tegmark told the Guardian in an interview. 'But now it feels like we've reached the part of the film where you can see the asteroid in the sky. And people are still saying that it doesn't exist. It really feels like life imitating art.' Tegmark said the promising work at the inaugural summit at Bletchley Park in the UK in November 2023 had been partly undone. 'Basically, asteroid denial is back in full swing,' he said.

The Paris gathering has been badged as the AI action summit, whereas its UK cousin was the AI safety summit. Macron is co-chairing the summit with India's prime minister, Narendra Modi. The US vice-president, JD Vance, and Chinese vice premier, Zhang Guoqing, are among the other political attenders, although UK prime minister Sir Keir Starmer is not attending.

Existential concerns about AI focus on the development of artificial general intelligence, the term for systems that can match or exceed human intellectual capabilities at nearly all cognitive tasks. Estimates of when, and if, AGI will be reached vary, but Tegmark said that based on statements from industry figures 'the asteroid is going to strike … somewhere between one and five years from now.' Developments in AI have accelerated since 2023, with the emergence of so-called reasoning models pushing the capabilities of systems even further.
The release of a freely available reasoning model by the Chinese company DeepSeek has also intensified the competitive rivalry between China and the US, which has led the way in AI breakthroughs.

The head of Google's AI efforts, Demis Hassabis, said on Sunday the tech industry was 'perhaps five years away' from achieving AGI and safety conversations needed to continue. 'Society needs to get ready for that and … the implications that will have.' Speaking in Paris before the summit, Hassabis added that AGI carried 'inherent risk', particularly in the field of autonomous 'agents', which carry out tasks without human intervention, but those concerns could be assuaged. 'I'm a big believer in human ingenuity. I think if we put the best brains on it, and with enough time and enough care … then I think we'll get it right.'

Yahoo
08-02-2025
- Business
- Yahoo
Oracle Corporation (ORCL): Among the Game-Changing Stocks for AI Revolution
We recently compiled a list of the . In this article, we are going to take a look at where Oracle Corporation (NYSE:ORCL) stands against the other stocks.

Artificial intelligence that is smarter than humans could be more dangerous than helpful. That's the sentiment echoed by some of the world's most prominent AI scientists. Max Tegmark, a professor at the Massachusetts Institute of Technology, and Yoshua Bengio, "godfather of AI" and a professor at the Université de Montréal, have raised concerns about the proliferation of AI agents without guardrails. According to Bengio, there is a greater risk in developing artificial intelligence agents without safeguards or without knowing how they will behave. 'Do we want to be in competition with entities that are smarter than us? It's not a very reassuring gamble, right? So we have to understand how self-preservation can emerge as a goal in AI,' Bengio said in a podcast on CNBC.

Tegmark believes there is a need for safety standards to govern how AI tools operate. The ultimate goal is to have AI agents and tools that are both powerful and under human control. "I think, on an optimistic note here, we can have almost everything that we're excited about with AI ... if we simply insist on having some basic safety standards before people can sell powerful AI systems," Tegmark said. In 2023, Tegmark's Future of Life Institute recommended halting the creation of AI systems that could rival humans in intelligence. Although that hasn't happened, Tegmark stated that the topic is being discussed and that it's time to act to determine how to implement safeguards to regulate AGI.

The sentiments of the two AI scientists come on the heels of US President Donald Trump repealing former President Joe Biden's guardrails that sought to govern the development of artificial intelligence. One key provision of the previous order required tech companies developing the most advanced AI models to share details about their work with the government before releasing them to the public. While tech giants had welcomed the AI safety measure, there was disquiet among some big players insisting that the order, which invoked the Defense Production Act, had the potential to derail the nascent industry. Venture capitalist Marc Andreessen had already warned before Trump came to office that the Biden order would hamper AI development given the onerous regulations in play.

Trump had always been vocal against the AI safety measure, reiterating during the campaign that it hindered innovation and imposed radical leftwing ideas on technology development. The Biden order itself, however, did not restrict free speech. Certain provisions of the measure sought standards for watermarking AI-generated content in an effort to lessen the risks of impersonation and abusive sexual deepfake imagery. Several federal agencies were also instructed to protect against the possible negative effects of AI applications, cautioning against careless applications that reproduced and intensified existing inequities.

For this article, we selected AI stocks by going through news articles, stock analysis, and press releases. These stocks are also popular among hedge funds. Why are we interested in the stocks that hedge funds pile into? The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds.
Our quarterly newsletter's strategy selects 14 small-cap and large-cap stocks every quarter and has returned 275% since May 2014, beating its benchmark by 150 percentage points (see more details here).

Oracle Corporation (NYSE:ORCL) is a software infrastructure company that offers various cloud software applications. The company is taking a different approach, even as other software giants focus their resources on developing general-purpose virtual assistants. Instead, the company focuses on integrating AI features into software applications to speed up everyday but tedious tasks. Consequently, on February 6th, Oracle Corporation (NYSE:ORCL) confirmed integrating a new set of artificial intelligence tools into NetSuite, its corporate finance software offerings. The integration seeks to make it easier for consumers to get price quotes on various purchases. The feature can compile a quote through a conversation with a chatbot or by asking the customer what they want. Its primary goal is to speed up the purchase process in the e-commerce business.

Overall, ORCL ranks 1st on our list of the game-changing stocks for the AI revolution. While we acknowledge the potential of ORCL as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns and doing so within a shorter time frame. If you are looking for an AI stock that is more promising than ORCL but that trades at less than 5 times its earnings, check out our report about the . READ NEXT: 20 Best AI Stocks To Buy Now and Complete List of 59 AI Companies Under $2 Billion in Market Cap. Disclosure: None. This article was originally published at Insider Monkey.