'AI godfather' sounds the alarm on growing risks in the AI race
HANOI, VIETNAM - Media OutReach Newswire - 28 February 2025 - The rise of powerful artificial intelligence (AI) like DeepSeek is transforming the world at an unprecedented pace, sparking both enthusiasm about its potential and deep concern about its risks. On that subject, 'AI godfather' Yoshua Bengio – Laureate of the 2024 VinFuture Grand Prize – highlighted the urgent need for national and global efforts to regulate AI and ensure equitable benefits for all.
'AI Godfather' and 2024 VinFuture Grand Prize Laureate Yoshua Bengio (left) warns global superpowers about the risks AI poses to humanity.
Double-edged sword
Since the beginning of 2025, the rise of DeepSeek has been described as a 'black swan' moment, a game-changing shift that reshaped the AI landscape almost overnight. It is a wake-up call showing that powerful AI can be achieved without exorbitant costs, challenging the prevailing 'money equals progress' model.
Yoshua Bengio, widely regarded as 'one of the godfathers of modern AI,' warned that this breakthrough in AI affordability could pose serious risks.
'If open-weight AI models, like DeepSeek, are distributed completely, terrorists may exploit them for disinformation campaigns, cyberattacks, or even bioweapon development,' he stated in an interview with the VinFuture Foundation. 'This is a double-edged sword because while these systems become more available, cheaper, and more powerful, they also lower the barrier to misuse.'
Yoshua Bengio, a pioneer in neural networks and deep learning algorithms, has been recognized with numerous prestigious international awards, including the 2018 A.M. Turing Award, the 2024 VinFuture Grand Prize, and most recently, the 2025 Queen Elizabeth Prize for Engineering. He emphasized that AI is evolving toward greater autonomy, with systems capable of planning and acting in pursuit of a goal. 'Today, AI already surpasses humans in certain domains. It can master hundreds of languages and pass PhD-level exams across multiple disciplines,' he explained.
Despite AI's current limitations in long-term planning, major technology corporations have poured billions of dollars into developing AI agents capable of autonomous decision-making over extended periods. While this promises efficiency gains, it also raises concerns about large-scale job displacement.
Beyond economic shifts, a far more critical issue looms: the loss of human control over AI. In controlled experiments, some AI systems have even engaged in deceptive behavior to avoid being shut down, a troubling sign of self-preservation tendencies.
'This is alarming because we don't want machines that will compete with us,' he emphasized.
According to Bengio, while today's AI systems are not yet intelligent enough to pose a major threat, the trajectory is concerning.
'In a few years, they might be sufficiently smarter, and we need to start paying attention before it is too late,' Bengio warned.
Beyond these technical risks, AI presents a profound threat to privacy and civil liberties. Recently, the comprehensive International AI Safety Report, chaired by Yoshua Bengio and compiled by 96 experts from 30 countries and organizations (including the UN, EU, and OECD) to guide policymakers on AI safety, revealed the growing potential for AI misuse in malicious activities.
Bengio noted that AI's ability to process vast amounts of data can grant individuals, corporations, or governments unprecedented control. Given AI's uncertain future, he said that how humanity manages AI will be central to preventing this scenario. 'We need to make sure that no single person, no single corporation, and no single government can have total power over superintelligent AI,' he emphasized.
Advances by the Chinese startup DeepSeek could further intensify the AI race among superpowers, a worrying development in a field that has been dominated in recent years by Silicon Valley and large Western tech companies.
'The danger here is that in their race to outpace each other, safety issues might be overlooked. We could all be victims of this race if we are not careful enough,' Bengio cautioned.
Moreover, the intensifying race is expected to have profound environmental consequences, particularly in energy consumption. Major AI companies, driven by the prospect of massive profits, are willing to absorb high energy costs. This surge in demand will inevitably drive up energy prices across the board, including electricity, oil, and other resources, affecting not just tech firms but households and industries worldwide.
This is where unchecked market forces and national competition could lead to global losses. 'That is why government intervention is crucial. Policymakers must negotiate agreements that cap energy consumption at sustainable levels. Otherwise, the forces of competition between companies will only accelerate AI expansion in ways that are not just unsustainable but potentially dangerous,' Bengio urged.
Bridging the AI divide
The godfather of AI has issued urgent calls to establish robust ethical frameworks and regulatory measures that ensure responsible AI development and deployment.
'Currently, there is essentially no regulatory framework almost anywhere in the countries where these systems are being developed. I think the governments have a responsibility to at least require a kind of reporting to them,' he said.
Responsibility is another key aspect. In many countries, legal principles hold companies accountable for products that cause harm. However, when it comes to software, liability remains a grey area, according to Bengio. 'Clarifying liability laws would be a simple but effective step. If companies knew they could face lawsuits for negligence, they would have stronger incentives to manage risks properly,' he asserted.
He also emphasized that managing these risks will require a concerted effort from individuals and institutions who recognize the existential dangers, such as catastrophic malicious use. Meanwhile, concerns over job security and future employment opportunities loom. 'The timeline for this shift is uncertain, but we could see radical transformations within five to ten years,' Bengio predicted.
While some jobs will inevitably be replaced by automation, Bengio emphasized that not all professions are equally at risk. 'Expanding digital and AI education is essential, but it will not be a universal solution. Not everyone can become an AI engineer,' he noted. Instead, roles that require emotional intelligence and human interaction, including healthcare professionals, therapists, and managers, are more likely to endure. Rather than focusing on individual adaptation, Bengio posed a larger question: Can AI deployment be deliberately shaped to minimize disruption?
'Again, this is something that has to be done globally, which is very challenging. We should do it in a way that does not create radical disruptions in the social fabric,' he concluded.
Beyond national regulations, Bengio stressed the need for global coordination. He highlighted that, eventually, humanity should aim for global agreements and treaties, similar to how other scientific and technological risks are handled. As AI rapidly reshapes industries, new divides in wealth, employment, and political power could deepen unless proactive measures are taken. Bengio warned that AI is currently concentrated in the hands of a few corporations and nations.
He cited Vietnam, a country with a strong industrial sector, as an example. If widespread automation shifts production to AI-powered facilities in wealthier nations like the US, it could lead to significant job losses and economic hardship in countries dependent on manufacturing exports.
Therefore, Bengio suggested establishing global negotiations: a form of exchange in which countries developing advanced AI might ask other countries to refrain from creating potentially dangerous AI. In return, the wealth generated by these AI systems, such as new technologies and medical advancements, should be shared globally.
'Of course, we are very far from this, but we need to start those discussions at a global level,' he emphasized.
The first step toward bridging the AI divide is fostering collaboration between emerging economies and technologically advanced nations. Bengio highlighted the importance of initiatives like the VinFuture Prize, which draws global attention to scientific advancements in regions outside the traditional tech powerhouses.
'A big prize like the VinFuture Prize can make leading scientists far more aware of what is happening in Vietnam and other developing countries,' he explained.
Countries such as Vietnam, India, and Brazil already possess strong talent pools and growing expertise in AI. By forming strategic partnerships with resource-rich nations like Canada and European countries, they can develop competitive AI projects on a global scale. Such collaborations, if carefully structured, could ensure a more equitable distribution of technological power, according to Bengio.
Moreover, Bengio stressed the importance of bridging the gap between academia and industry. 'By recognizing and supporting breakthrough innovations, the VinFuture Prize encourages deeper collaboration between scientists, industry leaders, and policymakers, as well as fosters global dialogue on responsible AI,' he said.
The VinFuture Foundation, established on International Human Solidarity Day on December 20th, 2020, is a non-profit organization co-founded by billionaire Mr. Pham Nhat Vuong and his wife, Mrs. Pham Thu Huong. The Foundation's core activity is awarding the annual VinFuture Prize, which recognizes transformative scientific and technological innovations capable of making significant positive changes in the lives of millions of people worldwide.
The nomination period for the 2025 VinFuture Prize will close at 2:00 PM on April 17, 2025 (Vietnam time, GMT+7).
The VinFuture Prize consists of four prestigious awards presented each year. The most esteemed is the VinFuture Grand Prize, valued at US$3 million, making it one of the largest annual prizes globally. Additionally, there are three Special Prizes, each valued at US$500,000, specifically dedicated to honoring women innovators, innovators from developing countries, and innovators with outstanding achievements in emerging fields.