Latest news with #GladstoneAI


Daily Mail
01-05-2025
- Science
- Daily Mail
AI experts reveal how the world could collapse under the control of smart technology
Experts in artificial intelligence left podcaster Joe Rogan stunned as they spoke about how quickly the technology is evolving. The UFC commentator invited AI experts Jeremie and Edouard Harris - the CEO and CTO, respectively, of Gladstone AI - onto his popular podcast, The Joe Rogan Experience. Gladstone AI is dedicated to promoting the responsible development and adoption of artificial intelligence. Rogan got into his concerns about the emerging technology with his guests right off the bat, asking: 'If there's a doomsday clock for AI... what time is it?' Jeremie said 'there's disagreement' among experts in the field, but concluded that AI 'could hit human-level... capabilities across the board' by 2027 or 2028. He quipped: 'You'll be able to have AI on your show and ask it what the doomsday clock is like by then.' But Rogan lamented that the AI probably would not laugh at his jokes. Jeremie explained that a lab called METR recently conducted a study that showed how quickly AI models are improving. In that study, researchers gave the models realistic tasks to carry out and compared how long the AI took to complete them with how long humans take. They found that the models succeeded nearly 100 per cent of the time on tasks that take humans less than four minutes, and about 50 per cent of the time on tasks that take humans an hour. Edouard then noted that this task horizon is roughly doubling every four months, especially for tasks involving research and software engineering. By 2027, they concluded, the kind of work an AI researcher does in a month could be done by the AI itself with a 50 per cent success rate. At that point, Edouard joked, Rogan could have an AI system as a guest and ask it what the doomsday clock looks like. The interview comes after Demis Hassabis, the CEO of Google DeepMind, shared his belief that scientists are on track to create Artificial General Intelligence (AGI) within the next five to ten years. AGI refers to artificial intelligence with software that rivals the cognitive ability of a human. It's a hypothetical stage of AI advancement, as the technology currently can only draw on existing content and doesn't have the curiosity or imagination to create new concepts the way humans do. However, Hassabis believes that AGI is on track to become 'embedded' in our daily lives by 2035. He also said that today's technology hasn't yet achieved 'consciousness', though he acknowledged it's possible that AI systems may eventually develop self-awareness. When that does happen, though, Hassabis said it may be difficult for humans to recognize. He said: 'With machines - they're running on silicon, so even if they exhibit the same behaviors, and even if they say the same things, it doesn't necessarily mean that this sensation of consciousness that we have is the same thing they will have.' If AI models continue to progress, Hassabis told Time Magazine in a separate interview, they could help bring an end to human disease and the climate crisis. He said: 'I think some of the biggest problems that face us today as a society, whether that's climate or disease, will be helped by AI solutions. 'I'd be very worried about society today if I didn't know that something as transformative as AI was coming down the line.' At the same time, though, Hassabis acknowledged that AI could cause destruction - especially if international cooperation isn't achieved.
He argued that AI models need to be tested for dangerous capabilities, and that legal guardrails need to be implemented to prevent the technology from falling into the wrong hands. He noted that it is 'extremely difficult' to monitor AI advancement to prevent systems from acting autonomously. Other AI experts have offered a more pessimistic perspective on the future. Microsoft co-founder Bill Gates expressed his concerns in a recent interview with Jimmy Fallon. He said: 'I love the way it'll drive innovation forward, but I think it's a little bit unknown if we'll be able to shape it. And so, legitimately, people are like "wow, this is a bit scary." It's completely new territory.' Elsewhere, computer scientist Geoffrey Hinton has warned that AI could wipe out the human race. Hinton is known as the 'Godfather of AI' for his work creating the foundations for machine learning. He was awarded the Nobel Prize in Physics and resigned from his job at Google out of concern that AI advancement was heading in a dangerous direction. 'The situation we're in now is that most of the experts in the field think that sometime, within probably the next 20 years, we're going to develop AIs that are smarter than people,' Hinton said in an interview with the BBC. 'And that's a very scary thought,' he added. Hinton also said that whether AI helps or hurts society in the future depends on how governments regulate it. He said: 'We need regulations to stop people using it for bad things, and we don't appear to have those kinds of political systems in place at present.'
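To put the METR trend discussed above in concrete terms, here is a minimal back-of-the-envelope sketch in Python. It assumes, as the interview suggests, that the length of task AI can complete at a 50 per cent success rate starts at about an hour of human work and roughly doubles every four months; the 160-hour work month and the variable names are illustrative assumptions, not figures from METR.

```python
# Back-of-the-envelope extrapolation of the "task horizon" trend described above.
# Assumptions (taken from the interview, not METR's raw data): AI currently
# completes ~1-hour human tasks at a 50% success rate, and that horizon roughly
# doubles every 4 months. A researcher work-month is taken as ~160 hours.

HOURS_PER_WORK_MONTH = 160      # assumed: 4 weeks x 40 hours
DOUBLING_PERIOD_MONTHS = 4      # doubling time suggested in the discussion
horizon_hours = 1.0             # tasks AI handles at 50% success today

elapsed_months = 0
while horizon_hours < HOURS_PER_WORK_MONTH:
    horizon_hours *= 2
    elapsed_months += DOUBLING_PERIOD_MONTHS

print(f"~{elapsed_months} months (~{elapsed_months / 12:.1f} years) until the "
      f"50% horizon reaches a month of researcher work ({horizon_hours:.0f} h)")
# With these assumptions the crossover comes after ~32 months, i.e. around 2027
# if counted from early 2025, consistent with the figure quoted on the show.
```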


Yahoo
23-04-2025
- Business
- Yahoo
Report: US AI companies vulnerable to espionage, sabotage from China
American tech companies have spent billions of dollars in the name of artificial intelligence, but who is making the technology? A new report indicates the U.S. is too reliant on China for components used to build AI datacenters. Brothers Jeremie and Edouard Harris co-founded Gladstone AI, the company behind the report. They join Chris Cuomo to discuss the potential national security threat faced by the U.S. as companies head toward artificial superintelligence. #China #artificialintelligence #nationalsecurity


Time Magazine
22-04-2025
- Business
- Time Magazine
Exclusive: Every AI Datacenter Is Vulnerable to Chinese Espionage, Report Says
Tech companies are investing hundreds of billions of dollars to build new U.S. datacenters where—if all goes to plan—radically powerful new AI models will be brought into existence. But all of these datacenters are vulnerable to Chinese espionage, according to a report published Tuesday. At risk, the authors argue, is not just tech companies' money, but also U.S. national security amid the intensifying geopolitical race with China to develop advanced AI. The unredacted report was circulated inside the Trump White House in recent weeks, according to its authors. TIME viewed a redacted version ahead of its public release. The White House did not respond to a request for comment. Today's top AI datacenters are vulnerable to both asymmetrical sabotage—where relatively cheap attacks could disable them for months—and exfiltration attacks, in which closely guarded AI models could be stolen or surveilled, the report's authors warn. Even the most advanced datacenters currently under construction—including OpenAI's Stargate project—are likely vulnerable to the same attacks, the authors tell TIME. 'You could end up with dozens of datacenter sites that are essentially stranded assets that can't be retrofitted for the level of security that's required,' says Edouard Harris, one of the authors of the report. 'That's just a brutal gut-punch.' The report was authored by brothers Edouard and Jeremie Harris of Gladstone AI, a firm that consults for the U.S. government on AI's security implications. In their year-long research period, they visited a datacenter operated by a top U.S. technology company alongside a team of former U.S. special forces who specialize in cyberespionage. In speaking with national security officials and datacenter operators, the authors say, they learned of one instance where a top U.S. tech company's AI datacenter was attacked and intellectual property was stolen. They also learned of another instance where a similar datacenter was targeted in an attack against a specific unnamed component which, if it had been successful, would have knocked the entire facility offline for months. The report addresses calls from some in Silicon Valley and Washington to begin a 'Manhattan Project' for AI, aimed at developing what insiders call superintelligence: an AI technology so powerful that it could be used to gain a decisive strategic advantage over China. All the top AI companies are attempting to develop superintelligence—and in recent years both the U.S. and China have woken up to its potential geopolitical significance. Although hawkish in tone, the report does not advocate for or against such a project. Instead, it says that if one were to begin today, existing datacenter vulnerabilities could doom it from the start. 'There's no guarantee we'll reach superintelligence soon,' the report says. 'But if we do, and we want to prevent the [Chinese Communist Party] from stealing or crippling it, we need to start building the secure facilities for it yesterday.'
China Controls Key Datacenter Parts
Many critical components for modern datacenters are mostly or exclusively built in China, the report points out. And due to the booming datacenter industry, many of these parts are on multi-year back orders. What that means is that an attack on the right critical component can knock a datacenter offline for months—or longer. Some of these attacks, the report claims, can be incredibly asymmetric.
One such potential attack—the details of which are redacted in the report—could be carried out for as little as $20,000, and if successful could knock a $2 billion datacenter offline for between six months and a year. China, the report points out, is likely to delay shipment of components necessary to fix datacenters brought offline by these attacks, especially if it considers the U.S. to be on the brink of developing superintelligence. 'We should expect that the lead times on China-sourced generators, transformers, and other critical data center components will start to lengthen mysteriously beyond what they already are today,' the report says. 'This will be a sign that China is quietly diverting components to its own facilities, since after all, they control the industrial base that is making most of them.'
AI Labs Struggle With Basic Security, Insiders Warn
The report says that neither existing datacenters nor AI labs themselves are secure enough to prevent AI model weights—essentially their underlying neural networks—from being stolen by nation-state level attackers. The authors cite a conversation with a former OpenAI researcher who described two vulnerabilities that would allow attacks like that to happen—one of which had been reported on the company's internal Slack channels, but was left unaddressed for months. The specific details of the attacks are not included in the version of the report viewed by TIME. An OpenAI spokesperson said in a statement: 'It's not entirely clear what these claims refer to, but they appear outdated and don't reflect the current state of our security practices. We have a rigorous security program overseen by our Board's Safety and Security Committee.' The report's authors acknowledge that things are slowly getting better. 'According to several researchers we spoke to, security at frontier AI labs has improved somewhat in the past year, but it remains completely inadequate to withstand nation state attacks,' the report says. 'According to former insiders, poor controls at many frontier AI labs originally stem from a cultural bias towards speed over security.' Independent experts agree many problems remain. "There have been publicly disclosed incidents of cyber gangs hacking their way to the [intellectual property] assets of Nvidia not that long ago," Greg Allen, the director of the Wadhwani AI Center at the Washington think-tank the Center for Strategic and International Studies, tells TIME in a message. "The intelligence services of China are far more capable and sophisticated than those gangs. There's a bad offense / defense mismatch when it comes to Chinese attackers and U.S. AI firm defenders."
Superintelligent AI May Break Free
A third crucial vulnerability identified in the report is the susceptibility of datacenters—and AI developers—to powerful AI models themselves. In recent months, studies by leading AI researchers have shown top AI models beginning to exhibit both the drive, and the technical skill, to 'escape' the confines placed on them by their developers. In one example cited in the report, during testing, an OpenAI model was given the task of retrieving a string of text from a piece of software. But due to a bug in the test, the software didn't start. The model, unprompted, scanned the network in an attempt to understand why—and discovered a vulnerability on the machine it was running on.
It used that vulnerability, also unprompted, to break out of its test environment and recover the string of text that it had initially been instructed to find. 'As AI developers have built more capable AI models on the path to superintelligence, those models have become harder to correct and control,' the report says. 'This happens because highly capable and context-aware AI systems can invent dangerously creative strategies to achieve their internal goals that their developers never anticipated or intended them to pursue.' The report recommends that any effort to develop superintelligence must develop methods for 'AI containment,' and allow leaders with a responsibility for developing such precautions to block the development of more powerful AI systems if they judge the risk to be too high. 'Of course,' the authors note, 'if we've actually trained a real superintelligence that has goals different from our own, it probably won't be containable in the long run.'
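As a rough illustration of the asymmetry described earlier in the report, the hypothetical sketch below compares the cited $20,000 attack cost with the capital tied up in a $2 billion datacenter during a six-to-twelve-month outage; the five-year amortisation period is an assumption for illustration, not a figure from the report.

```python
# Rough cost asymmetry for the sabotage scenario described in the report.
# Only the $20,000 attack cost, $2B facility cost, and 6-12 month outage come
# from the report; the 5-year amortisation period is an assumption.

attack_cost = 20_000                  # cited cost of the redacted attack
facility_cost = 2_000_000_000         # cited datacenter build cost
outage_months = (6, 12)               # cited outage range
amortisation_years = 5                # assumed useful life of the facility

monthly_capital_cost = facility_cost / (amortisation_years * 12)
for months in outage_months:
    stranded = monthly_capital_cost * months
    print(f"{months}-month outage: ~${stranded:,.0f} of idle capital, "
          f"~{stranded / attack_cost:,.0f}x the cost of the attack")
```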