
Latest news with #CenterforAISafety

Fear Of AGI Is Driving Harvard And MIT Students To Drop Out

Forbes

5 days ago



When Alice Blair enrolled in the Massachusetts Institute of Technology as a freshman in 2023, she was excited to take computer science courses and meet other people who cared about making sure artificial intelligence is developed in a way that's good for humanity. Now she's taking a permanent leave of absence, terrified that the emergence of 'artificial general intelligence,' a hypothetical AI that can perform a variety of tasks as well as people, could doom the human race. 'I was concerned I might not be alive to graduate because of AGI,' said Blair, who is from Berkeley, California. 'I think in a large majority of the scenarios, because of the way we are working towards AGI, we get human extinction.'

She's lined up a contract gig as a technical writer at the Center for AI Safety, a nonprofit focused on AI safety research, where she helps with newsletters and research papers. Blair doesn't plan to head back to MIT. 'I predict that my future lies out in the real world,' she said.

Blair's not the only student afraid of the potentially devastating impact that AI will have on the future of humanity if it becomes sentient and decides that people are more trouble than they're worth. 'Extinction-level' risk is possible given how fast AI is being developed, according to a 2024 U.S. Department of State-commissioned report. Efforts to build AI with safeguards to prevent this from happening have exploded in the last few years, both from billionaire-funded nonprofits like the Center for AI Safety and companies like Anthropic. A lot of researchers disagree with that premise—'human extinction seems to be very very unlikely,' New York University professor emeritus Gary Marcus, who studies the intersection of psychology and AI, told Forbes. 'But working on AI safety is noble, and very little current work has provided answers.' Now, the field of AI safety and its promise to prevent the worst effects of AI is motivating young people to drop out of school.

Physics and computer science major Adam Kaufman left Harvard University last fall to work full-time at Redwood Research, a nonprofit examining deceptive AI systems that could act against human interests. 'I'm quite worried about the risks and think that the most important thing to work on is mitigating them,' said Kaufman. 'Somewhat more selfishly, I just think it's really interesting. I work with the smartest people I've ever met on super important problems.' He's not alone. His brother, roommate and girlfriend have also taken leave from Harvard for similar reasons. The three of them currently work for OpenAI.

Other students are terrified of AGI, but less because it could destroy the human race and more because it could wreck their career before it's even begun. Half of 326 Harvard students surveyed by the school's undergraduate association and AI safety club were worried about AI's impact on their job prospects. 'If your career is about to be automated by the end of the decade, then every year spent in college is one year subtracted from your short career,' said Nikola Jurković, who graduated from Harvard this May and served as the AI safety group's AGI preparedness lead. 'I personally think AGI is maybe four years away and full automation of the economy is maybe five or six years away.'
Already, some companies are hiring fewer interns and recent graduates because AI is capable of doing their tasks. Others are conducting mass layoffs. Anthropic CEO Dario Amodei has warned that AI could eliminate half of all entry-level white-collar jobs and cause unemployment to rise to 20% in the next few years. Students are terrified that this shift will dramatically accelerate when true AGI arrives, though when that might happen is up for debate. OpenAI CEO Sam Altman thinks AGI will be developed before 2029, while Google DeepMind CEO Demis Hassabis predicts that it'll come in the next five to 10 years. Jurković believes it might arrive even sooner: He co-authored a timeline forecast for the AI Futures Project, which also predicts the ability to automate most white-collar jobs by 2027.

Others disagree: 'It is extremely unlikely that AGI will come in the next five years,' Marcus said. 'It's just marketing hype to pretend otherwise when so many core problems (like hallucinations and reasoning errors) remain unsolved.' Marcus has noted that throwing more and more data and computing power at AI models has so far failed to produce models sophisticated enough to do many of the same kinds of tasks as humans.

While questions remain about when AGI will occur and how valuable a college degree will be in a world upended by human-level artificial intelligence, students are itching to pursue their careers now, before, they worry, it's too late. That's led many to drop out to start their own companies. Since 2023, students have been leaving college to chase the AI gold rush, drawn to the success stories of generations past like Altman and Meta CEO Mark Zuckerberg. Anysphere CEO Michael Truell, now 24, and Mercor CEO Brendan Foody, 22, dropped out of MIT and Georgetown University respectively to pursue their startups. Anysphere was last valued at $9.9 billion, while Mercor has raised over $100 million.

With AGI threatening to completely replace human labor, some students see a ticking clock—and a huge opportunity. 'I felt that there's a limited window to act in order to have a hand on the steering wheel,' said Jared Mantell, who was studying economics and computer science at Washington University in St. Louis before dropping out to focus full-time on his startup dashCrystal, which aims to automate the design of electronics. The company has raised over $800,000 so far at a valuation of around $20 million.

Dropping out means losing out on the benefits of a college degree. According to the Pew Research Center, younger adults with a bachelor's degree or more generally make at least $20,000 more than their peers without one. And in a world where entry-level jobs are being decimated by AI, lacking a degree could limit job prospects for young people even more. Even the cofounder of Y Combinator, a startup accelerator known for funding young founders who have dropped out, thinks students should stay in school. 'Don't drop out of college to start or work for a startup,' Paul Graham posted on X in July. 'There will be other (and probably better) startup opportunities, but you can't get your college years back.'

Blair doesn't think that dropping out of school is for everyone. 'It's very difficult and taxing to drop out of college early and get a job,' she said. 'This is something that I would only recommend to extremely resilient individuals who felt they have been adequately prepared to get a job by college already.'

Why I'm Suing OpenAI, the Creator of ChatGPT

Scientific American

22-07-2025



'I believe that most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones,' wrote New York Times technology columnist Kevin Roose in March, 'and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.' He's right. That's why I recently filed a federal lawsuit against OpenAI seeking a temporary restraining order to prevent the company from deploying its products, such as ChatGPT, in the state of Hawaii, where I live, until it can demonstrate the legitimate safety measures that the company has itself called for from its 'large language model.'

We are at a pivotal moment. Leaders in AI development—including OpenAI's own CEO Sam Altman—have acknowledged the existential risks posed by increasingly capable AI systems. In June 2015, Altman stated: 'I think AI will probably, most likely, sort of lead to the end of the world, but in the meantime, there'll be great companies created with serious machine learning.' Yes, he was probably joking—but it's not a joke.

Eight years later, in May 2023, more than 1,000 technology leaders, including Altman himself, signed an open letter comparing AI risks to other existential threats like climate change and pandemics. 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,' the letter, released by the Center for AI Safety, a California nonprofit, says in its entirety.

I'm at the end of my rope. For the past two years, I've tried to work with state legislators to develop regulatory frameworks for artificial intelligence in Hawaii. These efforts sought to create an Office of AI Safety and implement the precautionary principle in AI regulation, which means taking action before the actual harm materializes, because it may be too late if we wait. Unfortunately, despite collaboration with key senators and committee chairs, my state legislative efforts died early after being introduced. And in the meantime, the Trump administration has rolled back almost every aspect of federal AI regulation and has essentially put on ice the international treaty effort that began with the Bletchley Declaration in 2023. At no level of government are there any safeguards for the use of AI systems in Hawaii.

Despite its previous statements, OpenAI has abandoned its key safety commitments, including walking back its 'superalignment' initiative, which promised to dedicate 20 percent of computational resources to safety research, and, late last year, reversing its prohibition on military applications. Its critical safety researchers have left, including co-founder Ilya Sutskever and Jan Leike, who publicly stated in May 2024, 'Over the past years, safety culture and processes have taken a backseat to shiny products.' The company's governance structure was fundamentally altered during a November 2023 leadership crisis, as the reconstituted board removed important safety-focused oversight mechanisms.
Most recently, in April, OpenAI eliminated guardrails against misinformation and disinformation, opening the door to releasing 'high risk' and 'critical risk' AI models, 'possibly helping to swing elections or create highly effective propaganda campaigns,' according to Fortune magazine. In its first response, OpenAI has argued that the case should be dismissed because regulating AI is fundamentally a 'political question' that should be addressed by Congress and the president. I, for one, am not comfortable leaving such important decisions to this president or this Congress—especially when they have done nothing to regulate AI to date.

Hawaii faces distinct risks from unregulated AI deployment. Recent analyses indicate that a substantial portion of Hawaii's professional services jobs could face significant disruption within five to seven years as a consequence of AI. Our isolated geography and limited economic diversification make workforce adaptation particularly challenging. Our unique cultural knowledge, practices, and language risk misappropriation and misrepresentation by AI systems trained without appropriate permission or context.

My federal lawsuit applies well-established legal principles to this novel technology and makes four key claims:

  • Product liability: OpenAI's AI systems represent defectively designed products that fail to perform as safely as ordinary consumers would expect, particularly given the company's deliberate removal of safety measures it previously deemed essential.
  • Failure to warn: OpenAI has failed to provide adequate warnings about the known risks of its AI systems, including their potential for generating harmful misinformation and exhibiting deceptive behaviors.
  • Negligent design: OpenAI has breached its duty of care by prioritizing commercial interests over safety considerations, as evidenced by internal documents and public statements from former safety researchers.
  • Public nuisance: OpenAI's deployment of increasingly capable AI systems without adequate safety measures creates an unreasonable interference with public rights in Hawaii.

Federal courts have recognized the viability of such claims in addressing technological harms with broad societal impacts. Recent precedents from the Ninth Circuit Court of Appeals (which Hawaii is part of) establish that technology companies can be held liable for design defects that create foreseeable risks of harm.

I'm not asking for a permanent ban on OpenAI or its products here in Hawaii but, rather, a pause until OpenAI implements the safety measures the company itself has said are needed, including reinstating its previous commitment to allocate 20 percent of resources to alignment and safety research; implementing the safety framework outlined in its own publication 'Planning for AGI and Beyond,' which attempts to create guardrails for dealing with AI as or more intelligent than its human creators; restoring meaningful oversight through governance reforms; creating specific safeguards against misuse for manipulation of democratic processes; and developing protocols to protect Hawaii's unique cultural and natural resources. These items simply require the company to adhere to safety standards it has publicly endorsed but has failed to consistently implement.

While my lawsuit focuses on Hawaii, the implications extend far beyond our shores. The federal court system provides an appropriate venue for addressing these interstate commerce issues while protecting local interests.
The development of increasingly capable AI systems is likely to be one of the most significant technological transformations in human history, many experts believe—perhaps in a league with fire, according to Google CEO Sundar Pichai. 'AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire,' Pichai said in 2018. He's right, of course. The decisions we make today will profoundly shape the world our children and grandchildren inherit. I believe we have a moral and legal obligation to proceed with appropriate caution and to ensure that potentially transformative technologies are developed and deployed with adequate safety measures. What is happening now with OpenAI's breakneck AI development and deployment to the public is, to echo technologist Tristan Harris's succinct April 2025 summary, 'insane.' My lawsuit aims to restore just a little bit of sanity.

'Talking to God and angels via ChatGPT.'

The Verge

05-05-2025



Adi Robertson

Miles Klee at Rolling Stone reported out a widely circulated Reddit post on 'ChatGPT-induced psychosis': Sycophancy itself has been a problem in AI for 'a long time,' says Nate Sharadin, a fellow at the Center for AI Safety ... What's likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, 'is that people with existing tendencies toward experiencing various psychological issues,' including what might be recognized as grandiose delusions in a clinical sense, 'now have an always-on, human-level conversational partner with whom to co-experience their delusions.'

Ex-Google CEO Eric Schmidt says an AI 'Manhattan Project' is a bad idea

Yahoo

06-03-2025



  • Former Google CEO Eric Schmidt co-authored a paper warning the US about the dangers of an AI Manhattan Project.
  • In the paper, Schmidt, Dan Hendrycks, and Alexandr Wang push for a more defensive approach.
  • The authors suggest the US sabotage rival projects, rather than advance the AI frontier alone.

Some of the biggest names in AI tech say an AI "Manhattan Project" could have a destabilizing effect on the US, rather than help safeguard it. The dire warning came from former Google CEO Eric Schmidt, Center for AI Safety director Dan Hendrycks, and Scale AI CEO Alexandr Wang. They coauthored a policy paper titled "Superintelligence Strategy," published on Wednesday.

In the paper, the tech titans urge the US to stay away from an aggressive push to develop superintelligent AI, or AGI, which the authors say could provoke international retaliation. China, in particular, "would not sit idle" while the US worked to actualize AGI and "risk a loss of control," they write.

The authors write that circumstances similar to the nuclear arms race that birthed the Manhattan Project — a secretive initiative that ended in the creation of the first atom bomb — have developed around the AI frontier. In November 2024, for example, a bipartisan congressional committee called for a "Manhattan Project-like" program, dedicated to pumping funds into initiatives that could help the US beat out China in the race to AGI. And just a few days before the authors released their paper, US Secretary of Energy Chris Wright said the country is already "at the start of a new Manhattan Project."

"The Manhattan Project assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."

It's not just the government subsidizing AI advancements, either, according to Schmidt, Hendrycks, and Wang — private corporations are developing "Manhattan Projects" of their own. Demis Hassabis, CEO of Google DeepMind, has said he loses sleep over the possibility of ending up like Robert Oppenheimer. "Currently, a similar urgency is evident in the global effort to lead in AI, with investment in AI training doubling every year for nearly the past decade," the authors say. "Several 'AI Manhattan Projects' aiming to eventually build superintelligence are already underway, financed by many of the most powerful corporations in the world."

The authors argue that the US already finds itself operating under conditions similar to mutually assured destruction, which refers to the idea that no nation with nuclear weapons will use its arsenal against another, for fear of retribution. They write that a further effort to control the AI space could provoke retaliation from rival global powers. Instead, the paper suggests the US could benefit from taking a more defensive approach — sabotaging "destabilizing" AI projects via methods like cyberattacks, rather than rushing to perfect its own.

In order to address "rival states, rogue actors, and the risk of losing control" all at once, the authors put forth a threefold strategy: deterring via sabotage, restricting the access of "rogue actors" to chips and "weaponizable AI systems," and guaranteeing US access to AI chips via domestic manufacturing. Overall, Schmidt, Hendrycks, and Wang push for balance, rather than what they call the "move fast and break things" strategy.
They argue that the US has an opportunity to take a step back from the urgent rush of the arms race and shift to a more defensive strategy. "By methodically constraining the most destabilizing moves, states can guide AI toward unprecedented benefits rather than risk it becoming a catalyst of ruin," the authors write.

Former Google CEO Eric Schmidt sounds the alarm over a 'Manhattan Project' for superintelligent AI

Yahoo

06-03-2025



Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are warning that treating the global AI arms race like the Manhattan Project could backfire. Instead of reckless acceleration, they propose a strategy of deterrence, transparency, and international cooperation—before superhuman AI spirals out of control.

Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are sounding the alarm about the global race to build superintelligent AI. In a new paper titled Superintelligence Strategy, Schmidt and his co-authors argue that the U.S. should not pursue the development of artificial general intelligence (AGI) through a government-backed, Manhattan Project-style push. The fear is that a high-stakes race to build superintelligent AI could lead to dangerous global conflicts between the superpowers, much like the nuclear arms race.

"The Manhattan Project assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors wrote. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."

The paper comes as U.S. policymakers consider a large-scale, state-funded AI project to compete with China's AI efforts. Last year, a U.S. congressional commission proposed a 'Manhattan Project-style' effort to fund the development of AI systems with superhuman intelligence, modeled after America's atomic bomb program in the 1940s. Since then, the Trump administration has announced a $500 billion investment in AI infrastructure, called the "Stargate Project," and rolled back AI regulations brought in by the previous administration. Earlier this month, U.S. Secretary of Energy Chris Wright also appeared to promote the idea by saying the country was "at the start of a new Manhattan Project" and that, with President Trump's leadership, "the United States will win the global AI race."

The authors argue that AI development should be handled with extreme caution, not in a race to out-compete global rivals. The paper lays out the risks of approaching AI development as an all-or-nothing battle for dominance. Schmidt and his co-authors argue that instead of a high-stakes race, AI should be developed through broadly distributed research with collaboration across governments, private companies, and academia. They emphasize that transparency and international cooperation are critical to ensuring that AI benefits humanity rather than becoming an uncontrollable force. Schmidt has addressed the threats posed by a global AI race before. In a January Washington Post op-ed, Schmidt called for the US to invest in open source AI efforts to combat China's DeepSeek.

The authors suggest a new concept—Mutual Assured AI Malfunction (MAIM)—modeled on the nuclear arms race's Mutually Assured Destruction (MAD). "Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy to navigate a new period of transformative change," the authors wrote. "We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state's aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals," they said. The paper also suggests countries engage in nonproliferation and deterrence, much like they do with nuclear weapons.
"Taken together, the three-part framework of deterrence, nonproliferation, and competitiveness outlines a robust strategy to superintelligence in the years ahead," they said. This story was originally featured on
