Artificial General Intelligence
JOSEPH BRONIO, REPORTER: Will AI permanently destroy humanity? It's a science fiction concept that's been explored in Hollywood many times over the decades.
HUGH JACKMAN AS VINCENT MOORE IN CHAPPIE: The problem with artificial intelligence is it's way too unpredictable.
More recently, though, it's been a real question on the minds of many experts around the world, and a risk that many say should be at the forefront of developers' minds. When you look at how fast AI tech has developed over just the last few years, you can't help but wonder where we'll be in the next few.
AI-GENERATED CHARACTER: Where were you on the night of the bubble bath?
And according to Google DeepMind, the company which made the AI tech responsible for what you're watching right now, the next big step will be to a level known as artificial general intelligence, or AGI, which is…
XUEYIN ZHA, ANU AI RESEARCHER: I'm trying to find a very crisp definition, but it just isn't really there, because people disagree on it. To be very honest with you, it's more of a marketing term by tech giants. What it usually means is that it's something so intelligent that it is able to excel at all domains of intelligence and exceed human ability in all of those domains. So not just any specific reasoning task, but spatial, logical and emotional intelligence, decision making, and even physical execution as well.
In a report released in April, researchers from Google DeepMind outlined their belief that AGI could cause severe harm, including, potentially, the permanent destruction of humanity. And they're not the only ones with these concerns. In fact, around 600 AI scientists and Big Tech names, including the likes of Bill Gates and Sam Altman, signed the Statement on AI Risk in 2023 amid growing concern about the danger this tech poses to humanity. And last year, OpenAI safety researcher Daniel Kokotajlo quit his job, claiming that the company was being reckless in its pursuit of AGI.
None of the concern has slowed down development, though. Many companies, like Google DeepMind and OpenAI, are confident they will create AGI in some form by about 2030, and other companies, like Anthropic, say it could even be within six months. But if these companies believe there is such a huge risk to humanity, why are they rushing to develop it?
XUEYIN ZHA: Realistically speaking, you know, it kind of already has its own momentum. Even if you stop, people who don't even work for a tech giant might just go home and tweak an open-weight model and figure something out. Yeah, the ship has sailed.
At this point, you might be wondering exactly how AGI will doom us all, and, well, not even the Google DeepMind report could say. Instead, it lists four major categories of risk: misuse, when people intentionally use it for harm; misalignment, when the system develops unintended, harmful behaviour; mistakes, from design or training flaws; and structural risks, from the ways different organisations or AI systems interact with each other. But these are all risks we already have with the large language model-based, or LLM, AI of today.
XUEYIN ZHA: They couldn't even imagine what the consequences of AGI in particular are, other than "oh, what LLMs are already doing".
That means there are already lots of safety regulations and guardrails in place for those risks, but AGI is a tricky field to predict. While entrepreneurs and companies say AGI is just around the corner, and are pouring billions of dollars into scaling, or growing, their large language models to create it, AI scientists from around the world haven't been able to agree on when we'll see AGI in reality, whether we'll ever see it with the methods we have now, or even what it will be to begin with. And while that means there could be some unexpected existential risks…
ARNOLD SCHWARZENEGGER AS THE TERMINATOR: I'll be back.
… probably not like that. There could also be massive benefits for humanity, and experts like Xueyin say that's what we should be focusing on.
XUEYIN ZHA: For high schoolers, I think it will be really important for them to not inherit some of the fears that older generations might have. It'd be interesting for them to think more about how they could shape society with AI, right? There are a lot of AI-native skills and AI-native working styles that are actually emerging, and I think they actually face, you know, even more opportunities because they grow up AI native. It will get rid of some existing jobs, for sure, but it will also open up new job areas, some of which we haven't even imagined. I think whether you're worried or excited depends on how curious you are and how open-minded you are. If we really lean into AI with an open mind, we'll see that it actually helps us liberate our human capability more.