Latest news with #post-AGI
Yahoo
5 days ago
- Business
- Yahoo
What Happens When AI Replaces Workers?
On Wednesday, Anthropic CEO Dario Amodei declared AI could eliminate half of all entry-level white-collar jobs within five years. Last week, a senior LinkedIn executive reported that AI is already starting to take jobs from new grads. In April, Fiverr's CEO made it clear: 'AI is coming for your job. Heck, it's coming for my job too.' Even the new Pope is warning about AI's dramatic potential to reshape our economy.

Why do they think this? The stated goal of the major AI companies is to build artificial general intelligence, or AGI, defined as 'a highly autonomous system that outperforms humans at most economically valuable work.' This isn't empty rhetoric: companies are spending over a trillion dollars to build toward AGI, and governments around the world are supporting the race to develop this technology.

They're on track to succeed. Today's AI models score as well as humans on many standardized tests. They outperform most professional programmers at competitive programming, and they beat everyone but the top experts on science questions. As a result, AI industry leaders believe they could achieve AGI sometime between 2026 and 2035. Among insiders at the top AI companies, the near-consensus opinion is that the day of mass technological unemployment, when most people lose their jobs to AI, will arrive soon.

AGI is coming for every part of the labor market. It will hit white-collar workplaces first, and soon after will reach blue-collar workplaces as robotics advances. In a post-AGI world, an AI can likely do your work better and cheaper than you can: while training a frontier AI model is expensive, running additional copies of it is cheap, and those costs are falling rapidly. A commonly proposed solution for an impending era of technological unemployment is a government-granted universal basic income (UBI).
But this could dramatically change how citizens participate in society, because work is most people's primary bargaining chip. Our modern world is upheld by a simple exchange: you work for someone with money to pay you, because you have time or skills that they don't. The economy depends on workers' skills, judgment, and consumption, and because it does, workers have historically been able to bargain for higher wages and 40-hour work weeks.

With AGI, we are poised to change, if not entirely sever, that relationship. For the first time in human history, capital might fully substitute for labor. If this happens, workers won't be necessary for the creation of value, because machines will do it better and cheaper. As a result, your company won't need you to increase its profits, and your government won't need you for its tax revenue.

We could face what we call 'The Intelligence Curse': powerful actors such as governments and companies create AGI, and subsequently lose their incentives to invest in people. Just as in oil-rich states afflicted with the 'resource curse,' governments won't have to invest in their populations to sustain their power. In the worst-case scenario, they won't have to care about humans, so they won't.

But our technological path is not predetermined. We can build our way out of this problem. Many of the people grappling with the other major risks from AGI (that it goes rogue, or helps terrorists create bioweapons, for example) focus on centralizing, regulatory solutions: track all the AI chips, require permits to train AI models. They want to make sure bad actors can't get their hands on powerful AI, and that no one accidentally builds AI that could literally end the world. However, AGI will not just be the means of mass destruction; it will be the means of production too. And centralizing the means of production is not just a security issue, it is a fundamental decision about who holds power.
We should instead avert the security threats from AI by building technology that defends us. AI itself could help us make sure the code that runs our infrastructure is secure from attack. Investments in biosecurity could block engineered pandemics. An Operation Warp Speed for AI alignment could ensure that AGI doesn't go rogue. And if we protect the world against the extreme threats that AGI might bring about, we can diffuse this technology broadly, to keep power in your hands.

We should accelerate human-boosting AI over human-automating AI. Steve Jobs once called computers 'bicycles for the mind,' for the way they make us faster and more efficient. With AI, we should aim for a motorcycle for the mind, rather than a wholesale replacement of it. The market for technologies that keep and expand our power will be tremendous. Already today, the fastest-growing AI startups are those that augment rather than automate humans, such as the code editor Cursor. And as AI gets ever more powerful and autonomous, building human-boosting tools today could set the stage for human-owned tools tomorrow. AI tools could capture the tacit knowledge you exercise every day and turn it into your personal data moat.

The labor of the masses can be replaced either with the AI and capital of a few, or with the AI and capital of us all. We should build technologies that let regular people train their own AI models, run them on affordable hardware, and keep control of their data, instead of everything running through a few big companies. You could be the owner of a business, deploying AI you control on data you own to solve problems that feel unfathomable today. Your role in the economy could move from direct labor, to managing AI systems the way a CEO manages their direct reports, to steering the direction of AI systems working for you the way a company board weighs in on long-term strategy. The economy could run on autopilot, and superhumanly fast.
Even when AI can work better than you, if you own and control your piece of it, you could be a player with real power, rather than just hoping for a UBI that might never come. To adapt the words of G. K. Chesterton, the problem with AI capitalism would not be too many capitalists, but too few. If everyone owns a piece of the AI future, all of us can win.

And of course, AGI will make good institutions and governance more important than ever. We need to strengthen democracy against corruption and the pull of economic incentives before AGI arrives, to ensure regular people can still win if we reach the point where governments and large corporations don't need us.

What's happening right now is an AGI race, even if most of the world hasn't woken up to it. The AI labs have an advantage in AI, but to automate everyone else they need to train their AIs in the skills and knowledge that run the economy, and then go and outcompete the people currently providing those goods and services. Can we use AI to lift ourselves up, before the AI labs train the AIs that replace us? Can we retain control over the economy, even as AI becomes superintelligent? Can we achieve a future where power still comes from the people? It is up to us all to answer those questions.


Forbes
15-05-2025
- Science
- Forbes
After Reaching AGI Some Insist There Won't Be Anything Left For Humans To Teach AI About
AGI is going to need to keep up with expanding human knowledge even in a post-AGI world. In today's column, I address a prevalent assertion that after AI advances to become artificial general intelligence (AGI), there won't be anything else for humans to teach AGI. The assumption is that AGI will know everything that we know; ergo, there isn't any ongoing need, or even value, in trying to train AGI on anything else. Turns out that's hogwash (misguided), and there will still be a lot of human-AI, or shall we say human-AGI, co-teaching going on. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to reach artificial general intelligence (AGI), or perhaps even the outstretched possibility of artificial superintelligence (ASI). AGI is AI considered on par with human intellect, seemingly able to match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways; the idea is that ASI could run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all; it might only be achievable decades or even centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale, given where we currently are with conventional AI.

A common viewpoint is that if we do attain AGI, the AGI will know everything that humans know.
All human knowledge will be at the computational fingertips of AGI. In that case, the seemingly logical conclusion is that AGI won't have anything else to learn from humans. The whole kit and caboodle will already be in place.

For example, if you find yourself idly interested in Einstein's theory of relativity, no worries, just ask AGI. The AGI will tell you all about Einstein's famed insights. You won't need to look up the theory anywhere else. AGI will be your one-stop shopping bonanza for all human knowledge. Suppose you decided that you wanted to teach AGI about how important Einstein was as a physicist. AGI would immediately tell you that you needn't bother. The AGI already knows the crucial role that Einstein played in human existence. Give up trying to teach AGI anything at all, since AGI has it all covered. Period, end of story.

There are several false or misleading assumptions underlying the strident belief that we won't be able to teach AGI anything new. First, keep in mind that AGI will be principally trained on written records, such as the massive amount of writing found across the Internet, including essays, stories, poems, etc. Ask yourself whether the written content on the Internet is indeed a complete capture of all human knowledge. It isn't. There are written records that aren't on the Internet: some haven't been digitized, and some that have been digitized haven't been posted online. The crux is that there will still be a lot of content that AGI won't have seen. In a post-AGI world, it is plausible to assume that humans will keep posting more content to the Internet, and that AGI can demonstrably learn, on an ongoing basis, by scanning that added content.

Second, AGI won't know what's in our heads. I mean to say that there is knowledge in our noggins that isn't necessarily written down and placed on the Internet. AGI won't be privy to any of that brainware content.
As an aside, many research efforts are advancing brain-machine interfaces (BMI), see my coverage at the link here, which may someday allow for the reading of minds, but we don't know when that will materialize, nor whether it will coincide with attaining AGI.

Another consideration is that time continues to flow along in a post-AGI era. The world will keep changing, and humans will come up with new thoughts that we hadn't conceived of previously. AGI, if frozen or out of touch with the latest human knowledge, will only have captured the human knowledge that existed at an earlier point in time. The odds are that we would want AGI to keep up with whatever new knowledge we've divined since its initial launch.

Imagine things this way. Suppose that we had managed to attain AGI before Einstein was even born. I know that seems zany, but just go with the idea for the moment. If AGI were locked into only the human knowledge that predated Einstein, this amazing AGI would regrettably miss out on the theory of relativity. Since it is farfetched to turn back the clock and postulate that AGI would be attained before Einstein, let's recast the idea: there is undoubtedly another Einstein-like person yet to be born. Thus, at some point in the future, once AGI is around, it stands to reason that AGI would benefit from learning newly conceived knowledge.

By and large, we can reject the premise that AGI will have learned all human knowledge: that brazen claim refers solely to the human knowledge known at the time of AGI's attainment, and to what was readily available to the AGI at that point in time. That leaves a whole lot of additional teaching on the table, and the passage of time will further expand the new knowledge that humans could share with AGI.

Will AGI want to be taught by humans, or at least learn from whatever additional knowledge humans possess? One answer is no.
You see, some worry that AGI will find it insulting to learn from humans and will therefore avoid doing so. The logic seems to be that since AGI will be as smart as humans, the AGI might get uppity, decide we are inferior, and conclude that we couldn't possibly have anything useful for it to gain. I am more upbeat on this posture. I would like to think that an AGI as smart as humans would crave new knowledge. AGI would be eager to acquire new knowledge and would do so with rapt determination. Whether the knowledge comes from humans or beetles, the AGI wouldn't especially care. Garnering new knowledge would be a key precept of AGI, which I contend is a much more logical assumption than the conjecture that AGI would turn up its nose at gleaning new human-devised knowledge.

Would humans be willing to learn from AGI? Gosh, I certainly hope so. It would seem a crazy notion for humankind to decide that we won't learn things from AGI. AGI would be a huge boon to human learning. You could make a compelling case that the advent of AGI could immensely increase human knowledge, assuming that people can tap into AGI easily and at low cost. Envision everyone with Internet access seeking out AGI to train or teach them on whatever topic they desire. Boom, drop the mic.

In a post-AGI realm, the best course of action would be for AGI to learn from us on an ongoing basis while, likewise on an ongoing basis, we learn from AGI. That's a synergy worthy of great hope and promise. The last word on this, for now, goes to the legendary Henry Ford: 'Coming together is a beginning; keeping together is progress; working together is success.' If humanity plays its cards right, we will have human-AGI harmony and lean heartily into the synergy that arises accordingly.