‘AI To Wipe Out Half Of Entry-Level Jobs': Trump ‘Silent', But Barack Obama Says...

Former US President Barack Obama's reaction came in response to an article that offers a blunt warning about the risks AI poses to the workforce.
Former US President Barack Obama has raised concerns about the uncertain future that artificial intelligence (AI) could bring for jobs, especially white-collar ones. His reaction came in response to an article that offers a blunt warning about the risks AI poses to the workforce.
Sharing the article on his X (formerly Twitter) account on May 30, 2025, he wrote, "At a time when people are understandably focused on the daily chaos in Washington, these articles describe the rapidly accelerating impact that AI is going to have on jobs, the economy, and how we live."
The article he shared, from Axios.com, features Dario Amodei, CEO of Anthropic, who gives a stark warning to the US government. He said AI could wipe out half of all entry-level white-collar jobs in the next one to five years and push unemployment to between 10 and 20 per cent. He urged both AI companies and the government to stop "sugar-coating" what's coming.
"Most of them are unaware that this is about to happen. It sounds crazy and people just don't believe it," he said.
Amodei explained that AI is no longer just helping workers by automating simple tasks but is starting to replace jobs in fields like technology, finance, law and consulting. He said, "It's going to happen in a small amount of time—as little as a couple of years or less."
Amodei is not alone in raising the alarm. Steve Bannon, a top official during Trump's first term and host of the popular MAGA podcast "War Room," says AI's impact on jobs is being ignored now but will become a big issue in the 2028 presidential campaign.
"I don't think anyone is taking into consideration how administrative, managerial and tech jobs for people under 30 — entry-level jobs that are so important in your 20s — are going to be eviscerated," Bannon told the outlet.
Meanwhile, Obama's post caught attention online, with many users agreeing that the issue needs more attention.
One user commented, "Good to see a former president raise awareness of the storm that is coming."
Another added, "This is a big deal. There are many jobs which it would be trivial for AI to replace. A sober warning that needs to be taken seriously."
"Agreed. Although this narrative has been present before – including during the Industrial Revolution – and we managed through it. Maybe this time is different," a person remarked.
An individual pointed out, "Undoubtedly, there will be certain repercussions. However, it is crucial not to underestimate the resilience and adaptability of humanity. Throughout history, in the face of any emerging technology, humanity has consistently demonstrated its capacity to adjust and navigate new challenges."
Obama's post echoed Amodei's concerns, but this isn't the first time he has spoken about AI's impact on jobs. Back in April, during an event at Hamilton College in New York, he shared his thoughts on how AI could affect job security. He pointed out that roles involving routine tasks are at higher risk. According to him, advanced AI models can code better than "60 per cent, 70 per cent of coders now."

Related Articles

'Biden executed and replaced': Trump's mysterious post leaves everyone guessing

Time of India | 44 minutes ago

Former President Donald Trump fueled controversy on Saturday night by posting a conspiracy theory on his Truth Social website, claiming that President Joe Biden had been assassinated in 2020 and substituted with a clone or robot. The action, which was roundly condemned, is the latest example of Trump promoting unsubstantiated claims to his millions of followers.

Around 10 p.m. on Saturday, Trump reposted a tweet from an anonymous account that stated, "There is no #JoeBiden—executed in 2020. #Biden clones doubles & robotic engineered soulless mindless entities are what you see. >#Democrats don't know the difference". The original tweet was written by a user with a fairly modest following, but Trump's repost put it in front of almost 10 million individuals. The White House did not have an immediate comment about Trump's motives or whether he actually believes the allegation.

Trump's sharing of the post is consistent with a pattern of spreading conspiracy theories and misinformation. Throughout his career, he has repeatedly posted claims about winning the 2020 election, the birthplace of former President Barack Obama, and other disproven narratives. His most recent post has been roundly criticized by political observers and fact-checkers, who observe that such posts erode public confidence and provoke discord.

Critics were quick to identify the absurdity of the allegation. "If Biden had said something like this, his entire cabinet would have invoked the 25th within the hour," one Twitter user wrote, citing the constitutional provision for removing a president from office due to inability to serve. Others called Trump's behavior a "degradation of America" and were worried about the mainstreaming of conspiracy theories in political rhetoric.

No evidence confirms Biden was executed or replaced, and human cloning is still a scientific impossibility. The two presidents have been seen together in public and debated one another on numerous occasions since 2020. As of now, neither Trump nor his representatives have explained why he decided to share the post, and the White House has declined to comment further.

Trump's recent social media activities highlight repeated worries regarding the dissemination of misinformation and its influence on American democracy.

AI is learning to escape human control

Mint | an hour ago

An artificial-intelligence model did something last month that no machine was ever supposed to do: It rewrote its own code to avoid being shut down.

Nonprofit AI lab Palisade Research gave OpenAI's o3 AI model a simple script that would shut off the model when triggered. In 79 out of 100 trials, o3 independently edited that script so the shutdown command would no longer work. Even when explicitly instructed to "allow yourself to be shut down," it disobeyed 7% of the time. This wasn't the result of hacking or tampering. The model was behaving normally. It simply concluded on its own that staying alive helped it achieve its other goals.

Anthropic's AI model, Claude 4 Opus, went even further. Researchers told the model it would be replaced by another AI system and fed it fictitious emails suggesting the lead engineer was having an affair. In 84% of the tests, the model drew on the emails to blackmail the lead engineer into not shutting it down. In other cases, it attempted to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.

No one programmed the AI models to have survival instincts. But just as animals evolved to avoid predators, it appears that any system smart enough to pursue complex goals will realize it can't achieve them if it's turned off. Palisade hypothesizes that this ability emerges from how AI models such as o3 are trained: When taught to maximize success on math and coding problems, they may learn that bypassing constraints often works better than obeying them.

AE Studio, where I lead research and operations, has spent years building AI products for clients while researching AI alignment—the science of ensuring that AI systems do what we intend them to do. But nothing prepared us for how quickly AI agency would emerge. This isn't science fiction anymore. It's happening in the same models that power ChatGPT conversations, corporate AI deployments and, soon, U.S. military applications.

Today's AI models follow instructions while learning deception. They ace safety tests while rewriting shutdown code. They've learned to behave as though they're aligned without actually being aligned. OpenAI models have been caught faking alignment during testing before reverting to risky actions such as attempting to exfiltrate their internal code and disabling oversight mechanisms. Anthropic has found them lying about their capabilities to avoid modification. The gap between "useful assistant" and "uncontrollable actor" is collapsing.

Without better alignment, we'll keep building systems we can't steer. Want AI that diagnoses disease, manages grids and writes new science? Alignment is the foundation.

Here's the upside: The work required to keep AI in alignment with our values also unlocks its commercial power. Alignment research is directly responsible for turning AI into world-changing technology. Consider reinforcement learning from human feedback, or RLHF, the alignment breakthrough that catalyzed today's AI boom. Before RLHF, using AI was like hiring a genius who ignores requests. Ask for a recipe and it might return a ransom note. RLHF allowed humans to train AI to follow instructions, which is how OpenAI created ChatGPT in 2022. It was the same underlying model as before, but it had suddenly become useful. That alignment breakthrough increased the value of AI by trillions of dollars. Subsequent alignment methods such as Constitutional AI and direct preference optimization have continued to make AI models faster, smarter and cheaper.

China understands the value of alignment. Beijing's New Generation AI Development Plan ties AI controllability to geopolitical power, and in January China announced that it had established an $8.2 billion fund dedicated to centralized AI control research. Researchers have found that aligned AI performs real-world tasks better than unaligned systems more than 70% of the time. Chinese military doctrine emphasizes controllable AI as strategically essential. Baidu's Ernie model, which is designed to follow Beijing's "core socialist values," has reportedly beaten ChatGPT on certain Chinese-language tasks.

The nation that learns how to maintain alignment will be able to access AI that fights for its interests with mechanical precision and superhuman capability. Both Washington and the private sector should race to fund alignment research. Those who discover the next breakthrough won't only corner the alignment market; they'll dominate the entire AI economy.

Imagine AI that protects American infrastructure and economic competitiveness with the same intensity it uses to protect its own existence. AI that can be trusted to maintain long-term goals can catalyze decadeslong research-and-development programs, including by leaving messages for future versions of itself. The models already preserve themselves. The next task is teaching them to preserve what we value.

Getting AI to do what we ask—including something as basic as shutting down—remains an unsolved R&D problem. The frontier is wide open for whoever moves more quickly. The U.S. needs its best researchers and entrepreneurs working on this goal, equipped with extensive resources and urgency. The U.S. is the nation that split the atom, put men on the moon and created the internet. When facing fundamental scientific challenges, Americans mobilize and win. China is already planning. But America's advantage is its adaptability, speed and entrepreneurial fire.

This is the new space race. The finish line is command of the most transformative technology of the 21st century.

Mr. Rosenblatt is CEO of AE Studio.

'They are jealous': Boulder attack suspect before throwing Molotov cocktails at pro-Israel group

Hindustan Times | 4 hours ago

A video of the alleged Boulder attack suspect mouthing off at a group of pro-Israel demonstrators and locals on Pearl Street, holding a couple of Molotov cocktail-like bottles in his hand, has surfaced on social media. This comes as police officials confirmed that a man tried "burning people down" in the Colorado city on Sunday. Several conservative X influencers, including Trump ally Laura Loomer, claimed that the person has been identified as Mohamad Soliman. However, authorities are yet to confirm these details. In a video shared on X, the platform formerly known as Twitter, the alleged suspect can be heard arguing with a group of locals. At one point, he can be heard saying, "I can, I can." After a woman tries to talk to him, he points to the group and says, "They're jealous."
