
AI isn't coming for your job—it's coming for your company
Debate about whether artificial intelligence can replicate the intellectual labor of doctors, lawyers, or PhDs overlooks a deeper concern that's looming: Entire companies—not just individual jobs—may be rendered obsolete by the accelerating pace of AI adoption.
Reports suggesting OpenAI will charge $20,000 per month for agents trained to a PhD level reignited the debate about whose job is safe from AI and whose is not.
'I've not seen it be that impressive yet, but it's likely not far off,' James Villarrubia, head of digital innovation and AI at NASA CAS, told me.
Sean McGregor, the founder of the Responsible AI Collaborative, who earned a PhD in computer science, pointed out that many jobs are about more than just a set of skills: 'Current AI technology is not sufficiently robust to allow unsupervised control of hazardous chemistry equipment, human experimentation, or other domains where human PhDs are currently required.'
I polled the audience on this question because I wanted to broaden my perspective on which jobs would be eliminated. Instead, the answers changed my perspective altogether.
AI needs to outperform the system, not the role
Suzanne Rabicoff, founder of The Pie Grower, a human-agency think tank and fractional practice, gave me reading assignments from her work instead of a quote.
Her work showed me that these times are unprecedented. But something clicked when I read that she favored a different angle: more efficient companies rising, rather than jobs being replaced one by one at companies carrying heavy tech and human-capital debt. When I put that framing back to her, her response was, 'Exactly my bet.'
Sure, this is the first time a robot is doing the homework for some college students. But there is far more precedent for robots shifting market share than for robots replacing the same job function across an entire sector.
Fortune 500 companies—especially those bloated with legacy processes and redundant labor—are always vulnerable to decline as newer, more nimble competitors rise. They fall not because any single job is replaced, but because the foundational economics of their business models no longer hold.
AI doesn't need to outperform every employee to render an enterprise obsolete. It only needs to outperform the system.
Case study: The auto industry
Take, for example, the decline of American car manufacturers in the late 20th century.
In the 1950s, American automakers had a stranglehold on the car industry, not unlike today's tech giants. In 1950, the U.S. produced about 75% of the world's cars.
But in the 1970s, Japanese automakers pioneered the use of robotics in auto manufacturing, producing higher-quality vehicles at lower cost thanks to leaner, more precise operations.
Firms like GM struggled to keep up, burdened by outdated factories and excessive human capital costs—including bloated pensions.
The seismic shift in the decades to follow paints a picture of what could be in store for large companies now. In 1960, the U.S. produced about 48% of the world's cars, while Japan accounted for just 5%. By 1980, Japan had captured around 29% of the market, while the U.S. had fallen to 23%.
Today's AI shakeup could look similar. Decades from now, we may regard Apple the way we regard Ford today. AI startups with more agile structures are poised to eat market share. On top of that, startups can focus on solving specialized problems, sharpening their competitive edge.
Will your company shrivel and die?
The fallout has already begun. When Gartner surveyed organizations in late 2023, about half were developing their own AI tools; by the end of 2024, that share had dropped to 20%. As hype around generative AI cools, Gartner notes, many chief information officers are turning instead to outside vendors—either large language model providers or traditional software sellers with AI-enhanced offerings. Meanwhile, AI startups received nearly half of the $209 billion in global venture funding in 2024. If only 20% of legacy organizations are still building their own AI capabilities, how many will be able to compete as these startups mature?
While headlines continue to fixate on whether AI can match PhD-level expertise, the deeper risk remains largely unspoken: Giant companies will shrivel and some may die. And when they do, your job is at risk whether you greet customers at the front desk or hold a PhD in an engineering discipline.
But there are ways to stay afloat. One of the most impactful pieces of advice I ever received came from Jonathan Rosenberg, former SVP of products at Google and current advisor to Alphabet, when I visited the company's campus in college. 'You can't just be great at what you do, you have to catch a great wave. Early people think it's about the company, then the job, then the industry. It's actually industry, company, job…'
So, how do you catch the AI wave?
Ankur Patel, CEO of Multimodal, advises workers to learn how to do their current jobs using AI tools that enhance productivity. He also notes that soft skills—mobilizing people, building relationships, leading teams—will become increasingly valuable as AI takes over more technical or routine tasks.
'You can't have AI be a group leader or team leader, right? I just don't see that happening, even in the next generation forward,' Patel said. 'So I think that's a huge opportunity…to grow and learn from.'