
AI research takes a backseat to profits as Silicon Valley prioritizes products over safety, experts say
Not long ago, Silicon Valley was where the world's leading artificial intelligence experts went to perform cutting-edge research.
Meta, Google and OpenAI opened their wallets for top talent, giving researchers staff, computing power and plenty of flexibility. With the support of their employers, the researchers published high-quality academic papers, openly sharing their breakthroughs with peers in academia and at rival companies.
But that era has ended. Now, experts say, AI is all about the product.
Since OpenAI released ChatGPT in late 2022, the tech industry has shifted its focus to building consumer-ready AI services, in many cases prioritizing commercialization over research, AI researchers and experts in the field told CNBC. The profit potential is massive — some analysts predict $1 trillion in annual revenue by 2028. The prospective repercussions terrify the corner of the AI universe concerned about safety, industry experts said, particularly as leading players pursue artificial general intelligence, or AGI, which is technology that rivals or exceeds human intelligence.
In the race to stay competitive, tech companies are taking an increasing number of shortcuts when it comes to the rigorous safety testing of their AI models before they are released to the public, industry experts told CNBC.
James White, chief technology officer at cybersecurity startup CalypsoAI, said newer models are sacrificing security for quality, that is, better responses by the AI chatbots. That means they're less likely to reject malicious kinds of prompts that could cause them to reveal ways to build bombs or sensitive information that hackers could exploit, White said.
"The models are getting better, but they're also more likely to be good at bad stuff," said White, whose company performs safety and security audits of popular models from Meta, Google, OpenAI and other companies. "It's easier to trick them to do bad stuff."
The changes are readily apparent at Meta and Alphabet, which have deprioritized their AI research labs, experts say. At Facebook's parent company, the Fundamental Artificial Intelligence Research, or FAIR, unit has been sidelined by Meta GenAI, according to current and former employees. And at Alphabet, the research group Google Brain is now part of DeepMind, the division that leads development of AI products at the tech company.
CNBC spoke with more than a dozen AI professionals in Silicon Valley who collectively tell the story of a dramatic shift in the industry away from research and toward revenue-generating products. Some are former employees at the companies with direct knowledge of what they say is the prioritization of building new AI products at the expense of research and safety checks. They say employees face intensifying development timelines, reinforcing the idea that they can't afford to fall behind when it comes to getting new models and products to market. Some of the people asked not to be named because they weren't authorized to speak publicly on the matter.
When Joelle Pineau, a Meta vice president and the head of the company's FAIR division, announced in April that she would be leaving her post, many former employees said they weren't surprised. They said they viewed it as solidifying the company's move away from AI research and toward prioritizing developing practical products.
"Today, as the world undergoes significant change, as the race for AI accelerates, and as Meta prepares for its next chapter, it is time to create space for others to pursue the work," Pineau wrote on LinkedIn, adding that she will formally leave the company May 30.
Pineau began leading FAIR in 2023. The unit was established a decade earlier to work on difficult computer science problems typically tackled by academia. Yann LeCun, one of the godfathers of modern AI, initially oversaw the project, and instilled the research methodologies he learned from his time at the pioneering AT&T Bell Laboratories, according to several former employees at Meta. Small research teams could work on a variety of bleeding-edge projects that may or may not pan out.
The shift began when Meta laid off 21,000 employees, or nearly a quarter of its workforce, starting in late 2022. CEO Mark Zuckerberg kicked off 2023 by calling it the "year of efficiency." FAIR researchers, as part of the cost-cutting measures, were directed to work more closely with product teams, several former employees said.
Two months before Pineau's announcement, one of FAIR's directors, Kim Hazelwood, left the company, two people familiar with the matter said. Hazelwood helped oversee FAIR's NextSys unit, which manages computing resources for FAIR researchers. Her role was eliminated as part of Meta's plan to cut 5% of its workforce, the people said.
OpenAI's 2022 launch of ChatGPT caught Meta off guard, creating a sense of urgency to pour more resources into large language models, or LLMs, that were captivating the tech industry, the people said.
In 2023, Meta began heavily pushing its freely available and open-source Llama family of AI models to compete with OpenAI, Google and others.
With Zuckerberg and other executives convinced that LLMs were game-changing technologies, management had less incentive to let FAIR researchers work on far-flung projects, several former employees said. That meant deprioritizing research that could be viewed as having no impact on Meta's core business, such as FAIR's previous health care-related research into using AI to improve drug therapies.
Since 2024, Meta Chief Product Officer Chris Cox has been overseeing FAIR as a way to bridge the gap between research and the product-focused GenAI group, people familiar with the matter said. The GenAI unit oversees the Llama family of AI models and the Meta AI digital assistant, the two most important pillars of Meta's AI strategy.
Under Cox, the GenAI unit has been siphoning more computing resources and team members from FAIR due to its elevated status at Meta, the people said. Many researchers have transferred to GenAI or left the company entirely to launch their own research-focused startups or join rivals, several of the former employees said.
While Zuckerberg has some internal support for pushing the GenAI group to rapidly develop real-world products, there's also concern among some staffers that Meta is now less able to develop industry-leading breakthroughs that can be derived from experimental work, former employees said. That leaves Meta to chase its rivals.
A high-profile example landed in January, when Chinese lab DeepSeek released its R1 model, catching Meta off guard. The startup claimed it was able to develop a model as capable as its American counterparts but with training at a fraction of the cost.
Meta quickly implemented some of DeepSeek's innovative techniques for its Llama 4 family of AI models that were released in April, former employees said. The AI research community had a mixed reaction to the smaller versions of Llama 4, but Meta said the biggest and most powerful Llama 4 variant is still being trained.
The company in April also released security and safety tools for developers to use when building apps with Meta's Llama 4 AI models. These tools help mitigate the chances of Llama 4 unintentionally leaking sensitive information or producing harmful content, Meta said.
"Our commitment to FAIR remains strong," a Meta spokesperson told CNBC. "Our strategy and plans will not change as a result of recent developments."
In a statement to CNBC, Pineau said she is enthusiastic about Meta's overall AI work and strategy.
"There continues to be strong support for exploratory research and FAIR as a distinct organization in Meta," Pineau said. "The time was simply right for me personally to re-focus my energy before jumping into a new adventure."
Meta on Thursday named FAIR co-founder Rob Fergus as Pineau's replacement. Fergus will return to the company to serve as a director at Meta and head of FAIR, according to his LinkedIn profile. He was most recently a research director at Google DeepMind.
"Meta's commitment to FAIR and long term research remains unwavering," Fergus said in a LinkedIn post. "We're working towards building human-level experiences that transform the way we interact with technology and are dedicated to leading and advancing AI research."
Google released its latest and most powerful AI model, Gemini 2.5, in March. The company described it as "our most intelligent AI model," and wrote in a March 25 blog post that its new models are "capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy."
For weeks, Gemini 2.5 was missing a model card, meaning Google did not share information about how the AI model worked or its limitations and potential dangers upon its release.
Model cards are a common tool for AI transparency.
A Google website compares model cards to food nutrition labels: They outline "the key facts about a model in a clear, digestible format," the website says.
"By making this information easy to access, model cards support responsible AI development and the adoption of robust, industry-wide standards for broad transparency and evaluation practices," the website says.
Google wrote in an April 2 blog post that it evaluates its "most advanced models, such as Gemini, for potential dangerous capabilities prior to their release." Google later updated the blog to remove the words "prior to their release."
Without a model card for Gemini 2.5, the public had no way of knowing which safety evaluations were conducted or whether DeepMind checked for dangerous capabilities at all.
In response to CNBC's inquiry on April 2 about Gemini 2.5's missing model card, a Google spokesperson said that a "tech report with additional safety information and model cards are forthcoming." Google published an incomplete model card on April 16 and updated it on April 28, more than a month after the AI model's release, to include information about Gemini 2.5's "dangerous capability evaluations."
Those assessments are important for gauging the safety of a model — whether people can use the models to learn how to build chemical or nuclear weapons or hack into important systems. These checks also determine whether a model is capable of autonomously replicating itself, which could lead to a company losing control of it. Running tests for those capabilities requires more time and resources than simple, automated safety evaluations, according to industry experts.
The Financial Times in March reported that Google DeepMind CEO Demis Hassabis had installed a more rigorous vetting process for internal research papers to be published. The clampdown at Google is particularly notable because the company's "Transformers" technology gained recognition across Silicon Valley through that type of shared research. Transformers were critical to OpenAI's development of ChatGPT and the rise of generative AI.
Google co-founder Sergey Brin told staffers at DeepMind and Gemini in February that competition has accelerated and "the final race to AGI is afoot," according to a memo viewed by CNBC. "We have all the ingredients to win this race but we are going to have to turbocharge our efforts," he said in the memo.
Brin said in the memo that Google has to speed up the process of testing AI models, as the company needs "lots of ideas that we can test quickly."
"We need real wins that scale," Brin wrote.
In his memo, Brin also wrote that the company's methods have "a habit of minor tweaking and overfitting" products for evaluations and "sniping" the products at checkpoints. He said employees need to build "capable products" and to "trust our users" more.
"We can't keep building nanny products," Brin wrote. "Our products are overrun with filters and punts of various kinds."
A Google spokesperson told CNBC that the company has always been committed to advancing AI responsibly.
"We continue to do that through the safe development and deployment of our technology, and research contributions to the broader ecosystem," the spokesperson said.
The debate of product versus research is at the center of OpenAI's existence. The company was founded as a nonprofit research lab in 2015 and is now in the midst of a contentious effort to transform into a for-profit entity.
That's the direction co-founder and CEO Sam Altman has been pushing toward for years. On May 5, though, OpenAI bowed to pressure from civic leaders and former employees, announcing that its nonprofit would retain control of the company even as it restructures into a public benefit corporation.
Nisan Stiennon worked at OpenAI from 2018 to 2020 and was among a group of former employees urging California and Delaware not to approve OpenAI's restructuring effort. "OpenAI may one day build technology that could get us all killed," Stiennon wrote in a statement in April. "It is to OpenAI's credit that it's controlled by a nonprofit with a duty to humanity."
But even with the nonprofit maintaining control and majority ownership, OpenAI is speedily working to commercialize products as competition heats up in generative AI. And it may have rushed the rollout of its o1 reasoning model last year, according to some portions of its model card.
Results of the model's "preparedness evaluations," the tests OpenAI runs to assess an AI model's dangerous capabilities and other risks, were based on earlier versions of o1. They had not been run on the final version of the model, according to its model card, which is publicly available.
Johannes Heidecke, OpenAI's head of safety systems, told CNBC in an interview that the company ran its preparedness evaluations on near-final versions of the o1 model. Minor variations to the model that took place after those tests wouldn't have contributed to significant jumps in its intelligence or reasoning and thus wouldn't require additional evaluations, he said. Still, Heidecke acknowledged that OpenAI missed an opportunity to more clearly explain the difference.
OpenAI's newest reasoning model, o3, released in April, seems to hallucinate more than twice as often as o1, according to the model card. When an AI model hallucinates, it produces falsehoods or illogical information.
OpenAI has also been criticized for reportedly slashing safety testing times from months to days and for omitting the requirement to safety test fine-tuned models in its latest "Preparedness Framework."
Heidecke said OpenAI has decreased the time needed for safety testing because the company has improved its testing effectiveness and efficiency. A company spokesperson said OpenAI has allocated more AI infrastructure and personnel to its safety testing, and has increased resources for paying experts and growing its network of external testers.
In April, the company shipped GPT-4.1, one of its new models, without a safety report, as the model was not designated by OpenAI as a "frontier model," which is a term used by the tech industry to refer to a bleeding-edge, large-scale AI model.
But one of those small revisions caused a big wave in April. Within days of updating its GPT-4o model, OpenAI rolled back the changes after screenshots of overly flattering responses to ChatGPT users went viral online. OpenAI said in a blog post explaining its decision that those types of responses to user inquiries "raise safety concerns — including around issues like mental health, emotional over-reliance, or risky behavior."
OpenAI said in the blogpost that it opted to release the model even after some expert testers flagged that its behavior "'felt' slightly off."
"In the end, we decided to launch the model due to the positive signals from the users who tried out the model. Unfortunately, this was the wrong call," OpenAI wrote. "Looking back, the qualitative assessments were hinting at something important, and we should've paid closer attention. They were picking up on a blind spot in our other evals and metrics."
Metr, a company OpenAI partners with to test and evaluate its models for safety, said in a recent blog post that it was given less time to test the o3 and o4-mini models than predecessors.
"Limitations in this evaluation prevent us from making robust capability assessments," Metr wrote, adding that the tests it did were "conducted in a relatively short time."
Metr also wrote that it had insufficient access to data that would be important in determining the potential dangers of the two models.
The company said it wasn't able to access the OpenAI models' internal reasoning, which is "likely to contain important information for interpreting our results." However, Metr said, "OpenAI shared helpful information on some of their own evaluation results."
OpenAI's spokesperson said the company is piloting secure ways of sharing chains of thought for Metr's research as well as for other third-party organizations.
Steven Adler, a former safety researcher at OpenAI, told CNBC that safety testing a model before it's rolled out is no longer enough to safeguard against potential dangers.
"You need to be vigilant before and during training to reduce the chance of creating a very capable, misaligned model in the first place," Adler said.
He warned that companies such as OpenAI are backed into a corner when they create capable but misaligned models with goals that are different from the ones they intended to build.
"Unfortunately, we don't yet have strong scientific knowledge for fixing these models — just ways of papering over the behavior," Adler said.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles

Associated Press
24 minutes ago
- Associated Press
Canadian companies leaving productivity gains on the table by not effectively adopting new technology
Recent productivity gains only 'scratching the surface' of what new technology can deliver TORONTO, June 6, 2025 /CNW/ - While Canadian companies have been making technology investments to improve productivity, insufficient investments in their people have limited effectiveness and held back productivity gains, finds new KPMG in Canada research. KPMG recently surveyed 250 business leaders across Canada to understand what actions they were taking to improve their operations in response to the trade war with the U.S. The results found that most had stepped up investments in technology to make their organizations more efficient and productive – and are seeing positive returns – but nearly two-thirds (63 per cent) say their employees aren't using new technologies effectively – holding back full returns on their tech spend. 'Investing in new technology tools and platforms can do wonders for an organization's productivity by streamlining processes, workflows and tasks,' says Stavros Demetriou, Partner and National Leader of KPMG in Canada's People and Change practice. 'The latest Stats Can data shows Canada has made productivity gains in each of the last two quarters, but this increase still trails improvements in the U.S. over the same period. 'Unless Canadian organizations undertake effective employee education and adoption plans, their people will barely scratch the surface on what the technology can do to make them more productive, and our gap to the U.S. and others will continue to widen.' Although nearly all (89 per cent) respondents say they're investing in upskilling employees, over half (53 per cent) say their company doesn't invest enough in employee training, workshops or continuous learning opportunities, and nearly six in 10 (56 per cent) say their organization lacks the internal resources and talent needed to implement and use technology effectively. Survey highlights Mr. Demetriou notes that while three-quarters (74 per cent) of respondents believe AI will solve their productivity problem, the same proportion (74 per cent) admit they have underestimated the challenges of implementing new technologies such as AI. 'An underappreciation of the impact of things like changing processes and working habits could explain why employees are not fully equipped to harness new technologies to their fullest potential,' he says. 'There's a common belief that digitally transforming your company is primarily a technology upgrade exercise, but the reality is that technology implementation is just one part of a journey – digital transformation is just as much about advancing and elevating the workforce. It's a continuous, iterative process that, if done correctly, leads to higher productivity and innovation, and the ability to navigate the future more confidently,' he adds. Ineffective training Megan Jones, National HR and Workforce Transformation Lead at KPMG in Canada, notes that nearly nine in 10 respondents say they need better processes in place to encourage their workers to use technologies, including case studies and incentives. 'Often, when organizations implement new tools and technologies, they don't completely understand or appreciate the full capabilities of these investments. As a result, employees are simply not adequately prepared to maximize the benefits these can bring to their jobs or customers. In some cases, organizations provide full training, but it's too technical or poorly delivered. Effective training and upskilling need to be targeted, relevant, engaging, and frequent. Much like exercising consistently to build muscle, technology training must happen regularly to make the workforce stronger and more agile,' she says. Ms. Jones notes that almost nine in 10 (87 per cent) respondents acknowledged their company could do a better job of creating a culture that encourages employees to share ideas and take risks, fostering innovation and creativity. She recommends organizations provide incentives for employees to experiment with technology and explore new use cases for it. 'Regular workshops or dedicated 'days' where employees are encouraged to play around with AI and develop new solutions can go a long way in sparking innovation. Also, showcasing wins by employees in one area of the business could help spark new ideas in other parts of the organization,' she says. The digital divide Most respondents (86 per cent) hope that a more digitally-savvy younger generation will help their company become more productive through the easier adoption of new technologies such as AI, Web3, data and analytics, quantum and edge computing. Lewis Curley, a Partner in KPMG in Canada's People and Change practice, says differences in workforce composition, skills and attitudes can create additional considerations for an organization's technology program, but organizations that engage all employees early in the journey as well as training and upskilling will have more success in leveraging new technologies such as AI and increasing overall productivity. 'If an organization is looking to implement AI, they must engage the entire workforce right from the beginning. If some employees don't feel like they are part of the journey, they might disengage from the process, lose trust in AI, or worry that the technology will replace them, which could deter them from using it,' he says. 'Everyone has a role in a company's digital transformation, and every single employee – from the CEO to the most recent hire – plays a part in making their organization more productive, so transparency, communication and engagement are crucial.' About the KPMG in Canada Productivity Survey KPMG in Canada surveyed 250 business leaders in all industry sectors across Canada between May 9 and May 20, 2025, on Sago's premier business panel, using Methodify's online research platform. Thirty-one per cent lead companies with annual gross revenue between $500 million and $1 billion, 25 per cent report revenue between $100 million and $300 million, 22 per cent have revenue between $300 million and $500 million, 12 per cent between $10 million and $100 million, and 10 per cent, over $1 billion. No companies under $10 million in annual revenue were surveyed. Over half (52 per cent) of the companies are privately held, 28 per cent are owned by private equity firms, 18 per cent are publicly traded with headquarters in Canada, and 2 per cent are foreign-owned subsidiaries. About KPMG in Canada KPMG LLP, a limited liability partnership, is a full-service Audit, Tax and Advisory firm owned and operated by Canadians. For over 150 years, our professionals have provided consulting, accounting, auditing, and tax services to Canadians, inspiring confidence, empowering change, and driving innovation. Guided by our core values of Integrity, Excellence, Courage, Together, For Better, KPMG employs more than 10,000 people in over 40 locations across Canada, serving private- and public-sector clients. KPMG is consistently ranked one of Canada's top employers and one of the best places to work in the country. The firm is established under the laws of Ontario and is a member of KPMG's global organization of independent member firms affiliated with KPMG International, a private English company limited by guarantee. Each KPMG firm is a legally distinct and separate entity and describes itself as such. For more information, see For media inquiries: Alannah Page National Communications and Media Relations KPMG in Canada 306-934-6255 [email protected] Roula Meditskos National Communications and Media Relations KPMG in Canada 416-549-7982 [email protected] SOURCE KPMG LLP
Yahoo
30 minutes ago
- Yahoo
Why CoreWeave, Inc. (CRWV) Skyrocketed Today
We recently published a list of . In this article, we are going to take a look at where CoreWeave, Inc. (NASDAQ:CRWV) stands against other Wednesday's best-performing stocks. CoreWeave rallied for a fourth consecutive day on Wednesday, jumping 8.39 percent to close at $163.10 apiece following the unveiling of record-breaking performance results using Nvidia Corp.'s latest Grace Blackwell chips. In a statement, CoreWeave, Inc. (NASDAQ:CRWV) said that it used 2,496 Nvidia GPUs on its AI-optimized cloud platform, making its submission the largest-ever benchmarked under MLPerf. A close-up of a digital cloud, signifying the expansive reach of the software-as-a-service solution. CoreWeave, Inc. (NASDAQ:CRWV) said that the test was 34x larger than the only other submission from a cloud provider. 'AI labs and enterprises choose CoreWeave because we deliver a purpose-built cloud platform with the scale, performance, and reliability that their workloads demand,' said CoreWeave, Inc. (NASDAQ:CRWV) Chief Technology Officer Peter Salanki. 'These MLPerf results reinforce our leadership in supporting today's most demanding AI workloads,' he added. The unveiling followed the company's new $7-billion deal with Applied Digital Corporation (NASDAQ:APLD), which covered two 15-year lease agreements, under which the latter will deliver 250 megawatts of critical IT load to host its artificial intelligence (AI) and high-performance computing (HPC) infrastructure at its Ellendale, North Dakota data center campus. READ NEXT: 20 Best AI Stocks To Buy Now and 30 Best Stocks to Buy Now According to Billionaires. Disclosure: None. This article is originally published at Insider Monkey. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data


Atlantic
39 minutes ago
- Atlantic
Artificial Intelligence Is Not Intelligent
On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed 'Cellarius,' it warned of an encroaching 'mechanical kingdom' that would soon bring humanity to its yoke. 'The machines are gaining ground upon us,' the author ranted, distressed by the breakneck pace of industrialization and technological development. 'Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.' We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language. Today, Butler's 'mechanical kingdom' is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book— The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam. To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. Altman brags about ChatGPT-4.5's improved 'emotional intelligence,' which he says makes users feel like they're 'talking to a thoughtful person.' Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner.' Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create 'models that are able to understand the world around us.' These statements betray a conceptual error: Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions. Few phenomena demonstrate the perils that can accompany AI illiteracy as well as 'Chatgpt induced psychosis,' the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they're interacting with is a god—'ChatGPT Jesus,' as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner 'spiral starchild' and 'river walker' in interactions that moved him to tears. 'He started telling me he made his AI self-aware,' she said, 'and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.' Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: 'We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.' Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that 'ChatGPT is my therapist—it's more qualified than any human could be.' Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, 'In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.' The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him. This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI 'dating concierge' that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for 'AI girlfriends.' Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's 'tradition of anthropomorphizing': talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis. Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ('parents raping their children, kids having sex with animals') to help improve ChatGPT. 'These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,' Hao writes, 'are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.' The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of 'AI experts' think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial 'intelligence' works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on. So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was 'talking to him as if he is the next messiah' only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should—and should not—replace, they may be spared its worst consequences.