
Tech industry experts warn AI will make us worse humans
While the top minds in artificial intelligence are racing to make the technology think more like humans, researchers at Elon University have asked the opposite question: How will AI change the way humans think?
The answer comes with a grim warning: Many tech experts worry that AI will make people worse at skills core to being human, such as empathy and deep thinking.
'I fear — for the time being — that while there will be a growing minority benefitting ever more significantly with these tools, most people will continue to give up agency, creativity, decision-making and other vital skills to these still-primitive AIs,' futurist John Smart wrote in an essay submitted for the university's nearly 300-page report, titled 'The Future of Being Human,' which was provided exclusively to CNN ahead of its publication Wednesday.
The concerns come amid an ongoing race to accelerate AI development and adoption that has attracted billions of dollars in investment, along with both skepticism and support from governments around the world. Tech giants are staking their businesses on the belief that AI will change how we do everything — working, communicating, searching for information — and companies like Google, Microsoft and Meta are racing to build 'AI agents' that can perform tasks on a person's behalf. But experts warn in the report that such advancements could make people too reliant on AI in the future.
Already, the proliferation of AI has raised big questions about how humans will adapt to this latest technology wave, including whether it could lead to job losses or generate dangerous misinformation. The Elon University report further calls into question promises from tech giants that the value of AI will be in automating rote, menial tasks so that humans can spend more time on complex, creative pursuits.
Wednesday's report also follows research published this year by Microsoft and Carnegie Mellon University that suggested using generative AI tools could negatively impact critical thinking skills.
Elon University researchers surveyed 301 tech leaders, analysts and academics, including Vint Cerf, one of the 'fathers of the internet' and now a Google vice president; Jonathan Grudin, University of Washington Information School professor and former longtime Microsoft researcher and project manager; former Aspen Institute executive vice president Charlie Firestone; and tech futurist and Futuremade CEO Tracey Follows. Nearly 200 of the respondents wrote full-length essay responses for the report.
More than 60% of the respondents said they expect AI will change human capabilities in a 'deep and meaningful' or 'fundamental, revolutionary' way over the next 10 years. Half said they expect AI will create changes to humanity for the better and the worse in equal measure, while 23% said the changes will be mostly for the worse. Just 16% said changes will be mostly for the better (the remainder said they didn't know or expected little change overall).
The respondents also predicted that AI will cause 'mostly negative' changes to 12 human traits by 2035, including social and emotional intelligence, capacity and willingness to think deeply, empathy and application of moral judgment, and mental well-being.
Human capacity in those areas could worsen if, for convenience's sake, people increasingly turn to AI for help with tasks such as research and relationship-building, the report claims. And a decline in those and other key skills could have troubling implications for human society, such as 'widening polarization, broadening inequities and diminishing human agency,' the researchers wrote.
The report's contributors expect just three areas to see mostly positive change: curiosity and capacity to learn; decision-making and problem-solving; and innovative thinking and creativity. Even among the AI tools available today, programs that generate artwork and solve coding problems are among the most popular. And many experts believe that while AI could replace some human jobs, it could also create new categories of work that don't yet exist.
Many of the concerns detailed in the report relate to how tech leaders predict people will incorporate AI into their daily lives by 2035.
Cerf said he expects humans will soon rely on AI agents, which are digital helpers that could independently do everything from taking notes during a meeting to making dinner reservations, negotiating complex business contracts or writing code. Tech companies are already rolling out early AI agent offerings — Amazon says its revamped Alexa voice assistant can order your groceries, and Meta is letting businesses create AI customer service agents to answer questions on its social media platforms.
Such tools could save people time and energy in everyday tasks while aiding with fields like medical research. But Cerf also worries about humans becoming 'increasingly technologically dependent' on systems that can fail or get things wrong.
'You can also anticipate some fragility in all of this. For example, none of this stuff works without electricity, right?' Cerf said in an interview with CNN. 'These heavy dependencies are wonderful when they work, and when they don't work, they can be potentially quite hazardous.'
Cerf stressed the importance of tools that help distinguish humans from AI bots online, as well as transparency about the effectiveness of highly autonomous AI tools. He urged companies that build AI models to keep 'audit trails' that would let them interrogate when and why their tools get things wrong.
Futuremade's Follows told CNN that she expects humans' interactions with AI to move beyond the screens where people generally talk to AI chatbots today. Instead, AI will be integrated into devices such as wearables, as well as into buildings and homes, where people can simply ask questions aloud.
But with that ease of access, humans may begin outsourcing empathy to AI agents.
'AI may take over acts of kindness, emotional support, caregiving and charity fundraising,' Follows wrote in her essay. She added that 'humans may form emotional attachments to AI personas and influencers,' raising 'concerns about whether authentic, reciprocal relationships will be sidelined in favor of more predictable, controllable digital connection.'
Humans have already begun to form relationships with AI chatbots, to mixed effect. Some people have, for example, created AI replicas of deceased loved ones to seek closure, but parents have also taken legal action, saying their children were harmed by relationships with AI chatbots.
Still, experts say people have time to curb some of the worst potential outcomes of AI through regulation, digital literacy training and simply prioritizing human relationships.
Richard Reisman, nonresident senior fellow at the Foundation for American Innovation, said in the report that the next decade marks a tipping point in whether AI 'augments humanity or de-augments it.'
'We are now being driven in the wrong direction by the dominating power of the 'tech-industrial complex,' but we still have a chance to right that,' Reisman wrote.

Related Articles


New York Post
Google's AI is ‘hallucinating,' spreading dangerous info — including a suggestion to add glue to pizza sauce
Google's AI Overviews, designed to give quick answers to search queries, reportedly spits out 'hallucinations' of bogus information and undercuts publishers by pulling users away from traditional links. The Big Tech giant — which landed in hot water last year after releasing a 'woke' AI tool that generated images of female Popes and black Vikings — has drawn criticism for providing false and sometimes dangerous advice in its summaries, according to The Times of London.

In one case, AI Overviews advised adding glue to pizza sauce to help cheese stick better, the outlet reported. In another, it described a fake phrase — 'You can't lick a badger twice' — as a legitimate idiom.

The hallucinations, as computer scientists call them, are compounded by the AI tool diminishing the visibility of reputable sources. Instead of directing users straight to websites, it summarizes information from search results and presents its own AI-generated answer along with a few links. Laurence O'Toole, founder of the analytics firm Authoritas, studied the impact of the tool and found that click-through rates to publisher websites drop by 40%-60% when AI Overviews are shown.

'While these were generally for queries that people don't commonly do, it highlighted some specific areas that we needed to improve,' Liz Reid, Google's head of Search, told The Times in response to the glue-on-pizza incident. The Post has sought comment from Google.

AI Overviews was introduced last summer and is powered by Google's Gemini language model, a system similar to OpenAI's ChatGPT. Despite public concerns, Google CEO Sundar Pichai has defended the tool in an interview with The Verge, saying it helps users discover a broader range of information sources. 'Over the last year, it's clear to us that the breadth of area we are sending people to is increasing … we are definitely sending traffic to a wider range of sources and publishers,' he said.

Google also appears to downplay its own hallucination rate. When a journalist searched Google for information on how often its AI gets things wrong, the AI response claimed hallucination rates between 0.7% and 1.3%. However, data from the AI platform Hugging Face indicated that the actual rate for the latest Gemini model is 1.8%.

Google's AI models also seem to offer pre-programmed defenses of their own behavior. Asked whether AI 'steals' artwork, the tool said it 'doesn't steal art in the traditional sense.' Asked whether people should be scared of AI, it walked through some common concerns before concluding that 'fear might be overblown.'

Some experts worry that as generative AI systems become more complex, they're also becoming more prone to mistakes — and even their creators can't fully explain why. The concerns over hallucinations go beyond Google: OpenAI recently admitted that its newest models, known as o3 and o4-mini, hallucinate even more frequently than earlier versions.
Internal testing showed o3 made up information in 33% of cases, while o4-mini did so 48% of the time, particularly when answering questions about real people.


CNET
Google's AI Mode Now Creates Interactive Stock Charts For You
Google's AI Mode can now create interactive charts when users ask questions about stocks and mutual funds, the company said in a blog post Thursday. Users might ask the site to compare five years of stock performance for the biggest tech companies, or request to see mutual funds with the best rates of return over the past decade. Gemini, Google's AI engine, will then create an interactive graph and a comprehensive explanation.

I created a sample by going to the webpage for the new experiment (if you try it at work, you might learn that your admin has banned it, but it should work on a personal computer). Once there, I told it exactly what I wanted: "make me an interactive chart showing MSFT stock over the past five years." It produced the chart, and I was able to move the slider from one date to another, showing the stock price on that date. It's the same kind of chart you can probably get at your financial advisor's site, but it did work.

But be warned: AI has accuracy issues, and users need to be extra careful with financial information of any kind. "AI has historically struggled with quantitative reasoning tasks," said Sam Taube, lead investing writer at personal finance company NerdWallet. "It looks like Google's AI mode will often provide links to its data sources when answering financial queries. It's probably worth clicking those links to make sure that the raw data does, in fact, match the AI's output. If there's no link to a data source, proceed with caution; consider manually double-checking any numbers in the AI's output. I wouldn't trust any AI model to do its own math yet."

The feature is a new experiment from Google Labs. At its I/O conference last month, Google announced AI Mode's ability to create interactive graphics for complex sets of data. For now the feature works only for queries about stocks and mutual funds, but it will be expanded to other topics eventually.

"I'd avoid asking AI any 'should I invest in XYZ' type questions," Taube told CNET. "The top AI models may be smart, but they aren't credentialed financial advisors and probably don't know enough about your personal financial situation to give good advice. What's more, AI doesn't have a great track record at picking investments, at least so far. Many AI-powered ETFs (funds that use AI to pick stocks to invest in) are underperforming the S&P 500 this year."
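Taube's advice about double-checking AI-quoted numbers against the raw data is easy to automate. The snippet below is a minimal sketch of that kind of spot-check, not anything from Google or CNET; it assumes the third-party yfinance Python package, and the ticker, date and AI-quoted price are made-up examples.

# Minimal sketch of the spot-check Taube describes: pull raw price data yourself
# and compare it with a figure an AI summary or chart quoted to you.
# Assumes the third-party "yfinance" package (pip install yfinance); the ticker,
# date and ai_quoted_close below are hypothetical examples, not real AI output.
import yfinance as yf

def check_quoted_close(ticker: str, date: str, ai_quoted_close: float,
                       tolerance_pct: float = 1.0) -> bool:
    """Return True if the AI-quoted closing price is within tolerance of the raw data."""
    closes = yf.Ticker(ticker).history(period="5y")["Close"]
    closes.index = closes.index.strftime("%Y-%m-%d")  # plain date strings for lookup
    actual = float(closes.loc[date])                  # raises KeyError on non-trading days
    diff_pct = abs(actual - ai_quoted_close) / actual * 100
    print(f"{ticker} close on {date}: {actual:.2f} "
          f"(AI quoted {ai_quoted_close:.2f}, off by {diff_pct:.2f}%)")
    return diff_pct <= tolerance_pct

# Example: verify one number from an AI-generated chart before relying on it.
check_quoted_close("MSFT", "2024-06-03", ai_quoted_close=413.52)

Even a rough check like this catches the arithmetic slips Taube warns about; any figure that fails it is worth tracing back to the original data source.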

Yahoo
Why investing in growth-stage AI startups is getting riskier and more complicated
Making a bet on AI startups has never been so exciting -- or more risky. Incumbents like OpenAI, Microsoft, and Google are scaling their capabilities fast to swallow many of the offerings of smaller companies. At the same time, new startups are reaching the growth stage much faster than they historically have. But defining "growth stage" in AI startups is not so cut-and-dried today. Jill Chase, partner at CapitalG, said on stage at TechCrunch AI Sessions that she's seeing more companies that are only a year old, yet have already reached tens of millions in annual recurring revenue and more than $1 billion in valuation. While those companies might be defined as mature due to their valuation and revenue generation, they often lack much of the necessary safety, hiring, and executive infrastructure. 'On one hand, that's really exciting. It represents this brand new trend of extremely fast growth, which is awesome,' Chase said. 'On the other hand, it's a little bit scary because I'm gonna pay at an $X billion valuation for this company that didn't exist 12 months ago, and things are changing so quickly.' 'Who knows who is in a garage somewhere, maybe in this audience somewhere, starting a company that in 12 months will be a lot better than this one I'm investing in that's at $50 million ARR today,' Chase continued. 'So it's made growth investing a little confusing.' To cut through the noise, Chase said it's important for investors to feel good about the category and the 'ability of the founder to very quickly adapt and see around corners.' She noted that AI coding startup Cursor is a great example of a company that 'jumped on the exact right use case of AI code generation that was available and possible given the technology at the time.' However, Cursor will need to work to maintain its edge. 'There will be, by the end of this year, AI software engineers,' Chase said. 'In that scenario, what Cursor has today is going to be a little less relevant. It is incumbent on the Cursor team to see that future and to think, okay, how do I start building my product so that when those models come out and are much more powerful, the product surface represents those and I can very quickly plug those in and switch into that state of code generation?' This article originally appeared on TechCrunch at