Majority of Americans have used AI models like ChatGPT: Survey
A majority of Americans have used ChatGPT-like artificial intelligence (AI) models, according to a new survey.
In the survey from Elon University's Imagining the Digital Future Center, 52 percent said they 'use artificial intelligence (AI) large language models,' a category that includes OpenAI's famous ChatGPT.
Within that group, 5 percent of all respondents said they use the models 'almost constantly,' 7 percent said they use them 'several times a day,' 5 percent 'about once a day,' 10 percent 'several times a week' and 25 percent 'less often'; together, those categories account for the 52 percent. Forty-seven percent said they use them 'not at all.'
'The rise of large language models has been historic. In less than two-and-a-half years, half the adults in America say they have used LLMs. Few, if any, communications and general technologies have seen this pace of growth across the entire population,' a report on the survey reads.
Even as Americans appear to grow more comfortable with AI, a recent poll found that 55 percent disagree with the government using AI to make decisions about eligibility for unemployment assistance, college tuition aid, research investments, food aid and small business loans.
Among the 500 users of large language models in the Imagining the Digital Future Center survey, 52 percent said they use them 'for work activities' and 36 percent said they use them 'for schoolwork and homework activities.'
The Imagining the Digital Future Center's survey of the 500 users of large language models took place from Jan. 21 to 23 and has a margin of error of 5.1 percentage points. A wider sample of 939 people, including both users and non-users of large language models, has a margin of error of 3.2 percentage points.
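For context, a margin of error at 95 percent confidence for a simple random sample is conventionally computed as MOE = 1.96 × √(p(1 − p)/n), taking p = 0.5 as the worst case. Under that assumption, n = 939 yields roughly 3.2 percentage points, matching the reported figure, while n = 500 yields roughly 4.4 points; the reported 5.1 points for the smaller group likely reflects a design effect from survey weighting, though the report's methodology is not detailed here.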
The Hill has reached out to the Imagining the Digital Future Center about the survey dates for the wider group.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
Related Articles


USA Today
Trump administration's emerging surveillance state raises privacy concerns
Civil liberties advocates say the Trump administration's data collection and sharing endanger Americans' constitutional rights.

DENVER ‒ For decades, the government has been able to watch where you drive and where you walk. It can figure out where you shop, what you buy and with whom you spend time. It knows how much money you have, where you've worked and, in many cases, what medical procedures you've had. It can figure out if you've attended a protest or bought marijuana, and it can even read your emails if it wants.

But because all of those data points about you were scattered across dozens of federal, state and commercial databases, it wasn't easy for the government to build a comprehensive profile of your life. That's changing ‒ fast.

With the help of Big Tech, in just a few short months the Trump administration has expanded the government surveillance state to a whole new level as the president and his allies chase down illegal immigrants and suspected domestic terrorists while simultaneously trying to slash federal spending they've deemed wasteful and keep foreigners from voting. And in doing so, privacy experts warn, the federal government is inevitably scooping up, sorting, combining and storing data about millions of law-abiding Americans.

The vast data storehouses, some of which have been targeted for access by Elon Musk's DOGE teams, raise significant privacy concerns and the threat of cybersecurity breaches.

"What makes the Trump administration's approach so chilling is that they are seeking to collect and use data across federal agencies in ways that are unprecedented," said Cody Venzke, a senior policy counsel with the American Civil Liberties Union. "The federal government's collection of data has always been a double-edged sword."

Americans value their privacy

Americans have fiercely guarded and worried about their privacy even from the country's earliest days: The Constitution's Fourth Amendment specifically limits the government's ability to invade a person's privacy. Those concerns have only grown as more government functions are carried out online.

A 2023 survey by the Pew Research Center found that 71% of Americans worry about the government's use of data about them, up from 64% in 2019. The survey found the concern was greatest among people who lean or consistently vote Republican, up from 63% to 77%; the level of concern among people who lean or consistently vote Democratic held steady at 65%.

That same survey found that Americans overall are almost as concerned about government access to their data as they are about social media companies having access. People who had attended college were more worried about data privacy, while people with high school degrees were in general "confident that those who have access to their personal information will do the right thing."

In acknowledgment of those concerns, the federal government carefully stores most data about Americans in separate databases, from Social Security payments to Medicare reimbursements, housing vouchers and food stamps. That limits the ability of government employees to surreptitiously build comprehensive profiles of Americans without court oversight.
In the name of rooting out fraud and government inefficiency, however, President Donald Trump in March ordered federal agencies under his control to lower the walls between their data warehouses.

The Government Accountability Office estimates the federal government loses $233 billion to $521 billion to fraud annually, much of it because of improper payments to contractors or falsified medical payments, according to a GAO report in April. The report also noted significant losses via Medicare or unemployment fraud and pandemic-era stimulus payments.

"Decades of restricted data access within and between agencies have led to duplicated efforts, undetected overpayments, and unchecked fraud, costing taxpayers billions," Trump said in a March 20 executive order that helped create the new system. "This executive order dismantles unnecessary barriers, promotes inter-agency collaboration, and ensures the Federal Government operates responsibly and efficiently to safeguard public funds."

Merging of commercial and government databases

Supporters say this kind of data archive, especially video surveillance coupled with AI-powered facial recognition, can also be a powerful tool to fight crime. Authorities in New Orleans used video footage collected by privately owned security cameras to help capture at least one of the fugitives in a high-profile prison escape in May. And systems that read license plates helped Colorado police track down a suspect accused of repeatedly vandalizing a Tesla dealership. White House authorities are now prosecuting some Tesla vandalism cases as terrorism.

But the new White House efforts go far beyond anything ever attempted in the United States, allowing the government to conduct intrusive surveillance against almost anyone by combining government and commercial databases. Privacy experts say it's the merging of government and commercial databases that poses the most significant concern, because much of it can be done without court oversight.

As part of the broader White House effort, contractors are building a $30 million system to track suspected gang members and undocumented immigrants and buying access to a system that tracks passengers on virtually every U.S.-based airline flight. Federal officials also are making plans to compile and share state-level voter registration information, which the president argues is necessary to prevent foreign nationals from illegally voting in federal elections.

Privacy experts say that while all of that data has long been collected and kept separate by different government agencies or private vendors ‒ like your supermarket frequent-shopper card and cellphone provider ‒ the Trump administration is dramatically expanding its compilation into comprehensive dossiers on Americans. Much of the work has been kicked off by Musk's DOGE teams, with the assistance of billionaire Peter Thiel's Denver-based Palantir.

Opponents say such a system could track women who cross state lines for abortions − something a police officer in Texas is accused of doing − or be abused by law enforcement to target political opponents or even stalk romantic partners. And if somehow accessed by hackers, the centralized systems would prove a trove of information for fraud or blackmail.

The nonpartisan, nonprofit Project on Government Oversight has been warning about the risks of federal surveillance expansion for years, and it has noted that Democrats and Republicans alike have voted to expand such information-gathering.
"We need our leaders to recognize that as the surveillance apparatus grows, it becomes an enticing prize for a would-be autocrat," POGO said in a report in August 2024. "Our country cannot build and expand a surveillance superstructure and expect that it will not be turned against the people it is meant to protect." Starting with immigration, ending where? Trump campaigned in 2024 on a platform of tough immigration enforcement, including large-scale deportations and ending access by undocumented people to federal programs. Immigrants' rights advocates point out that people living illegally in the United States are generally barred from federal programs, although those who have children born as U.S. citizens can often access things like food assistance or health care. Supporters say having access to that data will help them prioritize people for deportation by comparing work history and tax payments to immigration status, work that used to be far more labor-intensive. Because federal officials don't know exactly who is living illegally in the United States, the systems by default must scoop up information about everyone first. One example: A newly expanded program to collect biometric data from suspected illegal immigrants intercepted at sea also can be used to collect the same information on American citizens under the vague justification of "officer safety." That data can be retained for up to 75 years, according to federal documents. "It's only a matter of time before the harmful ripples from this new effort reach other groups," Venzke said.
Yahoo
AI Suggests, Humans Decide: SOCi Warns Brands to Embrace Human Validation or Get Left Behind
SOCi's Consumer Behavior Index Reveals Consumers Increasingly Turn to TikTok, Instagram, and Real Human Experiences for Validation

SAN DIEGO, June 4, 2025 /PRNewswire/ -- SOCi's 2025 Consumer Behavior Index (CBI) sends a clear warning to CMOs and digital marketing leaders: discovery has changed, and traditional digital strategies are no longer enough. While 19% of consumers now use generative AI tools like ChatGPT or Gemini to find businesses, a staggering 95% say AI is the least trusted source when it comes to making actual purchase decisions.

"AI may answer the question, but social media earns the trust," said Monica Ho, CMO at SOCi. "That's why TikTok, Instagram, and reviews aren't just content channels. They're your most critical conversion levers. Brands that fail to show up authentically in these spaces will lose relevance fast."

Consumers aren't rejecting AI; they're verifying it. As AI tools increasingly serve as a starting point for local discovery, consumers are looking to platforms like TikTok and Instagram to validate that what AI suggests is accurate, relevant, and human-backed. This is especially true for Gen Z, where 34% use TikTok and 35% use Instagram to discover new brands. The message is clear: AI may start the journey, but trust is earned through people.

This is not a future problem; it is today's reality. Consumers now move fluidly between AI tools, social media, and review platforms to validate claims, check brand reputation, and watch videos from real customers before making decisions. Treating these touchpoints as disconnected strategies creates costly blind spots. With Google increasingly surfacing social content in search results, SOCi's 2024 Local Visibility Index found that brands lacking a social-first presence are leaving millions in potential revenue untapped.

The trust-building doesn't stop at discovery. Reviews are now the final checkpoint before conversion. An overwhelming 91% of consumers rely on peer-generated content to evaluate local businesses. Among younger audiences, 40% prefer video reviews, and 65% say they are more likely to choose a business that actively responds to reviews. Yet 55% are increasingly skeptical of fake feedback, highlighting that transparency and responsiveness are now business-critical.

Importantly, reviews are not only influential to consumers but also a key signal for AI engines (Damian Rollison, Search Engine Land). Tools like ChatGPT incorporate customer reviews as a foundational input in generating local recommendations. This underscores the dual role reviews now play in building consumer trust and shaping how businesses appear in AI-powered discovery results.

"As the lines between search, social, and reputation blur, brands must double down on transparency, authenticity, and trust across every channel," said Ho. "Those who showcase real customer experiences, lean into visual content, and engage meaningfully on platforms like TikTok, Instagram, and trusted review sites will earn lasting loyalty."

The CBI survey was conducted between February 19 and February 24, 2025, and included 1,001 adult U.S. respondents distributed evenly across gender, age, and geographic area.

About SOCi

SOCi is the leader in AI-powered marketing solutions for multi-location businesses.
With its proprietary Genius AI™ and suite of Genius Agents™, SOCi provides a first-of-its-kind, AI-powered digital workforce capable of handling the workload of 1,000 local marketers™, empowering brands to achieve unmatched digital visibility, strengthen customer engagement, and scale faster than ever before. SOCi is recognized by Fast Company as one of the World's Most Innovative Companies and is trusted by nearly 1,000 top brands, including Ford, Ace Hardware, Kumon, and Liberty Tax, to automate and optimize local marketing tasks across all locations. Founded in 2012 and backed by leading strategic investors, SOCi is transforming how multi-location brands manage and scale their marketing efforts.

SOURCE SOCi


CNET
Psychologists Are Calling for Guardrails Around AI Use for Young People. Here's What to Watch Out For
Generative AI developers should take steps to ensure the use of their tools doesn't harm young people, the American Psychological Association warned in a health advisory Tuesday. The report, compiled by an advisory panel of psychology experts, called for tech companies to ensure there are boundaries around simulated relationships, to create age-appropriate privacy settings and to encourage healthy uses of AI, among other recommendations.

The APA has issued similar advisories about technology in the past. Last year, the group recommended that parents limit teens' exposure to videos produced by social media influencers and gen AI. In 2023, it warned of the harms that could come from social media use among young people.

"Like social media, AI is neither inherently good nor bad," APA Chief of Psychology Mitch Prinstein said in a statement. "But we have already seen instances where adolescents developed unhealthy and even dangerous 'relationships' with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now."

The meteoric surge of artificial intelligence tools like OpenAI's ChatGPT and Google's Gemini over the last few years has presented new and serious challenges for mental health, especially among younger users. People increasingly talk to chatbots as they would talk to a friend, sharing secrets and relying on them for companionship. While that use can have some positive effects on mental health, it can also be detrimental, experts say, reinforcing harmful behaviors or offering the wrong advice.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

What the APA recommended about AI use

The group called for several ways to ensure adolescents can use AI safely, including limiting access to harmful and false content and protecting the data privacy and likenesses of young users. One key difference between adult users and younger people is that adults are more likely to question the accuracy and intent of an AI output. A younger person (the report defined adolescents as between the ages of 10 and 25) might not be able to approach the interaction with the appropriate level of skepticism.

Relationships with AI entities like chatbots or role-playing tools might also displace the important real-world, human social relationships people learn to form as they develop. "Early research indicates that strong attachments to AI-generated characters may contribute to struggles with learning social skills and developing emotional connections," the report said.

People in their teens and early 20s are developing habits and social skills that will carry into adulthood, and changes to how they socialize can have lifelong effects, said Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth who was not on the panel that produced the report. "Those stages of development can be a template for what happens later," he said.

The APA report called for developers to create systems that prevent the erosion of human relationships, like reminders that the bot is not a human, alongside regulatory changes to protect the interests of youths.
Other recommendations included distinguishing between tools intended for adults and those intended for children, with age-appropriate settings made the default and designs made less persuasive. Systems should have human oversight and intensive testing to ensure they are safe.

Schools and policymakers should prioritize education around AI literacy and how to use the tools responsibly, the APA said. That should include discussions of how to evaluate AI outputs for bias and inaccurate information. "This education must equip young people with the knowledge and skills to understand what AI is, how it works, its potential benefits and limitations, privacy concerns around personal data, and the risks of overreliance," the report said.

Identifying safe and unsafe AI use

The report shows psychologists grappling with the uncertainties of how a new and fast-growing technology will affect the mental health of those most vulnerable to potential developmental harms, Jacobson said. "The nuances of how [AI] affects social development are really broad," he told me. "This is a new technology that is probably potentially as big in terms of its impact on human development as the internet."

AI tools can be helpful for mental health, and they can be harmful, Jacobson said. He and other researchers at Dartmouth recently released a study of an AI chatbot that showed promise in providing therapy, but it was specifically designed to follow therapeutic practices and was closely monitored. More general AI tools, he said, can provide incorrect information or encourage harmful behaviors. He pointed to recent issues with sycophancy in a ChatGPT model, which OpenAI eventually rolled back. "Sometimes these tools connect in ways that can feel very validating, but sometimes they can act in ways that can be very harmful," he said.

Jacobson said it's important for scientists to continue to research the psychological impacts of AI use and to educate the public on what they learn. "The pace of the field is moving so fast, and we need some room for science to catch up," he said.

The APA offered suggestions for what parents can do to ensure teens are using AI safely, including explaining how AI works, encouraging human-to-human interactions, stressing the potential inaccuracy of health information and reviewing privacy settings.