An Appeal to My Alma Mater
When Maggie Li Zhang enrolled in a college class where students were told to take notes and read on paper rather than on a screen, she felt anxious and alienated. Zhang and her peers had spent part of high school distance learning during the pandemic. During her first year at Pomona College, in Southern California, she had felt most engaged in a philosophy course where the professor treated a shared Google Doc as the focus of every class, transcribing discussions in real time on-screen and enabling students to post comments.
So the 'tech-free' class that she took the following semester disoriented her. 'When someone writes something you think: Should I be taking notes too?' she told me in an email. But gradually, she realized that exercising her own judgment about what to write down, and annotating course readings with ink, helped her think more deeply and connect with the most difficult material. 'I like to get my finger oil on the pages,' she told me. Only then does a text 'become ripe enough for me to enter.' Now, she said, she feels 'far more alienated' in classes that allow screens.
Zhang, who will be a senior in the fall, is among a growing cohort of students at Pomona College who are trying to alter how technology affects campus life. I attended Pomona from 1998 to 2002; I wanted to learn more about these efforts and the students' outlook on technology, so I recently emailed or spoke with 10 of them. One student wrote an op-ed in the student newspaper calling for more classes where electronic devices are banned. Another co-founded a 'Luddite Club' that holds a weekly tech-free hangout. Another now carries a flip phone rather than a smartphone on campus. Some Pomona professors with similar concerns are limiting or banning electronic devices in their classes and trying to curtail student use of ChatGPT. It all adds up to more concern over technology than I have ever seen at the college.
These Pomona students and professors are hardly unique in reacting to a new reality. A generation ago, the prevailing assumption among college-bound teenagers was that their undergraduate education would only benefit from cutting-edge technology. Campus tour guides touted high-speed internet in every dorm as a selling point. Now that cheap laptops, smartphones, Wi-Fi, and ChatGPT are all ubiquitous—and now that more people have come to see technology as detrimental to students' academic and social life—countermeasures are emerging on various campuses. The Wall Street Journal reported last month that sales of old-fashioned blue books for written exams had increased over the past year by more than 30 percent at Texas A&M University and nearly 50 percent at the University of Florida, while rising 80 percent at UC Berkeley over the past two years. And professors at schools such as the University of Virginia and the University of Maryland are banning laptops in class.
The pervasiveness of technology on campuses poses a distinct threat to small residential liberal-arts colleges. Pomona, like its closest peer institutions, spends lots of time, money, and effort to house nearly 95 percent of its 1,600 students on campus, feed them in dining halls, and teach them in tiny groups, with a student-to-faculty ratio of 8 to 1. That costly model is worth it, boosters insist, because young people are best educated in a closely knit community where everyone learns from one another in and outside the classroom. Such a model ceases to work if many of the people physically present in common spaces absent their minds to cyberspace (a topic that the psychologist Jonathan Haidt has explored in the high-school context).
At the same time, Pomona is better suited than most institutions to scale back technology's place in campus life. With a $3 billion endowment, a small campus, and lots of administrators paid to shape campus culture, it has ample resources and a natural setting to formalize experiments as varied as, say, nudging students during orientation to get flip phones, forging a tech-free culture at one of its dining halls, creating tech-free dorms akin to its substance-free options––something that tiny St. John's College in Maryland is attempting––and publicizing and studying the tech-free classes of faculty members who choose that approach.
Doing so would differentiate Pomona from competitors. Aside from outliers such as Deep Springs College and some small religious institutions—Wyoming Catholic College has banned phones since 2007, and Franciscan University of Steubenville in Ohio launched a scholarship for students who give up smartphones until they earn their degree—vanishingly few colleges have committed to thoughtful limits on technology.
[Jonathan Haidt: Get phones out of schools now]
My hope is that Pomona or another liberal-arts college recasts itself from a place that brags about how much tech its incoming students will be able to access––'there are over 160 technology enhanced learning spaces at Pomona,' the school website states––to a place that also brags about spaces that it has created as tech refuges. 'In a time of fierce competition for students, this might be something for a daring and visionary college president to propose,' Susan McWilliams Barndt, a Pomona politics professor, told me. McWilliams has never allowed laptops or other devices in her classes; she has also won Pomona's most prestigious teaching prize every time she's been eligible. 'There may not be a million college-bound teens across this country who want to attend such a school,' she said, 'but I bet there are enough to sustain a vibrant campus or two.'
So far, Pomona's leadership has not aligned itself with the professors and students who see the status quo as worse than what came before it. 'I have done a little asking around today and I was not able to find any initiative around limiting technology,' the college's new chief communications officer, Katharine Laidlaw, wrote to me. 'But let's keep in touch. I could absolutely see how this could become a values-based experiment at Pomona.'
Pomona would face a number of obstacles in trying to make itself less tech-dependent. The Americans with Disabilities Act requires allowing eligible students to use tools such as note-taking software, closed captioning, and other apps that live on devices. But Oona Eisenstadt, a religious-studies professor at Pomona who has taught tech-free classes for 21 years, told me that, although she is eager to follow the law (and even go beyond it) to accommodate her students, students who require devices in class are rare. If a student really needed a laptop to take notes, she added, she would consider banning the entire class from taking notes, rather than allowing the computer. 'That would feel tough at the beginning,' she said, but it 'might force us into even more presence.'
Ensuring access to course materials is another concern. Amanda Hollis-Brusky, a professor of politics and law, told me that she is thinking of returning to in-class exams because of 'a distinct change' in the essays her students submit. 'It depressed me to see how often students went first to AI just to see what it spit out, and how so much of its logic and claims still made their way into their essays,' she said. She wants to ban laptops in class too––but her students use digital course materials, which she provides to spare them from spending money on pricey physical texts. 'I don't know how to balance equity and access with the benefits of a tech-free classroom,' she lamented. Subsidies for professors struggling with that trade-off are the sort of experiment the college could fund.
Students will, of course, need to be conversant in recent technological advances to excel in many fields, and some courses will always require tech in the classroom. But just as my generation has made good use of technology, including the iPhone and ChatGPT, without having been exposed to it in college, today's students, if taught to think critically for four years, can surely teach themselves how to use chatbots and more on their own time. In fact, I expect that in the very near future, if not this coming fall, most students will arrive at Pomona already adept at using AI; they will benefit even more from the college teaching them how to think deeply without it.
Perhaps the biggest challenge of all is that so many students who don't need tech in a given course want to use it. 'In any given class I can look around and see LinkedIn pages, emails, chess games,' Kaitlyn Ulalisa, a sophomore who grew up near Milwaukee, wrote to me. In high school, Ulalisa herself used to spend hours every day scrolling on Instagram, Snapchat, and TikTok. Without them, she felt that she 'had no idea what was going on' with her peers. At Pomona, a place small enough to walk around campus and see what's going on, she deleted the apps from her phone again. Inspired by a New York Times article about a Luddite Club started by a group of teens in Brooklyn, she and a friend created a campus chapter. They meet every Friday to socialize without technology. Still, she said, for many college students, going off TikTok and Instagram seems like social death, because their main source of social capital is online.
[From the September 2017 issue: Have smartphones destroyed a generation?]
Accounts like hers suggest that students might benefit from being forced off of their devices, at least in particular campus spaces. But Michael Steinberger, a Pomona economics professor, told me he worries that an overly heavy-handed approach might deprive students of the chance to learn for themselves. 'What I hope that we can teach our students is why they should choose not to open their phone in the dining hall,' he said. 'Why they might choose to forgo technology and write notes by hand. Why they should practice cutting off technology and lean in to in-person networking to support their own mental health, and why they should practice the discipline of choosing this for themselves. If we limit the tech, but don't teach the why, then we don't prepare our students as robustly as we might.'
Philosophically, I usually prefer the sort of hands-off approach that Steinberger is advocating. But I wonder if, having never experienced what it's like to, say, break bread in a dining hall where no one is looking at a device, students possess enough data to make informed decisions. Perhaps heavy-handed limits on tech, at least early in college, would leave them better informed about trade-offs and better equipped to make their own choices in the future.
What else would it mean for a college-wide experiment in limiting tech to succeed? Administrators would ideally measure academic outcomes, effects on social life, even the standing of the college and its ability to attract excellent students. Improvements along all metrics would be ideal. But failures needn't mean wasted effort if the college publicly shares what works and what doesn't. A successful college-wide initiative should also take care to avoid undermining the academic freedom of professors, who must retain all the flexibility they currently enjoy to make their own decisions about how to teach their classes. Some will no doubt continue with tech-heavy teaching methods.
Others will keep trying alternatives. Elijah Quetin, a visiting instructor in physics and astronomy at Pomona, told me about a creative low-tech experiment that he already has planned. Over the summer, Quetin and six students (three of them from the Luddite Club) will spend a few weeks on a ranch near the American River; during the day, they will perform physical labor—repairing fencing, laying irrigation pipes, tending to sheep and goats—and in the evening, they'll undertake an advanced course in applied mathematics inside a barn. 'We're trying to see if we can do a whole-semester course in just two weeks with no infrastructure,' he said. He called the trip 'an answer to a growing demand I'm hearing directly from students' to spend more time in the real world. It is also, he said, part of a larger challenge to 'the mass-production model of higher ed,' managed by digital tools 'instead of human labor and care.'
Even in a best-case scenario, where administrators and professors discover new ways to offer students a better education, Pomona is just one tiny college. It could easily succeed even as academia writ large keeps struggling. 'My fear,' Gary Smith, an economics professor, wrote to me, 'is that education will become even more skewed with some students at elite schools with small classes learning critical thinking and communication skills, while most students at schools with large classes will cheat themselves by using LLMs'—large language models—'to cheat their way through school.'
But successful experiments at prominent liberal-arts colleges are better, for everyone, than nothing. While I, too, would lament a growing gap among college graduates, I fear a worse outcome: that all colleges will fail to teach critical thinking and communication as well as they once did, and that a decline in those skills will degrade society as a whole. If any school provides proof of concept for a better way, it might scale. Peer institutions might follow; the rest of academia might slowly adopt better practices. Some early beneficiaries of the better approach would meanwhile fulfill the charge long etched in Pomona's concrete gates: to bear their added riches in trust for mankind.
Article originally published at The Atlantic