We're offloading mental tasks to AI. It could be making us stupid
Koen Van Belle, a test automation engineer who codes for a living, had been using Copilot, an AI assistant built on a large language model, for about six months when one day the internet went down. Forced to fall back on his memory and decades of experience, he struggled to recall some of the syntax he coded with.
'I couldn't remember how it works,' Van Belle, who manages a computer programming business in Belgium, told Salon in a video call. 'I became way too reliant on AI … so I had to turn it off and re-learn some skills.'
As a manager in his company, Van Belle oversees the work of a handful of interns each year. Because the company limits the use of AI, the interns had to curb their use as well, he said. Without the tools, both the amount and the quality of their coding were drastically reduced, Van Belle said.
'They are able to explain to ChatGPT what they want, it generates something and they hope it works,' Van Belle said. 'When they get into the real world and have to build a new project, they will fail.'
Since AI models like Copilot and ChatGPT came online in 2022, they have exploded in popularity, with one survey conducted in January estimating that more than half of Americans have used Copilot, ChatGPT, Gemini or Claude. Research examining how these programs affect users is limited because they are so new, but some early studies suggest they are already impacting our brains.
'In some sense, these models are like brain control interfaces or implants — they're that powerful,' said Kanaka Rajan, a computational neuroscientist and founding faculty member at the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University. 'In some sense, they're changing the input streams to the networks that live in our brains.'
In a February study conducted by researchers from Microsoft and Carnegie Mellon University, groups of people working with data worked more efficiently with the use of generative AI tools like ChatGPT — but used less critical thinking than a comparison group of workers who didn't use these tools. In fact, the more that workers reported trusting AI's ability to perform tasks for them, the more their critical thinking was reduced.
Another study, published in 2024, reported that the reduction in critical thinking stemmed from relying on AI to perform a greater proportion of the brain work necessary to complete tasks, a process called cognitive offloading.
Cognitive offloading is something we do every day when we write a shopping list, put an event on the calendar or use a calculator. To reduce our brain's workload, we can 'offload' some of its tasks to technology, which can free us up to perform more complex tasks. However, other research has also linked it to costs like worse memory.
As a review published in March concluded: 'Although laboratory studies have demonstrated that cognitive offloading has benefits for task performance, it is not without costs.' It's handy, for example, to have the grocery list in your head in case the written one gets lost. So how much cognitive offloading is good for us — and how is AI accelerating those costs?
This concept is not new: The Greek philosopher Socrates feared that the invention of writing would make humans dumber because we wouldn't exercise our memory as much. He famously never wrote anything down, though his student, Plato, did. Some argue Socrates was right and that the trend is escalating: with each major technological advancement, we increasingly rely on tools outside of ourselves to perform tasks we once accomplished in-house. Many people no longer do routine calculations in their head thanks to the calculator, and most people use GPS rather than pulling out a physical map or navigating by landmarks.

There is no doubt these inventions have made us more efficient, but the concern lies in what happens when we stop flexing the parts of the brain responsible for these tasks. Over time, some argue, we might lose those abilities. There is an old ethos of 'use it or lose it' that may apply to cognitive tasks as well.
Despite concerns that calculators would destroy our ability to do math, research has generally shown little difference in performance whether or not calculators are used. Some have even criticized the school system for still spending so much time teaching foundational techniques like the multiplication tables when students can now solve those sorts of problems at the touch of a button, said Matthew Fisher, a researcher at Southern Methodist University.
On the other hand, others argue that this part of the curriculum is important because it provides the foundational mathematical building blocks from which students learn other parts of math and science, he explained. As Fisher told Salon in a phone interview: "If we just totally get rid of that mathematical foundation, our intuition for later mathematical study, as well as just for living in the world and understanding basic relationships, is going to be off.'
Other studies suggest relying on newer forms of technology does influence our brain activity. Research, for example, has found that students' brains were more active when they handwrote information rather than typing it on a keyboard and when using a pen and paper versus a stylus and a tablet.
Research also shows that 'use it or lose it' is somewhat true in the context of the skills we learn. New neurons are produced in the hippocampus, a part of the brain central to learning and memory. However, most of these new cells will die off unless the brain puts effort and focus into learning over a period of time. People can certainly learn from artificial intelligence, but the danger lies in forgoing the learning process to simply regurgitate the information it feeds us.
In 2008, after about two decades of the public internet, The Atlantic published a cover story asking 'Is Google Making Us Stupid?' Since then, and with the emergence of smartphones and social media, research has shown that too much time on the internet can lower our ability to concentrate, make us feel isolated and lower our self-esteem.
One 2011 review found that people increasingly turn to the internet for difficult questions and are less able to recall the information that they found on the internet when using it to answer those questions. Instead, participants had an enhanced ability to recall where they found it.
'The internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves,' the authors concluded.
In 2021, Fisher co-authored research that also found people who used internet searches more had an inflated sense of their own knowledge, reporting exaggerated claims about things they read on the internet compared to a control group who learned things without it. He termed this phenomenon the 'Google effect.'
'What we seem to have a hard time doing is differentiating where our internally mastered knowledge stops and where the knowledge we can just look up but feels a lot like our knowledge begins,' Fisher said.
Many argue that AI takes this even further and cuts out a critical part of our imaginative process. In an opinion piece for Inside Higher Ed, John Warner wrote that overrelying on ChatGPT for written tasks 'risks derailing the important exploration of an idea that happens when we write.'
'This is particularly true in school contexts, when the learning that happens inside the student is far more important than the finished product they produce on a given assignment,' Warner wrote.
Much of the energy dedicated to understanding how AI affects our brains has been focused on adolescents because younger generations use these tools more and may also be more vulnerable to changes that occur because their brains are still developing. One 2023 study, for example, found junior high school students who used AI more had less of an ability to adapt to new social situations.
Another 2023 paper found that students who relied more heavily on AI to answer multiple-choice questions summarizing a reading excerpt scored lower than those who relied on their memory alone, said study author Qirui Ju, a researcher at Duke University.
'Writing things down is helping you to really understand the material,' Ju told Salon in a phone interview. 'But if you replace that process with AI, even if you write higher quality stuff with less typos and more coherent sentences, it replaces the learning process so that the learning quality is lower.'
To get a better idea of what is happening with people's brains when using large language models, researchers at the Massachusetts Institute of Technology connected 32-channel electroencephalograms to three groups of college-age students who were all answering the same writing prompts: One group used ChatGPT, another used Google and the third group simply used their own brains.
Although the study was small, with just 55 participants, its results suggest large language models could affect our memory, attention and creativity, said Nataliya Kos'myna, the leader of the 'Your Brain on LLM' project, and a research scientist at the MIT Media Lab.
After writing the essay, 85% of the participants in the Google and brain-only groups could recall a quote from their writing, compared to only 20% of those who used large language models, Kos'myna said. Furthermore, 16% of people using AI said they didn't even recognize their essay as their own after completing it, compared to none of the students in the other groups, she added.
Overall, there was less brain activity and interconnectivity in the group that used ChatGPT compared to the groups that used Google or their brains only. Specifically, activity in the regions of the brain corresponding to language processing, imagination and creative writing was reduced in students using large language models compared to students in the other groups, Kos'myna said.
The research team also performed another analysis in which students first used their brains for the tasks before switching to performing the same task with the large language models, and vice versa.
Those who used their brains first and then went on to try their hand at the task with the assistance of AI appeared to perform better and had the aforementioned areas of their brains activated. But the same was not true for the group that used AI first and then went on to try it with just their brains, Kos'myna said.
'It looks like the large language models did not necessarily help you and provide any additional interconnectivity in the brain,' Kos'myna told Salon in a video call. 'However, there is potential … that if you actually use your brain and then rework the task when being exposed to the tool, it might be beneficial.'
Whether AI hinders or promotes our capacity for learning may depend more on how we use it than whether we use it. In other words, it is not AI that is the problem, but our overreliance on it.
Van Belle, in Belgium, now uses large language models to write social media posts for his company because he doesn't feel like that is where his skills are most refined and the process can be very time-consuming otherwise.
'I would like to think that I would be able to make a fairly decent LinkedIn post by myself, but it would take me an extra amount of time,' he said. 'That is time that I don't want to waste on something I don't really care about.'
These days, he sees AI as a tool, which it can be — as long as we don't offload too much of our brain power onto it.
'We've been on this steady march now for thousands of years and it feels like we are at the culmination of deciding what is left for us to know and for us to do,' Fisher said. 'It raises real questions about how best to balance technology and get the most out of it without sacrificing these essentially human things.'
