Bill Gates shows what the end of perpetual philanthropy looks like
For the last quarter-century, Bill Gates has been the donor behind one of the nation's largest private philanthropic foundations, a behemoth that dwarfs nearly all other charitable institutions.
With its large, professionalized staff, its reliance on quantifiable data, and its focus on global health, the Gates Foundation has served as a model for many other donors. And as an individual, Gates has long been the world's most recognized philanthropist, in terms of media attention, accolades, and public knowledge. As a co-founder of the Giving Pledge, the campaign to get the world's billionaires to donate more than half their wealth to charitable causes, he has also been the individual most closely identified with efforts to shape global philanthropic norms in an age of super-wealth. (Disclosure: I am an employee of the Urban Institute, which receives funding from the Gates Foundation.)
That, in fact, is the best context in which to understand the significance of Gates's recent announcement that he will give virtually all his wealth to the Gates Foundation over the next 20 years, and that the foundation will 'close its doors permanently' by the end of 2045, after all that money has been given away. With Gates's own wealth estimated at north of $100 billion, and his foundation sitting on an endowment of more than $75 billion, Gates estimates that his foundation 'will spend more than $200 billion between now and 2045.' As he explained it: 'I have decided to give my money back to society much faster than I had originally planned.'
In sheer monetary terms, this pledge, if honored, would be a very big deal. It would require the foundation to maintain an unprecedentedly high level of annual spending, likely doubling its current pace of roughly $9 billion per year. And it would require contemplating a world in which the Gates Foundation no longer exists.
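For a rough sense of what those numbers imply, here is a minimal back-of-the-envelope sketch. The 5 percent investment return is an illustrative assumption of mine, and it treats the roughly $175 billion in combined assets (the $75 billion endowment plus Gates's pledged wealth) as if it were all available today; neither the return rate nor that simplification comes from the announcement.

```python
def level_payout(assets_bn: float, annual_return: float, years: int) -> float:
    """Constant annual payout that exactly exhausts `assets_bn` over `years`,
    assuming the remaining assets earn `annual_return` each year
    (standard annuity arithmetic)."""
    r = annual_return
    return assets_bn * r / (1 - (1 + r) ** -years)

# Illustrative inputs: ~$175B in combined assets, 5% assumed return, 20 years.
payout = level_payout(175, 0.05, 20)
print(f"Required payout: ~${payout:.0f}B per year, vs. ~$9B per year today")
# -> roughly $14B a year; and since Gates's wealth would in fact arrive
#    over time and keep compounding, later-year spending would run higher.
```

Even under these generous simplifications, annual spending has to run at least half again its current level, and in practice closer to double it.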
But the extent to which Gates's announcement might encourage important shifts in broader philanthropic norms may be an even bigger deal. To put it simply: It could galvanize billionaires to give more, and perhaps more importantly, to give faster.
'You could say this announcement is not very timely,' Gates quipped to the New York Times in an interview accompanying his announcement. He meant that his new pledge was fueled by an optimism about philanthropy's power to dramatically improve global health, an optimism that sits oddly with a prevailing sense that progress is eroding.
But looked at another way, what was most significant about Gates's announcement was not the sheer dollar figure that received so much attention, but its embrace of timeliness in philanthropy. By philanthropic timeliness, I mean that Gates has elevated his responsibility to the current moment, to contemporary needs, exigencies, and opportunities, as the driving motive in his giving.
In the rollout of his announcement, Gates has made clear that he is placing a premium on getting more money out the door now. He told the Times that this is a 'miraculous time,' ripe with all sorts of possibilities for astonishing advances in global health, like single-shot gene therapy for HIV/AIDS and new tools to prevent maternal and childhood mortality, such as portable, AI-enabled ultrasounds. Given all these opportunities, Gates says, 'It makes a big difference to take the money and spend it now versus later.'
That might seem obvious. But for many philanthropists, foundations are instruments designed as much to warehouse wealth as to give it away. Gates is now putting his celebrity brand behind the latter, pushing for present-day concerns to be met by large-scale philanthropic contributions.
But there's obviously another reason why the current moment matters: Gates's announcement acknowledged that he's committing additional funds at a time when governments around the world, especially in the US, are slashing their own funding for global aid. He has maintained his insistence that philanthropy can never adequately stand in for government funding for global health (in 2023, USAID managed more than $35 billion in appropriations, for instance), and he's framing his large-scale giving less as confirmation of the superiority of private philanthropy than as an urgent argument that Elon Musk's team got it wrong in gutting aid.
'It's unclear whether the world's richest countries will continue to stand up for its poorest people,' he wrote in a blog post explaining his decision. 'But the one thing we can guarantee is that, in all of our work, the Gates Foundation will support efforts to help people and countries pull themselves out of poverty.'
For this reason, Musk, who has bragged about 'feeding USAID into the wood chipper,' has emerged as a sort of nemesis in the rollout of the announcement. 'The picture of the world's richest man killing the world's poorest children is not a pretty one,' Gates commented to the Financial Times.
It's an uncharacteristic public feud for Gates, who until recently has studiously cultivated a public persona that avoided any hint of partisanship. Still, his comments did conjure up one inconvenient fact: Musk has actually signed the Giving Pledge (in 2012). But Gates took this on directly, and in doing so, offered an implicit critique of the system of philanthropic norms that he had taken the lead in developing. 'The Giving Pledge — an unusual aspect of it [is] that you can wait until you die and still fulfill it,' he said in the New York Times interview.
And it's true that from its conception, the Giving Pledge was agnostic on the question of timeliness. The metric of success for the Pledge was getting 'this set of billionaires to think earlier in their life about how they're going to give money back, whether it's during their lifetime or at their death,' as Melinda French Gates said during a 2010 interview with Charlie Rose.
Gates is now calling on donors to do more than start thinking about giving, and to actually start giving more now. As he has explained it, he is pushing the wealthy to increase not just the scale of their giving, but its pace.
It's something Gates learned from the example of Chuck Feeney, the co-founder of a duty-free shopping empire. Feeney gave away significant sums anonymously for years, embracing a public identity as a mega-donor only as a means of spreading a gospel of 'Giving While Living.' It's an ethic that Gates namechecked in his announcement as having 'shaped how I think about philanthropy.'
There are a host of reasons why donors have generally preferred to defer giving, from not having the time to devote to philanthropy, to the compulsion to get the gift exactly right, to a desire to maintain funds to address future problems, to the simple fact that for some, it's just hard to let go of wealth. On an institutional level, one of the main challenges is the commitment to perpetuity, which imposes a certain ceiling on spending levels so that the endowment isn't drained.
For the final decades of the 20th century and the first decade of the new one, perpetuity was something of an implicit default in the philanthropic sector. In the deliberations over the Tax Reform Act of 1969, which established the regulatory regime under which foundations would operate for the next half-century, Congress considered imposing a 40-year time limit on foundations. The proposal, championed by Sen. Al Gore Sr., was ultimately rejected; in its stead, Congress passed a 6 percent annual payout requirement (lowered a few years later to 5 percent), part of a 'Grand Bargain' that exchanged some commitment to 'philanthropic timeliness' for the legitimation of perpetuity.
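A minimal sketch shows why the 5 percent floor legitimated perpetuity rather than threatening it. The 8 percent nominal return is an illustrative assumption of mine (a common long-run figure for endowment portfolios), not something from the 1969 debates:

```python
endowment = 1.0        # normalized starting endowment
annual_return = 0.08   # assumed nominal investment return (illustration only)
payout_rate = 0.05     # the statutory minimum payout

for year in range(50):
    endowment *= 1 + annual_return        # investment gains accrue
    endowment -= payout_rate * endowment  # pay out the 5% minimum

print(f"After 50 years: {endowment:.1f}x the original endowment")
# -> about 3.6x: as long as returns beat the payout rate,
#    paying only the minimum never drains the principal.
```

Any payout rate below expected investment returns leaves the principal growing indefinitely; the Grand Bargain effectively blessed that arithmetic.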
But over the last two decades, the tendency to treat perpetuity as the default mode of philanthropy has eroded. The reasons behind that shift are varied, from the urgency of the environmental crisis (several of the first wave of 21st-century spend-down foundations devoted themselves to the cause), to the propensity of young tech donors who had made their fortunes relatively quickly to look to spend their philanthropic resources quickly as well.
In a 2020 global survey, Rockefeller Philanthropy Advisors found that nearly half of the organizations established in the 2010s were founded as time-limited vehicles, up from around 20 percent in the 1980s. According to a 2022 survey, 'Of the responding philanthropies established since 2000, almost one quarter (23 percent) were established as time-limited, representing an increase of 22 percentage points.'
In fact, Bill and Melinda French Gates never really committed their foundation to perpetuity. Seven years after creating the foundation in 2000, they pledged to shut it down 50 years after their deaths. At an event in 2022, Gates suggested that the foundation would last another 25 years. But the new announcement of the 2045 date is a much more definite endorsement of 'time-limited philanthropy.'
So this might very well be a pivotal moment for the norms surrounding philanthropic timeliness. We're living through a period defined by cascading crises — climate, racial justice, Covid, and now those related to the Trump administration's budget cuts. In response to each, a handful of foundations have significantly increased their spending rates; some have committed to spend down their assets.
It's also been a period characterized by the proliferation of high-profile billion-dollar philanthropic pledges from individual donors. These are timely to the extent that they draw immediate public and media attention, but they have not necessarily translated into the commensurately timely disbursement of philanthropic funds. In recent years, MacKenzie Scott captured considerable attention, and for a moment rivaled Gates as the nation's most prominent public philanthropist, with the speed and urgency with which she embraced the challenge of directing her Amazon fortune to philanthropy, and with a commitment to 'keep at it until the safe is empty.' She has given away some $19 billion in the last five years, though even she has struggled to keep up with the relentless pace of compounding returns and Amazon's surging stock price; her total wealth has barely budged since she started.
Taking it all in, then, there hasn't yet been a definitive shift toward giving now in philanthropy. Might Gates's announcement help precipitate one? If it does, Gates will cast light on a whole other assortment of debates within the philanthropic sector. One of the most important of these relates to a chief paradox of contemporary criticisms of philanthropy, which boils down to the old joke: 'The food here is terrible — and the portions are too small!'
Alongside demands for more and faster giving sit concerns about the ways mega-philanthropy can warp democratic norms and institutions. Gates has not merely been one of the most recognized and celebrated philanthropists, but also one of the most criticized, on precisely those terms.
Whether the surge of giving coming from the Gates Foundation is compatible with democratic demands (whether, for instance, it can help shift power to local communities and institutions) will likely be as important to the construction of the next generation of philanthropic norms as questions of scale and pacing.