
Latest news with #GPT3

Why OpenAI Is Fueling the Arms Race It Once Warned Against

Bloomberg

16-05-2025



On June 11, 2020, more than two years before the launch of ChatGPT brought generative artificial intelligence to the mainstream, OpenAI launched its first commercial product — an application programming interface (API) that let companies build features on top of what was then its most powerful AI system, GPT-3. Suddenly, developers could use OpenAI's technology to spit out sonnets, social media posts and code, just as millions of users would do in late 2022 through a more intuitive chatbot interface. The product was not open source but rather intended to be a moneymaker. It was introduced earlier than some employees wanted in part because of (untrue) rumors that Google was about to put out its own AI model. And when it launched, OpenAI did not yet have a formal trust and safety team in place to address misuses of the technology, nor did it have clear rules for acceptable uses.

OpenAI's nonprofit mission fades further into the rearview

Yahoo

08-05-2025



OpenAI was founded as a nonprofit with a mission to build safe artificial general intelligence for the benefit of humanity. For a while, that structure made sense. But in 2019, the company made a discovery that changed everything: scaling up AI models—with more data, compute, and parameters—led to predictably stronger results. The insight was formalized in a 2020 paper titled "Scaling Laws for Neural Language Models," and it reshaped OpenAI's trajectory. That same year, the company released GPT-3, a model 100 times larger than GPT-2. Microsoft invested. Venture capitalists piled in. Inside the company, employees began to see Sam Altman as the one who could turn a nonprofit breakthrough into a world-changing—and highly profitable—business. And yet OpenAI remained a nonprofit.

Seen in that light, yesterday's announcement that OpenAI's for-profit arm will become a "public benefit corporation" (PBC) is no big surprise. Under the newly proposed structure, OpenAI will continue operating as a for-profit AI business housed within a nonprofit parent. (Altman said last year he wanted to free the for-profit from the nonprofit parent.) "We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California," OpenAI board member Bret Taylor said in a blog post Monday.

The change is that the for-profit arm will now be a "public benefit corporation" and no longer a "capped profit" entity, so there is no longer a limit on how much OpenAI shareholders—including investors and employees—can earn. Dropping the capped-profit model was also a condition of OpenAI's last two funding rounds: in the most recent (and largest), lead investor SoftBank stipulated that OpenAI adopt a new corporate structure by the end of 2025. Investors are willing to bet big on OpenAI, but they want the potential for big returns.

Altman and others at OpenAI have said that bringing in revenue has become more important with the realization that building progressively better models will require massive investments in infrastructure and computing power. The key worry about Altman is that, under his leadership, the company might prioritize pushing toward superintelligent AI without adequately safety-testing its models or mitigating their risks. The new PBC structure likely won't do much to quiet those concerns.


Meta Says It's Okay to Feed Copyrighted Books Into Its AI Model Because They Have No "Economic Value"

Yahoo

19-04-2025



Meta has been accused of illegally using copyrighted material to train its AI models — and the tech giant's defense is pretty thin. In the ongoing suit Richard Kadrey et al v. Meta Platforms, led by a group of authors including Pulitzer Prize winner Andrew Sean Greer and National Book Award winner Ta-Nehisi Coates, the Mark Zuckerberg-led company has argued that its alleged scraping of more than seven million books from the pirated library LibGen constituted "fair use" of the material, and was therefore not illegal.

The specious defenses don't end there. As Vanity Fair spotlights in a new writeup, Meta's attorneys are also arguing that the countless books that the company used to train its multibillion-dollar language models and springboard itself into the head-spinningly buzzy AI race are actually worthless. Meta cited an expert witness who downplayed the books' individual importance, averring that a single book adjusted its LLM's performance "by less than 0.06 percent on industry standard benchmarks, a meaningless change no different from noise." Thus there's no market in paying authors to use their copyrighted works, Meta says, because "for there to be a market, there must be something of value to exchange," as quoted by Vanity Fair — "but none of [the authors'] works has economic value, individually, as training data." Other communications showed that Meta employees stripped the copyright pages from the downloaded books.

This is emblematic of the chicaneries and two-faced logic that Meta, and the AI industry at large, deploys when it's pressed about all the human-created content it devours. Somehow, that stuff is simultaneously not that valuable, and we should all stop pearl-clutching about the sanctity of art, and anyway an AI writes creative prose just as well as a human now — but is also absolutely essential to building our new synthetic gods that will solve climate change, so please don't make us pay for using any of it.

That last bit is literally what OpenAI argued to the British Parliament last year — that there isn't enough material in the public domain to beef up its AI models, so it must be allowed to plumb the bounties of contemporary copyrighted works without paying a penny. Seemingly, this is an unspoken understanding at the top AI companies. When one Meta researcher inquired whether the company's legal team had okayed using LibGen, another responded: "I didn't ask questions but this is what OpenAI does with GPT3, what Google does with PALM, and what Deepmind does with Chinchilla so we will do it to[o]," per Vanity Fair, from internal messages cited in the suit.

Tellingly, the unofficial policy seems to be to not speak about it at all. "In no case would we disclose publicly that we had trained on LibGen, however there is practical risk external parties could deduce our use of this dataset," an internal Meta slide deck read. The deck noted that "if there is media coverage suggesting we have used a dataset we know to be pirated, such as LibGen, this may undermine our negotiating position with regulators on these issues."
