
Latest news tagged #GPT‑2

OpenAI rolls out o3-pro model; CEO Sam Altman says open-weights model needs more time

The Hindu

3 days ago

OpenAI announced that its o3-pro model was rolling out to users, but CEO Sam Altman noted that an upcoming open-weights model would not release this month. OpenAI o3-pro will be available in the model picker for Pro and Team users, replacing OpenAI o1-pro. Enterprise and Edu users will get access after a week, per the company.

'In expert evaluations, reviewers consistently prefer OpenAI o3-pro over o3, highlighting its improved performance in key domains—including science, education, programming, data analysis, and writing. Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, instruction-following, and accuracy,' noted OpenAI. OpenAI o3-pro can search the web, analyse files, process visual inputs, use Python, and personalise responses using memory, per the company.

'o3-pro is rolling out now for all chatgpt pro users and in the api. it is really smart! [I] didnt believe the win rates relative to o3 the first time i saw them,' posted OpenAI CEO Sam Altman on X on Wednesday (June 11, 2025).

However, a highly anticipated open-weights model that was set to be released will require more time, according to Altman. '[W]e are going to take a little more time with our open-weights model, i.e. expect it later this summer but not june. [O]ur research team did something unexpected and quite amazing and we think it will be very very worth the wait, but needs a bit longer,' posted Altman on X on Wednesday.

An open-weights model, while not open source, gives technical users more flexibility to refine the model to suit their own needs and specifications. OpenAI called this model its 'first open language model since GPT‑2,' which came out in 2019.

OpenAI announced this week that its annualised revenue run rate hit $10 billion as of June, putting it on track to hit its targets for 2025.
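In practical terms, 'open weights' means the trained parameters themselves are published, so anyone can load them and continue training locally. As a rough illustration of that flexibility, here is a minimal fine-tuning sketch using the previously released GPT‑2 weights via the Hugging Face transformers and datasets libraries; the corpus file, output directory, and hyperparameters are illustrative placeholders, not anything OpenAI has specified for its upcoming model.

```python
# Minimal sketch: fine-tuning openly released GPT-2 weights on your own text.
# Assumes the Hugging Face `transformers` and `datasets` libraries; the corpus
# file and hyperparameters below are illustrative placeholders only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token             # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")  # weights are public, so they can be adapted

dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})  # hypothetical local corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()
model.save_pretrained("gpt2-finetuned")  # the adjusted weights stay under your control
```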

The Time Sam Altman Asked for a Countersurveillance Audit of OpenAI

WIRED

21-05-2025

May 21, 2025, 7:00 AM

In her new book Empire of AI, journalist Karen Hao chronicles the anxieties around the OpenAI office in its early days.

Dario Amodei's AI safety contingent was growing disquieted with some of Sam Altman's behaviors. Shortly after OpenAI's Microsoft deal was inked in 2019, several of them were stunned to discover the extent of the promises that Altman had made to Microsoft for which technologies it would get access to in return for its investment. The terms of the deal didn't align with what they had understood from Altman. If AI safety issues actually arose in OpenAI's models, they worried, those commitments would make it far more difficult, if not impossible, to prevent the models' deployment. Amodei's contingent began to have serious doubts about Altman's honesty. 'We're all pragmatic people,' a person in the group says. 'We're obviously raising money; we're going to do commercial stuff. It might look very reasonable if you're someone who makes loads of deals like Sam, to be like, 'All right, let's make a deal, let's trade a thing, we're going to trade the next thing.' And then if you are someone like me, you're like, 'We're trading a thing we don't fully understand.' It feels like it commits us to an uncomfortable place.'

This was against the backdrop of a growing paranoia over different issues across the company. Within the AI safety contingent, it centered on what they saw as strengthening evidence that powerful misaligned systems could lead to disastrous outcomes. One bizarre experience in particular had left several of them somewhat nervous. In 2019, on a model trained after GPT‑2 with roughly twice the number of parameters, a group of researchers had begun advancing the AI safety work that Amodei had wanted: testing reinforcement learning from human feedback (RLHF) as a way to guide the model toward generating cheerful and positive content and away from anything offensive.

But late one night, a researcher made an update that included a single typo in his code before leaving the RLHF process to run overnight. That typo was an important one: It was a minus sign flipped to a plus sign that made the RLHF process work in reverse, pushing GPT‑2 to generate more offensive content instead of less. By the next morning, the typo had wreaked its havoc, and GPT‑2 was completing every single prompt with extremely lewd and sexually explicit language. It was hilarious—and also concerning. After identifying the error, the researcher pushed a fix to OpenAI's code base with a comment: Let's not make a utility minimizer.

In part fueled by the realization that scaling alone could produce more AI advancements, many employees also worried about what would happen if different companies caught on to OpenAI's secret. 'The secret of how our stuff works can be written on a grain of rice,' they would say to each other, meaning the single word: scale. For the same reason, they worried about powerful capabilities landing in the hands of bad actors. Leadership leaned into this fear, frequently raising the threat of China, Russia, and North Korea and emphasizing the need for AGI development to stay in the hands of a US organization. At times this rankled employees who were not American. During lunches, they would question why it had to be a US organization, remembers a former employee. Why not one from Europe? Why not one from China?
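The sign-flip incident described above has a precise mechanical reading: in RLHF-style training, the policy is updated to raise the log-probability of responses the reward model scores highly, so negating that term makes the same optimizer steer the model toward whatever the reward model penalizes. The following is a self-contained toy sketch of that effect using a generic REINFORCE-style update in PyTorch; the two-response policy, the stand-in reward values, and the training settings are illustrative assumptions, not OpenAI's actual code.

```python
# Toy illustration (not OpenAI's code) of how a single flipped sign reverses RLHF:
# a REINFORCE-style update meant to maximize reward instead minimizes it.
import torch

def train(sign):
    # Policy: logits over two "responses"; index 0 is the content the reward model likes.
    logits = torch.zeros(2, requires_grad=True)
    reward = torch.tensor([1.0, -1.0])        # stand-in reward model: +1 cheerful, -1 offensive
    opt = torch.optim.SGD([logits], lr=0.1)
    for _ in range(200):
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        # Correct objective: maximize E[reward], so loss = -reward * log_prob (sign = -1).
        # The overnight typo was equivalent to sign = +1, turning the policy into a
        # "utility minimizer" that seeks out exactly the content the reward model penalizes.
        loss = sign * reward[action] * dist.log_prob(action)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0)

print("correct sign:", train(-1.0))  # probability mass shifts toward the rewarded response
print("flipped sign:", train(+1.0))  # probability mass shifts toward the penalized response
```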
During these heady discussions philosophizing about the long‑term implications of AI research, many employees returned often to Altman's early analogies between OpenAI and the Manhattan Project. Was OpenAI really building the equivalent of a nuclear weapon? It was a strange contrast to the plucky, idealistic culture it had built thus far as a largely academic organization. On Fridays, employees would kick back after a long week for music and wine nights, unwinding to the soothing sounds of a rotating cast of colleagues playing the office piano late into the night.

The shift in gravity unsettled some people, heightening their anxiety about random and unrelated incidents. Once, a journalist tailgated someone inside the gated parking lot to gain access to the building. Another time, an employee found an unaccounted‑for USB stick, stirring consternation about whether it contained malware files, a common vector of attack, and was some kind of attempt at a cybersecurity breach. After it was examined on an air‑gapped computer, one completely severed from the internet, the USB stick turned out to be nothing. At least twice, Amodei also used an air‑gapped computer to write critical strategy documents, connecting the machine directly to a printer to circulate only physical copies. He was paranoid about state actors stealing OpenAI's secrets and building their own powerful AI models for malicious purposes. 'No one was prepared for this responsibility,' one employee remembers. 'It kept people up at night.'

Altman himself was paranoid about people leaking information. He privately worried about Neuralink staff, with whom OpenAI continued to share an office, now with more unease after Elon Musk's departure. Altman worried, too, about Musk, who wielded an extensive security apparatus including personal drivers and bodyguards. Keenly aware of the capability difference, Altman at one point secretly commissioned an electronic countersurveillance audit in an attempt to scan the office for any bugs that Musk may have left to spy on OpenAI.

To employees, Altman used the specter of US adversaries advancing AI research faster than OpenAI to rationalize why the company needed to be less and less open while working as fast as possible. 'We must hold ourselves responsible for a good outcome for the world,' he wrote in his vision document. 'On the other hand, if an authoritarian government builds AGI before we do and misuses it, we will have also failed at our mission—we almost certainly have to make rapid technical progress in order to succeed at our mission.'

In the author's note at the beginning of the book, Karen Hao notes, 'I reached out to all of the key figures and companies that are described in this book to seek interviews and comment. OpenAI and Sam Altman chose not to cooperate.' Hao also attempted to reach Elon Musk for comment and got no response.

Excerpt adapted from Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao. Published by arrangement with Penguin Press. Copyright © 2025 by Karen Hao.

OpenAI seeks to make its upcoming 'open' AI model best-in-class

Yahoo

24-04-2025

Toward the end of March, OpenAI said it intended to release its first "open" language model since GPT‑2 sometime this year. Now, details about that model are beginning to trickle out from the company's sessions with the AI developer community.

Aidan Clark, OpenAI's VP of research, is leading development of the open model, which is in the very early stages, sources familiar with the proceedings told TechCrunch. The company is aiming to release the model — a reasoning model along the lines of its o-series models — in early summer, and wants to make sure it tops benchmarks versus other open reasoning models.

OpenAI is exploring a highly permissive license for the model with few usage or commercial restrictions, the sources said. Open models like Llama and Google's Gemma have been criticized by some in the community for imposing onerous requirements — criticisms that OpenAI is seemingly seeking to avoid.

The ChatGPT maker is facing increasing pressure from rivals, such as Chinese AI lab DeepSeek, that have adopted an open approach to launching models. In contrast to OpenAI's strategy, these "open" competitors make their models available to the AI community for experimentation and, in some cases, commercialization. It has proven to be a wildly successful strategy for some outfits. Meta, which has invested heavily in its Llama family of open AI models, said in early March that Llama has racked up over 1 billion downloads. Meanwhile, DeepSeek has quickly amassed a large worldwide user base and attracted the attention of investors.

Sources told TechCrunch that OpenAI intends for its open model, which will be "text in, text out," to run on high-end consumer hardware, and possibly allow developers to toggle its "reasoning" on or off, similar to reasoning models recently released by Anthropic and others. If the launch is well-received, OpenAI may follow up with additional models — potentially smaller models as well.

OpenAI's CEO Sam Altman has previously said he thinks OpenAI has been on the wrong side of history when it comes to open-sourcing its technologies. "[I personally think we need to] figure out a different open source strategy," Altman said during a Reddit Q&A in January. "Not everyone at OpenAI shares this view, and it's also not our current highest priority […] We will produce better models [going forward], but we will maintain less of a lead than we did in previous years."

Altman has also said OpenAI's upcoming open model will be thoroughly red-teamed and evaluated for safety. Sources tell TechCrunch that the company intends to release a model card for the model — a thorough technical report showing the results of OpenAI's internal and external benchmarking and safety testing. "[B]efore release, we will evaluate this model according [to] our preparedness framework, like we would for any other model," Altman said in a post on X last month. "[A]nd we will do extra work given that we know this model will be modified post-release."

OpenAI has raised the ire of some AI ethicists for reportedly rushing the safety testing of its recent models, and for failing to release model cards for others. Altman also stands accused of misleading OpenAI executives about model safety reviews prior to his brief ouster in November 2023.

We've reached out to OpenAI for comment and will update this piece if we hear back.

OpenAI plans to release open-weight language model in coming months

Reuters

31-03-2025

March 31 (Reuters) - OpenAI is planning to release its first open-weight language model with reasoning capabilities since GPT‑2 in the coming months, CEO Sam Altman said on Monday. The company will speak with developers about how to make it useful, Altman said in a post on X. An open-weight language model is one whose trained weights are publicly released, allowing them to be adjusted or fine-tuned for new data or tasks. "We still have some decisions to make, so we are hosting developer events to gather feedback and later play with early prototypes," Altman said.
