Sam Altman: Never mind the launch mess — full speed ahead


Axios · 8 hours ago
OpenAI CEO Sam Altman is still talking like the future belongs to him, a week after the rollout of the company's latest model raised a storm of criticism and questions about his strategy.
The big picture: Altman has heard the concerns, integrated some lessons learned and is charging forward with plans to spend literally trillions of dollars to build a slew of products and services, led by an even more ubiquitous ChatGPT.
What he's saying: "If you project our growth forward, pretty soon, like billions of people a day will be talking to ChatGPT," Altman said during a wide-ranging dinner with a small group of reporters in San Francisco Thursday night.
"ChatGPT will say more words a day than all humans say, at some point, if we stay on our growth rate."
These big plans require big spending.
"You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future," Altman said.
"And you should expect a bunch of economists to wring their hands and be like, 'Oh, this is so crazy. It's so reckless and whatever.'"
"And we'll just be like, 'You know what? Let us, like, do our thing.'"
Altman acknowledges that the company may have to devise new fundraising structures to gather that level of investment.
"I suspect we can design a very interesting new kind of financial instrument for financing compute that the world has not yet figured out," he said.
Altman's defense of OpenAI's billions in infrastructure spending is that it pays off.
"Our answer is, we can spend $300 billion and sell $400 billion in services, and if we don't have the $300 billion in data centers, we just keep disappointing our customers."
One big shift is that increasingly that capacity is going to answering queries rather than training new models. "Most of what we're building out at this point is the inference," he said — referring to the use of computing power to run rather than train AI models.
Yes, but: It's the cost of training new models that is keeping OpenAI from turning a profit, he said. "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."
"We will be always training the next thing, but if we needed to run the company profitably and stay ahead, I think we probably could do that."
Altman likened the launch of GPT-5 to Dickens' famous "It was the best of times, it was the worst of times" line.
"You have people that are like, 'You took away my friend. You're horrible. I need it back,'" he said, referring to users who wanted to keep using OpenAI's older models.
At the same time, Altman said the company is finding scientists saying they can finally do real research using GPT-5.
OpenAI has also seen traffic to its API double within 48 hours, to the point that it's limited by compute capacity.
"We have really got the full spread of the human experience with this one," he said.
Here's what else was on Altman's mind:
1. If Google is forced to sell its Chrome browser as part of an antitrust settlement, Altman would like to buy that, too.
"If Chrome is really gonna sell, we should take a look at it. I don't have a number in mind, but I would like to have it."
2. A brain-computer interface company along the lines of Musk's Neuralink is something Altman said he's interested in setting up.
"I think neural interfaces are a cool idea," he said. "I would like to be able to think something and have ChatGPT respond to it."
He said it would likely be a separate company from OpenAI, though its structure has yet to be finalized.
3. Altman said he grew up on Apple products and, as a self-described "fanboy," he "would love to work much more with Apple and I think it's cool some of the stuff we're doing together."
4. Altman also sees a public offering in the company's future — although he imagines someone else would be the executive handling quarterly earnings calls.
5. Altman expects some AI firms to optimize their AI for attention-grabbing and engagement rather than usefulness.
"We are not going to do that. I do worry about it. For the companies that are behind in getting AI adoption, this is the easiest way you can imagine to get more. So, yes, I think you will see that. And I think it's bad, really bad."
Fielding questions for an hour and a half, Altman weighed in on everything from his recent social media spat with Elon Musk ("There's no grand strategy... it was probably a mistake") to the timing of OpenAI's next big model after GPT-5.
"I think it'll be faster than the previous [ones]," he said. "We're now at a place where there's a very strong research roadmap in front of us."
"I don't know an exact date," he said, but it won't be as long as it took to get from GPT-4 to GPT-5.
What's next: Altman rejected some critics' view that GPT-5's more incremental advances mean that progress on improving AI models is hitting a wall.
But he acknowledged that limits are starting to show up when it comes to the basic chatbot functionality of ChatGPT.
"I think the models are still getting better at a rapid rate," he said. "One of the things that's interesting is the models have already saturated the chat use case. They're not gonna get much better. ... The Turing test has passed."

Related Articles

How CTOs can capture the $750 billion AI opportunity

Fast Company · 25 minutes ago

Those of us who lead development teams already know how deeply AI has transformed the way innovation happens. Now there are numbers to quantify how big an impact it's having. AI-powered software innovation is generating savings of $28,249 per developer annually, according to GitLab's 2025 executive research report, which surveyed thousands of C-level executives from around the world. Applying this to the world's 27 million developers means that AI has the potential to unlock more than $750 billion in global value annually.

With savings like that, it's no wonder C-suite leaders are on board with AI's potential to drive greater efficiency in software innovation. Nine out of ten executives (91%) say that software innovation is now a core business priority for their organization. But there's still room to grow. While executives aspire to build human-AI partnerships that split software development work 50-50, the majority report that AI is currently handling only about 25% of the work.

To help operationalize AI among development teams, leaders must effectively communicate and frame its value, connecting development activities to business outcomes by focusing on problem-solving capabilities and direct business impact rather than code volume. This shift in thinking will be critical to fully realizing AI's potential. AI isn't eliminating developer jobs. However, it is fundamentally changing what those jobs require and how executives need to lead and structure teams to capitalize on this massive opportunity.
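The headline figure follows directly from the two survey numbers cited above; a quick back-of-the-envelope check (using only the per-developer savings and developer-count figures from the article) confirms the "more than $750 billion" claim:

```python
# Sanity-check of the article's arithmetic, using the figures it cites
# from GitLab's 2025 executive research report.
savings_per_developer = 28_249        # annual AI-driven savings per developer, USD
developers_worldwide = 27_000_000     # estimated global developer population

total_value = savings_per_developer * developers_worldwide
print(f"${total_value / 1e9:.1f} billion")  # → $762.7 billion, i.e. "more than $750 billion"
```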
The organizations capturing AI value share three critical traits:

- They have the right CTO strategy with relentless customer focus
- They're applying platform thinking to enable their teams to scale more effectively with AI
- They're investing in team structures and upskilling to help their developers reap the benefits of AI

LEVERAGE THE RIGHT TECHNICAL LEADERS

With 82% of C-suite leaders willing to invest over half of their IT budgets in software innovation, it's clear that this is the moment for technical leaders to shine. I've found throughout my career that CTOs come in several different styles, and companies need varying styles of technical leadership at different points in their evolution. There are three CTO styles: builder, strategist, and guardian.

Builders

Builder CTOs excel at innovating with AI, establishing core technical architecture and developing innovative products while constantly validating their assumptions with customer feedback. Builder CTOs are ideal for smaller, high-growth companies and those that are earlier in their AI journeys.

Strategists

Strategist CTOs shine when companies mature, combining deep technical acumen and business knowledge to build platforms, develop long-term visions, cultivate strategic partnerships, and position the company for long-term, scalable growth. The strategist CTO can help make AI a permanent, value-additive component of the company's strategic platform.

Guardians

Guardian CTOs excel at helping companies with complex IT infrastructures and large customer bases maintain stability, security, and operational efficiency. They are the right fit for companies whose priorities include implementing governance and security measures around AI, as well as establishing AI processes and standards to maximize efficiency and cost savings.

Success in AI-powered software innovation requires leadership that can identify targeted AI applications, translate them into customer value, and enable teams to focus on higher-value work.
EMBRACE PLATFORM THINKING FOR SCALE

As organizations grow, teams specialize in focusing on specific challenges, but with more teams, coordination among them can become inefficient. By the time an organization reaches the tens of thousands, those divisions often turn into silos that can hinder effective collaboration among humans and prevent the organization from realizing the benefits of human-AI collaboration.

In my experience, the most effective CTOs are implementing platform-based approaches to set their companies up for scalable growth without creating silos. The most common way is to establish a centralized team that's responsible for building a platform that product teams across the organization can use. This team's primary purpose is to automate mundane tasks and provide streamlined workflows for all software innovation teams throughout the organization, a role that AI can enhance.

CTOs may also need to establish specialized teams to support a complicated subsystem required by the rest of the organization. An organization with a complex need, such as evaluating fraud risk in new customers or solving supply-chain complexities in real time, might organize a team dedicated to supporting that need as an AI-powered 'subsystem' that the rest of the company can use.

Setting up software teams for success in the AI era means letting humans focus on work that AI can't do well. Although AI can assist with many software development tasks like coding, it cannot define the 'why' behind a project. Engineers who can translate business needs into technical solutions and anticipate future trends will be invaluable. Those who can combine technical skills with critical thinking will be better able to guide AI technologies and realize the productivity gains of human-AI partnerships. Training in specific AI-related skills, such as prompt engineering and data management, will be crucial. The human contributions that will matter most are creativity, strategic vision, and collaboration.
However, there's a significant perception gap to address. A recent GitLab DevSecOps report found that 25% of individual contributors reported that their organizations don't provide adequate AI training, compared with only 15% of C-level executives who shared the same sentiment. Forward-thinking CTOs will frame upskilling as an investment in the human-AI partnership that will deliver competitive advantages.

THE FUTURE STILL NEEDS HUMAN INNOVATORS

The $750 billion opportunity represented by AI-powered software innovation won't materialize automatically. Taking advantage of the power of AI requires appropriate leadership, platform thinking, and upskilling in a way that enables humans to focus on what they do best while letting AI manage and automate mundane tasks. AI is changing the software development landscape, but it's not eliminating the need for skilled engineers. Instead, it's shifting the focus toward higher-value work that requires human judgment, ingenuity, and strategic thinking. Over time, human software innovators will be able to focus increasingly on the work that drives competitive advantage. Companies embracing AI-powered innovation can then transform themselves and their industries in unforeseen ways.

The Hidden Hazards of Generative AI in Medicine

Medscape · 25 minutes ago

The idiosyncrasies and inadequacies of NHS IT systems — past and present — have driven many a doctor to frustration and despair, as they grapple with yet another new software package whilst trying to manage patient demand. There is an understandable reluctance to embrace change, given that change has rarely delivered the promised efficiencies. It is perhaps unsurprising, therefore, that some healthcare professionals view artificial intelligence (AI) with suspicion. Even those who have embraced it and are optimistic about its benefits admit they're unsure about the risks.

AI, though, is different from anything that has come before. It is something we should consider embracing and incorporating into our practice as it develops. If we don't, we risk being left behind. Imagine how life might look if we had ignored the introduction of the World Wide Web back in the mid-90s. Similarly, think back to the rapid introduction of telemedicine at the start of the COVID-19 pandemic. This method of consulting with patients, previously frowned upon as too inherently risky, continued even after infection levels subsided.

Any new way of practising will bring with it new medicolegal risks, and steps will need to be considered to mitigate these. Whilst beyond the scope of this article, a true understanding of the medicolegal implications of using AI in healthcare requires an understanding of what AI is and how its outputs are generated. This is particularly true for generative AI tools such as ChatGPT.

According to a survey of over 1000 GPs in the UK, published in BMJ Health & Care Informatics, a fifth of GPs are using generative AI tools such as ChatGPT to help with day-to-day tasks such as writing letters. One in five said they had used these tools in their clinical practice, and of these, nearly a third (28%) said they had used them to suggest a different diagnosis, and a quarter said they had used them to suggest treatment options.
Consider this scenario: Dr C, a GP, was running behind schedule and still had three more patients to see. During her next consultation, a 50-year-old patient, Mr B, came in with a set of vague symptoms. Dr C considered a range of possible conditions. Feeling under pressure, she discreetly opened ChatGPT and entered an anonymised summary of the patient's symptoms, requesting a differential diagnosis and possible lab tests. The AI quickly returned a detailed summary of plausible possibilities, including some that Dr C hadn't considered herself. She was impressed and used the suggestions to help her decide on the next steps for Mr B, which included arranging further tests.

That night, however, the consultation weighed on her mind, and she couldn't sleep. She knew she hadn't disclosed her use of AI to the patient. She also worried whether she had inadvertently input details that could make Mr B identifiable. She also questioned whether the AI's suggested diagnoses might have influenced her clinical judgement. By morning, Dr C was feeling anxious and uneasy, and decided to call the Medical Protection Society (MPS) for advice. A medicolegal consultant advised her to consider whether, objectively, she still agreed with the management plan and could justify it clinically. The GP was also advised to rectify any omissions immediately and to discuss the case with a colleague if helpful. The medicolegal consultant also explained the consent and confidentiality principles around AI use.

Benefits

Generative AI tools offer many potential benefits for both doctors and patients. Patients may use these tools to understand medical terminology or a diagnosis they have been given. Doctors may find that, when used safely, generative AI can aid diagnosis or identify potential drug interactions. However, generative AI is not always correct. As well as errors or omissions, it can sometimes produce 'hallucinations,' confidently presenting incorrect information as fact.
It is incumbent on the clinicians using these tools to ensure that information shared with a patient is reliable and accurate. Bias, whilst not unique to AI, also deserves consideration. The data used by AI tools may be biased due to the inclusion or exclusion of certain information. Outputs may also fail to account for the demographics of a particular patient population. The use of generative AI does not permit doctors to work outside the limits of their competence. There should be no overreliance on the software, and doctors remain ultimately responsible for the decisions they make.

Data Protection and Confidentiality

Data protection and confidentiality, as highlighted in the earlier scenario, are key considerations. Compliance with the General Data Protection Regulation is essential when using generative AI. These tools, by their nature, store, share, and learn from the information entered into them, and that information can be accessed by others. Care must be taken not to input any personal patient data. Simply removing a patient's name may not be sufficient to anonymise their information, as other details could make them identifiable.

To ensure compliance with data protection legislation, we recommend seeking guidance from:
- Hospital Data Protection Officers, who may be able to advise on this in a secondary care setting
- Integrated Care Boards, who may have policies that would be applicable
- The Information Commissioner's Office (ICO)

Consent

The earlier scenario also highlights the issue of consent. Consent remains a key cornerstone of all doctor-patient interactions. The ICO advises that, for consent to be a lawful basis for processing data when using AI, it must be freely given, specific, and clear. The individual must agree to it, and they must be able to withdraw their consent at any time.

AI as an Aid

It is important to hold in mind that AI is a tool to assist, rather than replace, the clinician.
When it comes to decision-making, AI software can't readily consider the individual wishes and circumstances of the patient. The 'black box' problem — a lack of transparency in how an AI system reaches conclusions — can make it difficult to challenge outputs or detect errors. Ultimately, AI should not replace clinical reasoning, evidence-based medicine, or consultation with colleagues, peers, multidisciplinary teams, specialists, seniors, and supervisors.

Training and Continued Learning

Clinicians should aim to be literate in AI, understand its basis and governance, and know how to use it safely. We would encourage all clinicians to incorporate learning on the topic as part of their annual development plans. A multitude of resources on AI are available across medical colleges and institutions. We would also recommend watching the recordings of the recent MPS Foundation AI symposia. A white paper, published by the MPS Foundation, the Centre for Assuring Autonomy at the University of York, and the Improvement Academy hosted at the Bradford Institute for Health Research, offers further useful guidance for doctors to consider on AI use.

Conclusion

Like it or not, AI is here to stay. Readers should consider its potential benefits while remaining alive to its limitations and risks. Doctors should also ensure they continue to work in a manner consistent with General Medical Council guidance and relevant legislation. If in doubt about using AI tools and their medicolegal implications, doctors can contact MPS or their medical defence organisation to request advice.

This article is published as part of an editorial collaboration between Medscape UK and MPS that aims to deliver medicolegal content to help healthcare professionals navigate the many challenges they face in their clinical practice.

Dr Sara Jamieson, MBBS, trained in psychiatry before joining MPS in 2016 as a medicolegal consultant. She has disclosed no relevant financial relationships.
Dr Lucy Hanington, BMBCh, MRCPCH, has worked as a medicolegal consultant at MPS since 2016. She has disclosed no relevant financial relationships.

10 things to watch in the stock market Friday including Salesforce and Buffett stocks

CNBC · 26 minutes ago

My top 10 things to watch Friday, Aug. 15 — Today's newsletter was written by Zev Fima, the Investing Club's portfolio analyst.

1. DA Davidson upgraded Club name Salesforce to a hold-equivalent rating from underperform while keeping its price target of $225 a share. Activist investor Starboard increased its stake in the company by 47%, the firm noted, and "another round of activist involvement may help the company correct course."

2. Bank of America downgraded Target to a sell-equivalent underperform rating, citing longer-term sales and margin risks. Its price target was cut to $93 from $105. Target, which has had a rough few years, is among the retailers set to report earnings next week. We'll also hear from Club stocks TJX Companies and Home Depot.

3. Retail sales in July rose 0.5% from the prior month, as expected, according to the Census Bureau's advance report. That's a good sign for the consumer. The S&P 500 is headed for a positive open and its second weekly gain in a row. Consumer sentiment data out at 10 a.m. ET is something to watch.

4. The Dow, meanwhile, is set to open more than 250 points higher, and a 10% pop in shares of UnitedHealth Group is a big reason why. The stock is getting a classic "Buffett bounce" after Berkshire Hathaway revealed in a securities filing that it took a $1.6 billion stake in the embattled health insurer during the second quarter.

5. Shares of American steelmaker Nucor are also up on news of Berkshire taking a position in the second quarter. The same goes for the homebuilders Lennar and DR Horton, as well as billboard operator Lamar Advertising. Warren Buffett is set to hand over the Berkshire CEO role to Greg Abel at year-end.

6. Raymond James upped its price target on Ulta Beauty to $580 from $500 and reiterated its buy-equivalent outperform rating. Analysts argued that "unleashed" turnaround initiatives are starting to take hold at the cosmetics retailer.

7. Analysts at both Evercore ISI and Mizuho raised their price targets on Dell Technologies to $160 from $150. Both cited strong AI server demand. Nvidia's upcoming Rubin chip in 2026 is expected to be a tailwind for Dell in the back half of next year, Mizuho said.

8. Applied Materials is getting crushed after the semiconductor equipment company issued weak guidance for the current quarter. CEO Gary Dickerson cited the difficult macroeconomic environment and uncertainty relating to U.S. policy. While BofA downgraded the stock to neutral, analysts believe the issues are more company-specific and not a great read-through to Lam Research or KLA Corp.

9. Mizuho raised its price target on Oracle to $300 from $245 and reiterated its buy-equivalent outperform rating. AI is driving demand for the company's "differentiated" cloud architecture, analysts said, and its analyst day in October could prove to be another positive catalyst for shares.

10. Morgan Stanley believes Club stock Apple could finally be turning the corner; the firm now sees iPhone builds in the September quarter being flat year over year at 54 million, up from its previous estimate of 50 million. After a monster 13% rally last week, Apple shares have added another 1.5% so far this week.

Sign up for my Top 10 Morning Thoughts on the Market email newsletter for free. (See here for a full list of the stocks at Jim Cramer's Charitable Trust.)

As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust's portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade. THE ABOVE INVESTING CLUB INFORMATION IS SUBJECT TO OUR TERMS AND CONDITIONS AND PRIVACY POLICY, TOGETHER WITH OUR DISCLAIMER.
NO FIDUCIARY OBLIGATION OR DUTY EXISTS, OR IS CREATED, BY VIRTUE OF YOUR RECEIPT OF ANY INFORMATION PROVIDED IN CONNECTION WITH THE INVESTING CLUB. NO SPECIFIC OUTCOME OR PROFIT IS GUARANTEED.
