GPT-5 is supposed to be nicer now

TechCrunch · 8 hours ago
In Brief
OpenAI announced late Friday that it's updating its latest model to be 'warmer and friendlier.'
The company recently launched the much-anticipated GPT-5 in a process that CEO Sam Altman admitted was 'a little more bumpy than we'd hoped for,' with some users complaining that they preferred the previous model, GPT-4o.
OpenAI is trying to address some of those complaints with this update, with changes that it says are 'subtle' but will make GPT-5 'more approachable now.'
'You'll notice small, genuine touches like 'Good question' or 'Great start,' not flattery,' the company wrote in a social media post. 'Internal tests show no rise in sycophancy compared to the previous GPT-5 personality.'
At a dinner this week with journalists, OpenAI executives tried to focus on the company's plans beyond GPT-5, but as Max Zeff reports, the rocky launch was the elephant in the room. As far as model friendliness goes, VP Nick Turley said that GPT-5 was 'just very to the point,' but that the new update would — as now announced — make it feel warmer.

Related Articles

White House AI czar David Sacks says 'AI psychosis' is similar to the 'moral panic' of social media's early days

Yahoo · 4 hours ago

The White House AI advisor discussed "AI psychosis" on a recent podcast. David Sacks said he doubted the validity of the concept. He compared it to the "moral panic" that surrounded earlier tech leaps, like social media.

AI can create a diet plan, organize a calendar, and provide answers to an endless variety of burning questions. Can it also cause a psychiatric breakdown? David Sacks, the White House official spearheading America's AI policies, doesn't think so. President Donald Trump's AI and crypto czar discussed "AI psychosis" during an episode of the "All-In Podcast" published Friday.

While most people engage with chatbots without a problem, a small number of users say the bots have encouraged delusions and other concerning behavior. For some, ChatGPT serves as an alternative to professional therapists. A psychiatrist earlier told Business Insider that some of his patients exhibiting what's been described as "AI psychosis," a nonclinical term, used the technology before experiencing mental health issues, "but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities."

During the podcast, Sacks doubted the whole concept of "AI psychosis." "I mean, what are we talking about here? People doing too much research?" he asked. "This feels like the moral panic that was created over social media, but updated for AI." Sacks then referred to a recent article featuring a psychiatrist, who said they didn't believe using a chatbot inherently induced "AI psychosis" if there aren't other risk factors — including social and genetic — involved. "In other words, this is just a manifestation or outlet for pre-existing problems," Sacks said.

"I think it's fair to say we're in the midst of a mental health crisis in this country," Sacks said, attributing the crisis instead to the COVID-19 pandemic and related lockdowns. "That's what seems to have triggered a lot of these mental health declines," he said.

After several reports of users suffering mental breaks while using ChatGPT, OpenAI CEO Sam Altman addressed the issue on X after the company rolled out the highly anticipated GPT-5. "People have used technology, including AI, in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," Altman wrote. "Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot."

Earlier this month, OpenAI introduced safeguards in ChatGPT, including a prompt encouraging users to take breaks after long conversations with the chatbot. The update will also change how the chatbot responds to users asking about personal challenges.

Read the original article on Business Insider

I'm a software engineer who spent nearly 7 years at Twitter before joining OpenAI. Here's how I got hired and why I love it.

Yahoo · 5 hours ago

Jigar Bhati transitioned from Twitter to OpenAI for a startup-like environment. He said his work experience in infrastructure and scaling helped him get hired at OpenAI. Bhati said OpenAI still feels like a startup with a focus on productivity and collaboration.

This is an as-told-to essay based on a conversation with Jigar Bhati, a member of technical staff at OpenAI. He's worked at the company since 2023 and previously worked as a software engineer at Twitter. This story has been edited for length and clarity.

I was working at Twitter for almost seven years and was looking for a change in general. Twitter was a big tech company, and it was handling millions of users already. I wanted to join a company that was a bit smaller scale and work in a startup-like environment.

When ChatGPT launched, software engineers were in awe. So I was already using it before joining OpenAI. I was fascinated by the product itself. There was definitely that connection with the product that, as a software engineer, you often don't feel. I could also see there was a huge opportunity and huge impact to be created as part of joining OpenAI in general. And if I look back at the past two years of my journey, I think that's mostly true.

It's been exciting seeing the user base grow over two years. I got to work on and lead some critical projects, which was really great for my own professional development. And you're always working with smart people. OpenAI hires probably the best talent out there. You're contributing while also learning from different people, so there's exponential growth in your professional career as well.

Building up the right work experience is key

My career at Twitter and before that had been focused on infrastructure capabilities and handling large-scale distributed systems, which was something the team was looking for when I interviewed with OpenAI. When I joined OpenAI, there were a lot of challenges with respect to scaling infrastructure. So my expertise has helped steer some of those challenges in the right direction. Having that relevant experience helps when you're interviewing for any team. So one of the best things is to find the right team that has the closest match for you.

Having public references, for me, helped. I had some discussions and conference talks and met a lot of people as part of that conference networking. That also helps you stay on top of state-of-the-art technological advancements, which helps in whatever work stream you're working with.

I think OpenAI is probably one of the fastest-growing companies in the world, so you really need to move fast as part of that environment. You're shipping, or delivering, daily or weekly. And as part of that, I think you really need to have that cultural fit with the company. When you're interviewing, I think you need to be passionate about working at OpenAI and solving hard problems.

In general for professionals, it's important to have a good career trajectory so that they can showcase how they have solved real, hard problems at a large scale. I think that goes a long way. Starting your career at a startup or at a larger tech company is fine either way. It's up to the students' interests and cultural fit as well. OpenAI is not just a place for experienced engineers. OpenAI also hires new grads and interns. I've definitely seen people entering as part of that pipeline. And it's amazing to see how they enjoy working at OpenAI and how they come up with new ideas and new capabilities and new suggestions.

But whichever place you end up in, I think it's important to have good growth prospects professionally and also for you to ship products. At any company you can create impact for the company and yourself, and be able to have that career trajectory.

OpenAI still feels like a startup

One of the most exciting things is that I think OpenAI still operates like it operated two years ago. It still feels like a startup, even though we may have scaled the number of engineers, the number of products, and the size of the user base. It's still very much a startup environment, and there's a big push for productivity and collaboration. The velocity and the productivity you get working at OpenAI are definitely much higher than at some of the other companies I've worked with. That makes things really exciting because you get to ship products on a continuous basis, and you get into that habit of shipping daily rather than weekly or monthly or yearly.

It feels great to be working in AI right now. In addition to having a connection with the product, it makes things very interesting when you are able to work from within the company, shaping the direction of a product that will be used by the entire world. With GPT-5, for instance, we had early access to it and could steer the direction of its launch. Every engineer is empowered to provide feedback that will help improve the model, add new capabilities, and make it more usable and user-friendly for all 700 million users out there. I think that's pretty amazing and speaks to the kind of impact you can make as part of the company.

Read the original article on Business Insider
