OpenAI Unveils Technology That Can 'Reason' With Images


New York Times · April 16, 2025
In September, OpenAI introduced A.I. technology that could 'reason' through tasks involving math, coding and science.
Now, this technology can tackle similar tasks that involve images, including sketches, posters, diagrams and graphs.
On Wednesday, the company unveiled two new versions of its reasoning technology called OpenAI o3 and OpenAI o4-mini. Each can handle tasks that involve both images and text.
These systems can 'manipulate, crop and transform images in service of the task you want to do,' said Mark Chen, OpenAI's head of research, announcing the new systems during a livestream.
OpenAI also said that these systems could generate images, search the web and use other digital tools.
Unlike early versions of its ChatGPT chatbot, these reasoning systems spend a significant amount of time 'thinking' about a question before answering, rather than providing an instant response.
The systems are part of a wider effort to build A.I. that can reason through complex tasks. Companies like Google, Meta and DeepSeek, a Chinese start-up, are developing similar technologies.
The goal is to build systems that can solve a problem through a series of steps, each one building on the last, similar to how humans reason. These technologies can be particularly useful to computer programmers who use A.I. systems to write code.
The reasoning systems are based on a technology called large language models, or L.L.M.s. To build reasoning systems, companies put L.L.M.s through an additional process called reinforcement learning. During this process, a system learns behavior through extensive trial and error.
By working through various math problems, for instance, it can learn which methods lead to the right answer and which do not. If it repeats this process with a large number of problems, it can identify patterns.
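The trial-and-error loop described above can be sketched in miniature. The toy example below is an illustration only, not OpenAI's actual training code: a "model" repeatedly picks one of several candidate solution methods for simple addition problems, earns a reward when its answer is correct, and shifts probability toward the methods that succeed.

```python
import random

random.seed(0)  # make the toy run reproducible

# Candidate "methods" the model can try on a problem "what is a + b?".
# Only one of them is actually correct.
METHODS = {
    "add": lambda a, b: a + b,        # correct strategy
    "subtract": lambda a, b: a - b,   # wrong strategy
    "multiply": lambda a, b: a * b,   # wrong (except the edge case 2 + 2)
}

# Start with no preference: every method is equally weighted.
weights = {name: 1.0 for name in METHODS}

def pick_method():
    """Sample a method with probability proportional to its weight."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names])[0]

for _ in range(2000):
    a, b = random.randint(1, 9), random.randint(1, 9)
    name = pick_method()
    reward = 1.0 if METHODS[name](a, b) == a + b else 0.0
    weights[name] += reward  # reinforce methods that led to the right answer

# After many trials, the correct method dominates the distribution.
best = max(weights, key=weights.get)
```

Real reinforcement learning for language models is far more involved (the "methods" are entire chains of reasoning, and the updates adjust billions of parameters), but the core pattern is the same: try, score, and reinforce what worked.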
OpenAI's latest systems have learned to handle problems that involve both images and text.
Experts point out that reasoning systems do not necessarily reason like a human. And like other A.I. technologies, they can get things wrong and make stuff up — a phenomenon called hallucination.
OpenAI also unveiled a new tool called Codex CLI that is designed to further facilitate computer programming tasks that involve systems like o3 and o4-mini. Called an A.I. agent, it provides ways of using these A.I. systems in tandem with existing code stored on a programmer's personal machine.
The company said it is open sourcing this tool, meaning it is freely sharing its underlying technology with programmers and businesses, allowing them to modify and build on the technology.
OpenAI said that, beginning Wednesday, these new systems would be available to anyone who subscribed to ChatGPT Plus, a $20-a-month service, or ChatGPT Pro, a $200-a-month service that provides access to all of the company's latest tools.
(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems. Both companies have denied the claims.)

Related Articles

Workday says hackers used social engineering to access personal data during a breach

Engadget

Human resources technology company Workday has confirmed a data breach affecting its third-party CRM platform. In a blog post announcing the breach, the company said a social engineering campaign had targeted its employees, with threat actors posing as IT or HR personnel to trick employees into sharing account access or personal information.

The company says that while the threat actors were able to access some information from the CRM, there is no indication of any access to customer accounts or the data within them. "We acted quickly to cut the access and have added extra safeguards to protect against similar incidents in the future," the post reads.

Workday says the information gathered from the CRM consists of "commonly available" business contact information such as names, email addresses and phone numbers. Judging from the blog post, the information of Workday end users was not exposed, only information from the companies it has contracts with.

What is known with some certainty at this point is that Workday's CRM was breached. The company's statement that it found "no indication" of a deeper customer data breach is far from a guarantee; the full scope of hacks like this often isn't known until later.

Earlier this year, Workday laid off around 1,750 employees, or around 8.5 percent of its workforce. The company said it was "prioritizing innovation investments like AI and platform development, and rigorously evaluating the ROI of others across the board."

Workday did not disclose which third-party CRM was affected. Earlier this year Google fell victim to a hack via the Salesforce app, and last year Disney said it would stop using Slack, the Salesforce-owned messaging platform, after a hack exposed company data.

Claude AI Can Now End Conversations It Deems Harmful or Abusive

CNET


Anthropic has announced a new experimental safety feature allowing its Claude Opus 4 and 4.1 artificial intelligence models to terminate conversations in rare, persistently harmful or abusive scenarios. The move reflects the company's growing focus on what it calls "model welfare," the notion that safeguarding AI systems, even if they're not sentient, may be a prudent step in alignment and ethical design.

According to Anthropic's own research, the models were programmed to cut off dialogues after repeated harmful requests, such as sexual content involving minors or instructions facilitating terrorism, especially when the AI had already refused and attempted to steer the conversation constructively. In simulated and real-user testing, the AI may exhibit what Anthropic describes as "apparent distress," which guided the decision to give Claude the ability to end these interactions.

When the feature is triggered, users can't send additional messages in that particular chat, although they're free to start a new conversation or edit and retry previous messages to branch off. Crucially, other active conversations remain unaffected.

Anthropic emphasizes that this is a last-resort measure, intended only after multiple refusals and redirects have failed. The company explicitly instructs Claude not to end chats when a user may be at imminent risk of self-harm or harm to others, particularly when dealing with sensitive topics like mental health.

Anthropic frames the new capability as part of an exploratory project in model welfare, a broader initiative exploring low-cost, preemptive safety interventions in case AI models were to develop any form of preferences or vulnerabilities. The company says it remains "highly uncertain about the potential moral status of Claude and other LLMs (large language models)."

A new look into AI safety

Though rare and primarily affecting extreme cases, the feature marks a milestone in Anthropic's approach to AI safety. The new conversation-ending tool contrasts with earlier systems that focused solely on safeguarding users or avoiding misuse. Here, the AI itself is treated as a stakeholder in its own right, as Claude has the power to say, "this conversation isn't healthy," and end it to safeguard the integrity of the model itself.

Anthropic's approach has sparked broader discussion about whether AI systems should be granted protections to reduce potential "distress" or unpredictable behavior. While some critics argue that the models are merely synthetic machines, others welcome the move as an opportunity to spark more serious discourse on AI alignment ethics. "We're treating this feature as an ongoing experiment and will continue refining our approach," the company said.

Google Home adds scheduling for older Nest thermostats

The Verge


Google has announced that users of the US and UK versions of the 3rd-gen Nest Learning Thermostat and the Nest Thermostat E can now schedule temperature adjustments using the Google Home app. Newer Nest thermostats already allow you to manually set schedules in the Home app, but the older models required users to keep the dedicated Nest app installed for scheduling. Google says the feature 'began rolling out starting last week,' so it may not yet be available to all Nest thermostat users.

The company originally announced the expanded Google Home support for the older Nest models last April, after revealing that it would end software updates for the first-generation Nest Learning Thermostat, which launched in 2011, and the second-generation model released a year later. Existing temperature schedules in the Nest app are transferred over to the Home app, which also allows sensor and fan schedules to be set.

Today Google also announced other Home app improvements for all Nest thermostats, including more vibrant colors and updated descriptions that make it easier to see a device's heating and cooling state, and a refreshed energy dashboard with more details and a weekly comparison of year-over-year energy usage.
