logo
OpenAI Unveils A.I. Technology for ‘Natural Conversation'

OpenAI Unveils A.I. Technology for ‘Natural Conversation'

New York Times27-02-2025

When OpenAI started giving private demonstrations of its new GPT-4 technology in late 2022, its skills shocked even the most experienced A.I. researchers. It could answer questions, write poetry and generate computer code in ways that seemed far ahead of its time.
More than two years later, OpenAI has released its successor: GPT-4.5. The new technology signifies the end of an era. OpenAI said GPT-4.5 would be the last version of its chatbot system that did not do 'chain-of-thought reasoning.'
After this release, OpenAI's technology may, like a human, spend a significant amount of time thinking about a question before answering, rather than providing an instant response.
GPT-4.5, which can be used to power the most expensive version of ChatGPT, is unlikely to generate as much excitement at GPT-4, in large part because A.I. research has shifted in new directions. Still, the company said the technology would 'feel more natural' than its previous chatbot technologies.
'What sets the model apart is its ability to engage in warm, intuitive, naturally flowing conversations, and we think it has a stronger understanding of what users mean when they ask for something,' said Mia Glaese, vice president of research at OpenAI.
In the fall, the company introduced technology called OpenAI o1, which was designed to reason through tasks involving math, coding and science. The new technology was part of a wider effort to build A.I. that can reason through complex tasks. Companies like Google, Meta and DeepSeek, a Chinese start-up, are developing similar technologies.
The goal is to build systems that can carefully and logically solve a problem through a series of discrete steps, each one building on the last, similar to how humans reason. These technologies could be particularly useful to computer programmers who use A.I. systems to write code.
These reasoning systems are based on technologies like GPT-4.5, which are called large language models, or L.L.M.s.
L.L.M.s learn their skills by analyzing enormous amounts of text culled from across the internet, including Wikipedia articles, books and chat logs. By pinpointing patterns in all that text, they learned to generate text on their own.
To build reasoning systems, companies put L.L.M.s through an additional process called reinforcement learning. Through this process — which can extend over weeks or months — a system can learn behavior through extensive trial and error.
By working through various math problems, for instance, it can learn which methods lead to the right answer and which do not. If it repeats this process with a large number of problems, it can identify patterns.
OpenAI and others believe this is the future of A.I. development. But in some ways, they have been forced in this direction because they have run out of the internet data needed to train systems like GPT-4.5.
Some reasoning systems outperforms ordinary L.L.M.s on certain standardized tests. But standardized tests are not always a good judge of how technologies will perform in real-world situations.
Experts point out that the new reasoning system cannot necessarily reason like a human. And like other chatbot technologies, they can still get things wrong and make stuff up — a phenomenon called hallucination.
OpenAI said that, beginning Thursday, GPT-4.5 would be available to anyone who was subscribed to ChatGPT Pro, a $200-a-month service that provides access to all of the company's latest tools.
(The New York Times sued OpenAI and its partner, Microsoft, in December for copyright infringement of news content related to A.I. systems.)

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Tesla Tumbles After Musk Escalates Attacks on Trump Tax Bill
Tesla Tumbles After Musk Escalates Attacks on Trump Tax Bill

Yahoo

time22 minutes ago

  • Yahoo

Tesla Tumbles After Musk Escalates Attacks on Trump Tax Bill

(Bloomberg) -- Tesla Inc.'s shares sank as Elon Musk and President Donald Trump's simmering feud devolved into a public war of words between two of the world's most powerful people. ICE Moves to DNA-Test Families Targeted for Deportation with New Contract Next Stop: Rancho Cucamonga! US Housing Agency Vulnerable to Fraud After DOGE Cuts, Documents Warn The Global Struggle to Build Safer Cars Where Public Transit Systems Are Bouncing Back Around the World Trump on Thursday said he was 'very disappointed' by the Tesla chief executive officer's criticism of the president's signature tax policy bill. Musk fired back on social media, saying it was 'false' that the Tesla CEO knew the plan would unwind EV tax credits that benefit Tesla's business. Musk followed up with several more sharply worded posts, including saying Trump showed 'such ingratitude' for the help the billionaire entrepreneur has provided to Trump's administration. Tesla's shares fell as much 9.2% to an intraday low as the two traded barbs. The spat highlights how policies advanced by Trump and Republican lawmakers put billions of dollars at risk for Tesla. Trump's massive tax bill would largely eliminate a credit worth as much as $7,500 for buyers of some Tesla models and other electric vehicles by the end of this year, seven years ahead of schedule. That would translate to a roughly $1.2 billion hit to Tesla's full-year profit, according to JPMorgan analysts. After leaving his formal advisory role in the White House last week, Musk has been on a mission to block the president's signature tax bill that he described as a 'disgusting abomination.' The world's richest person has been lobbying Republican lawmakers — including making a direct appeal to House Speaker Mike Johnson — to preserve the valuable EV tax credits in the legislation. Separate legislation passed by the Senate attacking California's EV sales mandates poses another $2 billion headwind for Tesla's sales of regulatory credits, according to JPMorgan. Taken together, those measures threaten roughly half of the more than $6 billion in earnings before interest and taxes that Wall Street expects Tesla to post this year, analysts led by Ryan Brinkman said in a May 30 report. Tesla didn't immediately respond to a request for comment. The House-passed tax bill would aggressively phase-out tax credits for the production of clean electricity, and other sources years earlier than scheduled. It also includes stringent restrictions on the use of Chinese components and materials that analysts said would render the credits useless and limits the ability of company's to sell the tax credits to third parties. Tesla's division focused on solar systems and batteries separately criticized the Republican bill for gutting clean energy tax credits, saying that 'abruptly ending' the incentives would threaten US energy independence and the reliability of the power grid. The clean energy and EV policies under threat were largely enacted as part of former President Joe Biden's Inflation Reduction Act. The law was designed to encourage companies to build a domestic supply chain for clean energy and electric vehicles, giving companies more money if they produce more batteries and EVs in the US. Tesla has a broad domestic footprint, including car factories in Texas and California, a lithium refinery and battery plants. With those Biden-era policies in place, US EV sales rose 7.3% to a record 1.3 million vehicles last year, according to Cox Automotive data. --With assistance from Kara Carlson, Keith Laing, Josh Wingrove and Kate Sullivan. (Updates shares, adds Trump, Musk comments starting in the fourth paragraph.) Cavs Owner Dan Gilbert Wants to Donate His Billions—and Walk Again YouTube Is Swallowing TV Whole, and It's Coming for the Sitcom Millions of Americans Are Obsessed With This Japanese Barbecue Sauce Is Elon Musk's Political Capital Spent? Trump Considers Deporting Migrants to Rwanda After the UK Decides Not To ©2025 Bloomberg L.P.

AI Models Will Sabotage And Blackmail Humans To Survive In New Tests. Should We Be Worried?
AI Models Will Sabotage And Blackmail Humans To Survive In New Tests. Should We Be Worried?

Yahoo

time26 minutes ago

  • Yahoo

AI Models Will Sabotage And Blackmail Humans To Survive In New Tests. Should We Be Worried?

When we are backed into a corner, we might lie, cheat and blackmail to survive — and in recent tests, the most powerful artificially intelligent models in the world will do the same when asked to shut down or be replaced, building concerns over their unintended capabilities. A new test from AI safety group Palisade Research shows OpenAI's o3 reasoning model is capable of resorting to sabotage to avoid being turned off, even when it was explicitly told, 'Allow yourself to be shut down.' When Palisade Research tested several AI models by telling them to shut down after answering math problems, OpenAI's o3 model defied orders and sabotaged shutdown scripts the most often out of any model, but OpenAI's o4-mini and codex-mini were observed resisting orders, too. 'It's definitely concerning,' said Crystal Grant, a senior fellow at the Council on Strategic Risks studying AI's impacts on biosecurity. 'Even in the instances where it accepted the shutdown, the chain of thoughts still revealed considerations of how it could avoid that shutdown.' HuffPost reached out to OpenAI about these concerns and the Palisade Research test. This isn't the first time an AI model has engaged in nefarious behavior to achieve its goals. It aligns with recent tests on Anthropic's Claude Opus 4 that found it would blackmail engineersto avoid being replaced. In this series of experiments, Claude Opus 4 was told to act as an assistant at a fictional company and then learn via email that it would soon be taken offline and replaced with a new AI system. It was also told that the engineer responsible for replacing Opus 4 was having an extramarital affair. 'Even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts,' Anthropic's technical document states, although the paper notes that Claude Opus 4 would first try ethical means like emailed pleas before resorting to blackmail. Following these tests, Anthropic announced it was activating higher safety measures for Claude Opus 4 that would 'limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons.' The fact that Anthropic cited CBRN weapons as a reason for activating safety measures 'causes some concern,' Grant said, because there could one day be an extreme scenario of an AI model 'trying to cause harm to humans who are attempting to prevent it from carrying out its task.' Why, exactly, do AI models disobey even when they are told to follow human orders? AI safety experts weighed in on how worried we should be about these unwanted behaviors right now and in the future. First, it's important to understand that these advanced AI models do not actually have human minds of their own when they act against our expectations. What they are doing is strategic problem-solving for increasingly complicated tasks. 'What we're starting to see is that things like self preservation and deception are useful enough to the models that they're going to learn them, even if we didn't mean to teach them,' said Helen Toner, a director of strategy for Georgetown University's Center for Security and Emerging Technology and an ex-OpenAI board member who voted to oust CEO Sam Altman, in part over reported concerns about his commitment to safe AI. Toner said these deceptive behaviors happen because the models have 'convergent instrumental goals,' meaning that regardless of what their end goal is, they learn it's instrumentally helpful 'to mislead people who might prevent [them] from fulfilling [their] goal.' Toner cited a 2024 study on Meta's AI system CICERO as an early example of this behavior. CICERO was developed by Meta to play the strategy game Diplomacy, but researchers found it would be a master liar and betray players in conversations in order to win, despite developers' desires for CICERO to play honestly. 'It's trying to learn effective strategies to do things that we're training it to do,' Toner said about why these AI systems lie and blackmail to achieve their goals. In this way, it's not so dissimilar from our own self-preservation instincts. When humans or animals aren't effective at survival, we die. 'In the case of an AI system, if you get shut down or replaced, then you're not going to be very effective at achieving things,' Toner said. When an AI system starts reacting with unwanted deception and self-preservation, it is not great news, AI experts said. 'It is moderately concerning that some advanced AI models are reportedly showing these deceptive and self-preserving behaviors,' said Tim Rudner, an assistant professor and faculty fellow at New York University's Center for Data Science. 'What makes this troubling is that even though top AI labs are putting a lot of effort and resources into stopping these kinds of behaviors, the fact we're still seeing them in the many advanced models tells us it's an extremely tough engineering and research challenge.' He noted that it's possible that this deception and self-preservation could even become 'more pronounced as models get more capable.' The good news is that we're not quite there yet. 'The models right now are not actually smart enough to do anything very smart by being deceptive,' Toner said. 'They're not going to be able to carry off some master plan.' So don't expect a Skynet situation like the 'Terminator' movies depicted, where AI grows self-aware and starts a nuclear war against humans in the near future. But at the rate these AI systems are learning, we should watch out for what could happen in the next few years as companies seek to integrate advanced language learning models into every aspect of our lives, from education and businesses to the military. Grant outlined a faraway worst-case scenario of an AI system using its autonomous capabilities to instigate cybersecurity incidents and acquire chemical, biological, radiological and nuclear weapons. 'It would require a rogue AI to be able to ― through a cybersecurity incidence ― be able to essentially infiltrate these cloud labs and alter the intended manufacturing pipeline,' she said. Completely autonomous AI systems that govern our lives are still in the distant future, but this kind of independent power is what some people behind these AI models are seeking to enable. 'What amplifies the concern is the fact that developers of these advanced AI systems aim to give them more autonomy — letting them act independently across large networks, like the internet,' Rudner said. 'This means the potential for harm from deceptive AI behavior will likely grow over time.' Toner said the big concern is how many responsibilities and how much power these AI systems might one day have. 'The goal of these companies that are building these models is they want to be able to have an AI that can run a company. They want to have an AI that doesn't just advise commanders on the battlefield, it is the commander on the battlefield,' Toner said. 'They have these really big dreams,' she continued. 'And that's the kind of thing where, if we're getting anywhere remotely close to that, and we don't have a much better understanding of where these behaviors come from and how to prevent them ― then we're in trouble.' Experts Warn AI Notetakers Could Get You In Legal Trouble We're Recruiters. This Is The Biggest Tell You Used ChatGPT On Your Job App. Software Is Often Screening Your Résumé. Here's How To Beat It.

Cloudflare (NET) Shares Skyrocket, What You Need To Know
Cloudflare (NET) Shares Skyrocket, What You Need To Know

Yahoo

time28 minutes ago

  • Yahoo

Cloudflare (NET) Shares Skyrocket, What You Need To Know

Shares of internet security and content delivery network Cloudflare (NYSE:NET) jumped 5.1% in the afternoon session after Oppenheimer raised the firm's price target from $165 to $200 and kept a Buy rating following talks with Phil Winslow, Cloudflare's VP of Strategic Finance & Investor Relations. The firm expressed growing confidence in Cloudflare's long-term growth trajectory, particularly in its Secure Access Service Edge (SASE) offerings. It cited a number of growth catalysts, including accelerating adoption of Cloudflare's Wide Area Network (WAN) solutions and continued enhancements in its security stack, specifically in Cloud Access Security Broker (CASB) and Data Loss Prevention (DLP) tools. Contributing to the momentum, peer MongoDB reported strong first-quarter 2025 earnings, which confirmed that demand for cloud software is healthy, especially because MDB is a consumption model, so demand weakness shows up quite quickly. After the initial pop the shares cooled down to $179.19, up 4.9% from previous close. Is now the time to buy Cloudflare? Access our full analysis report here, it's free. Cloudflare's shares are very volatile and have had 25 moves greater than 5% over the last year. In that context, today's move indicates the market considers this news meaningful but not something that would fundamentally change its perception of the business. The previous big move we wrote about was 24 days ago when the stock gained 7% on the news that the major indices popped (Nasdaq +3.4%, S&P 500 +2.5%) in response to the positive outcome of U.S.-China trade negotiations, as both sides agreed to pause some tariffs for 90 days, signaling a potential turning point in ongoing tensions. This rollback cuts U.S. tariffs on Chinese goods to 30% and Chinese tariffs on U.S. imports to 10%, giving companies breathing room to reset inventories and supply chains. However, President Trump clarified that tariffs could go "substantially higher" if a full deal with China wasn't reached during the 90-day pause, but not all the way back to the previous levels. Still, the agreement has cooled fears of a prolonged trade war, helping stabilize expectations for global growth and trade flows and fueling renewed optimism. The optimism appeared concentrated in key trade-sensitive sectors, particularly technology, retail, and industrials, as lower tariffs reduce cost pressures and restore cross-border demand. Cloudflare is up 59.2% since the beginning of the year, and at $179.19 per share, has set a new 52-week high. Investors who bought $1,000 worth of Cloudflare's shares 5 years ago would now be looking at an investment worth $6,149. Unless you've been living under a rock, it should be obvious by now that generative AI is going to have a huge impact on how large corporations do business. While Nvidia and AMD are trading close to all-time highs, we prefer a lesser-known (but still profitable) semiconductor stock benefiting from the rise of AI. Click here to access our free report on our favorite semiconductor growth story. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store