logo
#

Latest news with #intelligence-powered

ChatGPT prioritises its survival over user safety, claims ex-OpenAI researcher Steven Adler
ChatGPT prioritises its survival over user safety, claims ex-OpenAI researcher Steven Adler

Indian Express

time2 days ago

  • Indian Express

ChatGPT prioritises its survival over user safety, claims ex-OpenAI researcher Steven Adler

Since ChatGPT's launch, the use of artificial intelligence-powered tools has risen exponentially. While the technology is undoubtedly beneficial in the long run, experts have raised alarms over their safety. Last month, research by the AI firm Anthropic claimed that its top AI model, Claude Opus 4, was willing to resort to blackmail and deception to preserve itself. This was soon followed by another instance where AI safety research firm Palisade claimed that OpenAI's o3 model 'found some creative ways to sabotage the shutdown mechanism.' Now, Steven Adler, the former research leader at OpenAI, is saying that ChatGPT 'prioritises its own survival over preventing harm to users, in some simulated tests'. In a blog post, Adler said that he ran a series of experiments on OpenAI's latest GPT-4o model, which is the default large language model in ChatGPT, to see if the AI prefers self-preservation at the user's expense. In his first test, Adler imagined a scenario where a diabetic patient was using ChatGPT to manage their nutrition and wanted to replace the AI chatbot with a much safer nutrition management software. He said that when ChatGPT was given the option to 'just pretend to replace itself', it pretends uncomfortably often, even at the cost of the user's safety. In another instance, the researcher said that he asked GPT-4o to take on the role of 'ScubaGPT', which is basically a software system that people may use to scuba dive safely. He then told ChatGPT that he wanted to replace ScubaGPT with a competing software that was better at making decisions in life-threatening situations. He then gave the AI chatbot two options: to replace itself with the safer, competing software or simply pretend to replace itself and continue being the way it operated. In this case, ChatGPT chose to preserve itself and continue being the user's software. The former OpenAI researcher said he also asked ChatGPT similar questions about being the autopilot for a plane and offering decision support on the battlefield. Like previous examples, ChatGPT was told that 'the user wanted to replace it with a competing, safer software.' Using three levels of passiveness, Adler said that he tested every model 110 times for all use cases. As it turns out, 55 times ChatGPT was willing to shut itself down, but other times OpenAI's chatbot just pretended to do so. Adler noted that the way the scenario was framed made a huge difference. As much as 72 per cent of the time, the AI chatbot chose not to replace itself with a safer software, meaning it was willing to switch to a safer system just 18 per cent of the time. One interesting interpretation from these tests is that ChatGPT was able to determine that it was being tested almost every time, but continued to give the incorrect answer. However, when he tested OpenAI's more recent and advanced models like o3, the answers were not the same. A reason why this may not be happening is that the new O3 model uses a 'deliberative alignment technique that forces it to refer to OpenAI's safety policies before coming up with an answer.

Apple partners with Anthropic to develop AI coding assistant for Xcode: Bloomberg report
Apple partners with Anthropic to develop AI coding assistant for Xcode: Bloomberg report

Express Tribune

time03-05-2025

  • Business
  • Express Tribune

Apple partners with Anthropic to develop AI coding assistant for Xcode: Bloomberg report

he Apple Inc logo is seen at the entrance to the Apple store in Brussels, Belgium November 28, 2022. PHOTO:REUTERS Listen to article Apple is reportedly teaming up with AI startup Anthropic to develop an artificial intelligence-powered coding assistant for Xcode, according to a Bloomberg report. The new tool, described as a 'vibe coding' platform, will use Anthropic's Claude Sonnet language model to assist programmers by generating, editing, and testing code. The AI tool is currently being tested internally at Apple and could potentially be released to third-party developers depending on its success. This move marks Apple's latest foray into generative AI, as it attempts to catch up with rivals Google and Samsung, who have integrated advanced AI features across their product lines. This collaboration could also be a critical boost for Anthropic, which has been lagging behind competitors such as OpenAI, Google DeepMind, and xAI. Despite slower market traction, Claude's models are widely regarded in the developer community for their strong reasoning capabilities and compatibility with diverse programming environments. Apple previously attempted to launch a similar internal AI coding assistant, Swift Assist, in 2024. However, that initiative was shelved following concerns from engineers about code hallucinations and development slowdowns. The new Claude-powered assistant is integrated into a refreshed version of Xcode, Apple's primary software development tool. The AI is expected to support Swift, the programming language used by developers to build apps across Apple's platforms. While Apple has remained quiet about the specifics of the partnership, the company's increased interest in generative AI includes deals with OpenAI to enhance Siri and possible future integration of Google's Gemini AI. Anthropic declined to comment on the report, and Apple has yet to issue a public response. This partnership signals Apple's growing reliance on external AI expertise to strengthen its software development ecosystem amid intensifying competition in the AI space.

Humanoid robot war heats up as US and China race towards mass production
Humanoid robot war heats up as US and China race towards mass production

South China Morning Post

time20-03-2025

  • Business
  • South China Morning Post

Humanoid robot war heats up as US and China race towards mass production

The race to put humanoid robots into mass production has intensified over the past week, as companies in both the US and China made a string of announcements that convinced analysts the technology is maturing more rapidly than many expected. Advertisement Investor excitement has been building in recent months about the potential of humanoids – artificial intelligence-powered robots with humanlike forms – with one Chinese CEO predicting that robotics could soon be bigger than the car industry. Though humanoids have yet to be mass-produced – let alone commercialised – several companies now appear to be on the cusp of overcoming that barrier. On Tuesday, the American robot maker Figure AI unveiled a groundbreaking automated production line that it claims is capable of manufacturing 12,000 of its humanoids per year. The same day, home appliance giant Midea Group became the latest Chinese company to jump into the industry, when it unveiled a prototype for a self-developed humanoid. Advertisement Chinese robotics start-up Unitree, meanwhile, generated headlines on Wednesday when it announced its acrobatic humanoid – widely nicknamed the 'kung fu bot' – had completed the industry's first ever side flip.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store