
Latest news with #RSP

Porsche Teases New 963 RSP Hypercar

Hypebeast

4 days ago

  • Automotive
  • Hypebeast

Porsche Teases New 963 RSP Hypercar

Summary: Porsche has officially unveiled the name of its mysterious and potentially roadgoing endurance project: the 963 RSP. Teased in a new ASMR-style video, the one-off hypercar is based on the 963 endurance racer that's already clinched back-to-back victories at the 24 Hours of Daytona. The short film shows Porsche artisans sewing custom upholstery, sanding carbon-fiber panels and applying paint by hand, culminating in the installation of a real Porsche crest on the hood, a notable departure from the sticker badge used on the race car for weight savings. While Porsche hasn't revealed specs, the RSP is expected to mirror the race-bred twin-turbo V8 hybrid setup of its track-only sibling. According to early reports, the RSP name may also nod to Roger S. Penske, whose team oversees Porsche's dominant endurance program. With only one 963 RSP slated for production, its debut is likely timed for just before the 2025 24 Hours of Le Mans on June 14. In the meantime, check out the teaser for yourself below.

Demand for unification of IPGISSH and IGH grows

New Indian Express

26-05-2025

  • Health
  • New Indian Express

Demand for unification of IPGISSH and IGH grows

ROURKELA: Demand is growing for the development of a comprehensive medical institution through the unification of the new Ispat Post-Graduate Institute and Super Specialty Hospital (IPGISSH) and the Ispat General Hospital (IGH), both under the Rourkela Steel Plant (RSP) of SAIL. In 2015, Prime Minister Narendra Modi had announced that the IGH would be upgraded into a medical college and super specialty hospital. Instead, the RSP constructed the IPGISSH on the vacant land of the IGH with Central funds, and the facility was inaugurated in March 2021 without the PG institute. In a letter to the PM, senior BJP leader Ramesh Agarwal said the IPGISSH, established with Rs 350 crore of Central funds, has been addressing the need for specialised healthcare, but the PG institute is yet to come into existence. The RSP spends around Rs 80-90 crore annually on its employee health benefit scheme. Yet, owing to the limited modern infrastructure of the IGH and the Sector-22 hospital, many RSP employees and their families still seek treatment in referral hospitals across Odisha and beyond. Agarwal said the IGH is already running a nursing institution, and the IPGISSH, with the unused infrastructure of its PG institute, shares a common boundary with the IGH. Unifying these institutions would require minimal physical restructuring and could immediately serve as a well-integrated hub for healthcare and medical education in eastern India, he added.

Exclusive: New Claude Model Prompts Safeguards at Anthropic

Yahoo

22-05-2025

  • Science
  • Yahoo

Exclusive: New Claude Model Prompts Safeguards at Anthropic

[Image: A smartphone displaying the logo of Claude, an AI language model developed by Anthropic.]

Today's newest AI models might be capable of helping would-be terrorists create bioweapons or engineer a pandemic, according to the chief scientist of the AI company Anthropic. Anthropic has long been warning about these risks—so much so that in 2023, the company pledged to not release certain models until it had developed safety measures capable of constraining them. Now this system, called the Responsible Scaling Policy (RSP), faces its first real test.

On Thursday, Anthropic launched Claude Opus 4, a new model that, in internal testing, performed more effectively than prior models at advising novices on how to produce biological weapons, says Jared Kaplan, Anthropic's chief scientist. 'You could try to synthesize something like COVID or a more dangerous version of the flu—and basically, our modeling suggests that this might be possible,' Kaplan says.

Accordingly, Claude Opus 4 is being released under stricter safety measures than any prior Anthropic model. Those measures—known internally as AI Safety Level 3 or 'ASL-3'—are appropriate to constrain an AI system that could 'substantially increase' the ability of individuals with a basic STEM background in obtaining, producing or deploying chemical, biological or nuclear weapons, according to the company. They include beefed-up cybersecurity measures, jailbreak preventions, and supplementary systems to detect and refuse specific types of harmful behavior.

To be sure, Anthropic is not entirely certain that the new version of Claude poses severe bioweapon risks, Kaplan tells TIME. But Anthropic hasn't ruled that possibility out either. 'If we feel like it's unclear, and we're not sure if we can rule out the risk—the specific risk being uplifting a novice terrorist, someone like Timothy McVeigh, to be able to make a weapon much more destructive than would otherwise be possible—then we want to bias towards caution, and work under the ASL-3 standard,' Kaplan says. 'We're not claiming affirmatively we know for sure this model is risky … but we at least feel it's close enough that we can't rule it out.' If further testing shows the model does not require such strict safety standards, Anthropic could lower its protections to the more permissive ASL-2, under which previous versions of Claude were released, he says.

This moment is a crucial test for Anthropic, a company that claims it can mitigate AI's dangers while still competing in the market. Claude is a direct competitor to ChatGPT, and brings in over $2 billion in annualized revenue. Anthropic argues that its RSP thus creates an economic incentive for itself to build safety measures in time, lest it lose customers as a result of being prevented from releasing new models. 'We really don't want to impact customers,' Kaplan told TIME earlier in May while Anthropic was finalizing its safety measures. 'We're trying to be proactively prepared.'

But Anthropic's RSP—and similar commitments adopted by other AI companies—are all voluntary policies that could be changed or cast aside at will. The company itself, not regulators or lawmakers, is the judge of whether it is fully complying with the RSP. Breaking it carries no external penalty, besides possible reputational damage. Anthropic argues that the policy has created a 'race to the top' between AI companies, causing them to compete to build the best safety systems.
But as the multi-billion dollar race for AI supremacy heats up, critics worry the RSP and its ilk may be left by the wayside when they matter most. Still, in the absence of any frontier AI regulation from Congress, Anthropic's RSP is one of the few existing constraints on the behavior of any AI company. And so far, Anthropic has kept to it. If Anthropic shows it can constrain itself without taking an economic hit, Kaplan says, it could have a positive effect on safety practices in the wider industry.

Anthropic's ASL-3 safety measures employ what the company calls a 'defense in depth' strategy—meaning there are several different overlapping safeguards that may be individually imperfect, but in unison combine to prevent most threats. One of those measures is called 'constitutional classifiers:' additional AI systems that scan a user's prompts and the model's answers for dangerous material. Earlier versions of Claude already had similar systems under the lower ASL-2 level of security, but Anthropic says it has improved them so that they are able to detect people who might be trying to use Claude to, for example, build a bioweapon. These classifiers are specifically targeted to detect the long chains of specific questions that somebody building a bioweapon might try to ask. Anthropic has tried not to let these measures hinder Claude's overall usefulness for legitimate users—since doing so would make the model less helpful compared to its rivals. 'There are bioweapons that might be capable of causing fatalities, but that we don't think would cause, say, a pandemic,' Kaplan says. 'We're not trying to block every single one of those misuses. We're trying to really narrowly target the most pernicious.'

Another element of the defense-in-depth strategy is the prevention of jailbreaks—or prompts that can cause a model to essentially forget its safety training and provide answers to queries that it might otherwise refuse. The company monitors usage of Claude, and 'offboards' users who consistently try to jailbreak the model, Kaplan says. And it has launched a bounty program to reward users for flagging so-called 'universal' jailbreaks, or prompts that can make a system drop all its safeguards at once. So far, the program has surfaced one universal jailbreak which Anthropic subsequently patched, a spokesperson says. The researcher who found it was awarded $25,000.

Anthropic has also beefed up its cybersecurity, so that Claude's underlying neural network is protected against theft attempts by non-state actors. The company still judges itself to be vulnerable to nation-state level attackers—but aims to have cyberdefenses sufficient for deterring them by the time it deems it needs to upgrade to ASL-4: the next safety level, expected to coincide with the arrival of models that can pose major national security risks, or which can autonomously carry out AI research without human input.

Lastly the company has conducted what it calls 'uplift' trials, designed to quantify how significantly an AI model without the above constraints can improve the abilities of a novice attempting to create a bioweapon, when compared to other tools like Google or less advanced models. In those trials, which were graded by biosecurity experts, Anthropic found Claude Opus 4 presented a 'significantly greater' level of performance than both Google search and prior models, Kaplan says.
Anthropic's hope is that the several safety systems layered over the top of the model—which has already undergone separate training to be 'helpful, honest and harmless'—will prevent almost all bad use cases. 'I don't want to claim that it's perfect in any way. It would be a very simple story if you could say our systems could never be jailbroken,' Kaplan says. 'But we have made it very, very difficult.' Still, by Kaplan's own admission, only one bad actor would need to slip through to cause untold chaos. 'Most other kinds of dangerous things a terrorist could do—maybe they could kill 10 people or 100 people,' he says. 'We just saw COVID kill millions of people.'

Exclusive: New Claude Model Triggers Stricter Safeguards at Anthropic

Time Magazine

22-05-2025

  • Science
  • Time Magazine

Exclusive: New Claude Model Triggers Stricter Safeguards at Anthropic

Today's newest AI models might be capable of helping would-be terrorists create bioweapons or engineer a pandemic, according to the chief scientist of the AI company Anthropic. Anthropic has long been warning about these risks—so much so that in 2023, the company pledged to not release certain models until it had developed safety measures capable of constraining them. Now this system, called the Responsible Scaling Policy (RSP), faces its first real test.

On Thursday, Anthropic launched Claude Opus 4, a new model that, in internal testing, performed more effectively than prior models at advising novices on how to produce biological weapons, says Jared Kaplan, Anthropic's chief scientist. 'You could try to synthesize something like COVID or a more dangerous version of the flu—and basically, our modeling suggests that this might be possible,' Kaplan says.

Accordingly, Claude Opus 4 is being released under stricter safety measures than any prior Anthropic model. Those measures—known internally as AI Safety Level 3 or 'ASL-3'—are appropriate to constrain an AI system that could 'substantially increase' the ability of individuals with a basic STEM background in obtaining, producing or deploying chemical, biological or nuclear weapons, according to the company. They include beefed-up cybersecurity measures, jailbreak preventions, and supplementary systems to detect and refuse specific types of harmful behavior.

To be sure, Anthropic is not entirely certain that the new version of Claude poses severe bioweapon risks, Kaplan tells TIME. But Anthropic hasn't ruled that possibility out either. 'If we feel like it's unclear, and we're not sure if we can rule out the risk—the specific risk being uplifting a novice terrorist, someone like Timothy McVeigh, to be able to make a weapon much more destructive than would otherwise be possible—then we want to bias towards caution, and work under the ASL-3 standard,' Kaplan says. 'We're not claiming affirmatively we know for sure this model is risky … but we at least feel it's close enough that we can't rule it out.' If further testing shows the model does not require such strict safety standards, Anthropic could lower its protections to the more permissive ASL-2, under which previous versions of Claude were released, he says.

This moment is a crucial test for Anthropic, a company that claims it can mitigate AI's dangers while still competing in the market. Claude is a direct competitor to ChatGPT, and brings in some $1.4 billion in annualized revenue. Anthropic argues that its RSP thus creates an economic incentive for itself to build safety measures in time, lest it lose customers as a result of being prevented from releasing new models. 'We really don't want to impact customers,' Kaplan told TIME earlier in May while Anthropic was finalizing its safety measures. 'We're trying to be proactively prepared.'

But Anthropic's RSP—and similar commitments adopted by other AI companies—are all voluntary policies that could be changed or cast aside at will. The company itself, not regulators or lawmakers, is the judge of whether it is fully complying with the RSP. Breaking it carries no external penalty, besides possible reputational damage. Anthropic argues that the policy has created a 'race to the top' between AI companies, causing them to compete to build the best safety systems. But as the multi-billion dollar race for AI supremacy heats up, critics worry the RSP and its ilk may be left by the wayside when they matter most.
Still, in the absence of any frontier AI regulation from Congress, Anthropic's RSP is one of the few existing constraints on the behavior of any AI company. And so far, Anthropic has kept to it. If Anthropic shows it can constrain itself without taking an economic hit, Kaplan says, it could have a positive effect on safety practices in the wider industry.

Anthropic's new safeguards

Anthropic's ASL-3 safety measures employ what the company calls a 'defense in depth' strategy—meaning there are several different overlapping safeguards that may be individually imperfect, but in unison combine to prevent most threats. One of those measures is called 'constitutional classifiers:' additional AI systems that scan a user's prompts and the model's answers for dangerous material. Earlier versions of Claude already had similar systems under the lower ASL-2 level of security, but Anthropic says it has improved them so that they are able to detect people who might be trying to use Claude to, for example, build a bioweapon. These classifiers are specifically targeted to detect the long chains of specific questions that somebody building a bioweapon might try to ask. Anthropic has tried not to let these measures hinder Claude's overall usefulness for legitimate users—since doing so would make the model less helpful compared to its rivals. 'There are bioweapons that might be capable of causing fatalities, but that we don't think would cause, say, a pandemic,' Kaplan says. 'We're not trying to block every single one of those misuses. We're trying to really narrowly target the most pernicious.'

Another element of the defense-in-depth strategy is the prevention of jailbreaks—or prompts that can cause a model to essentially forget its safety training and provide answers to queries that it might otherwise refuse. The company monitors usage of Claude, and 'offboards' users who consistently try to jailbreak the model, Kaplan says. And it has launched a bounty program to reward users for flagging so-called 'universal' jailbreaks, or prompts that can make a system drop all its safeguards at once. So far, the program has surfaced one universal jailbreak which Anthropic subsequently patched, a spokesperson says. The researcher who found it was awarded $25,000.

Anthropic has also beefed up its cybersecurity, so that Claude's underlying neural network is protected against theft attempts by non-state actors. The company still judges itself to be vulnerable to nation-state level attackers—but aims to have cyberdefenses sufficient for deterring them by the time it deems it needs to upgrade to ASL-4: the next safety level, expected to coincide with the arrival of models that can pose major national security risks, or which can autonomously carry out AI research without human input.

Lastly the company has conducted what it calls 'uplift' trials, designed to quantify how significantly an AI model without the above constraints can improve the abilities of a novice attempting to create a bioweapon, when compared to other tools like Google or less advanced models. In those trials, which were graded by biosecurity experts, Anthropic found Claude Opus 4 presented a 'significantly greater' level of performance than both Google search and prior models, Kaplan says.

Anthropic's hope is that the several safety systems layered over the top of the model—which has already undergone separate training to be 'helpful, honest and harmless'—will prevent almost all bad use cases. 'I don't want to claim that it's perfect in any way. It would be a very simple story if you could say our systems could never be jailbroken,' Kaplan says. 'But we have made it very, very difficult.' Still, by Kaplan's own admission, only one bad actor would need to slip through to cause untold chaos. 'Most other kinds of dangerous things a terrorist could do—maybe they could kill 10 people or 100 people,' he says. 'We just saw COVID kill millions of people.'
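
For technically minded readers, the 'defense in depth' idea described above (overlapping, individually imperfect checks on both the user's prompt and the model's answer) can be pictured with the minimal sketch below. Everything in it, including the layered_reply function, the classifier callables, the 0.8 threshold and the refusal text, is a hypothetical illustration of the general pattern, not Anthropic's actual code, classifiers or API.

```python
from typing import Callable

REFUSAL = "I can't help with that."  # illustrative refusal text (placeholder)

def layered_reply(
    prompt: str,
    generate: Callable[[str], str],        # underlying chat model (hypothetical placeholder)
    score_prompt: Callable[[str], float],  # prompt-side safety classifier (hypothetical placeholder)
    score_answer: Callable[[str], float],  # answer-side safety classifier (hypothetical placeholder)
    threshold: float = 0.8,                # assumed risk cutoff: 0.0 benign .. 1.0 clearly dangerous
) -> str:
    """Return the model's answer only if both overlapping checks pass."""
    # Layer 1: screen the incoming prompt before it reaches the model.
    if score_prompt(prompt) >= threshold:
        return REFUSAL
    draft = generate(prompt)
    # Layer 2: screen the model's draft answer before it reaches the user.
    if score_answer(draft) >= threshold:
        return REFUSAL
    return draft
```

The point of the layering is that either check can miss a harmful request on its own; the reply is only returned when both the prompt-side and answer-side screens pass.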

Health chiefs set to make 'difficult decisions' due to £350m NHS deficit

Yahoo

14-05-2025

  • Health
  • Yahoo

Health chiefs set to make 'difficult decisions' due to £350m NHS deficit

COUNCILLORS are next week due to get an update on a Cumbrian integrated care board (ICB) which is described as 'one of the most financially challenged systems in the NHS'. Members of Cumberland Council's health overview and scrutiny committee are due to meet at Carlisle's Civic Centre on Thursday (May 22) to hear the Lancashire and South Cumbria ICB update.

The report states: 'Without support funding in 2024/25, the end of year system deficit would have been approximately £350 million.

'The ICB's annual budget for 2025/26 is approximately £5.4 billion and immediate action is required to reduce spending.

'The challenge heading into 2025/26 is considerable and will result in the need to plan service provision and work with partners in a different way.

'Despite the challenging financial position, the ICB has set out plans to reduce waste and duplication and reconfigure services. This will improve experience and outcomes for people living in Lancashire and South Cumbria and make services fit for the future.

'Working with Cumberland Health Overview and Scrutiny Committee is crucial because it ensures that the ICB services align with the needs and priorities of the population and enhancing the quality of care we provide to the residents of South Cumbria.'

According to the report, the ICB has entered into a National Recovery Support Programme, and in 2024/25 Lancashire and South Cumbria was one of several systems subjected to an 'investigation and intervention process'.

It adds: 'Despite support from NHS England North West, and the hard work of colleagues across our system, we were not able to make the progress we had hoped to with our spending.

'As a result, NHS England (NHSE) recommended four organisations were placed into NHS Oversight Framework (NOF) Segment Four and for each organisation to receive intensive support from the national Recovery Support Programme (RSP).'

The affected organisations are:

  • NHS Lancashire and South Cumbria ICB
  • Blackpool Teaching Hospitals NHS Foundation Trust
  • East Lancashire Hospitals NHS Trust
  • Lancashire Teaching Hospitals NHS Foundation Trust

The report states: 'The main concerns from NHSE centre around our system's difficulty in managing financial deficits and meeting financial recovery plans.

'We recognise the severity of the situation that we are in – as such we welcome this additional support from NHS England for the ICB and our trusts.'

According to the report, the RSP comprises five strands:

  • The diagnostic (a stocktake to identify the drivers)
  • Oversight and challenge: appointment of an NHS England-employed improvement director, and regular board to board meetings
  • Financial support to drive the improvement plan
  • Targeted support for delivery (access to subject matter experts)
  • Continuous review of the plan (review and monitoring meetings)

The report highlights a number of 'difficult decisions' and states: 'Our financial position means we will have to make some difficult decisions over the next few months.

'As a consequence, there will be some services that we are not able to make available, even if they are being rolled out across the country.

'Other examples may include ward closures where we are not providing value for money or working in the most efficient way, or delivering services in a different way to how they're currently provided.

'These decisions will require careful management, may not be quick to implement and, in some instances, will require engagement and consultation as per the NHSE service change process.
'Any proposed changes will be clinically led, subject to the necessary impact assessments, and we will commit to undertake the appropriate level of engagement with patients, members of the public, our staff and wider stakeholders.

'Despite significant financial challenges faced in the system, there are excellent services, positive examples of collaboration and passionate and dedicated staff to build upon to achieve the system's vision for improving health and care.

'There are opportunities to reduce waste and duplication and reconfigure services to improve outcomes for the population of Lancashire and South Cumbria and make services fit for the future.'
