Latest news with #AILiabilityDirective


Euronews
09-04-2025
- Business
- Euronews
EU Tech Commissioner defends scrapping of AI Liability rules
The AI Liability Directive would not have led to one set of uniform rules across the EU, Henna Virkkunen, the EU Commissioner responsible for tech, told members of the European Parliament's Legal Affairs committee (JURI) on Wednesday.

'With a directive, member states implement the rules in different ways,' Virkkunen said. 'I favour more regulations to make sure we have one single market,' she added, referring to the legal instrument that is equally binding across all member states.

The committee members had asked the Commission why it withdrew the AI Liability Directive after the EU executive said it saw 'no foreseeable agreement' on the proposal in its 2025 work program, published in February. The rules were intended to offer consumers a harmonised means of redress when they experience harm arising from AI products or services. They were proposed in 2022 but no significant progress has been made since.

'We need to fully implement the AI Act before we propose new rules – in the last years the European Commission has proposed a lot of digital rules and we need to simplify them before presenting something new,' Virkkunen said.

Lawmakers have been divided over the need for the rules. The rapporteur in JURI, Axel Voss (Germany/EPP), wants to keep working on the dossier. His counterpart in the Internal Market and Consumer Protection committee (IMCO), Kosma Złotowski (Poland/ECR), said in his draft opinion published in January that the 'adoption of an AI Liability Directive at this stage is premature and unnecessary.' Voss said in the JURI hearing on Wednesday that simplification is a trend 'but liability rules are needed anyway to create a true digital single market'. Sergey Lagodinsky (Germany/Greens) said he was 'very puzzled' about the reasons for withdrawal and said that co-legislators needed to be consulted.
On the other hand, both Diego Solier (Spain/ECR) and Svenja Hahn (Germany/Renew), members of the IMCO committee, spoke out in favour of the Commission's decision. Hahn said that the existing product liability laws and national tort laws are sufficient and that consumers already have enough opportunities to file claims.

In a letter to Virkkunen sent earlier this week, civil society and consumer groups called upon the Commission to work on new AI liability rules to fill 'legal gaps'. The Commission has until August to make a final decision on the matter.


Euronews
07-04-2025
- Business
- Euronews
Commission should work on new AI Liability rules, civil society groups say
The European Commission has been urged to work on new AI liability rules after it pulled out of an existing proposal, in a letter sent to the EU executive on Monday by civil society and consumer groups.

The signatories, which include consumer group BEUC and privacy advocates Article 19 and Mozilla, said that the 'EU needs rules to address the legal gaps left by the AILD withdrawal and ensure a fairer, simpler path to compensation for all people affected by AI systems, including consumers in the EU, in case of harm by an AI system.'

In the Commission's 2025 work program, presented in February, the EU executive said it plans to scrap the AI Liability Directive because 'no foreseeable agreement' is expected on the proposal. The rules were intended to offer consumers a harmonised means of redress when they experience harm arising from AI products or services. They were proposed in 2022 but no significant progress has been made since.

The joint letter, addressed to EU Tech Commissioner Henna Virkkunen and Michael McGrath, the commissioner for consumer affairs, argues that new EU AI liability rules would prevent fragmentation across the bloc and enhance trust in AI. 'Certain national regimes might adequately protect consumers and individuals affected by AI, but not all do. This leads to an unfair scenario where people harmed by an AI system in one member state would have better chances at being compensated than those located in a different member state,' the letter said. Other rules, like the revised Product Liability Directive (PLD), also apply to AI systems, but the letter argues that legal gaps remain.

EU Tech Commissioner Henna Virkkunen has been invited to the Legal Affairs Committee on 9 April for a discussion on the topic. The file has not yet been officially withdrawn, but the Commission will aim to do so by August, when it publishes a final work program.
Lawmakers are divided on the issue, and member states have not officially decided on their position.


Euronews
19-03-2025
- Science
- Euronews
NASA astronauts return to Earth after 9-month space mission
NASA astronauts Butch Wilmore and Suni Williams returned to Earth on Tuesday, closing a saga that began with a bungled test flight more than nine months ago. Their SpaceX capsule splashed down in the Gulf of Mexico in the evening, a few hours after leaving the International Space Station, ending 286 days in space on what was supposed to be just a week-long mission, extended due to problems with their Boeing Starliner capsule.

A replacement crew with NASA's Nick Hague and Russia's Alexander Gorbunov flew up on a SpaceX capsule last fall with two empty seats to bring Wilmore and Williams back to Earth. 'On behalf of SpaceX, welcome home,' radioed SpaceX Mission Control in California as the crew stepped out of the capsule. 'What a ride,' replied Hague, the capsule's commander. 'I see a capsule full of grins ear to ear.'

During their mission, Wilmore and Williams circled Earth 4,576 times and travelled 195 million kilometres by the time of splashdown. Williams even set a record on the flight for the most career spacewalking time by a woman. Wilmore and Williams quickly transitioned from guests to full-fledged station crew members, conducting experiments, fixing equipment, and doing spacewalks together.

'This has been nine months in the making, and I couldn't be prouder of our team's versatility, our team's ability to adapt and really build for the future of human spaceflight,' NASA's commercial crew programme manager Steve Stich said.

Wilmore and Williams will have to wait until they're off the SpaceX recovery ship and flown to Houston before reuniting with their loved ones. The three NASA astronauts will be checked out by flight surgeons as they adjust to gravity, officials said, and should be allowed to go home after a day or two. The Starliner is still under an engineering investigation, according to the AP, so SpaceX will launch the next crew for NASA as soon as July.
The European Commission will decide whether to definitively scrap its planned liability rules for artificial intelligence systems by August, a Commission official told lawmakers in the European Parliament on Tuesday.

In the Commission's 2025 work program, presented in February, the EU executive said it plans to scrap the AI Liability Directive because 'no foreseeable agreement' is expected on the proposal. The rules were intended to offer consumers a harmonised means of redress when they experience harm arising from AI products or services. They were proposed in 2022 but no significant progress has been made since. The Commission indicated, however, that the file could stay on the table if the EU Parliament and Council undertake to do extensive work on it over the coming year.

The Commission official told the Internal Market and Consumer Protection Committee (IMCO) on Tuesday that it is waiting for the official views of both the Parliament and the member states and will then 'carefully think about the withdrawal'. The file has not been officially withdrawn, but the Commission needs to do so within six months of the work program's publication.

The Parliament itself is divided about the plans. The lawmaker responsible for steering the AI Liability proposal through the parliament, German MEP Axel Voss of the Legal Affairs committee, said the Commission's move was a 'strategic mistake'. The rapporteur in the IMCO committee, Kosma Złotowski (Poland/ECR), said in his draft opinion published in January that the 'adoption of an AI Liability Directive at this stage is premature and unnecessary.' The lawmakers in favour of withdrawing the rules say that consumers are protected by the product liability rules as well as by the AI Act, which is gradually entering into force.

EU Tech Commissioner Henna Virkkunen has been invited to the Legal Affairs committee on 9 April for a discussion on the topic.
Member states have not yet discussed the proposal to get rid of the rules at working party level.


Euronews
26-02-2025
- Business
- Euronews
European Parliament to grill Commission over ditched AI liability rules
The Legal Affairs committee in the European Parliament will ask European Commission representatives to explain why it wants to withdraw its AI Liability Directive proposal, Parliament sources have told Euronews.

In the Commission's 2025 work program, presented earlier this month in Strasbourg, the EU executive said it plans to scrap the AI Liability Directive because 'no foreseeable agreement' is expected on the proposal. The rules were intended to offer consumers a harmonised means of redress when they experience harm arising from AI products or services. They were proposed in 2022 but no significant progress has been made since.

Last week, group coordinators from the Internal Market and Consumer Protection Committee (IMCO) voted to keep working on liability rules for the time being. The Legal Affairs committee, which leads the Parliament's work on the issue, has not yet officially decided what to do, but will meet in early March to decide on the next steps.

The Parliament is divided over the need for the rules, however, with the centre-left, left and greens in favour of continuing with the file, while the centre-right and conservative groups, including the ECR and EPP, are broadly behind scrapping the plan. The rapporteur in the IMCO committee, Kosma Złotowski (Poland/ECR), said in his draft opinion published in January that the 'adoption of an AI Liability Directive at this stage is premature and unnecessary.'

Regarding the member states, Euronews understands that there are no plans to discuss the Commission's proposal to get rid of the rules at working party level. The question of what value they would add has been raised several times by member states during the examination of the proposal in the Council, an EU official said. The Brussels tech lobby and consumer organisations were likewise divided about the need for additional rules.

Associated Press
19-02-2025
- Business
- Associated Press
From Policy to Practice: Responsible AI Institute Announces Bold Strategic Shift to Drive Impact in the Age of Agentic AI
AUSTIN, Texas--(BUSINESS WIRE)--Feb 19, 2025-- The Responsible AI Institute (RAI Institute) is taking bold action to reshape and accelerate the future of responsible AI adoption. In response to rapid regulatory shifts, corporate FOMO, and the rise of agentic AI, RAI Institute is expanding beyond policy advocacy to deploy AI-driven tools, agentic AI services, and new AI verification, badging, and benchmarking programs. Backed by a new partner ecosystem, university collaborations in the U.S., U.K., and India, and a pledge from private foundations, RAI Institute is equipping organizations to confidently adopt and govern multi-vendor agent ecosystems.

THE AI LANDSCAPE HAS CHANGED — AND RAI INSTITUTE IS MOVING FROM POLICY TO IMPACT

Global AI policy and adoption are at an inflection point. AI adoption is accelerating, but trust and governance have not kept pace. Regulatory rollbacks, such as the revocation of the U.S. AI Executive Order and the withdrawal of the EU's AI Liability Directive, signal a shift away from oversight, pushing businesses to adopt AI without sufficient safety frameworks.

- 51% of companies have already deployed AI agents, with another 78% planning implementation soon (LangChain, 2024).
- 42% of workers say accuracy and reliability are top priorities for improving agentic AI tools (Pegasystems, 2025).
- 67% of IT decision-makers across the U.S., U.K., France, Germany, Australia, and Singapore report adopting AI despite reliability concerns, driven by FOMO (fear of missing out) (ABBYY Survey, 2025).

At the same time, AI vendors like OpenAI and Microsoft are urging businesses to 'accept imperfection,' a stance that directly contradicts the principles of responsible AI governance.
AI-driven automation is already reshaping the workforce, yet most organizations lack structured transition plans, leading to job displacement, skill gaps, and growing concerns over AI's economic impact. The RAI Institute sees this moment as a call to action, going beyond policy frameworks. It's about creating concrete, operational tools and learning from real-world member experiences to safeguard AI deployment at scale.

STRATEGIC SHIFT: FROM POLICY TO PRACTICE

Following a six-month review of its operations and strategy, RAI Institute is realigning its mission around three core pillars:

1. EMBRACING HUMAN-LED AI AGENTS TO ACCELERATE RAI ENABLEMENT
The Institute will lead by example, integrating AI-powered processes across its operations as 'customer zero.' From AI-driven market intelligence to verification and assessment acceleration, RAI Institute is actively testing the power and exposing the limitations of agentic AI, ensuring it is effective, safe, and accountable in real-world applications.

2. SHIFTING FROM AI POLICY TO AI OPERATIONALIZATION
RAI Institute is shifting from policy to action by deploying AI-driven risk management tools and real-time monitoring agents to help companies automate evaluation and third-party verification against frameworks like NIST RMF, ISO 42001, and the EU AI Act. Additionally, RAI Institute is partnering with leading universities and research labs in the U.S., U.K., and India to co-develop, stress-test, and pilot responsible agentic AI, ensuring enterprises can measure agent performance, alignment, and unintended risks in real-world scenarios.

3. LAUNCHING THE RAISE AI PATHWAYS PROGRAM
RAI Institute is accelerating responsible AI adoption with the RAISE AI Pathways Program, delivering a suite of new human-augmented, AI agent-powered insights, assessments, and benchmarking to help businesses evaluate AI maturity, compliance, and readiness for agentic AI ecosystems.
This program will leverage collaborations with industry leaders, including the Green Software Foundation and the FinOps Foundation, and will be backed by a matching grant pledge from private foundations, with further funding details to be announced later this year.

'The rise of agentic AI isn't on the horizon — it's already here, and we are shifting from advocacy to action to meet member needs,' said Jeff Easley, General Manager, Responsible AI Institute. 'AI is evolving from experimental pilots to large-scale deployment at an unprecedented pace. Our members don't just need policy recommendations — they need AI-powered risk management, independent verification, and benchmarking tools to help deploy AI responsibly without stifling innovation.'

RAISE AI PATHWAYS: LEVERAGING HUMAN-LED AGENTIC AI FOR ACCELERATED IMPACT

In March, RAI Institute will begin a phased launch of its six AI Pathways Agents, developed in collaboration with leading cloud and AI tool vendors and university AI labs in the U.S., U.K., and India. These agents are designed to help enterprises access external tools to independently evaluate, build, deploy, and manage responsible agentic AI systems with safety, trust, and accountability. The phased rollout will ensure real-world testing, enterprise integration, and continuous refinement, enabling organizations to adopt AI-powered governance and risk management solutions at scale. Early access will be granted to select partners and current members, with broader availability expanding throughout the year. Sign up now to join the early access program!

Introducing the RAI AI Pathways Agent Suite:

- RAI Watchtower Agent – Real-time AI risk monitoring to detect compliance gaps, model drift, and security vulnerabilities before they escalate.
- RAI Corporate AI Policy Copilot – An intelligent policy assistant that helps businesses develop, implement, and maintain AI policies aligned with global policy and standards.
- RAI Green AI eVerification – A benchmarking program for measuring and optimizing AI's carbon footprint, in collaboration with the Green Software Foundation.
- RAI AI TCO eVerification – Independent Total Cost of Ownership verification for AI investments, in collaboration with the FinOps Foundation.
- RAI Agentic AI Purple Teaming – Proactive adversarial testing and defense strategies using industry standards and curated benchmarking data. This AI security agent identifies vulnerabilities, stress-tests AI systems, and mitigates risks such as hallucinations, attacks, bias, and model drift.
- RAI Premium Research – Access exclusive, in-depth analysis on responsible AI implementation, governance, and risk management. Stay ahead of emerging risks, regulatory changes, and AI best practices.

MOVING FORWARD: BUILDING A RESPONSIBLE AI FUTURE

The Responsible AI Institute is not merely adapting to AI's rapid evolution — it is leading the charge in defining how AI should be integrated responsibly. Over the next few months, RAI Institute will introduce:

- Scholarships, hackathons, and long-term internships funded by private foundations.
- A new global advisory board focused on agentic AI regulations, safety, and innovation.
- Upskilling programs to equip organizations with the tools to navigate the next era of AI governance.

JOIN THE MOVEMENT: THE TIME FOR RESPONSIBLE AI IS NOW!

Join us in shaping the future of responsible AI. Sign up for early access to the RAI AI Agents and RAISE Pathways Programs.

About the Responsible AI Institute

Since 2016, the Responsible AI Institute has been at the forefront of advancing responsible AI adoption across industries. As a non-profit organization, RAI Institute partners with policymakers, industry leaders, and technology providers to develop responsible AI benchmarks, governance frameworks, and best practices.
With the launch of RAISE Pathways, RAI Institute equips organizations with expert-led training, real-time assessments, and implementation toolkits to strengthen AI governance, enhance transparency, and drive innovation at scale. Members include leading companies such as Boston Consulting Group, AMD, KPMG, Chevron, Ally, Mastercard, and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact: Nicole McCaffrey, Head of Strategy & Marketing, RAI Institute, [email protected], +1 (440) 785-3588

SOURCE: Responsible AI Institute