Latest news with #CodeofPractice

Engadget
3 days ago
- Business
Meta says it won't sign the EU's AI code of practice
Meta said on Friday that it won't sign the European Union's new AI code of practice. The guidelines provide a framework for the EU's AI Act, which regulates companies operating in the European Union. The EU's code of practice is voluntary, so Meta was under no legal obligation to sign it. Yet Meta's Chief Global Affairs Officer, Joel Kaplan, made a point of publicly knocking the guidelines on Friday, describing the code as "over-reach."

"Europe is heading down the wrong path on AI," Kaplan wrote in a statement. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act."

So, why kick up a (public) fuss about not signing something Meta was under no obligation to sign? Well, this isn't the first time the company has waged a PR battle against Europe's AI regulations. It previously called the AI Act "unpredictable," claiming "it goes too far" and is "hampering innovation and holding back developers." In February, Meta's public policy director said, "The net result of all of that is that products get delayed or get watered down and European citizens and consumers suffer."

Outmuscling the EU may seem like a more attainable goal to Meta, given that it has an anti-regulation ally in the White House. In April, President Trump pressured the EU to abandon the AI Act, describing the rules as "a form of taxation."

The EU published its code of practice on July 10. It includes tangible guidelines to help companies follow the AI Act. Among other things, the code bans companies from training AI on pirated materials and requires them to respect requests from writers and artists to omit their work from training data.
It also requires developers to provide regularly updated documentation describing their AI features. Although signing the code of practice is voluntary, doing so has its perks: agreeing to it can give companies more legal protection against future accusations of breaching the AI Act. Thomas Regnier, the European Commission's spokesperson for digital matters, added more color in a statement to Bloomberg. He said that AI providers who don't sign "will have to demonstrate other means of compliance" and, as a consequence, "may be exposed to more regulatory scrutiny."

Companies that violate the AI Act can face hefty penalties. The European Commission can impose fines of up to seven percent of a company's annual sales; the penalties are a lower three percent for those developing advanced AI models.


Time of India
3 days ago
- Business
Facebook-parent Meta refuses to sign EU's AI Code of Practice, here's why
Meta Platforms, the parent company of Facebook, Instagram and WhatsApp, has officially refused to sign the European Union's newly released AI Code of Practice, a voluntary framework designed to enable companies to comply with the bloc's AI Act. Meta's Chief Global Affairs Officer Joel Kaplan announced the decision via a LinkedIn post. Meta's stance is rooted in concerns that the current framework of the EU's AI code could stifle innovation.

Meta declines to comply with the European Union's AI Code of Practice

In his LinkedIn post, Kaplan said that Europe might be "heading down the wrong path". He said that the code introduces "legal uncertainties for model developers" and imposes requirements that go "far beyond the scope of the AI Act".

"Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act," wrote Kaplan.

"Businesses and policymakers across Europe have spoken out against this regulation. Earlier this month, 44 of Europe's largest businesses – including Bosch, Siemens, SAP, Airbus and BNP – signed a letter calling for the Commission to 'Stop the Clock' in its implementation. We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them," added Kaplan.
What the European Union's AI Code of Practice requires

The EU's AI Code of Practice requires regular documentation updates for AI tools and bans training AI on pirated content. It also requires compliance with content owners' opt-out requests, systemic risk assessments and post-market monitoring. The AI Act's rules come into effect from August 2 and set strict requirements for general-purpose AI models such as Meta's Llama, OpenAI's ChatGPT and Google's Gemini. While the code is voluntary, signing it offers companies legal clarity and reduced regulatory scrutiny.


Irish Independent
3 days ago
- Business
Meta rejects EU's AI Code of Practice and claims Europe is in trouble
The move comes days before the next phase of the AI Act's enforcement comes into effect on August 2nd.

'Europe is heading down the wrong path on AI,' said Joel Kaplan, chief global affairs officer at Meta. 'We have carefully reviewed the European Commission's Code of Practice for general-purpose AI models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.'

The move comes days after OpenAI said that it would sign the Code of Practice, a voluntary set of principles intended to guide companies more clearly towards compliance with the EU's AI Act. While there is no legal requirement on companies to sign the Code, the Commission has indicated that signing up is an advantage for the purposes of signalling trust and a commitment to 'ethical AI', which could be weighed heavily in public procurement contracts across the EU. Companies that do not sign may also come under closer scrutiny by regulators for signs of non-compliance with the law.

Mr Kaplan claimed that European businesses are largely against the EU's AI Act, pointing to a letter signed earlier this month by 44 of Europe's largest businesses, including Bosch, Siemens, SAP, Airbus and BNP, calling for the Commission to pause the law's implementation. 'We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe and stunt European companies looking to build businesses on top of them,' Mr Kaplan said.

In Ireland, some startups have complained that the AI Act prevents them from planning products and services that can compete with companies outside the EU. The Dublin-based founder and CEO of an AI startup told the Irish Independent's Big Tech Show podcast that he felt forced by legal and regulatory uncertainty in Ireland and the EU to register the company he runs from Dublin outside the EU, in Singapore.
'In the circles I'm in, the perception of the EU is that they're going to be too restrictive and go too far when it comes to regulating AI,' said Dr Richard Blythman, a veteran startup creator whose startup facilitates decentralised infrastructure for AI. 'It's the lack of clarity about what might be permissible. AI startups building in Europe are at a disadvantage compared to AI startups building internationally because you are more restricted in terms of building on top of one of the leading open source models.'

Mr Blythman cited Meta's AI model, Llama, as an example of a major AI model with restrictions for Europeans because of EU rules. 'That's one of the leading open source models, but they released it in Europe with a much more restrictive licence compared to the US. Llama's model is quite permissive in letting you commercialise whatever you build on top of it. But in Europe, the licence was more restrictive so that you couldn't commercialise it, because Meta was worried about the implications of the EU AI Act. They were worried they may be held liable for any harmful applications that were built on top of it. Rather than take that risk, they just decided to basically shut down commercial activity on top of Llama so that it won't be held liable in Europe.'

Last September, Stripe co-founder Patrick Collison and an array of other senior European tech figures penned an open letter to EU regulators and policymakers warning that Europe faced industrial stagnation through overzealous AI regulation. The letter cited a critical report from former European Central Bank president Mario Draghi, which argued that over-regulation was strangling EU industry.

Mr Blythman said that the problem of regulatory uncertainty is a widespread fear among Irish startups. 'From what I know, there are quite a few startups that are thinking the same thing,' he said.
'I've spoken to other founders that are registering in the US and yeah, it's just really easy for companies to move around the world. It's not so much the specific details of certain regulation, it's just a perception that Europe is not a good place to build an AI startup. In our case, we didn't know the specifics of how we would be restricted, but the direction that regulation was going and the overall consensus on AI in Europe led us to believe that we would be restricted in future if we built here.'


Euractiv
4 days ago
- Business
Meta won't sign EU's Code of Practice for generative AI
Meta is the first large company to announce that it will not sign the EU's Code of Practice for general-purpose AI (GPAI) – a voluntary set of commitments that's supposed to support AI developers' compliance with the legally binding AI Act.

Meta's chief global affairs officer, Joel Kaplan, revealed the decision to eschew the GPAI Code in a post on LinkedIn, writing: 'Europe is heading down the wrong path on AI.' 'This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act,' he also argued.

The AI Act has faced a storm of criticism in the past few months as many companies have called on the Commission to delay or even rework it. The GPAI Code was at the centre of this discussion, as its publication was repeatedly delayed. The Commission released the final version on July 10. So far, France's Mistral AI and ChatGPT-maker OpenAI have announced they will sign the Code.

Responding to Meta's move, MEP Sergey Lagodinsky, Green co-rapporteur for the AI Act, pointed to Mistral and OpenAI both signing and said the final text had been written with GPAI providers in mind. 'I don't buy Meta's claim that the Code exceeds the AI Act,' he told Euractiv. (nl)

Business Standard
5 days ago
- Business
EU's code of practice sets key benchmarks for regulating AI development
Most countries have still not been able to catch up. India, for instance, has no dedicated AI law.

Business Standard Editorial Comment, Mumbai

As artificial intelligence (AI) evolves, governments are drafting rules to govern the way AI is built, trained, and deployed. Yet regulators across the world are struggling to keep pace. There is a growing understanding that AI, especially generative AI, doesn't recognise national borders. The European Union (EU) is leading the way in crafting a structured framework: its AI Act came into force in August last year. Meanwhile, the recently released Code of Practice for general-purpose AI sets important benchmarks on transparency, copyright compliance, and systemic risk management, helping firms comply with those norms and offering legal clarity to