
Latest news with #Code

Meta says it won't sign the EU's AI code of practice

Engadget

21 hours ago


Meta said on Friday that it won't sign the European Union's new AI code of practice. The guidelines provide a framework for the EU's AI Act, which regulates companies operating in the European Union. The EU's code of practice is voluntary, so Meta was under no legal obligation to sign it. Yet Meta's Chief Global Affairs Officer, Joel Kaplan, made a point to publicly knock the guidelines on Friday. He described the code as "over-reach."

"Europe is heading down the wrong path on AI," Kaplan posted in a statement. "We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act."

So, why kick up a (public) fuss about not signing something Meta was under no obligation to sign? Well, this isn't the first time the company has waged a PR battle against Europe's AI regulations. It previously called the AI Act "unpredictable," claiming "it goes too far" and is "hampering innovation and holding back developers." In February, Meta's public policy director said, "The net result of all of that is that products get delayed or get watered down and European citizens and consumers suffer."

Outmuscling the EU may seem like a more attainable goal to Meta, given that it has an anti-regulation ally in the White House. In April, President Trump pressured the EU to abandon the AI Act. He described the rules as "a form of taxation."

[Image: Mark Zuckerberg at Trump's inauguration in January (Pool via Getty Images)]

The EU published its code of practice on July 10. It includes tangible guidelines to help companies follow the AI Act. Among other things, the code bans companies from training AI on pirated materials and requires them to respect requests from writers and artists to omit their work from training data. It also requires developers to provide regularly updated documentation describing their AI features.

Although signing the code of practice is voluntary, doing so has its perks. Agreeing to it can give companies more legal protection against future accusations of breaching the AI Act. Thomas Regnier, the European Commission's spokesperson for digital matters, added more color in a statement to Bloomberg. He said that AI providers who don't sign it "will have to demonstrate other means of compliance." As a consequence, they "may be exposed to more regulatory scrutiny."

Companies that violate the AI Act can face hefty penalties. The European Commission can impose fines of up to seven percent of a company's annual sales. The penalties are a lower three percent for those developing advanced AI models.

Meta rejects EU's AI Code of Practice and claims Europe is in trouble

Irish Independent

a day ago


The move comes days before the next phase of the AI Act's enforcement comes into effect on August 2nd.

'Europe is heading down the wrong path on AI,' said Joel Kaplan, chief global affairs officer at Meta. 'We have carefully reviewed the European Commission's Code of Practice for general-purpose AI models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.'

The move comes days after OpenAI said that it would sign the Code of Practice, a voluntary set of principles intended to guide companies more clearly towards compliance with the EU's AI Act. While there is no legal requirement on companies to sign the Code, the Commission has indicated that signing up is an advantage for the purposes of signalling trust and a commitment to 'ethical AI', which could be weighed heavily in public procurement contracts across the EU. Companies that do not sign may also come under closer scrutiny by regulators for signs of non-compliance with the law.

Mr Kaplan claimed that European businesses are largely against the EU's AI Act, pointing to a letter signed earlier this month by 44 of Europe's largest businesses, including Bosch, Siemens, SAP, Airbus and BNP, calling for the Commission to pause the law's implementation. 'We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe and stunt European companies looking to build businesses on top of them,' Mr Kaplan said.

In Ireland, some startups have complained that the AI Act prevents them from planning products and services that can compete with companies outside the EU. One Dublin-based founder and CEO of an AI startup told the Irish Independent's Big Tech Show podcast that he felt forced by legal and regulatory uncertainty in Ireland and the EU to register the company he runs from Dublin outside the EU, in Singapore.

'In the circles I'm in, the perception of the EU is that they're going to be too restrictive and go too far when it comes to regulating AI,' said Dr Richard Blythman, a veteran startup founder whose startup facilitates decentralised infrastructure for AI. 'It's the lack of clarity about what might be permissible. AI startups building in Europe are at a disadvantage compared to AI startups building internationally because you are more restricted in terms of building on top of one of the leading open source models.'

Mr Blythman cited Meta's AI model, Llama, as an example of a major AI model with restrictions for Europeans because of EU rules. 'That's one of the leading open source models, but they released it in Europe with a much more restrictive licence compared to the US. Llama's model is quite permissive in letting you commercialise whatever you build on top of it. But in Europe, the licence was more restrictive so that you couldn't commercialise it, because Meta was worried about the implications of the EU AI Act. They were worried they may be held liable for any harmful applications that were built on top of it. Rather than take that risk, they just decided to basically shut down commercial activity on top of Llama so that they won't be held liable in Europe.'

Last September, Stripe co-founder Patrick Collison and an array of other senior European tech figures penned an open letter to EU regulators and policymakers warning that Europe faced industrial stagnation through overzealous AI regulation. The letter cited a critical report from former European Central Bank president Mario Draghi which argued that over-regulation was strangling EU industry.

Mr Blythman said that the problem of regulatory uncertainty is a widespread fear among Irish startups. 'From what I know, there are quite a few startups that are thinking the same thing,' he said. 'I've spoken to other founders that are registering in the US and yeah, it's just really easy for companies to move around the world. It's not so much the specific details of certain regulation, it's just a perception that Europe is not a good place to build an AI startup. In our case, we didn't know the specifics of how we would be restricted, but the direction that regulation was going and the overall consensus on AI in Europe led us to believe that we would be restricted in future if we built here.'

Meta won't sign EU's Code of Practice for generative AI

Euractiv

a day ago


Meta is the first large company to announce that it will not sign the EU's Code of Practice for general purpose AIs (GPAIs) – a voluntary set of commitments that's supposed to support AI developers' compliance with the legally binding AI Act.

Meta's chief global affairs officer, Joel Kaplan, revealed the decision to eschew the GPAI Code in a post on LinkedIn – writing: 'Europe is heading down the wrong path on AI.'

'This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act,' he also argued.

The AI Act has faced a storm of criticism in the past few months as many companies have called on the Commission to delay or even rework it. The GPAI Code was at the centre of this discussion as its publication was repeatedly delayed. The Commission released the final version on July 10. So far, France's Mistral AI and ChatGPT-maker OpenAI have announced they will sign the Code.

Responding to Meta's move, MEP Sergey Lagodinsky, Green co-rapporteur for the AI Act, pointed to Mistral and OpenAI both signing and said the final text had been written with GPAI providers in mind. 'I don't buy Meta's claim that the Code exceeds the AI Act,' he told Euractiv.

This is a developing story... refresh for updates. (nl)

Meta rebuffs EU's AI Code of Practice

Euronews

a day ago


US social media company Meta will not sign the EU's AI Code of Practice on General Purpose AI (GPAI), the company's Chief Global Affairs Officer Joel Kaplan said in a statement on Friday.

'Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for GPAI models and Meta won't be signing it,' he said, adding that the Code 'introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.'

The Commission last week released the Code, a voluntary set of rules that touches on transparency, copyright, and safety and security issues, aiming to help providers of AI models such as ChatGPT and Gemini comply with the AI Act. Companies that sign up are expected to be compliant with the Act and can anticipate more legal certainty, while others will face more inspections.

The AI Act's provisions affecting GPAI systems enter into force on 2 August. It will take another two years before the AI Act, which regulates AI systems according to the risk they pose to society, becomes fully applicable. OpenAI, the maker of ChatGPT, has said it will sign up to the Code once it's ready.

Criticism from tech giants

The drafting process of the Code was criticised by Big Tech companies as well as CEOs of European companies, who claim they need more time to comply with the rules. 'We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them,' Kaplan said.

The Code requires sign-off by EU member states, which are represented in a subgroup of the AI Board, as well as by the Commission's own AI Office. The member states are expected to give a green light as early as 22 July. The EU executive said it will publish the list of signatories on 1 August. On Friday the Commission published further guidance to help companies comply with the GPAI rules.

Combatting anti-Indigenous discrimination and harassment in retail settings

Cision Canada

2 days ago


TORONTO, July 17, 2025 /CNW/ - Today, the Ontario Human Rights Commission (OHRC) and the Indigenous Human Rights Program (a partnership between Pro Bono Students Canada (PBSC) and the Ontario Federation of Indigenous Friendship Centres (OFIFC)) released a guide and two fact sheets to address anti-Indigenous discrimination and harassment in retail. These resources provide practical human rights information to retailers and Indigenous people (shoppers or retail staff members) on recognizing, preventing, and remedying anti-Indigenous discrimination and harassment in retail settings.

Indigenous people often experience racism and consumer racial profiling in different retail settings (for example, department stores, supermarkets, pharmacies, convenience stores, malls, shopping centres, and independent stores). The guide and fact sheets explore experiences that may amount to racial discrimination, harassment, or both, and can violate Ontario's Human Rights Code (the Code).

"The Code requires retailers to ensure a safe, non-discriminatory environment for people to shop, buy products, or receive personal services. The OHRC is aware that Indigenous shoppers often face racial profiling, being labeled as 'suspicious' or potential shoplifters based on racist stereotypes. Frequently, they experience verbal and physical mistreatment, and receive lower-quality service once identified as Indigenous, particularly when First Nations customers show their Status cards. The guidance tool released today is intended for duty-holders and rights-holders. Its aim is to clarify their responsibilities and help them maintain safe retail spaces for Indigenous people and a safe and welcoming shopping environment for everyone," said Patricia DeGuire, Chief Commissioner of the Ontario Human Rights Commission.

The guide and fact sheets offer comprehensive information about the protections provided by the Code, how Indigenous people experience discrimination and harassment in retail settings, and suggested practices to help prevent and address discriminatory actions.

"PBSC is grateful for the OHRC's longstanding partnership with the Indigenous Human Rights Program, including our collaboration on high-quality educational resources addressing discrimination against Indigenous people in retail settings," said Jason Goodman, Former Director, Family Justice, Pro Bono Students Canada. "These resources will be a valuable support within the program's Human Rights Clinics and, more broadly, raise awareness and empower action against these too-common injustices across the province."

The two fact sheets summarize key information from the guide to help rights-holders (Indigenous consumers) on one hand, and duty-holders (retailers) on the other, understand:

  • What anti-Indigenous discrimination and harassment may look like in retail settings.
  • What to do if someone witnesses or experiences anti-Indigenous discrimination and harassment.
  • What to do to prevent discrimination in violation of the Code.

"It is unfortunate that anti-Indigenous racism continues to be a common experience for many in our community," said Sean Longboat, Co-Executive Director, OFIFC. "It is hoped that by building awareness about anti-Indigenous racism – what it is, how to prevent it, what to do if you see or experience it – we will create a safer, more equitable society for Indigenous people to live and thrive."

Our organizations will continue to promote this guide and fact sheets to help prevent anti-Indigenous discrimination in retail settings, so we can create a more inclusive Ontario.

Quick Facts

  • A guide and two fact sheets have been jointly released by the OHRC, PBSC, and OFIFC to address anti-Indigenous discrimination in retail settings.
  • The OHRC collaborated with OFIFC, University of Toronto Indigenous law students and staff from PBSC on the development of these resources.
  • Retail settings include department stores, grocery stores, supermarkets, pharmacies, convenience stores, malls, shopping centres, and independent stores.
  • Neither the guide nor the fact sheets should be considered legal advice.

Guide

  • Identifying and addressing anti-Indigenous discrimination in retail settings

Fact Sheets

  • Recognizing anti-Indigenous discrimination and harassment in retail settings
  • Preventing anti-Indigenous discrimination and harassment in retail settings

SOURCE Ontario Human Rights Commission
