
Latest news with #CodeofPracticeonGeneralPurposeAI

'Thank you for the copyright': ABBA legend warns against AI code

Euronews

22-05-2025



ABBA member Björn Ulvaeus warned MEPs in Brussels on Tuesday that he is concerned about 'proposals driven by Big Tech' that would weaken creative rights under the EU's AI Act. 'I am pro-tech, but I am concerned about current proposals that are being driven by the tech sector to weaken creative rights,' Ulvaeus told a hearing of the European Parliament's Committee on Culture and Education on Tuesday.

The comments from the singer-songwriter, who is president of the International Confederation of Societies of Authors and Composers (CISAC), add to concerns voiced in recent months by the creative industry, including publishers and rights holders, about the drafting process of a voluntary Code of Practice on General Purpose AI (GPAI) for large language models like ChatGPT under the AI Act. The European Commission appointed 13 experts to consider the issue last September, using plenary sessions and workshops to allow some 1,000 participants to share feedback. The draft texts published since then aim to help providers of AI models comply with the EU's AI rulebook, but publishers criticised their interplay with copyright rules, while US tech giants complained about their 'restrictive' and burdensome effects.

'The argument that AI can only be achieved if copyright is weakened is false and dangerous. AI should not be built on theft; it would be an historic abandonment of principles,' Ulvaeus said. 'The EU has been a champion of creative rights. But now we see that the Code ignores calls from the creative sector for transparency. What we want is for the EU to lead on AI regulation, not to backslide,' he said, adding that the implementation of the act should 'stay true to the original objective'.

The latest draft, due in early May, was delayed because the Commission received a large number of requests to leave the consultations open longer than originally planned. The aim is to publish it before the summer. On 2 August, the rules on GPAI tools enter into force.
The administration led by US President Donald Trump has said the EU's digital rules, including the Code, stifle innovation. The US Mission to the EU sent a letter to the EU executive pushing back against the Code in April. Similar concerns were voiced by US Big Tech companies: Meta's global policy chief, Joel Kaplan, said in February that the company would not sign the Code because it had issues with the text as it then stood. A senior official working at the Commission's AI Office told Euronews earlier this month, however, that US companies 'are very proactive' and that there is no sense that 'they are pulling back because of a change in the administration.'

The AI Act itself - which regulates AI tools according to the risk they pose to society - entered into force in August last year. Its provisions apply gradually until the Act becomes fully applicable in 2027. The EU executive can decide to formalise the Code under the AI Act through an implementing act. Sir Elton John recently described the UK government as "absolute losers" and said he felt "incredibly betrayed" over plans to exempt technology firms developing AI from copyright laws.

The European Commission will offer relief to small mid-cap companies burdened by the current scope of the General Data Protection Regulation (GDPR) in a rule-simplification package known as an Omnibus, to be published on Wednesday, according to a working document seen by Euronews. Currently, companies with fewer than 250 employees are exempt from certain record-keeping obligations under the data privacy rules to reduce their administrative costs; the Commission now proposes to extend this derogation to so-called small mid-cap companies, which can employ up to 500 people and have higher turnovers. Under the plan - the Commission's fourth such Omnibus - such companies would only have to keep a record of the processing of users' data when it is considered 'high risk', for example private medical information.
The change comes seven years after the GDPR took effect. Since then, the rulebook has shielded consumer data from US tech giants but is also perceived as burdensome for smaller and mid-sized companies that often lack the means to hire data protection lawyers. The biggest fine issued under the rules so far is €1.2 billion against US tech giant Meta: the Irish data protection authority fined the company in 2023 for invalid data transfers. Although fines are generally lower for smaller businesses, at up to €20 million or 4% of annual turnover they remain significant. In the Netherlands, for example, VoetbalTV, a video platform for amateur football games, was fined €575,000 by the Dutch privacy regulator in 2018. Although the company appealed and the court overturned the fine, it had to file for bankruptcy.

Both EU lawmaker Axel Voss (Germany/EPP), who was involved in steering the legislation through the European Parliament, and Austrian privacy activist Max Schrems, whose organisation NOYB has filed numerous data protection complaints with regulators, called for different rules for smaller companies earlier this year. Under the plan, 90% of the businesses concerned - small retailers and manufacturers - would face only minor compliance tasks: no in-house data protection officer, no excessive documentation, and lower administrative fines, capped at €500,000. Voss said his proposal would not weaken the EU's privacy standards but make them 'more enforceable, and more proportionate'.

Similar calls are coming from the member states: the new German government stressed in its coalition plan that it will work at EU level to ensure that 'non-commercial activities (for example, associations), small and medium-sized enterprises, and low-risk data processing are exempt from the scope of the GDPR.' By contrast, civil society and consumer groups have warned that the Commission's plan to ease GDPR rules could have unintended consequences.
On Tuesday, privacy advocacy group EDRi stated in an open letter that the change risks 'weakening key accountability safeguards' by making data protection obligations depend on company size rather than the actual risk to people's rights. It also fears this could lead to further pressure to roll back other parts of the GDPR. Consumer advocates share similar concerns: in a letter from late April, pan-European consumer group BEUC warned that even small companies can cause serious harm through data breaches. It argued that using headcount or turnover as a basis for exemptions could create legal uncertainty and go against EU fundamental rights. Both groups say the focus should instead be on better enforcement of existing rules and more practical support for small companies.

Meanwhile, reforms of the data privacy law are under negotiation between the Council and the European Parliament. A new round of political discussions on the GDPR Procedural Regulation is expected to take place on Wednesday, as EU institutions attempt to finalise a long-awaited deal to improve cooperation between national data protection authorities. The regulation is meant to address delays and inconsistencies in how cross-border cases are handled under the GDPR by harmonising procedures and timelines. According to experts familiar with the file, one of the main sticking points is whether to introduce binding deadlines for national authorities to act on complaints. While the Parliament has pushed for clearer timelines to speed up enforcement, some member states argue that fixed deadlines could overwhelm authorities and increase legal risks. This change is, however, not expected to affect the Commission's fourth Omnibus package.

US companies still engaging with AI Code despite Trump: EU official

Euronews

08-05-2025



US companies do not seem to have changed their attitude towards the EU's Code of Practice on General Purpose AI – which should help providers of AI models comply with the EU's AI Act – since the change in US administration, an official working in the European Commission's AI Office told Euronews.

The drafting process of the voluntary Code is still ongoing after the Commission appointed 13 experts last September, using plenary sessions and workshops to allow some 1,000 participants to share feedback. The final version was planned for 2 May but has not yet been published. The official said the deadline was not met because the Commission 'received a number of requests to leave the consultations open longer than originally planned.' The process has been criticised throughout, by Big Tech companies but also by publishers and rightsholders, whose main concern is that the rules violate the EU's copyright laws.

The US government's Mission to the EU sent a letter to the EU executive pushing back against the Code in April. The administration led by Republican President Donald Trump has been critical of the EU's digital rules, claiming they stifle innovation. Meta's global policy chief, Joel Kaplan, said in February that the company would not sign the Code because it had issues with the then-latest version. The document has been updated since.

The EU official said that US companies 'are very proactive' and that there is no sense that 'they are pulling back because of a change in the administration.' 'I cannot say that the attitude of the companies engaging the code of practice has changed because of the change in the administration. I did not perceive that in the process.'

The plan is still to get the rules out before 2 August, when the rules on GPAI tools - such as large language models like ChatGPT - enter into force. The AI Act itself - which regulates AI tools according to the risk they pose to society - entered into force in August last year.
Its provisions apply gradually until the Act becomes fully applicable in 2027. The Commission will assess companies' intentions to sign the Code and carry out an adequacy assessment with the member states. The EU executive wants to know 'by August' whether the institutions also accept the Code; it can then decide to formalise it through an implementing act.

A report by Corporate Europe Observatory (CEO) and LobbyControl published last week suggests that Big Tech companies put pressure on the Commission to water down the Code of Practice and that they 'enjoyed structural advantages' in the drafting process. In a reaction, the Commission said all participants "had the same opportunity to engage in the process through the same channels." The EU official could not say whether companies are likely to sign, but stressed that it is important that they do. An alternative option, in which businesses commit only to specific parts of the Code, has not been put on the table yet. If that becomes a reality, however, they would need to 'fulfil their obligation in a different way,' the official said.

The last global gathering on artificial intelligence (AI), the Paris AI Action Summit in February, saw countries divided, notably after the US and UK refused to sign a joint declaration for AI that is "open, inclusive, transparent, ethical, safe, secure, and trustworthy". AI experts at the time criticised the declaration for not going far enough and being "devoid of any meaning" - the reason those countries cited for not signing the pact, rather than opposition to AI safety. The next global AI summit will be held in India next year, but rather than wait until then, Singapore's government held a conference called the International Scientific Exchange on AI Safety on 26 April. "Paris [AI Summit] left a misguided impression that people don't agree about AI safety," said Max Tegmark, MIT professor and contributor to the Singapore report.
"The Singapore government was clever to say yes, there is an agreement," he told Euronews Next. Representatives from leading AI companies, such as OpenAI, Meta, Google DeepMind, and Anthropic, as well as leaders from 11 countries, including the US, China, and the EU, attended. The result of the conference was published in a paper released on Thursday called 'The Singapore Consensus on Global AI Safety Research Priorities'. The document lists research proposals to ensure that AI does not become dangerous to humanity.

It identifies three aspects of promoting safe AI: assessing, developing trustworthy, and controlling AI systems - including large language models (LLMs), multimodal models that can work with multiple types of data (often text, images, and video), and AI agents.

On assessment, the main research the document calls for is the development of risk thresholds to determine when intervention is needed, techniques for studying current impacts and forecasting future implications, and methods for rigorous testing and evaluation of AI systems. Key areas of research listed include improving the validity and precision of AI model assessments and finding methods for testing dangerous behaviours, including scenarios in which AI operates outside human control.

On trustworthiness, the paper calls for a definition of the boundary between acceptable and unacceptable behaviours. It also says that AI systems should be built with truthful and honest systems and datasets, and that once built, they should be checked against agreed safety standards, such as tests against jailbreaking.

The final area the paper advocates is the control and societal resilience of AI systems. This includes monitoring, kill switches, and non-agentic AI serving as guardrails for agentic systems. It also calls for human-centric oversight frameworks.
As for societal resilience, the paper said that infrastructure against AI-enabled disruptions should be strengthened, and it argued that coordination mechanisms for incident responses should be developed. The release of the report comes as the geopolitical race for AI intensifies and AI companies rush out their latest models to beat the competition.

However, Xue Lan, Dean of Tsinghua University, who attended the conference, said: "In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future".

Tegmark added that there is a consensus for AI safety between governments and tech firms, as it is in everyone's interest. "OpenAI, Anthropic, and all these companies sent people to the Singapore conference; they want to share their safety concerns, and they don't have to share their secret sauce," he said. "Rival governments also don't want nuclear blow-ups in opposing countries; it's not in their interest," he added. Tegmark hopes that before the next AI summit in India, governments will treat AI like any other powerful tech industry, such as biotech, whereby there are safety standards in each country and new drugs are required to pass certain trials. "I'm feeling much more optimistic about the next summit now than after Paris," Tegmark said.
