Latest news with #ArtificialIntelligenceAct
Yahoo · 7 days ago · Business
The EU AI Act aims to create a level playing field for AI innovation. Here's what it is.
The European Union's Artificial Intelligence Act, known as the EU AI Act, has been described by the European Commission as 'the world's first comprehensive AI law.' After years in the making, it is progressively becoming a reality for the 450 million people living in the 27 countries that comprise the EU. The EU AI Act, however, is more than a European affair. It applies to companies both local and foreign, and it can affect both providers and deployers of AI systems; the European Commission cites examples of how it would apply to a developer of a CV screening tool, and to a bank that buys that tool. All of these parties now have a legal framework that sets the stage for their use of AI.

Why does the EU AI Act exist?

As usual with EU legislation, the EU AI Act exists to ensure a uniform legal framework applies to a certain topic across EU countries: this time, AI. Now that the regulation is in place, it should 'ensure the free movement, cross-border, of AI-based goods and services' without diverging local restrictions. With timely regulation, the EU seeks to create a level playing field across the region and foster trust, which could also create opportunities for emerging companies. However, the common framework it has adopted is not exactly permissive: despite the relatively early stage of widespread AI adoption in most sectors, the EU AI Act sets a high bar for what AI should and shouldn't do for society more broadly.

What is the purpose of the EU AI Act?

According to European lawmakers, the framework's main goal is to 'promote the uptake of human centric and trustworthy AI while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation.'
Yes, that's quite a mouthful, but it's worth parsing carefully. First, because a lot will depend on how you define 'human centric' and 'trustworthy' AI. And second, because it gives a good sense of the precarious balance between diverging goals: innovation vs. harm prevention, and the uptake of AI vs. environmental protection. As usual with EU legislation, the devil will be in the details.

How does the EU AI Act balance its different goals?

To balance harm prevention against the potential benefits of AI, the EU AI Act adopts a risk-based approach: banning a handful of 'unacceptable risk' use cases; flagging a set of 'high-risk' uses that call for tight regulation; and applying lighter obligations to 'limited risk' scenarios.

Has the EU AI Act come into effect?

Yes and no. The EU AI Act rollout started on August 1, 2024, but it comes into force through a series of staggered compliance deadlines. In most cases, it will also apply sooner to new entrants than to companies that already offer AI products and services in the EU. The first deadline came into effect on February 2, 2025, enforcing bans on a small number of prohibited uses of AI, such as the untargeted scraping of facial images from the internet or CCTV footage to build up or expand databases. Many others will follow, but unless the schedule changes, most provisions will apply by mid-2026.

What changed on August 2, 2025?

Since August 2, 2025, the EU AI Act applies to 'general-purpose AI models with systemic risk.' General-purpose AI (GPAI) models are AI models trained on large amounts of data that can be used for a wide range of tasks. That's where the risk element comes in. According to the EU AI Act, GPAI models can come with systemic risks, 'for example, through the lowering of barriers for chemical or biological weapons development, or unintended issues of control over autonomous [GPAI] models.'
Ahead of the deadline, the EU published guidelines for providers of GPAI models, which include both European companies and non-European players such as Anthropic, Google, Meta, and OpenAI. But since these companies already have models on the market, they will have until August 2, 2027, to comply, unlike new entrants.

Does the EU AI Act have teeth?

The EU AI Act comes with penalties that lawmakers wanted to be simultaneously 'effective, proportionate and dissuasive', even for large global players. Details will be laid down by EU countries, but the regulation sets out the overall spirit (penalties will vary depending on the deemed risk level) as well as thresholds for each level. Infringements involving prohibited AI applications draw the highest penalty: 'up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher).' The European Commission can also impose fines of up to €15 million or 3% of annual turnover on providers of GPAI models.

How fast do existing players intend to comply?

The voluntary GPAI code of practice, which includes commitments such as not training models on pirated content, is a good indicator of how companies may engage with the framework law before compliance becomes mandatory. In July 2025, Meta announced it wouldn't sign the voluntary GPAI code of practice meant to help such providers comply with the EU AI Act. Soon after, however, Google confirmed it would sign, despite reservations. Signatories so far include Aleph Alpha, Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, and OpenAI, among others. But as Google's example shows, signing does not equal a full-on endorsement.

Why have (some) tech companies been fighting these rules?

While stating in a blog post that Google would sign the voluntary GPAI code of practice, its president of global affairs, Kent Walker, still had reservations.
'We remain concerned that the AI Act and Code risk slowing Europe's development and deployment of AI,' he wrote. Meta was more radical, with its chief global affairs officer Joel Kaplan stating in a post on LinkedIn that 'Europe is heading down the wrong path on AI.' Calling the EU's implementation of the AI Act 'overreach,' he stated that the code of practice 'introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.'

European companies have expressed concerns as well. Arthur Mensch, the CEO of French AI champion Mistral AI, was part of a group of European CEOs who signed an open letter in July 2025 urging Brussels to 'stop the clock' for two years before key obligations of the EU AI Act came into force.

Will the schedule change?

In early July 2025, the European Union rebuffed lobbying efforts calling for a pause, saying it would stick to its timeline for implementing the EU AI Act. It went ahead with the August 2, 2025, deadline as planned, and we will update this story if anything changes.
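The penalty tiers described in this article all follow the same 'whichever is higher' rule: a fixed cap in euros, or a percentage of worldwide annual turnover. As a rough sketch (the function name and the turnover figures below are illustrative assumptions, not legal guidance), the applicable cap reduces to a maximum of two quantities:

```python
def penalty_cap_eur(annual_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the higher of a fixed cap or a share of worldwide annual
    turnover, mirroring the act's 'whichever is higher' wording."""
    return max(fixed_cap_eur, annual_turnover_eur * turnover_pct)

# Prohibited-practice tier: up to EUR 35 million or 7% of turnover.
# For a hypothetical company with EUR 1 billion in turnover, the
# turnover share dominates: 7% of 1 billion is 70 million.
print(penalty_cap_eur(1_000_000_000, 35_000_000, 0.07))  # 70000000.0

# GPAI-provider tier: up to EUR 15 million or 3% of turnover.
# At EUR 100 million in turnover, the fixed cap dominates.
print(penalty_cap_eur(100_000_000, 15_000_000, 0.03))  # 15000000.0
```

The structure explains how the fines can be 'dissuasive' at any scale: for smaller firms the fixed cap binds, while for large global players the turnover percentage takes over.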

ABC News · 7 days ago · Business
Productivity Commission says government must pause plan for 'mandatory guardrails' on AI
The Productivity Commission has opposed the introduction of tough laws to control AI being considered by the government, warning that its plan for "mandatory guardrails" should be paused until gaps in the law are properly identified.

The government has been working on a comprehensive response to the rapid rise of artificial intelligence tools, with one option being a dedicated AI Act that would set rules for all AI technologies based on their risk to society, including possible bans on the riskiest technologies. AI will be one of the central issues debated at the government's productivity round table late this month.

Ahead of that, the commission has cautioned against a heavy-handed approach from government, though it agreed gaps in existing law "exposed by AI" should be closed as soon as possible. Its warning comes just two days after former industry minister Ed Husic, who began the government's years-long review into AI laws, publicly backed the creation of a dedicated Artificial Intelligence Act. It also puts the commission at odds with unions, which have toughened their stance on AI ahead of this month's round table, saying not only that an AI Act is needed but that protections from job losses to AI should also be on the agenda.

In a report released ahead of the round table, the commission conservatively estimated AI could add more than $116 billion to Australia's economy over the next decade, or $4,400 per capita, driving a boost to productivity as large as, or even larger than, the one the internet and mobile phones delivered 20 years ago. But it warned that such a boost would be at risk if regulation was introduced as anything other than a last resort.

"Adding economy-wide regulations that specifically target AI could see Australia fall behind the curve, limiting a potentially enormous growth opportunity," commissioner Stephen King wrote.
"The Australian government should only apply the proposed 'mandatory guardrails for high-risk AI' in circumstances that lead to harms that cannot be mitigated by existing regulatory frameworks and where new technology-neutral regulation is not possible."

To give a sense of scale, the 20-year average for labour productivity growth sits at about 0.9 per cent a year, and the Productivity Commission expects AI alone could add about 4.3 per cent to labour productivity over the next decade. The past few years of inflation and cost-of-living pain have shown why that matters: when the economy grows, wages and living standards can grow too; when it stagnates, they fall backward.

That potential is why Treasurer Jim Chalmers has described AI as potentially "the most transformative technology in human history", and why it has become such a focus of the coming round table. However, the Productivity Commission said there was considerable uncertainty about how big AI's effect would prove to be: at the lower end it could provide just a tiny 0.05 per cent annual boost, or it could deliver a 1.3 percentage point annual lift, an almost unimaginable acceleration in growth.

The commission also acknowledged that the opportunity would not arrive without "painful transitions" for workers made redundant as sectors reshape around AI. It said that while the picture was uncertain, the World Economic Forum expected nine million jobs could be displaced globally, and the Australian government may have to consider support for retraining workers.

Responding to the Productivity Commission's report, Mr Chalmers said the government could ensure AI was a force for good by treating it as "an enabler, not an enemy". "We're optimistic about the role AI can play in strengthening our economy and lifting living standards for more Australians at the same time as we're realistic about the risks," Mr Chalmers said.
"AI will be a key concern of the economic reform round table I'm convening this month because it has major implications for economic resilience, productivity, and budget sustainability."

The AI industry has yet to win the public's trust. Repeated surveys have found widespread scepticism, with most respondents saying they fear AI will do more harm than good. The sector and the government know the public must be brought along for the potential of AI to be realised, and for Australia to keep pace with the world as it changes. But investors have warned the Productivity Commission that delays in a comprehensive government response to AI are encouraging a "wait-and-see" approach.

The federal government has said little about its AI response since former minister Ed Husic told reporters in January that it was in "the final stages" of developing mandatory guardrails. The treasurer wrote on Sunday that the government intended to regulate "as much as necessary" to protect Australians, "but as little as possible" to encourage the industry.

"It is not beyond us to chart a responsible middle course on AI, which maximises the benefits and manages the risks," he wrote.

The Hindu · 31-07-2025 · Business
Google to sign EU's AI code of practice despite concerns
Alphabet's Google will sign the European Union's code of practice, which aims to help companies comply with the bloc's landmark artificial intelligence rules, its global affairs president said in a blog post on Wednesday, though he voiced some concerns.

The voluntary code of practice, drawn up by 13 independent experts, aims to provide legal certainty to signatories on how to meet requirements under the Artificial Intelligence Act (AI Act), such as issuing summaries of the content used to train their general-purpose AI models and complying with EU copyright law.

"We do so with the hope that this code, as applied, will promote European citizens' and businesses' access to secure, first-rate AI tools as they become available," Kent Walker, who is also Alphabet's chief legal officer, said in the blog post. He added, however, that Google was concerned that the AI Act and code of practice risk slowing Europe's development and deployment of AI.

"In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe's competitiveness," Walker said.

Microsoft will likely sign the code, its president, Brad Smith, told Reuters earlier this month, while Meta Platforms declined to do so, citing legal uncertainties for model developers.



