
Latest news with #AIOffice

EU AI Office Issues Next Guidance on Foundation Models, Downstream Compliance Strategies

Time Business News

8 hours ago


Vancouver, Canada — The European Union's AI Office has published its most detailed guidance yet on the regulatory expectations for foundation models under the EU Artificial Intelligence Act (AI Act), marking a pivotal moment in the staged rollout of the bloc's sweeping AI framework. The guidance, aimed at both upstream developers and downstream deployers, clarifies that compliance responsibilities extend through the entire AI value chain, with an emphasis on high-risk applications such as identity verification, Know Your Customer (KYC) processes, fraud detection, and biometric authentication.

While foundation models have been widely celebrated for their adaptability and efficiency, the EU AI Office has made it clear that their general-purpose nature is no excuse for regulatory gaps. Whether these models are developed by a major U.S. tech firm, an EU-based AI lab, or an open-source consortium, any deployment in high-risk contexts within the EU will be subject to strict performance, transparency, and governance obligations.

The AI Office's latest guidance is particularly significant for regulated industries, where downstream services integrate foundation models into decision-making processes that affect individuals' legal rights, financial access, or physical security. In these scenarios, compliance is not just a matter of upstream assurances; it requires active oversight and testing by downstream deployers.

Understanding the EU's Regulatory Position on Foundation Models

Foundation models are large-scale, pre-trained AI systems that can be adapted for a wide range of applications. They form the backbone of many downstream services, from automated loan assessments to biometric border controls. Under the AI Act, the developers of these models must meet transparency and documentation requirements, while the deployers who adapt them for specific purposes, particularly in high-risk sectors, must conduct their own risk assessments, conformity checks, and monitoring.

The EU AI Office has now formally stated that compliance is a shared responsibility: upstream developers cannot 'wash their hands' of downstream risks, and downstream deployers cannot rely solely on vendor claims of compliance. This shared-responsibility framework is intended to close loopholes where responsibility could otherwise be passed between parties, leading to gaps in oversight. It mirrors principles in other EU regulatory frameworks, such as the GDPR's joint-controller obligations, and is expected to fundamentally change how AI model procurement, integration, and lifecycle management are approached in the EU market.

Key Elements of the New Guidance

1. Mandatory Technical Documentation Transfer: Developers must provide downstream deployers with detailed information about a foundation model's architecture, training methodology, dataset sources, risk profiles, and performance metrics across relevant demographic groups. Downstream deployers must keep these records, adapt them to their operational context, and include them in their conformity assessment filings.

2. No Liability Laundering Through Contracts: While contracts may allocate operational responsibilities, they cannot eliminate legal obligations under the AI Act. Both parties remain directly accountable to regulators.

3. Context-Specific Testing Requirements: Even if a foundation model has been tested by its developer, downstream deployers must test it under real-world conditions relevant to their application. For example, a model used for verifying ID documents must be tested with authentic local document types, lighting conditions, and demographic variations.

4. Continuous Monitoring and Drift Detection: Deployers must monitor for model drift (changes in performance over time), especially when models are updated or retrained by the upstream developer. A minimal monitoring sketch follows this list.

5. Public AI Database Registration: High-risk deployments of foundation models must be listed in the EU's public AI database, including details on both upstream and downstream entities.
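The drift-detection obligation in item 4 lends itself to a simple operational pattern: compare a rolling accuracy estimate from production decisions against the baseline established during conformity testing. The sketch below shows one minimal way a deployer might do this, assuming a human-verified ground-truth signal is available; the class name, threshold, and window size are illustrative, not taken from the guidance.

```python
from collections import deque

class DriftMonitor:
    """Tracks a rolling accuracy rate and flags degradation against a
    baseline established during initial conformity testing."""

    def __init__(self, baseline_accuracy: float,
                 tolerance: float = 0.02, window: int = 1000):
        self.baseline = baseline_accuracy     # accuracy from validated local tests
        self.tolerance = tolerance            # allowed drop before alerting
        self.outcomes = deque(maxlen=window)  # most recent verified outcomes

    def record(self, correct: bool) -> None:
        """Log one production decision that a human reviewer has verified."""
        self.outcomes.append(correct)

    def drifted(self) -> bool:
        """True once the rolling accuracy falls more than `tolerance`
        below the baseline (and the window has filled)."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data for a stable estimate yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

# Usage: feed in verified decisions; re-validate and document if drift fires.
monitor = DriftMonitor(baseline_accuracy=0.98)
monitor.record(correct=True)
if monitor.drifted():
    print("Model drift detected: trigger re-testing and notify compliance.")
```

In practice a deployer would track this per demographic group as well, since an aggregate accuracy figure can mask degradation affecting a specific subgroup.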
Sector-Specific Compliance Implications

Financial Services: Banks using AI-driven fraud detection or credit scoring models must integrate AI governance checks into their vendor risk management processes. Procurement teams will need to request complete compliance documentation and ensure that models are tested for fairness, explainability, and reliability under operational conditions.

Identity and KYC Providers: These providers are in the direct path of enforcement, as identity verification is a designated high-risk use case. A KYC platform adapting a foundation model for biometric face matching will need to run localized accuracy tests, integrate human-in-the-loop reviews for borderline cases, and ensure that demographic bias is eliminated or mitigated.

E-Commerce: Platforms using AI to verify seller identities, detect counterfeit goods, or flag fraudulent transactions must confirm that the models they use meet the AI Act's transparency and testing requirements.

Border and Travel Security: Government agencies and airlines using foundation models for passenger verification must confirm that systems work reliably across all demographic groups, avoid over-reliance on a single vendor's performance claims, and maintain independent audit logs.

Case Study 1: Cross-Border Banking and Shared Liability

A large EU-based bank uses a biometric verification service that incorporates a U.S.-developed foundation model. The bank's vendor provides a compliance statement, but under the new guidance the bank must independently validate the model's accuracy and fairness in its operational environment, including for customers in rural EU regions whose identity documents may be older or less machine-readable.

Case Study 2: E-Commerce Fraud Detection

A major e-commerce platform integrates a foundation language model to scan communications between buyers and sellers for scam patterns. While the upstream developer provides a list of known biases and error rates, the platform must conduct its own testing to ensure that cultural and linguistic differences across EU member states do not lead to false positives that unfairly penalize legitimate sellers.

Strategic Recommendations from Amicus International Consulting

For Downstream Deployers

Maintain a Model Registry: Track all foundation models in use, their origins, versions, and compliance documentation (see the sketch after this list).

Integrate AI Governance into Procurement: Require AI Act compliance proof as part of vendor onboarding.

Test Locally, Not Just Globally: Conduct independent testing tailored to your operational jurisdiction and demographic profile.

Create Feedback Loops: Develop processes that enable customers and end users to challenge or appeal AI-driven decisions.
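To make the registry recommendation concrete, here is a minimal sketch of the kind of record a deployer might keep for each foundation model in use. The fields mirror the tracking points named above (origin, version, compliance documentation); all names, including the vendor, are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FoundationModelRecord:
    """One entry in a hypothetical internal model registry."""
    name: str                     # internal identifier for the integration
    upstream_provider: str        # who developed the foundation model
    version: str                  # exact version currently deployed
    high_risk_use_case: str       # why it falls under AI Act scrutiny
    compliance_docs: list[str] = field(default_factory=list)  # vendor packet IDs
    last_local_test: date | None = None   # most recent context-specific test
    registered_in_eu_db: bool = False     # listed in the public AI database?

registry: dict[str, FoundationModelRecord] = {}

def register(record: FoundationModelRecord) -> None:
    """Add or update an entry, keyed by name and version so that an
    upstream model update creates a distinct, auditable record."""
    registry[f"{record.name}:{record.version}"] = record

register(FoundationModelRecord(
    name="kyc-face-match",                # hypothetical integration
    upstream_provider="ExampleAI Inc.",   # hypothetical vendor
    version="2.3.1",
    high_risk_use_case="biometric identity verification (KYC)",
    compliance_docs=["vendor-packet-2025-06.pdf"],
    last_local_test=date(2025, 7, 1),
    registered_in_eu_db=True,
))
```

Keying entries on name plus version means an upstream retrain shows up as a new record rather than silently replacing the old one, which matches the guidance's emphasis on tracking model updates.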
For Upstream Developers

Standardize Documentation: Provide a compliance packet for downstream partners containing all required technical and risk information.

Support Downstream Testing: Offer tools and datasets to help deployers run localized performance checks.

Communicate Updates Proactively: Notify downstream clients when retraining or model updates could alter compliance status.

Geopolitical and Competitive Context

The EU's foundation model guidance is part of a broader trend in global AI regulation. The U.S. and UK are focusing on voluntary frameworks, while Singapore and Canada have begun shaping mandatory compliance rules. However, none currently match the AI Act's enforceable obligations for foundation models. This creates a competitive advantage for companies that meet EU standards early, as they will be prepared for similar frameworks elsewhere. Conversely, vendors who cannot meet the EU's documentation and testing requirements risk losing access to one of the world's largest markets.

Long-Term Outlook

Foundation models are likely to remain at the center of both innovation and regulatory scrutiny. As the AI Act moves toward full enforcement in 2026, the EU AI Office is expected to issue additional guidance refining the shared-responsibility model and possibly expanding obligations for models with systemic impact. For identity verification, KYC, and financial services, the guidance means compliance work must start now, not in 2026. The ability to demonstrate early adoption of AI Act principles could serve as both a regulatory shield and a market differentiator.

Amicus International Consulting advises all affected businesses to treat the AI Office's guidance as a baseline for global AI governance strategy. The most resilient organizations will integrate upstream and downstream compliance into a single operational framework, ensuring that no part of the AI lifecycle is left without oversight.

Contact Information

Phone: +1 (604) 200-5402
Email: info@
Website:

EU's AI Code ready for companies to sign next week

Euronews

14-07-2025


Member states' formal approval of the Code of Practice for General Purpose AI (GPAI) could come as early as 22 July, paving the way for providers of AI systems to sign up, sources familiar with the matter told Euronews. That would be just days before the AI Act's provisions affecting GPAI systems enter into force on 2 August.

The European Commission last week presented the Code, a voluntary set of rules drafted by experts appointed by the EU executive, aiming to help providers of AI models such as ChatGPT and Gemini comply with the AI Act. Companies that sign up are expected to be compliant with the AI Act and to enjoy greater legal certainty; those that do not will face more inspections.

The Code requires sign-off by EU member states, which are represented in a subgroup of the AI Board, as well as by the Commission's own AI Office. The 27 EU countries are expected to finalise their assessment of the Code next week, and if the Commission also completes its assessment by then, providers can formally sign up.

The document, which was supposed to come out in May, faced delays and heavy criticism. Tech giants as well as publishers and rights-holders are concerned that the rules violate the EU's copyright laws and restrict innovation. The EU's AI Act, which regulates AI systems according to the risk they pose to society, has been coming into force in stages since August last year.

Mixed reactions

In the meantime, OpenAI, the company behind ChatGPT, has said it will sign up to the Code once it is ready. 'The Code of Practice opens the door for Europe to move forward with the EU AI Continent Action Plan that was announced in April—and to build on the impact of AI that is already felt today,' the statement said.

The publication drew mixed reactions, with consumer group BEUC and the Center for Democracy and Technology (CDT) hesitant about the final version of the Code. BEUC Senior Legal Officer Cláudio Teixeira called the development a 'step in the right direction', but underlined that voluntary initiatives like the Code of Practice 'can be no substitute for binding EU legislation: they must complement and reinforce, not dilute, the law's core protections for consumers.'

CDT Europe's Laura Lazaro Cabrera said the final draft 'stops short of requiring their in-depth assessment and mitigation in all cases.' 'The incentive for providers to robustly identify these risks will only be as strong as the AI Office's commitment to enforce a comprehensive, good-faith approach,' she said.

EU unveils AI code of practice to help businesses comply with bloc's rules

The Hindu

11-07-2025


The European Union on Thursday released a code of practice on general purpose artificial intelligence to help thousands of businesses in the 27-nation bloc that use the technology comply with its landmark AI rule book. The code is voluntary and complements the EU's AI Act, a comprehensive set of regulations that was approved last year and is taking effect in phases.

The code focuses on three areas: transparency requirements for providers of AI models that are looking to integrate them into their products; copyright protections; and safety and security of the most advanced AI systems. The AI Act's rules on general purpose artificial intelligence are set to take force on August 2, though the bloc's AI Office, under its executive Commission, won't start enforcing them for at least a year.

General purpose AI, exemplified by chatbots like OpenAI's ChatGPT, can do many different tasks and underpins many of the AI systems that companies are using across the EU. Under the AI Act, uses of artificial intelligence face different levels of scrutiny depending on the level of risk they pose, with some uses deemed unacceptable and banned entirely. Violations could draw fines of up to 35 million euros ($41 million) or 7% of a company's global revenue (see the sketch at the end of this article).

Some Big Tech companies such as Meta have resisted the regulations, saying they're unworkable, and U.S. Vice President JD Vance, speaking at a Paris summit in February, criticised 'excessive regulation' of AI, warning it could kill 'a transformative industry just as it's taking off.' More recently, more than 40 European companies, including Airbus, Mercedes-Benz, Philips and French AI startup Mistral, urged the bloc in an open letter to postpone the regulations for two years. They say more time is needed to simplify 'unclear, overlapping and increasingly complex EU regulations' that put the continent's competitiveness in the global AI race at risk.

There was no sign that Brussels was prepared to stop the clock. 'Today's publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent,' the commission's executive vice president for tech sovereignty, security and democracy, Henna Virkkunen, said in a news release.
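For a sense of what those penalty caps mean in practice, the sketch below computes a company's maximum exposure, assuming the higher of the two caps applies (the reading commonly given to the Act's penalty provisions); the revenue figure is invented for illustration.

```python
def max_fine_eur(global_revenue_eur: float) -> float:
    """Maximum penalty for the most serious AI Act violations:
    a 35 million euro flat cap or 7% of global revenue,
    assuming the higher of the two applies."""
    return max(35_000_000, 0.07 * global_revenue_eur)

# A hypothetical company with 2 billion euros in global revenue:
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000 -> the 7% cap dominates
```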

EU Commission still lacking AI scientific advisor despite applications

Euronews

16-05-2025


The European Commission hasn't yet found a lead scientific adviser for its AI Office despite receiving 'dozens of applications', even as the rules on General-Purpose AI (GPAI) kick in on 2 August. The recruitment process is still ongoing even though the vacancy was open between November and December last year, a senior official working at the AI Office told Euronews.

The adviser's role will be to 'ensure an advanced level of scientific understanding on General-Purpose AI (GPAI).' 'They will lead the scientific approach on General-Purpose AI on all aspects of the work of the AI Office, ensuring scientific rigor and integrity of AI initiatives,' the ad said, adding that 'they will particularly focus on the testing and evaluation of General-Purpose AI models, in close collaboration with the "Safety Unit" of the AI Office.'

The Commission's work on GPAI – such as large language models like ChatGPT – is also still ongoing: the 2 May deadline for publication of the voluntary Code of Practice on GPAI, which should help providers of AI models comply with the EU's AI Act, was not met. The EU executive appointed thirteen experts last September, using plenary sessions and workshops to allow some 1,000 participants to share feedback. An official told Euronews the process is delayed because the Commission 'received a number of requests to leave the consultations open longer than originally planned.'

The previous texts were criticised by publishers over their interplay with copyright rules, and by US Big Tech companies for being 'restrictive' and burdensome to innovation. Thomas Regnier, a Commission spokesperson on digital matters, said at a press conference on Friday that the aim is to publish the latest draft 'before the summer'.

On 2 August, the rules on GPAI tools enter into force. The AI Act itself, which regulates AI tools according to the risk they pose to society, entered into force in August last year. Its provisions apply gradually, before the Act becomes fully applicable in 2027.

The Commission said it would prefer a candidate from a European country for the lead scientific adviser role. In 2023, the EU executive was criticised for appointing US national Fiona Scott Morton as chief economist, a post that would have given her a say in any fines or sanctions imposed on US digital giants under the EU's digital markets rules. The Commission said in response that EU rules allow all institutions, under certain conditions, to employ non-EU staff members based on the needs of the service. Scott Morton ultimately withdrew from her appointment after a political backlash, including criticism from French President Emmanuel Macron about the choice of a non-EU candidate for the role.

The Commission's AI Office is planning to grow to 140 full-time positions by the end of this year, up from 100 currently.

Don't water down Europe's AI rules to please Trump, EU lawmakers warn

Yahoo

26-03-2025


Lawmakers who helped shape the European Union's landmark AI Act are worried that the 27-member bloc is considering watering down aspects of the AI rules in the face of lobbying from U.S. technology companies and pressure from the Trump administration.

The EU's AI Act was approved just over a year ago, but its rules for general-purpose AI models like OpenAI's GPT-4o will only come into effect in August. Ahead of that, the European Commission—which is the EU's executive arm—has tasked its new AI Office with preparing a code of practice for the big AI companies, spelling out how exactly they will need to comply with the legislation.

But now a group of European lawmakers, who helped to refine the law's language as it passed through the legislative process, is voicing concern that the AI Office will blunt the impact of the EU AI Act in 'dangerous, undemocratic' ways. The leading American AI vendors have amped up their lobbying against parts of the EU AI Act recently, and the lawmakers are also concerned that the Commission may be looking to curry favor with the Trump administration, which has already made it clear it sees the AI Act as anti-innovation and anti-American.

The EU lawmakers say the third draft of the code, which the AI Office published earlier this month, takes obligations that are mandatory under the AI Act and inaccurately presents them as 'entirely voluntary.' These obligations include testing models to see how they might allow things like wide-scale discrimination and the spread of disinformation.

In a letter sent Tuesday to European Commission vice-president and tech chief Henna Virkkunen, first reported by the Financial Times but published in full for the first time below, current and former lawmakers said making these model tests voluntary could potentially allow AI providers who 'adopt more extreme political positions' to warp European elections, restrict freedom of information and disrupt the EU economy. 'In the current geopolitical situation, it is more important than ever that the EU rises to the challenge and stands strong on fundamental rights and democracy,' they wrote.

Brando Benifei, who was one of the European Parliament's lead negotiators on the AI Act text and the first signatory on this week's letter, told Fortune Wednesday that the political climate may have something to do with the watering down of the code of practice. The second Trump administration is antagonistic towards European tech regulation; Vice-President JD Vance warned in a fiery speech at the Paris AI Action Summit in February that 'tightening the screws on U.S. tech companies' would be a 'terrible mistake' for European countries.

'I think there is pressure coming from the United States, but it would be very naïve [to think] that we can make the Trump administration happy by going in this direction, because it would never be enough,' said Benifei, who currently chairs the European Parliament's delegation for relations with the U.S.

Benifei said he and other former AI Act negotiators had met with the Commission's AI Office experts, who are drafting the code of practice, on Tuesday. On the basis of that meeting, he expressed optimism that the offending changes could be rolled back before the code is finalized. 'I think the issues we raised have been considered and so there is space for improvement,' he said. 'We will see that in the next weeks.'

Virkkunen had not provided a response to the letter, nor to Benifei's comment about U.S. pressure, at the time of publication.
However, she has previously insisted that the EU's tech rules are fairly and consistently applied to companies from any country. Competition Commissioner Teresa Ribera has also maintained that the EU 'cannot transact on human rights [or] democracy and values' to placate the U.S.

The key part of the AI Act here is Article 55, which places significant obligations on the providers of general-purpose AI models that come with 'systemic risk'—a term that the law defines as meaning the model could have a major impact on the EU economy or has 'actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale.' The Act says that a model can be presumed to have systemic risk if the computational power used in its training, 'measured in floating point operations [FLOPs]', is greater than 10²⁵ (see the sketch at the end of this article). This likely includes many of today's most powerful AI models, though the European Commission can also designate any general-purpose model as having systemic risk if its scientific advisors recommend doing so.

Under the law, providers of such models have to evaluate them 'with a view to identifying and mitigating' any systemic risks. This evaluation has to include adversarial testing—in other words, trying to get the model to do bad things, to figure out what needs to be safeguarded against. They then have to tell the European Commission's AI Office about the evaluation and what it found.

This is where the third version of the draft code of practice becomes problematic. The first version of the code was clear that AI companies need to treat large-scale disinformation or misinformation as systemic risks when evaluating their models, because of their threat to democratic values and their potential for election interference. The second version didn't specifically talk about disinformation or misinformation, but still said that 'large-scale manipulation with risks to fundamental rights or democratic values,' such as election interference, was a systemic risk. Both the first and second versions were also clear that model providers should consider the possibility of large-scale discrimination as a systemic risk.

But the third version only lists risks to democratic processes, and to fundamental European rights such as non-discrimination, as being 'for potential consideration in the selection of systemic risks.' The official summary of changes in the third draft maintains that these are 'additional risks that providers may choose to assess and mitigate in the future.'

In this week's letter, the lawmakers who negotiated with the Commission over the final text of the law insisted that 'this was never the intention' of the agreement they struck. 'Risks to fundamental rights and democracy are systemic risks that the most impactful AI providers must assess and mitigate,' the letter read. 'It is dangerous, undemocratic and creates legal uncertainty to fully reinterpret and narrow down a legal text that co-legislators agreed on, through a Code of Practice.'
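For context on the 10²⁵ threshold mentioned above: training compute for dense transformer models is commonly estimated with the rule of thumb of roughly six FLOPs per parameter per training token. The sketch below applies that approximation to two hypothetical training runs; it is a community estimation convention, not a calculation method the Act prescribes.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the AI Act's presumption threshold

def training_flops(parameters: float, tokens: float) -> float:
    """Rule-of-thumb training compute estimate for dense transformers:
    about 6 FLOPs per parameter per training token."""
    return 6 * parameters * tokens

# Two hypothetical training runs, both on 15 trillion tokens:
for label, params in [("70B-parameter model", 70e9),
                      ("400B-parameter model", 400e9)]:
    flops = training_flops(params, tokens=15e12)
    presumed = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{label}: {flops:.1e} FLOPs -> systemic risk presumed: {presumed}")
```

Under this estimate the smaller run stays below the presumption (about 6.3e24 FLOPs) while the larger one crosses it (about 3.6e25), which is why the threshold is generally read as catching only the most compute-intensive frontier models.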
