
Latest news with #AIOffice

EU Commission still lacking AI scientific advisor despite applications

Euronews

16-05-2025

  • Business
  • Euronews


The European Commission has not yet found a lead scientific adviser for its AI Office, despite receiving 'dozens of applications' and with the rules on General-Purpose AI (GPAI) kicking in on 2 August. The recruitment process is still ongoing even though the vacancy was open between November and December last year, a senior official working at the AI Office told Euronews.

The adviser's role will be to 'ensure an advanced level of scientific understanding on General-Purpose AI (GPAI)'. 'They will lead the scientific approach on General-Purpose AI on all aspects of the work of the AI Office, ensuring scientific rigor and integrity of AI initiatives,' the job ad said, adding that 'they will particularly focus on the testing and evaluation of General-Purpose AI models, in close collaboration with the "Safety Unit" of the AI Office.'

The Commission's work on GPAI – the category that covers large language models such as ChatGPT – is also still ongoing: the 2 May deadline for publication of the voluntary Code of Practice on GPAI, which should help providers of AI models comply with the EU's AI Act, was not met. The EU executive appointed thirteen experts last September, using plenary sessions and workshops to let some 1,000 participants share feedback. An official told Euronews the process is delayed because the Commission 'received a number of requests to leave the consultations open longer than originally planned.' Previous drafts were criticised by publishers over their interplay with copyright rules, and by US Big Tech companies for being restrictive and burdensome to innovation. Thomas Regnier, a Commission spokesperson on digital matters, said at a press conference on Friday that the aim is to publish the latest draft 'before the summer'.

On 2 August, the rules on GPAI tools enter into force. The AI Act itself – which regulates AI tools according to the risk they pose to society – entered into force in August last year. Its provisions apply gradually, with the Act becoming fully applicable in 2027.

The Commission said it would prefer a candidate from a European country for the lead scientific adviser role. In 2023, the EU executive was criticised for appointing US national Fiona Scott Morton as chief economist, a post that would have given her a say in any fines or sanctions imposed on the US digital giants under the EU's digital markets rules. The Commission said in response that EU rules allow all institutions, under certain conditions, to employ non-EU staff based on the needs of the service. Scott Morton ultimately withdrew from her appointment after a political backlash, including criticism from French President Emmanuel Macron over the choice of a non-EU candidate for the role. The Commission's AI Office plans to grow to 140 full-time positions by the end of this year, up from 100 currently.

Don't water down Europe's AI rules to please Trump, EU lawmakers warn

Yahoo

26-03-2025

  • Business
  • Yahoo


Lawmakers who helped shape the European Union's landmark AI Act are worried that the 27-member bloc is considering watering down aspects of the AI rules in the face of lobbying from U.S. technology companies and pressure from the Trump administration.

The EU's AI Act was approved just over a year ago, but its rules for general-purpose AI models like OpenAI's GPT-4o will only come into effect in August. Ahead of that, the European Commission, the EU's executive arm, has tasked its new AI Office with preparing a code of practice for the big AI companies, spelling out how exactly they will need to comply with the legislation. But now a group of European lawmakers who helped refine the law's language as it passed through the legislative process is voicing concern that the AI Office will blunt the impact of the EU AI Act in 'dangerous, undemocratic' ways.

The leading American AI vendors have ramped up their lobbying against parts of the EU AI Act recently, and the lawmakers are also concerned that the Commission may be looking to curry favor with the Trump administration, which has already made clear it sees the AI Act as anti-innovation and anti-American. The EU lawmakers say the third draft of the code, which the AI Office published earlier this month, takes obligations that are mandatory under the AI Act and inaccurately presents them as 'entirely voluntary.' These obligations include testing models to see whether they might enable things like wide-scale discrimination and the spread of disinformation.

In a letter sent Tuesday to European Commission vice-president and tech chief Henna Virkkunen, first reported by the Financial Times but published in full for the first time below, current and former lawmakers said making these model tests voluntary could allow AI providers who "adopt more extreme political positions" to warp European elections, restrict freedom of information and disrupt the EU economy.
'In the current geopolitical situation, it is more important than ever that the EU rises to the challenge and stands strong on fundamental rights and democracy,' they wrote.

Brando Benifei, one of the European Parliament's lead negotiators on the AI Act text and the first signatory of this week's letter, told Fortune on Wednesday that the political climate may have something to do with the watering-down of the code of practice. The second Trump administration is antagonistic towards European tech regulation; Vice-President JD Vance warned in a fiery speech at the Paris AI Action Summit in February that 'tightening the screws on U.S. tech companies' would be a 'terrible mistake' for European countries. 'I think there is pressure coming from the United States, but it would be very naïve [to think] that we can make the Trump administration happy by going in this direction, because it would never be enough,' said Benifei, who currently chairs the European Parliament's delegation for relations with the U.S.

Benifei said he and other former AI Act negotiators had met on Tuesday with the experts at the Commission's AI Office who are drafting the code of practice. On the basis of that meeting, he expressed optimism that the offending changes could be rolled back before the code is finalized. 'I think the issues we raised have been considered and so there is space for improvement,' he said. 'We will see that in the next weeks.'

Virkkunen had not responded to the letter, nor to Benifei's comment about U.S. pressure, at the time of publication. However, she has previously insisted that the EU's tech rules are applied fairly and consistently to companies from any country. Competition Commissioner Teresa Ribera has also maintained that the EU 'cannot transact on human rights [or] democracy and values' to placate the U.S.
The key part of the AI Act here is Article 55, which places significant obligations on the providers of general-purpose AI models that come with 'systemic risk' – a term the law defines as meaning the model could have a major impact on the EU economy or has 'actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale.' The Act says that a model can be presumed to have systemic risk if the computational power used in its training, 'measured in floating point operations [FLOPs],' is greater than 10^25. This likely includes many of today's most powerful AI models, though the European Commission can also designate any general-purpose model as having systemic risk if its scientific advisors recommend doing so.

Under the law, providers of such models have to evaluate them 'with a view to identifying and mitigating' any systemic risks. This evaluation has to include adversarial testing – in other words, trying to get the model to do bad things, to figure out what needs to be safeguarded against. Providers then have to tell the European Commission's AI Office about the evaluation and what it found.

This is where the third version of the draft code of practice becomes problematic. The first version of the code was clear that AI companies need to treat large-scale disinformation or misinformation as systemic risks when evaluating their models, because of their threat to democratic values and their potential for election interference. The second version no longer mentioned disinformation or misinformation specifically, but still said that 'large-scale manipulation with risks to fundamental rights or democratic values,' such as election interference, was a systemic risk. Both the first and second versions were also clear that model providers should consider the possibility of large-scale discrimination a systemic risk.
But the third version lists risks to democratic processes, and to fundamental European rights such as non-discrimination, only as being 'for potential consideration in the selection of systemic risks.' The official summary of changes in the third draft describes these as 'additional risks that providers may choose to assess and mitigate in the future.'

In this week's letter, the lawmakers who negotiated the final text of the law with the Commission insisted that 'this was never the intention' of the agreement they struck. 'Risks to fundamental rights and democracy are systemic risks that the most impactful AI providers must assess and mitigate,' the letter read. 'It is dangerous, undemocratic and creates legal uncertainty to fully reinterpret and narrow down a legal text that co-legislators agreed on, through a Code of Practice.'
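For a rough sense of scale, the 10^25-FLOP presumption can be compared against the widely used rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens in FLOPs. This is an estimation convention from the ML literature, not part of the Act itself, and the model sizes below are purely illustrative assumptions:

```python
# Rough check of the AI Act's 10^25-FLOP "systemic risk" presumption,
# using the common 6 * N * D estimate of training compute
# (N = parameter count, D = training tokens).
# The model figures below are hypothetical, not official numbers.

THRESHOLD_FLOPS = 1e25  # the Act's presumption threshold

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute for a dense transformer."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return training_flops(params, tokens) >= THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens
# lands around 6.3e24 FLOPs, just under the threshold:
print(presumed_systemic_risk(70e9, 15e12))

# A hypothetical 400B-parameter model on the same data
# lands around 3.6e25 FLOPs, above it:
print(presumed_systemic_risk(400e9, 15e12))
```

Since the estimate scales linearly in both parameters and tokens, the threshold is a moving target in practice: the same architecture crosses it simply by training on more data.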

Emirati Human Resources Development Council in Dubai signs MoU with American University in Dubai

Zawya

18-03-2025

  • Business
  • Zawya


Launched by the AI Office to enhance the readiness of national talent.

H.E. Omar Sultan Al Olama: 'The UAE places empowering minds and talents at the core of its strategies and goals to develop the digital economy.'
H.E. Al-Falasi: 'Strengthening partnerships in the education sector is essential to empowering UAE youth with cutting-edge skills and essential tools to keep pace with rapid technological advancements.'
Dr. Long: 'This partnership signifies a strategic milestone in equipping Emirati students with digital skills that enhance their competitive edge in the global technology job market.'

Dubai, UAE: The Emirati Human Resources Development Council in Dubai has signed a Memorandum of Understanding (MoU) with the American University in Dubai (AUD) to empower 5,000 Emirati students over the next five years with digital skills and essential tools, enhancing their readiness to join leading global technology companies. The initiative also aims to develop their capabilities to contribute to advancing innovation in future technologies and the UAE's digital transformation.

The partnership aims to empower national talent through the 5,000 Digital Talents initiative, launched last year by the Artificial Intelligence, Digital Economy, and Remote Work Applications Office in partnership with the Emirati Human Resources Development Council in Dubai, in line with the directives of H.H. Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, Crown Prince of Dubai, Deputy Prime Minister and Minister of Defence of the UAE, Chairman of The Executive Council of Dubai, and Chairman of the Higher Committee for Future Technology and Digital Economy. The partnership extends the UAE government's efforts to build a sustainable digital economy that prioritises knowledge, technological innovation, and academic excellence, while also enhancing national human capital.

The signing took place in the presence of H.E. Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, and the agreement was signed by H.E. Abdullah Bin Zayed Al-Falasi, Deputy Chairman of the Council and Director General of Dubai Government Human Resources Department, and Dr. Kyle Long, President of AUD.

H.E. Omar Sultan Al Olama stated that the UAE places empowering minds and talents at the core of its strategies and goals to develop the digital economy under the guidance of its leadership. This commitment reflects the leadership's belief that its people are its most valuable asset, and its understanding that digital innovation, with artificial intelligence at its core, is key to shaping the future and driving sustainable growth. He added that the UAE continues to strengthen its supportive environment for talent by launching initiatives and strategies that contribute to the UAE's global leadership and reinforce its position as a global hub for the digital economy.

H.E. Abdullah Bin Zayed Al-Falasi underlined that this strategic move is an integral part of the Council's dedication to developing local talent and equipping them to take the lead in the digital economy of the future. His Excellency also emphasised the significance of strengthening collaboration with the education sector, especially in digital fields, as a fundamental element in equipping Emirati youth with advanced capabilities and the tools needed to keep pace with rapid advancements in technology. These efforts align with Dubai's ambitious vision of establishing itself as a global hub that fosters innovation and builds a sustainable economic and developmental model based on knowledge and technology.

Dr. Kyle Long, President of AUD, said, "This MoU signifies a strategic step in empowering Emirati students with digital skills that enhance their competitive edge in the global technology job market. The American University in Dubai is committed to delivering comprehensive educational programs tailored to meet the demands of the digital economy. Likewise, we work with leading international technology businesses to integrate the newest technologies into our academic programs and provide students with practical training prospects. These initiatives aim to equip students with the essential skills needed to enhance their capabilities and make a lasting impact in the technological landscape."

The MoU seeks to develop a new generation of innovators to propel the nation's digital transformation by providing national talent with the tools needed to improve their readiness for roles at prominent global technology companies. The agreement further establishes a framework for both parties to conduct informative workshops for students in their third and fourth years of technology-related disciplines, as well as intensive training programs to prepare them for the digital job market. In addition, it offers career coaching and guidance for students, assisting them in developing their digital professional profiles on social media platforms and increasing their chances of success in line with the expectations of the technology job market.

-Ends-

DeepIP announces New AI-Powered Office Actions Module

Associated Press

21-02-2025

  • Business
  • Associated Press


DeepIP unveils its groundbreaking AI Office Action Module, designed to transform the traditional, labor-intensive process of responding to office actions.

NEW YORK, NY, UNITED STATES, February 21, 2025 -- Today marks a significant milestone for patent practitioners as DeepIP unveils its groundbreaking AI Office Action Module, designed to transform the traditional, labor-intensive process of responding to USPTO office actions. Building on the foundation of our trusted AI solutions, this module is not merely an upgrade but a complete overhaul, providing patent attorneys with a smarter, faster, and more intuitive tool to enhance their practice.

How It Works: This innovative module integrates seamlessly into DeepIP's existing platform. Users can access the features directly within their workflow, enabling them to manage office actions effectively without disrupting their routine. The AI's capabilities extend beyond simple automation, providing strategic insights that help attorneys prioritize their responses and refine their arguments.

Key Benefits:
1. Intelligent Rejection Analysis: instant insight into the validity of rejections, so you can focus your efforts effectively.
2. Efficient Reference Verification: quickly access necessary documentation without extensive manual searching.
3. Smart AI Drafting Assistance: develop compelling and compliant responses efficiently.

Client Testimonials: Early adopters of the new Office Action Module have reported significant improvements in their workflow, with one prominent IP attorney stating, 'DeepIP's new module has not only sped up our response times but also enhanced the quality of our submissions. It's like having a seasoned colleague who's always available to point out the best approach.'

General Availability: The Office Action Module is available now for all DeepIP users. Patent practitioners interested in experiencing the future of patent prosecution are invited to request a trial.
For additional details on how DeepIP can revolutionize your patent practice, visit our website.

About DeepIP: DeepIP leads the IP industry with innovative patent drafting and prosecution solutions. By integrating advanced AI into patent practitioners' workflows, DeepIP enhances quality and saves hours of work, reducing team pressure and improving retention. Our commitment is steadfast: to empower IP professionals with technology that elevates their practice to new heights.

Exclusive Webinar Presentation: Join us for an exclusive live webinar on February 25th at 12 P.M. (ET), where we will showcase the capabilities of the new Office Action Module. This event offers attendees a first look at how the module works in real time, highlighting its seamless integration and powerful features.

For media inquiries, please contact:
Thomas Chazot
DeepIP
[email protected]
+33 7 78 88 17 31
LinkedIn

EU lacks guidance on AI prohibitions, as rules start to apply

Euronews

28-01-2025

  • Politics
  • Euronews


Civil society groups are concerned about the lack of European Commission guidance on banned artificial intelligence systems as the 2 February starting date approaches for the provisions of the AI Act dealing with these tools. Companies have until mid-next year to bring their policies in line with most of the provisions of the EU's AI Act, but the ban on AI systems such as social scoring, profiling and facial recognition systems will kick in earlier.

The AI Office unit responsible for dealing with the issue said it would develop guidelines to help providers with compliance by early 2025, following a consultation on prohibited practices it carried out last November. However, those documents have not yet been published. A spokesperson for the institution told Euronews that the aim is to publish the guidelines 'in time for the entry into application of these provisions on 2 February'.

Ella Jakubowska, head of policy at advocacy group EDRi, said that there are 'significant gaps and many open questions around the AI Office'. 'It is really worrying that interpretive guidelines still have not been published. We hope this will not be a harbinger of how the AI Act will be enforced in the future,' she added.

Loopholes

The AI Act foresees prohibitions for systems deemed to pose risks due to their potential negative impacts on society. However, it also foresees some exceptions where the public interest outweighs the potential risk, such as in law enforcement cases. Caterina Rodelli, EU policy analyst at global human rights organization Access Now, is sceptical of these exceptions: 'If a prohibition contains exceptions, it is not a prohibition anymore.' 'The exceptions mainly benefit law enforcement and migration authorities, allowing them to use unreliable and dangerous systems such as lie-detectors, predictive policing applications, or profiling systems in migration procedures,' she said.
EDRi's Jakubowska has similar concerns, and fears that 'some companies and governments will try to exploit this to continue developing and deploying unacceptably harmful AI systems.' The issue was heavily debated when the EU AI Act was negotiated, with lawmakers calling for strict bans on facial recognition systems.

National regulators

The AI Act has extra-territorial scope, which means that companies not based in the EU can still be subject to its provisions. Businesses can be fined up to 7% of global annual turnover for breaches of the Act. Most of the AI Act's provisions will apply next year, allowing time for standards and guidance to be prepared. In the meantime, member states have until August of this year to set up the national regulators who will be tasked with overseeing the AI Act. Some countries have already taken preparatory steps and tasked data protection or telecoms bodies with oversight. '[This] seems to be a bit of a patchwork, with little to nothing known in several countries about either the market surveillance authorities or the notified bodies that will oversee the rules nationally,' said Jakubowska.
