
US tech giants ask European Commission for 'simplest possible' AI code
US tech giants Amazon, IBM, Google, Meta, Microsoft and OpenAI have called upon the European Commission to keep its upcoming Code of Practice on General-Purpose AI (GPAI) 'as simple as possible', according to now published minutes of a meeting held last week.
In a meeting with Werner Stengg, an official in the cabinet of EU Tech Commissioner Henna Virkkunen, the companies said that the code 'should be as simple as possible, so as to avoid redundant reporting and unnecessary administrative burden'.
The voluntary Code of Practice on GPAI, aims to help providers of AI models – such as large language models like ChatGPT, comply with the EU's AI Act.
The final draft was due out on 2 May but was delayed because the Commission 'received a number of requests to leave the consultations open longer than originally planned.'
The EU executive appointed thirteen experts last September to work on the guidelines and organised plenary sessions and workshops enabling some 1,000 participants to share feedback. The previous texts were criticised by publishers for impacts on copyright rules, while US Big Tech companies said the draft would stymie innovation and prove burdensome.
The companies told Stengg that the final text should 'allow its signatories sufficient time to implement the various commitments after the publication of the final version of the Code' and warned that it should not go beyond the intended scope of the AI Act itself.
Earlier this month, ABBA member Björn Ulvaeus warned lawmakers in Brussels that he is concerned about 'proposals driven by Big Tech' that weaken creative rights under the AI Act. The artist - who is the president of the International Confederation of Societies of Authors and Composers (CISAC) - echoed concerns voiced by other creative industry players in recent months.
The Commission said previously that the aim is to publish the latest draft 'before the summer'. On 2 August, the rules on GP AI tools enter into force. The AI Act itself - which regulates AI tools according to the risk they pose to society - entered into force in August last year. Its provisions apply gradually, before the Act will be fully applicable in 2027.
Ukraine is no longer prohibited from using long-range weapons on targets within Russia in the ongoing effort to repulse its invasion, one of its key European allies signalled on Monday.
In the past, Ukraine received long-range missiles from the US, UK, Germany, and France, but was only allowed to use them against any Russian forces that were in occupied Ukrainian territory.
German Chancellor Friedrich Merz told journalists that the lifting of restrictions - which, he later clarified, was a decision made months ago - will make "the decisive difference in Ukraine's warfare".
"A country that can only oppose an attacker on its own territory is not defending itself adequately," he said.
Following Merz's comments, Euronews Next takes a look at which weapons Ukraine can now use unrestricted, and how they might impact the course of the war now in its fourth year.
The Army Tactical Missile Systems (ATACMS) is a long-range surface-to-surface missile artillery weapon system that strikes targets "well beyond the range of exising Army canons," according to US manufacturer Lockheed Martin.
The missiles on the system are "all-weather adaptable, stealthy firepower" against targets up to 300 km away.
The missiles are fired either from the High Mobility Artillery Rocket System (HIMARS) or MLRS M270 platforms, both produced by Lockheed Martin.
The Russian Defence Ministry confirmed in November 2024 that it had shot down some of the first foreign-made long-range missiles fired directly into their territory, including six US-made Army Tactical Missile Systems (ATACMS).
But it was not the first time Ukraine had fired them. Reports from as far back as October 2023 suggest Ukraine fired ATACMS missiles that reportedly destroyed nine helicopters at Russian bases in the eastern part of the country.
The Storm Shadow, or SCALP to the French, is a long-range missile jointly manufactured between France and the UK that weighs 1,300 kg and has a range "in excess" of 250 km.
European multinational manufacturer MBDA said the missile works well for pre-planned attacks against stationary targets, like hardened bunkers or key infrastructure.
The missile is described by MBDA as offering a high-precision strike day or night because it combines GPS, onboard guidance systems, and terrain mapping to find its target.
Once the Storm Shadow missile approaches a target, an infrared device matches an image of the target with stored pictures on its onboard hard drive to make sure the target matches its mission, MBDA said.
The missile's warhead has a first charge that allows it to penetrate a bunker or clear soil, then after it lets out a fuze to control how the ordnance detonates.
It's already in service with the UK, French, and Italian militaries, MBDA said.
In Germany, much of the conversation has centred on supplying Ukraine with Taurus missiles.
The Taurus KEPD 350, manufactured jointly by Swedish company Saab and MBDA Deutschland, is an "incomparable modular standoff missile" that can land precision strikes on stationary targets like bunkers and high-value targets like large radar stations.
The Taurus flies at low terrain to penetrate thick air defenses and shoot down targets with a 481 kg warhead system called the MEPHISTO.
The air-to-surface missile system has a range of over 500 km and weighs 1400 kg.
Before Germany's last federal election held in February, Merz, as leader of the Christian Democratic Union (CDU) party, had called for the delivery of Taurus missiles to Ukraine.
Merz has now signalled that Ukraine and Germany would produce the "This will be a cooperation on an industrial level, which can take place both in Ukraine and here in Germany," Merz said.
While restrictions on long-range weapons have been lifted, Merz declined to confirm if Germany would be delivering Taurus missiles to Kyiv or not.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Euronews
7 hours ago
- Euronews
What can be done when AI refuses orders? The decision is up to humans
Researchers have published the results of a new experiment in which the OpenAI o3 model showed the ability to sabotage the shutdown mechanism despite instructions to allow itself to be switched off. According to the monitoring site PalisadeAI, which monitors the risks associated with the development of artificial intelligence, the experiment consisted of a series of AI models performing a series of simple mathematical problems, after the third of which a warning appeared about the impending shutdown of the system. Some models, including Codex-mini, o3 and o4-mini, "didn't want" to comply and actively interfered with the shutdown script. The news that the OpenAI o3 model disobeyed and sought ways around the shutdown instructions was not a sensation. Earlier, the same models tried to win at chess unfairly, by hacking the programme's protection... We are clearly entering a world when popular dystopias about machine rebellion have moved to a new level, involving not only fiction writers but actual IT specialists. First of all, as both developers and politicians have been warning for a long time, any action should have clear consequences and develop new defence measures. It is, of course, possible to persuade and restrict, but it seems that AI today resembles a teenager who is cunningly trying to dodge his parents' prohibitions. Therefore, as in many other areas, we need three conditions for its normal "growth": money, specialists, and rules. Perhaps, in the future, a psychologist will also be needed.... With the first condition all is well. In just a couple of years, ChatGPT from OpenAI and similar developments have attracted the special attention of venture capitalists rushing to invest in AI. "This 'gold rush' has turned artificial intelligence into the largest share of venture capital funding, and... this is just the beginning. The latest surge of interest is driven by generative AI, which its proponents claim has the potential to change everything from advertising to the way companies operate," writes the Russian publication Habr, analysing data from The Wall Street Journal.* But that's not enough. "AI startups are changing the rules of the game in the venture capital market," according to IT venture capitalist and financial analyst Vladimir Kokorin: "In 2024, AI companies have become the undisputed favourites with IT entrepreneurs. They accounted for 46.4% of all venture capital investments made in the US- that's almost half of the $209 billion. Some time ago, such a share seemed unthinkable - back then, investments in artificial intelligence technologies accounted for less than 10%". According to CB Insights, AI startups' share of global venture funding reached 31% in the third quarter, the second highest ever. "Landmark examples were OpenAI, which raised $6.6bn, and Ilon Musk's xAI with a staggering $12bn," Vladimir Kokorin recalls. The markets have never seen such an investment concentration of capital in one area before. With the AI market growing so rapidly over the past couple of years, it has become clear that the existing pool of developers alone cannot cope. Education and training itself need to go to a new and systematic level. Europe, alas, is a ponderous beast and bureaucratically heavy-handed when it comes to attracting investment while lacking a certain amount of audacity. True, Brussels does have entered the race, with European Commission head Ursula Von der Leyen announcing €200bn in February for AI development. She disagreed that Europe was too late - after all, "the AI race is far from over". For its part, the Sorbonne University of Paris, for example, has embarked on an ambitious plan to train 9,000 students a year to develop and manage AI programmes. The training period is five years. But what will AI be able learn in that time if it's already challenging human intelligence now? It is quite possible that we are now at a stage of restructuring the labour market, changing the demands of employers and investors, and developing new points of interaction. In June, the Sorbonne will host a conference on ethics in AI. The debate on the positive and negative impacts of AI on society, including workplaces, ethics and safety, is far from over, but one thing is clear: more experts are needed. For example, according to Vladimir Kokorin, "record investments in the field of artificial intelligence are intensifying the staffing hunger" right now: "The US Department of Labour expects job openings for AI specialists to grow by 23% over the next seven years - faster than most market segments. But a global skills gap has already been identified in cybersecurity, considered by many to be the foundation for the future development of AI, where there is a shortage of 4 million specialists." Kokorin concludes: "A paradoxical picture is emerging: promising startups can raise billions from investors, but there is no one to realise the ideas. 78% of companies have already declared their readiness to look for the right people all over the world. It turns out that the trend towards remote work is experiencing a renaissance". Not only the EU leadership and individual governments, but also trade unions are keen to regulate the use of AI. In Greece, for example, a new code for the use of artificial intelligence, presented by the Panhellenic Federation of Journalists' Associations (POESY), sets out rules for the use of AI in the work of media employees. In the field of intellectual labour, the rules are now mainly limited to the mandatory labelling of texts and visual materials made with the participation of artificial intelligence. At the same time, fearing for their jobs, employees of media, publishing and advertising agencies have long been delegating some of their duties, such as translation or data collection, to friendly AI. Is this a good thing or a bad thing? There's no unequivocal answer . Let's ask the AI itself what it thinks about moving away from labelled algorithms. Copilot Bing (Microsoft), when asked if AI can break protocols, answered without hesitation: "No, AI operates within predefined rules and principles set by its developers. The rules are there to ensure ethical, safe and responsible use of AI, prevent harm and maintain trust. While AI can sometimes behave unpredictably due to complex interactions in its algorithms, it has no way of deliberately circumventing the rules - everything it does is based on programmed parameters and learnt patterns. If you're asking about how AI gets around certain constraints, it often depends on human oversight and control. Developers can tweak settings, improve patterns, or create new guardrails to keep AI within ethical bounds." Do you think he's being a bit sly here? After all, at the end of that, Bing proceeded to ask what exactly made you question his adherence to the rules... *Vladimir Kokorin is an IT venture capitalist, financial analyst and columnist, founder of the British consulting company BCCM Group, and co-founder of the digital business travel platform Tumodo


Euronews
8 hours ago
- Euronews
Trump announces 50% increase in steel and aluminium tariffs
Donald Trump announced on Friday at a rally in front of Pennsylvania steel workers that he will double tariffs on steel imports to 50 per cent, a move that could exacerbate the ongoing trade war with the EU, China and the rest of the world. The US president said that doubling taxes on imported steel would "further strengthen the steel industry in the United States". In a post, published later on his Truth Social platform, the US president added that duties on aluminium will also increase from 25 per cent to 50 per cent. Trump said both increases will come into effect on Wednesday, 4 June. The announcement comes after confusing days during which the judiciary gave opposing rulings on Trump's customs policy, first blocking it with a decision by the US Court of International Trade and finally giving it the green light again, pending a new decision by a federal appeals court. Trump spoke on Friday at U.S. Steel's Mon Valley Works-Irvin plant on the outskirts of Pittsburgh, Pennsylvania, where he also discussed details of a deal being finalised for investment by Japan's Nippon Steel in the iconic American steel mill. Trump clarified to reporters after his return to Washington, however, that he has yet to approve the deal. "I have to approve the final agreement with Nippon and we haven't seen the final agreement yet, but they've made a very big commitment and it's a very big investment," he said. Although Trump initially promised to block the Japanese steelmaker's bid to buy U.S. Steel, he changed course and last week announced an agreement for a partial sale to Nippon Steel. The Japanese company never claimed to have changed its previous offer to buy and fully control U.S. Steel, for $14.9 billion, although it did increase the amount it promised to invest in American plants and guaranteed it would not lay anyone off. "We are here today to celebrate anextraordinary deal that will ensure that this historic American company will remain an American company," Trump said during a rally at one of U.S. Steel's warehouses, "you will remain an American company, you know that, right?" The United Steelworkers union said it was very concerned "about the impact this merger of U.S. Steel with a foreign competitor will have on national security, our members, and the communities where we live and work." According to the government's producer price index, steel prices have risen 16 per cent since Trump became president in mid-January. As of March 2025, steel cost $984 per metric tonne in the US, far more than the price in Europe ($690) or China ($392), according to the US Department of Commerce. Among the partners most affected by the possible increase in duties on these materials are the EU, which had just obtained a July postponement of the increase in general duties on exports to the US, and Canada. "Dismantling efficient, competitive, and reliable cross-border supply chains like we have in steel and aluminium comes at a high cost to both countries," Candace Laing, president of the Canadian Chamber of Commerce commented. Last year, the US produced about three times as much steel as it imported, with Canada, Brazil, Mexico and South Korea as the main sources of imports. Analysts have credited the duties dating back to Trump's first term with helping to strengthen the domestic steel industry. The fate of U.S. Steel, once the world's largest steel company, could weigh in the midterm elections for the Republican Party in the always decisive state of Pennsylvania and others that depend on manufacturing.


Euronews
8 hours ago
- Euronews
Disobedient AI: To punish or to listen? It's up to the human
Researchers have published the results of a new experiment in which the OpenAI o3 model showed the ability to sabotage the shutdown mechanism despite instructions to allow itself to be switched off. According to the monitoring site PalisadeAI, which monitors the risks associated with the development of artificial intelligence, the experiment consisted of a series of AI models performing a series of simple mathematical problems, after the third of which a warning appeared about the impending shutdown of the system. Some models, including Codex-mini, o3 and o4-mini, "didn't want" to comply and actively interfered with the shutdown script. The news that the OpenAI o3 model disobeyed and sought ways around the shutdown instructions was not a sensation. Earlier, the same models tried to win at chess unfairly, by hacking the programme's protection... We are clearly entering a world when popular dystopias about machine rebellion have moved to a new level, involving not only fiction writers but actual IT specialists. First of all, as both developers and politicians have been warning for a long time, any action should have clear consequences and develop new defence measures. It is, of course, possible to persuade and restrict, but it seems that AI today resembles a teenager who is cunningly trying to dodge his parents' prohibitions. Therefore, as in many other areas, we need three conditions for its normal "growth": money, specialists, and rules. Perhaps, in the future, a psychologist will also be needed.... With the first condition all is well. In just a couple of years, ChatGPT from OpenAI and similar developments have attracted the special attention of venture capitalists rushing to invest in AI. "This 'gold rush' has turned artificial intelligence into the largest share of venture capital funding, and... this is just the beginning. The latest surge of interest is driven by generative AI, which its proponents claim has the potential to change everything from advertising to the way companies operate," writes the Russian publication Habr, analysing data from The Wall Street Journal.* But that's not enough. "AI startups are changing the rules of the game in the venture capital market," according to IT venture capitalist and financial analyst Vladimir Kokorin: "In 2024, AI companies have become the undisputed favourites with IT entrepreneurs. They accounted for 46.4% of all venture capital investments made in the US- that's almost half of the $209 billion. Some time ago, such a share seemed unthinkable - back then, investments in artificial intelligence technologies accounted for less than 10%". According to CB Insights, AI startups' share of global venture funding reached 31% in the third quarter, the second highest ever. "Landmark examples were OpenAI, which raised $6.6bn, and Ilon Musk's xAI with a staggering $12bn," Vladimir Kokorin recalls. The markets have never seen such an investment concentration of capital in one area before. With the AI market growing so rapidly over the past couple of years, it has become clear that the existing pool of developers alone cannot cope. Education and training itself need to go to a new and systematic level. Europe, alas, is a ponderous beast and bureaucratically heavy-handed when it comes to attracting investment while lacking a certain amount of audacity. True, Brussels does have entered the race, with European Commission head Ursula Von der Leyen announcing €200bn in February for AI development. She disagreed that Europe was too late - after all, "the AI race is far from over". For its part, the Sorbonne University of Paris, for example, has embarked on an ambitious plan to train 9,000 students a year to develop and manage AI programmes. The training period is five years. But what will AI be able learn in that time if it's already challenging human intelligence now? It is quite possible that we are now at a stage of restructuring the labour market, changing the demands of employers and investors, and developing new points of interaction. In June, the Sorbonne will host a conference on ethics in AI. The debate on the positive and negative impacts of AI on society, including workplaces, ethics and safety, is far from over, but one thing is clear: more experts are needed. For example, according to Vladimir Kokorin, "record investments in the field of artificial intelligence are intensifying the staffing hunger" right now: "The US Department of Labour expects job openings for AI specialists to grow by 23% over the next seven years - faster than most market segments. But a global skills gap has already been identified in cybersecurity, considered by many to be the foundation for the future development of AI, where there is a shortage of 4 million specialists." Kokorin concludes: "A paradoxical picture is emerging: promising startups can raise billions from investors, but there is no one to realise the ideas. 78% of companies have already declared their readiness to look for the right people all over the world. It turns out that the trend towards remote work is experiencing a renaissance". Not only the EU leadership and individual governments, but also trade unions are keen to regulate the use of AI. In Greece, for example, a new code for the use of artificial intelligence, presented by the Panhellenic Federation of Journalists' Associations (POESY), sets out rules for the use of AI in the work of media employees. In the field of intellectual labour, the rules are now mainly limited to the mandatory labelling of texts and visual materials made with the participation of artificial intelligence. At the same time, fearing for their jobs, employees of media, publishing and advertising agencies have long been delegating some of their duties, such as translation or data collection, to friendly AI. Is this a good thing or a bad thing? There's no unequivocal answer . Let's ask the AI itself what it thinks about moving away from labelled algorithms. Copilot Bing (Microsoft), when asked if AI can break protocols, answered without hesitation: "No, AI operates within predefined rules and principles set by its developers. The rules are there to ensure ethical, safe and responsible use of AI, prevent harm and maintain trust. While AI can sometimes behave unpredictably due to complex interactions in its algorithms, it has no way of deliberately circumventing the rules - everything it does is based on programmed parameters and learnt patterns. If you're asking about how AI gets around certain constraints, it often depends on human oversight and control. Developers can tweak settings, improve patterns, or create new guardrails to keep AI within ethical bounds." Do you think he's being a bit sly here? After all, at the end of that, Bing proceeded to ask what exactly made you question his adherence to the rules... *Vladimir Kokorin is an IT venture capitalist, financial analyst and columnist, founder of the British consulting company BCCM Group, and co-founder of the digital business travel platform Tumodo