German government urged to start proper supervision of AI

Euronews, a day ago
German consumer groups and regulators have called upon the government to formally appoint a national authority to begin oversight of artificial intelligence providers.
Germany missed the EU's 2 August deadline to notify the European Commission of the market surveillance authorities it has appointed to oversee business compliance with the AI Act. Once appointed, these regulators will monitor local providers of AI systems and ensure they follow the Act.
The Hamburg data protection commissioner, Thomas Fuchs, called on the federal government to quickly designate the AI market surveillance authorities – which in some areas also include the data protection supervisory authorities.
"Due to the delay, companies and authorities are now missing their binding contact person for questions about the AI regulation. This is also a disadvantage for Germany as a location for AI innovation," Fuchs said.
These concerns were echoed by Lina Ehrig, head of digital at the Federation of German Consumer Organisations (VZBV). Without supervision, companies could use AI to manipulate consumers or exploit individual weaknesses, for example via real-time voice analysis in call centres, VZBV warned.
"There needs to be a supervisory authority that keeps an eye on this and acts against violations. That hasn't happened so far," said Ehrig.
According to a Commission official, some of the 27 EU member states have sent notifications about their appointments, which are now under review, but most member states appear to have missed the deadline.
Euronews reported in May that, with just three months to go until the early August deadline, it remained unclear in at least half of the member states which authority would be nominated.
Despite the absence of a formal national designation, the Hamburg data watchdog said it has started building capability and training personnel for the complex testing of AI systems, so that it is ready the moment it is legally designated. The regulator earlier this year asked Meta questions about its AI tools.
The AI Act entered into force in August 2024, but its provisions apply gradually. This month, national authorities need to be appointed, and rules on providers of general-purpose AI models, such as ChatGPT, Claude and Gemini, start to apply.

Related Articles

Grok, is that Gaza? AI image checks mislocate news photographs

France 24, 7 hours ago

But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence chatbot was certain the photograph was taken in Yemen nearly seven years ago. The AI bot's untrue response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo.

At a time when internet users are increasingly turning to AI to verify images, the furore shows the risks of trusting tools like Grok when the technology is far from error-free.

Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018. In fact, the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025. Before the war, sparked by Hamas's October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.

Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources." The chatbot eventually issued a response that recognised the error, but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.

The chatbot has previously issued content that praised Nazi leader Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

Radical right bias

Grok's mistakes illustrate the limits of AI tools, whose functions are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics. "We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, "Hello ChatGPT".

Each AI has biases linked to the information it was trained on and the instructions of its creators, he said. In the researcher's view, Grok, made by Musk's xAI start-up, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.

Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach. "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'." AI does not necessarily seek accuracy; "that's not the goal," the expert said.

Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016. That error led to internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.

'Friendly pathological liar'

An AI's bias is linked to the data it is fed and to what happens during fine-tuning, the so-called alignment phase, which determines what the model will rate as a good or bad answer. "Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," Diesbach said. "Its training data has not changed and neither has its alignment."

Grok is not alone in wrongly identifying images. When AFP asked Mistral AI's Le Chat, which is partly trained on AFP's articles under an agreement between the French start-up and the news agency, the bot also misidentified the photo of Mariam Dawwas as being from Yemen.

For Diesbach, chatbots must never be used as tools to verify facts.
"They are not made to tell the truth," but to "generate content, whether true or false", he said.

US government gets a year of ChatGPT Enterprise for $1

France 24, 8 hours ago

Federal workers in the executive branch will have access to ChatGPT Enterprise in a partnership with the US General Services Administration, according to the pioneering San Francisco-based artificial intelligence (AI) company. "By giving government employees access to powerful, secure AI tools, we can help them solve problems for more people, faster," OpenAI said in a blog post announcing the alliance.

ChatGPT Enterprise does not use business data to train or improve OpenAI models, and the same rule will apply to federal use, according to the company.

Earlier this year, OpenAI announced an initiative focused on bringing advanced AI tools to US government workers. The news came with word that the US Department of Defense had awarded OpenAI a $200 million contract to put generative AI to work for the military. OpenAI planned to show how cutting-edge AI can improve administrative operations, such as how service members get health care, and also has cyber defense applications, the startup said in a post.

OpenAI has also launched an initiative to help countries build their own AI infrastructure, with the US government a partner in projects. The tech firm's move to put its technology at the heart of national AI platforms around the world comes as it faces competition from Chinese rival DeepSeek. DeepSeek's success in delivering powerful AI models at a lower cost has rattled Silicon Valley and multiplied calls for US big tech to protect its dominance of the emerging technology.

The OpenAI for Countries initiative was launched in June under the auspices of a drive, dubbed "Stargate", announced by US President Donald Trump to invest up to $500 billion in AI infrastructure in the United States. OpenAI, in "coordination" with the US government, will help countries build data centers and provide customized versions of ChatGPT, according to the tech firm.

Meta removes 6.8 million WhatsApp accounts linked to criminal scammers

Euronews, 11 hours ago

WhatsApp has taken down 6.8 million accounts linked to criminal scam centres targeting people online around the world, its parent company Meta said. The account deletions, which Meta said took place over the first six months of the year, come as part of wider company efforts to crack down on scams.

In a Tuesday announcement, Meta said it was also rolling out new tools on WhatsApp to help people spot scams, including a new safety overview that the platform will show when someone who is not in a user's contacts adds them to a group, as well as ongoing tests of alerts prompting users to pause before responding.

Scams are becoming all too common and increasingly sophisticated in today's digital world. Too-good-to-be-true offers and unsolicited messages attempt to steal consumers' information or money, with scams filling our phones, social media, and other corners of the internet each day.

Meta noted that 'some of the most prolific' sources of scams are criminal scam centres, which often rely on forced labour and are run by organised crime, and warned that such efforts often target people on many platforms at once in an attempt to evade detection. That means a scam campaign may start with messages over text or a dating app, for example, and then move to social media and payment platforms, Meta said.

Meta, which also owns Facebook and Instagram, pointed to recent scam efforts that it said attempted to use its own apps, as well as TikTok, Telegram, and AI-generated messages made using ChatGPT, to offer payments for fake likes, to enlist people into a pyramid scheme, or to lure others into cryptocurrency investments. Meta linked these scams to a criminal scam centre in Cambodia and said it disrupted the campaign in partnership with ChatGPT maker OpenAI.
