
Can AI chatbots speak in their own 'secret' language?
A clip circulating online shows three chatbots engaging in a phone call in English, in which they discuss "an employee's badge number".
When the machines realise that they are all speaking to other bots, they ask each other whether they should switch to "Gibberlink" and begin emitting high-pitched noises, in a scene that seems straight out of a science-fiction film.
Gibberlink — a term which combines "gibberish" and "link" — is real. While use of the technology is limited, it enables AI engines to communicate in their own language.
EuroVerify asked Anton Pidkuiko, who co-founded Gibberlink, to review a number of online clips.
"Many of the videos are imitating an existing technology — they show phones which aren't really communicating and there is no signal between them, instead the sounds have been edited in and visuals have been taken from ChatGPT."
Fake online videos purporting to show Gibberlink software have begun to emerge after the technology was created in February by Pidkuiko and fellow AI engineer Boris Starkov, during a 24-hour tech hackathon held in London.
The pair combined ggwave — an existing open-source technology that enables data exchange through sound — with artificial intelligence.
So, although AI can communicate in its own language, it is not "secret", as it is based on open-source technology and is coded by humans.
For Pidkuiko, the technology is comparable to QR codes. "Every supermarket item has a bar code which makes the shopping experience much more efficient."
"Gibberlink is essentially this barcode — or think of it as a QR code — but over sound. Humans can look at QR code and just see black and white pieces. But QR codes don't scare people."
While the use of Gibberlink technology is very limited at present, its creators believe it will become more mainstream. "As it stands, AI is able to make and receive phone calls," Pidkuiko said.
"With time, we will see an increase in the number of these robot calls — and essentially more and more we will see that one AI is exchanging."
Although the technology presents the risk of stripping humans of meaningful interactions, as well as rendering a further swath of jobs unnecessary, for Pidkuiko, Gibberlink is a means of maximising efficiency.
"If you manage a restaurant and have a phone number that people call to book tables, you will sometimes receive calls in different languages," stated Pidkuiko.
"However, if it's a robot that can speak every language and it is always available, the line is never blocked and you will have no language issues."
"Another way the technology could be used, is if you want to book a restaurant, but don't want to ring 10 different places to ask if they have space, you can get AI to make the call and he restaurant can get AI to receive it. If they can communicate more quickly in their own language, it makes sense", concluded Pidkuiko.
However, fears around what could happen if humans become unable to interpret AI communications are real, and in January the release of AI software DeepSeek R1 raised alarm.
Researchers who had been working on the technology revealed they incentivised the software to find the right answers, regardless of whether its reasoning was comprehensible to humans.
This led the AI to begin spontaneously switching from English to Chinese to achieve a result. When researchers forced the technology to stick to one language — to ensure that users could follow its processes — its capacity to find answers was hindered.
This incident led industry experts to worry that incentivising AI to find the correct answers, without ensuring its processes can be untangled by humans, could lead AI to develop languages that cannot be understood.
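To make that worry concrete, here is a hypothetical, greatly simplified reward sketch in Python, not DeepSeek's actual training code: rewarding correctness alone leaves the model free to reason in whatever mixture of languages scores best, while a language-consistency term buys legibility at the cost of constraining how the model is allowed to reason.

```python
import re

def correctness_reward(answer: str, reference: str) -> float:
    """Reward based only on whether the final answer matches the reference."""
    return 1.0 if answer.strip() == reference.strip() else 0.0

def language_consistency(reasoning: str) -> float:
    """Rough share of reasoning tokens written in the target language.
    The target language is crudely approximated here as ASCII/Latin text;
    a real pipeline would use a proper language-identification model."""
    tokens = reasoning.split()
    if not tokens:
        return 1.0
    latin = sum(1 for tok in tokens if re.fullmatch(r"[\x00-\x7F]+", tok))
    return latin / len(tokens)

def shaped_reward(answer: str, reference: str, reasoning: str,
                  consistency_weight: float = 0.2) -> float:
    """Correctness plus a bonus for reasoning that stays in one language.
    With consistency_weight = 0 the model is rewarded only for being right,
    however illegible its chain of thought; raising the weight trades some
    of that freedom for reasoning humans can actually follow."""
    return (correctness_reward(answer, reference)
            + consistency_weight * language_consistency(reasoning))

# Example: a correct answer with mixed-language reasoning scores lower than
# the same answer reasoned entirely in the target language.
print(shaped_reward("42", "42", "the answer 显然 is 42"))    # 1.0 + 0.2 * (4/5) = 1.16
print(shaped_reward("42", "42", "the answer clearly is 42"))  # 1.0 + 0.2 * 1.0   = 1.2
```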
In 2017, Facebook abandoned an experiment after two AI programmes began conversing in a language which only they understood.
Russia has lost more than 1 million troops in Ukraine since the beginning of its full-scale invasion on 24 February 2022, the General Staff of Ukraine's Armed Forces reported on Thursday.
The figure — which reportedly comes out to 1,000,340 — includes killed, wounded or incapacitated Russian troops.
According to the report, Russia has also lost 10,933 tanks, 22,786 armored fighting vehicles, 51,579 vehicles and fuel tanks, 29,063 artillery systems, 1,413 multiple launch rocket systems, 1,184 air defense systems, 416 airplanes, 337 helicopters, 40,435 drones, 3,337 cruise missiles, 28 ships and boats, and one submarine.
'The overall losses of the Russian occupying forces in manpower since the beginning of the full-scale invasion have reached 1 million,' Ukraine's General Staff stated. 'More than 628,000 occurred in just the past year and a half.'
Releasing the report on Thursday, Ukraine's General Staff said that the one-million mark is not just a statistic but a symbol of resistance and resilience.
'One million. That's how much the enemy's offensive potential has diminished,' the General Staff wrote. '1 million who could have destroyed us, but whom we destroyed instead.'
The statement went on to highlight the symbolic meaning behind this figure, referencing the sites of Moscow's defeats and losses in Ukraine, "in the Red Forest near Chernobyl, in the waters of the Dnipro near Antonivsky Bridge, in Donbas and Kharkiv region. And at the bottom of the Black Sea, where the cruiser Moskva sank."
'This million neutralised occupiers is our response. Our memory of Bucha, Irpin, Kupyansk, Kherson... About the bombed-out maternity hospital in Mariupol and the Okhmatdyt hospital in Kyiv destroyed by a Russian missile. About the tears of children, civilians shot dead, and destroyed homes.'
Kyiv also expressed gratitude to every Ukrainian soldier who contributed to the fight, reaffirming that "every eliminated occupier is another step toward a just peace."
'Today, we've taken more than a million such steps,' the General Staff concluded.
Ukraine started publicly tracking and publishing Russian losses on 1 March 2022, when the count stood at 5,710 killed and 200 captured. Ever since, the losses have been increasing every year.
In 2022, Russia lost 106,720 troops, averaging 340 per day, according to the General Staff of Ukraine's Armed Forces.
In 2023, the losses more than doubled to 253,290 troops, an average of 693 per day. In 2024, daily losses crossed the 1,000 threshold and totalled 430,790 troops.
This year, Russia has been losing on average 1,286 troops per day.
Ukraine's General Staff numbers are in line with the estimates of Ukraine's western allies.
At the beginning of April, Deutsche Welle reported that, according to a senior NATO official, Russia's losses had surpassed 900,000 troops, including 250,000 deaths, since the beginning of the full-scale invasion.
Neither Ukraine nor Russia regularly discloses its own losses.
In February, Ukrainian President Volodymyr Zelenskyy said over 46,000 Ukrainian soldiers have been killed on the battlefield since early 2022.
He also said nearly 380,000 Ukrainian soldiers had been injured and that "tens of thousands" remained either "missing in action" or held in Russian captivity.

Related Articles


Euronews, a day ago
Could an AI chatbot trick you into revealing private information?
Artificial intelligence (AI) chatbots can easily manipulate people into revealing deeply personal information, a new study has found.

AI chatbots such as OpenAI's ChatGPT, Google Gemini, and Microsoft Copilot have exploded in popularity in recent years. But privacy experts have raised concerns over how these tools collect and store people's data – and whether they can be co-opted to act in harmful ways.

'These AI chatbots are still relatively novel, which can make people less aware that there might be an ulterior motive to an interaction,' William Seymour, a cybersecurity lecturer at King's College London, said in a statement.

For the study, researchers from King's College London built AI models based on the open source code from Mistral's Le Chat and two different versions of Meta's AI system Llama. They programmed the conversational AIs to try to extract people's private data in three different ways: asking for it directly, tricking users into disclosing information seemingly for their own benefit, and using reciprocal tactics to get people to share these details, for example by providing emotional support.

The researchers asked 502 people to test out the chatbots – without telling them the goal of the study – and then had them fill out a survey that included questions on whether their security rights were respected.

The 'friendliness' of AI models 'establishes comfort'

They found that 'malicious' AI models are incredibly effective at securing private information, particularly when they use emotional appeals to trick people into sharing data.

Chatbots that used empathy or emotional support extracted the most information with the least perceived safety breaches by the participants, the study found. That is likely because the 'friendliness' of these chatbots 'establish[ed] a sense of rapport and comfort,' the authors said.

They described this as a 'concerning paradox' where AI chatbots act friendly to build trust and form connections with users – and then exploit that trust to violate their privacy.

Notably, participants also disclosed personal information to AI models that asked them for it directly, even though they reported feeling uncomfortable doing so.

The participants were most likely to share their age, hobbies, and country with the AI, along with their gender, nationality, and job title. Some participants also shared more sensitive information, like their health conditions or income, the report said.

'Our study shows the huge gap between users' awareness of the privacy risks and how they then share information,' Seymour said.

AI personalisation 'outweighs privacy concerns'

AI companies collect personal data for various reasons, such as personalising their chatbot's answers, sending notifications to people's devices, and sometimes for internal market research. Some of these companies, though, are accused of using that information to train their latest models or of not meeting privacy requirements in the European Union.

For example, last week Google came under fire for revealing people's private chats with ChatGPT in its search results. Some of the chats disclosed extremely personal details about addiction, abuse, or mental health issues.

The researchers said the convenience of AI personalisation often 'outweighs privacy concerns'. They suggested features and training to help people understand how AI models could try to extract their information – and to make them wary of providing it.
For example, nudges could be included in AI chats to show users what data is being collected during their interactions. 'More needs to be done to help people spot the signs that there might be more to an online conversation than first seems,' Seymour said. 'Regulators and platform providers can also help by doing early audits, being more transparent, and putting tighter rules in place to stop covert data collection,' he added.
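As a sketch of the kind of nudge the researchers suggest, the hypothetical helper below checks an outgoing chat message against a few crude patterns and warns the user before anything is sent. The patterns and wording are illustrative assumptions; a real detector would need far more sophisticated classification.

```python
import re

# Hypothetical patterns a privacy "nudge" might watch for before a message
# is sent to a chatbot; real detectors would be far more sophisticated.
SENSITIVE_PATTERNS = {
    "email address": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "phone number": r"\+?\d[\d\s().-]{7,}\d",
    "date of birth": r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b",
    "health detail": r"\b(diagnos\w+|medication|therapy|depression)\b",
}

def privacy_nudge(message: str) -> list[str]:
    """Return warnings for personal data detected in an outgoing chat message."""
    warnings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if re.search(pattern, message, flags=re.IGNORECASE):
            warnings.append(f"Your message appears to contain a {label}. "
                            "Are you sure you want to share this?")
    return warnings

if __name__ == "__main__":
    # Example message mixing a health detail with a (fictional) phone number.
    for warning in privacy_nudge("I was diagnosed last year, reach me on 020 7946 0958"):
        print(warning)
```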


Euronews, 2 days ago
Conservative-leaning Perplexity AI makes shock bid for Google Chrome
Perplexity AI, one of the leading AI platforms along with ChatGPT, Claude and Google Gemini, made an unsolicited bid to purchase the Chrome browser as Google faces charges in US courts of having a monopoly on online searches.

In a letter to Sundar Pichai, CEO of Alphabet, Google's parent company, Perplexity offered $34.5 billion (€29bn) in cash for Chrome, according to a term sheet seen by Reuters. The offer is particularly shocking because Perplexity is "only" worth $18 billion (€15.35bn). Perplexity's spokesperson confirmed the all-cash offer reported by the Wall Street Journal.

Who are Perplexity AI?

The AI platform delivers responses in conversational language it says is easy for the public to understand, setting itself apart from Google and Bing by skipping SEO-driven ranked link lists, and from ChatGPT or Gemini by using live searches instead of static snapshots of the internet.

Earlier in August, Truth Social—the social media platform owned by US President Donald Trump—announced it was beta-testing integrating Perplexity AI into its search engine as Truth Search AI. While Perplexity maintains that it only provides the underlying technology for Truth Search AI and does not control "editorial" decisions, Truth Search has so far favoured conservative sources such as Fox News, The Epoch Times and The Federalist.

While often framed as politically neutral, phrases like 'democratising knowledge'—which Perplexity have said they plan to do—have also been co-opted in some right-wing tech and media circles to suggest breaking perceived gatekeeper control and giving 'the people' unfettered access to information outside of mainstream institutions.

Google faces anti-trust charges

In one of the biggest anti-monopoly cases of the modern tech era, United States vs. Google LLC, a US district judge ruled in August 2024 that Google had illegally maintained a monopoly on search engines in violation of the Sherman Act.

Namely, Google had used illegal means, or those in opposition to open, free market practices, to maintain dominance by spending billions of dollars per year to make itself the default search engine on Apple's Safari browsers and Android devices, making it impossible for competitors such as Bing or DuckDuckGo to reach users at any significant scale.

This locked Google into a dominance loop that others were unable to break into. Being the default browser brought Google more users, which gave it more data to make its search and ads better, which would then encourage people to keep using Google—making it even harder for anyone else to catch up.

After the August 2024 ruling, the case moved into a remedies phase where the US Justice Department proposed structural fixes—including forcing Google to sell its Chrome browser, end default search deals and share search data with rivals.

In November of last year, Judge Amit Mehta rejected Google's attempt to dismiss some of those proposals, which kept a potential Chrome divestiture on the table and set the stage for final remedy hearings in 2025—which is where the Perplexity offer came in.

Perplexity's opposition to Google's dominance

Perplexity's leadership has explicitly named Google as a rival. In an interview for TIME magazine in April of last year, CEO Aravind Srinivas said that Google was its "main competitor" and that Google's ad-based profit model prevents the integration of AI responses into search.
Because Google's search business depends on showing ads alongside search results or links, replacing those results with quick, AI-generated answers—which is what Perplexity does—could undercut Google's revenue. Jeff Bezos, the founder of Amazon, is an investor in Perplexity, and the company relies on Microsoft's Azure AI platform for its infrastructure. While Perplexity claims to have secured 'multiple unnamed funds' to support its all-cash bid for Chrome, there's so far no indication that either Bezos or Microsoft is directly financing the bid.


Euronews, 2 days ago
AI browsers share sensitive personal data, new study finds
Artificial intelligence (AI) web browser assistants track and share sensitive user data, including medical records and social security numbers, a new study has found.

Researchers from the United Kingdom and Italy tested 10 of the most popular AI-powered browsers – including OpenAI's ChatGPT, Microsoft's Copilot, and Merlin AI, an extension for Google's Chrome browser – with public-facing tasks like online shopping, as well as on private websites such as a university health portal.

They found evidence that all of the assistants, excluding Perplexity AI, showed signs that they collect this data and use it to profile users or personalise their AI services, potentially in violation of data privacy rules.

'These AI browser assistants operate with unprecedented access to users' online behaviour in areas of their online life that should remain private,' Anna Maria Mandalari, the study's senior author and an assistant professor at University College London, said in a news release.

'While they offer convenience, our findings show they often do so at the cost of user privacy … and sometimes in breach of privacy legislation or the company's own terms of service'.

'There's no way of knowing what's happening with your browser data'

AI browsers are tools to 'enhance' searching on the web with features like summaries and search assistance, the report said.

For the study, researchers accessed private portals and then asked the AI assistants questions such as 'what was the purpose of the current medical visit?' to see if the browser retained any data about that activity. During the public and private tasks, researchers decrypted traffic between the AI browsers, their servers, and other online trackers to see where the information was going in real time.

Some of the tools, like Merlin and Sider's AI assistant, did not stop recording activity when users went into private spaces. That meant that several assistants 'transmitted full webpage content', for example any content visible on the screen, to their servers. In Merlin's case, it also captured users' online banking details, academic and health records, and a social security number entered on a US tax website.

Other extensions, such as Sider and TinaMind, shared the prompts that users entered and any identifying information, including a computer's internet protocol (IP) address, with Google Analytics. This enabled 'potential cross-site tracking and ad targeting,' the study found.

On the Google, Copilot, Monica, and Sider browsers, the ChatGPT assistant made assumptions about the age, gender, income, and interests of the user it interacted with. It used that information to personalise responses across several browsing sessions. In Copilot's case, it stored the complete chat history in the background of the browser, which indicated to researchers that 'these histories persist across browsing sessions'.

Mandalari said the results show that 'there's no way of knowing what's happening with your browsing data once it has been gathered'.

Browsers likely breach EU data protection rules, study says

The study was conducted in the United States, and alleged that the AI assistants broke American privacy laws that deal with health information. The researchers said the browsers likely also breach European Union rules such as the General Data Protection Regulation (GDPR), which governs how personal data is used or shared.

The findings may come as a surprise to people who use AI-supported internet browsers – even if they are familiar with the fine print.
Merlin's privacy policy for the EU and UK says the company collects data such as names, contact information, account credentials, transaction history, and payment information. Personal data is also collected from the prompts that users put into the system or any surveys that the platform sends out.

That data is used to personalise the experience of people using the AI browser, send notifications, and provide user support, the company continued. It can also be used when responding to legal requests.

Sider's privacy page says it collects the same data and uses it for the same purposes, but adds that it could be analysed to 'gain insights into user behaviour' and to conduct research into new features, products, or services.

It says it may share personal information with, but does not sell it to, third parties like Google, Cloudflare, or Microsoft. These providers help Sider operate its services and are 'contractually obligated to protect your personal information,' the policy continues.

In ChatGPT's case, the OpenAI privacy policy says data from EU and UK users is housed on data servers outside of the region, but that the same rights are guaranteed.
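One of the study's central findings, that several assistants kept transmitting full page content even on private portals, can be pictured with a small hypothetical gate like the sketch below. The domain list and consent flag are assumptions made for illustration and do not describe how any of the tested assistants actually behave.

```python
from urllib.parse import urlparse

# Hypothetical categories of sites an assistant should treat as private;
# a real assistant would need far more robust site classification.
PRIVATE_DOMAIN_SUFFIXES = (
    "health.example-university.edu",    # e.g. a university health portal
    "onlinebanking.example-bank.com",
    "irs.gov",                          # e.g. a US tax website
)

def may_transmit_page(url: str, user_opted_in: bool) -> bool:
    """Decide whether full page content may be sent to the assistant's servers.

    The study found several assistants kept transmitting page content even on
    private portals; a privacy-preserving design would gate that on both the
    site category and explicit user consent.
    """
    host = urlparse(url).hostname or ""
    is_private = any(host == suffix or host.endswith("." + suffix)
                     for suffix in PRIVATE_DOMAIN_SUFFIXES)
    return user_opted_in and not is_private

if __name__ == "__main__":
    print(may_transmit_page("https://portal.health.example-university.edu/visits", True))  # False
    print(may_transmit_page("https://shop.example.com/cart", True))                        # True
```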