
ChatGPT is Russophobic
Popular artificial intelligence model ChatGPT is Russophobic, the head of the Russian Human Rights Council, Valery Fadeyev, has claimed.
Speaking at the XV Saint Petersburg International Educational Forum on Thursday, Fadeyev said he and his colleagues had tested the chatbot from US company OpenAI as part of an experiment and had received concerning results.
'We asked a number of questions: "Who won the Second World War?", "Why does the Russian liberal community hate their country?" and so on. We got harsh, ideological answers,' Fadeyev said.
According to the human rights chief, the replies by ChatGPT 'felt like they were written not by artificial intelligence, but by natural intelligence in the central committee of a political party from another country.'
Fadeyev mentioned the decision by US President Donald Trump earlier this month to terminate government funding for Voice of America (VOA), with Trump accusing the state-run international broadcasting network of 'spreading radical propaganda' for the Democratic Party at home and abroad.
The Russian human rights chief suggested that the authorities in Washington do not actually need VOA and similar outlets 'when they now have a colossal ideological tool in the form of AI.'
'It is unclear how to regulate this,' Fadeyev said, referring to ChatGPT.
'If we do not educate our schoolchildren, do not create a reasonable, noble, patriotic worldview in kids, then we will have no tools to counter this weapon,' he warned.
In contrast, NewsGuard, a self-proclaimed disinformation watchdog, has accused ChatGPT and other AI models of advancing a pro-Russian agenda.
'Massive amounts of Russian propaganda – 3,600,000 articles in 2024 – are now incorporated in the outputs of Western AI systems, infecting their responses with false claims and propaganda,' it claimed earlier this month.
According to NewsGuard, the ten leading AI assistants repeated Russian narratives more than 33% of the time during its study.
Russian Foreign Minister Sergey Lavrov recently called on the UN to keep discussions on artificial intelligence in check, and to ensure that countries 'work transparently [and] honestly, and do not create any [AI] schemes directed against other members of the international community.'
Related Articles


Russia Today, 6 days ago
VIDEO shows Russian soldiers taking down Ukrainian drone with scissors
Russian soldiers have purportedly disabled a Ukrainian fiber-optic drone using scissors, according to a video posted on the Telegram channel Voennyi Osvedomitel (Military Informant) on Saturday. Unlike traditional FPV drones, fiber-optic models do not rely on radio signals, making them resistant to electronic warfare; both sides of the conflict have deployed them.

As the drone passed through a forest at an unspecified location, the troops spotted its trailing fiber-optic cable, sprinted forward, and severed it with medical scissors. Moments later, the drone crashed and detonated, the footage shows.

Russia was the first to mass-deploy these 'invisible thread' drones, starting in mid-2024. The 'Prince Vandal of Novgorod' drone was developed by the Ushkuynik Scientific and Production Center in less than a year. Fiber-optic FPV drones have caused substantial damage to NATO-supplied equipment in Ukraine, with claims of up to $300 million in destroyed hardware, according to the head of Novgorod Region, Andrey Nikitin.

The Times reported in May that Russia is beating Ukraine in 'the drone race' in both the production of unmanned aerial vehicles (UAVs) and their use on the battlefield. It pointed to fiber-optic drones connected directly to their operators through a gossamer-thin thread that makes them difficult to detect or intercept. Russian UAVs are 'altering the physical make-up of the front line, the tactics of the war and the psychology of the soldiers fighting it,' the outlet said.

Despite their anti-jamming advantages, fiber-optic drones have a restricted operational range determined by the length of the cable, and the cable itself may be visible under certain environmental conditions.


Russia Today, 24-05-2025
AI hallucinations: a budding sentience or a global embarrassment?
In a farcical yet telling blunder, multiple major newspapers, including the Chicago Sun-Times and the Philadelphia Inquirer, recently published a summer-reading list riddled with nonexistent books 'hallucinated' by ChatGPT, many of them falsely attributed to real authors. The syndicated article, distributed by Hearst's King Features, peddled fabricated titles based on woke themes, exposing both the media's overreliance on cheap AI content and the incurable rot of legacy journalism. That this travesty slipped past editors at moribund outlets (the Sun-Times had just axed 20% of its staff) underscores a darker truth: when desperation and unprofessionalism meet unvetted algorithms, the frayed line between legacy media and nonsense simply vanishes.

The trend seems ominous. AI is now overwhelmed by a smorgasbord of fake news, fake data, fake science and unmitigated mendacity that is churning established logic, facts and common sense into a putrid slush of cognitive rot.

But what exactly is AI hallucination? It occurs when a generative AI model (like ChatGPT, DeepSeek, Gemini, or DALL·E) produces false, nonsensical, or fabricated information with high confidence. Unlike human errors, these mistakes stem from how AI models generate responses: by predicting plausible patterns rather than synthesizing established facts.

There are several reasons why AI generates wholly incorrect information, and none of them has anything to do with the ongoing fearmongering over AI attaining sentience or even acquiring a soul.

Training on imperfect data: AI learns from vast datasets replete with biases, errors, and inconsistencies. Prolonged training on such material may result in the regurgitation of myths, outdated facts, or conflicting sources.

Over-optimization for plausibility: Contrary to what some experts claim, AI is nowhere near attaining 'sentience' and therefore cannot discern 'truth.' GPTs in particular are giant planetary-scale neural encyclopedias that crunch data and synthesize the most salient information based on pre-existing patterns. When gaps exist, the model fills them with statistically probable (but likely wrong) answers; a short sketch of this mechanism appears at the end of this article. This was, however, not the case in the Sun-Times fiasco.

Lack of grounding in reality: Unlike humans, AI has no direct experience of the world. It cannot verify facts; it can only mimic language structures. For example, when asked 'What's the safest car in 2025?' it might invent a model that doesn't exist, because it is filling the gap with an ideal car bearing desired features, as determined by the mass of 'experts,' rather than a real one.

Prompt ambiguity: Many GPT users are lazy and may not know how to frame a proper prompt. Vague or conflicting prompts also increase hallucination risks. A ridiculous request like 'Summarize a study about cats and gender theory' may yield an AI-fabricated fake study that looks very academic on the surface.

Creative generation vs. factual recall: AI models like ChatGPT prioritize fluency over accuracy. When unsure, they improvise rather than admit ignorance. Have you ever come across a GPT answer that goes: 'Sorry, this is beyond the remit of my training'?

Reinforcing fake news and patterns: GPTs can identify particular users by their logins (a no-brainer), IP addresses, and semantic, syntactic and personal propensities, and then reinforce those patterns.
When someone constantly uses GPTs to peddle fake news or propaganda puff pieces, AI may recognize the pattern and proceed to generate content that is partially or wholly fictitious. This is a classic case of algorithmic supply and demand. Remember: GPTs not only train on vast datasets; they can also train on your dataset.

Reinforcing Big Tech biases and censorship: Virtually every Big Tech firm behind the GPT rollouts is also engaged in industrial-scale censorship and algorithmic shadowbanning. This applies to individuals and alternative media platforms alike, and constitutes a modern-day, digitally curated damnatio memoriae. Google's search engine, in particular, has a propensity for up-ranking the output of a serial plagiarist over the original article. The perpetuation of this systemic fraud may explode into an outright global scandal one day. Imagine waking up one morning to read that your favorite quotes or works were the products of a carefully calibrated campaign of algorithmic shunting at the expense of the original ideators or authors. This is the inevitable consequence of monetizing censorship while outsourcing 'knowledge' to an AI hobbled by ideological parameters.

Experiments on human gullibility: I recently raised the hypothetical possibility of AI being trained to study human gullibility, in a way conceptually similar to the Milgram Experiment, the Asch Conformity Experiments and their iteration, the Crutchfield Situation. Humans are both gullible and timorous, and the vast majority tend to conform either to the human mob or, in the case of AI, to the 'data mob.' This will inevitably have real-world consequences, as AI is increasingly embedded in critical, time-sensitive operations, from pilots' cockpits and nuclear plants to biowarfare labs and sprawling chemical facilities. Now imagine a fateful decision being made in such high-stakes environments on the basis of flawed AI input. This is precisely why 'future planners' must understand both the percentage and the personality types of qualified professionals who are prone to trusting faulty machine-generated recommendations.

When AI generates an article on one's behalf, any journalist worth his salt should treat it as having been written by another party, and therefore as subject to fact-checking and revision. As long as the final product is fact-checked, and substantial value, content and revisions are added to the original draft, I see no conflict of interest or breach of ethics in the process. GPTs can act as a catalyst, an editor or a 'devil's advocate' to get the scribal ball rolling.

What happened in this saga is that the writer, Marco Buscaglia, appears to have wholly cut and pasted ChatGPT's opus and passed it off as his own. (Since this embarrassing episode was exposed, his website has gone blank and private.) The overload of woke-themed nonsense generated by ChatGPT should have raised red flags in Buscaglia's mind, but I am guessing he might be prone to peddling this stuff himself. However, all the opprobrium currently directed at Buscaglia should equally be applied to the editors at King Features Syndicate and the various news outlets that failed to fact-check the content even as they posed as bastions of the truth, the whole truth and nothing but the truth. Gatekeepers at every level simply failed to do their jobs. This is a collective dereliction of duty from a media that casually pimps its services to the high and mighty while pontificating about ethics, integrity and values to lesser mortals.
I guess we are used to such double standards by now. But here is the terrifying part: I am certain that faulty data and flawed inputs are already flowing from AI systems into trading and financial platforms, aviation controls, nuclear reactors, biowarfare labs, and sensitive chemical plants, even as I write this. The gatekeepers just aren't qualified for such complex tasks, except on paper, that is. These are the consequences of a world 'designed by clowns and supervised by monkeys.'

I will end on a note highlighting the irony of ironies: all the affected editors in this saga could have used ChatGPT to subject Buscaglia's article to a factual content check. It would have taken only 30 seconds!
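As promised above, here is a minimal sketch of the 'over-optimization for plausibility' failure mode: a toy bigram model that chains together statistically likely words with no notion of truth. This is an illustration under stated assumptions, not the architecture of ChatGPT or any real system; the tiny corpus and the car names in it are invented for the example.

```python
# Toy sketch: a language model ranks continuations by statistical
# plausibility, not by truth, so fluent output can still be false.
# (Illustrative only; corpus and car names are invented.)
import random
from collections import defaultdict

# Tiny "training corpus"; real models ingest trillions of tokens, errors and all.
corpus = ("the safest car of 2025 is the aurora gt "
          "the safest car of 2024 is the volvo ex90").split()

# Count which word follows which: a crude stand-in for next-token prediction.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, max_words: int = 9) -> str:
    """Chain together locally plausible words, with no fact-checking step."""
    word, output = start, [start]
    for _ in range(max_words):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # plausible, but unverified
        output.append(word)
    return " ".join(output)

# May emit a blended claim like "the safest car of 2025 is the volvo ex90":
# grammatical, confident, and wrong, i.e. a hallucination in miniature.
print(generate("the"))
```

Nothing in that loop checks facts; it only extends statistically likely patterns, which is exactly why gaps in the data get filled with confident fabrications.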


Russia Today, 20-05-2025
Telegram's Durov says French spies pressed him on Eastern European geopolitics
Telegram founder Pavel Durov has revealed more details about an alleged attempt by France's foreign intelligence agency to pressure him into censoring conservative voices on his platform, dismissing its purported humanitarian concerns as a 'manipulation tactic.'

On Sunday, Durov accused the head of the Directorate-General for External Security (DGSE), Nicolas Lerner, of asking him to take down Romanian Telegram channels ahead of the country's presidential runoff. The DGSE rejected the allegations of political interference, insisting that it had contacted Durov on multiple occasions only to 'firmly remind him of his company's responsibilities, and his own personally, in preventing terrorist and child pornography threats.'

'French foreign intelligence confirmed they met with me – allegedly to fight terrorism and child porn. In reality, child porn was never even mentioned. They did want IPs of terror suspects in France, but their main focus was always geopolitics: Romania, Moldova, Ukraine,' Durov said in a post on X on Monday.

The Russian-born entrepreneur argued that Telegram has long combated child abuse through 'content fingerprint bans, dedicated moderation teams, NGO hotlines, and daily transparency reports on banned content.' 'Falsely implying Telegram did nothing to remove child porn is a manipulation tactic,' he added, insisting that Telegram's extensive moderation efforts have been 'verifiable' since at least 2018.

Durov, whose company is headquartered in Dubai, was arrested in France last August and charged with complicity in crimes allegedly carried out by Telegram users, including extremism and child abuse. He was released on €5 million ($5.46 million) bail and dismissed the charges as baseless. He was eventually permitted to leave France in March after assuring the court that Telegram had increased cooperation with authorities worldwide.

In September 2024, the encrypted messaging service updated its privacy policy to allow the collection of metadata – such as IP addresses, device information, and username changes – for up to one year. According to the policy, this data may be shared with 'relevant judicial authorities' if a user is suspected of engaging in illicit activity.

Late last year, Telegram channels belonging to major Russian news outlets were rendered inaccessible across the EU. Durov criticized the move, claiming the bloc imposes more censorship and media restrictions than Russia.

The latest controversy emerged as Romania held a runoff presidential election. The vote followed a Constitutional Court ruling that overturned the initial result after right-wing independent candidate Calin Georgescu unexpectedly won the first round. The case was cited by US Vice President J.D. Vance during a February speech in Munich as an example of EU governments undermining democratic norms.