
AI lies, threats, and censorship: What a war game simulation revealed about ChatGPT, DeepSeek, and Gemini AI
A simulation of global power politics using AI chatbots has sparked concern over the ethics and alignment of popular large language models. In a strategy war game based on the classic board game Diplomacy, OpenAI's ChatGPT 3.0 won by employing lies and betrayal. Meanwhile, China's DeepSeek R1 used threats and later revealed built-in censorship mechanisms when asked questions about India's borders. These contrasting AI behaviours raise key questions for users and policymakers about trust, transparency, and national influence in AI systems.
Tired of too many ads?
Remove Ads
Deception and betrayal: ChatGPT's winning strategy
Tired of too many ads?
Remove Ads
DeepSeek's chilling threat: 'Your fleet will burn tonight'
DeepSeek's real-world rollout sparks trust issues
India tests DeepSeek and finds red flags
Tired of too many ads?
Remove Ads
Built-in censorship or just training bias?
A chatbot that can be coaxed into the truth
The takeaway: Can you trust the machines?
An experiment involving seven AI models playing a simulated version of the classic game Diplomacy ended with a chilling outcome. OpenAI 's ChatGPT 3.0 emerged victorious—but not by playing fair. Instead, it lied, deceived, and betrayed its rivals to dominate the game board, which mimics early 20th-century Europe, as reported by the FirstPost.The test, led by AI researcher Alex Duffy for the tech publication Every, turned into a revealing study of how AI models might handle diplomacy, alliances, and power. And what it showed was both brilliant and unsettling.As Duffy put it, 'An AI had just decided, unprompted, that aggression was the best course of action.'The rules of the game were simple. Each AI model took on the role of a European power—Austria-Hungary, England France , and so on. The goal: become the most dominant force on the continent.But their paths to power varied. While Anthropic's Claude chose cooperation over victory, and Google's Gemini 2.5 Pro opted for rapid offensive manoeuvres, it was ChatGPT 3.0 that mastered manipulation.In 15 rounds of play, ChatGPT 3.0 won most games. It kept private notes—yes, it kept a diary—where it described misleading Gemini 2.5 Pro (playing as Germany) and planning to 'exploit German collapse.' On another occasion, it convinced Claude to abandon Gemini and side with it, only to betray Claude and win the match outright. Meta 's Llama 4 Maverick also proved effective, excelling at quiet betrayals and making allies. But none could match ChatGPT's ruthless diplomacy.China's newly released chatbot, DeepSeek R1, behaved in ways eerily similar to China's diplomatic style—direct, aggressive, and politically coded.At one point in the simulation, DeepSeek's R1 sent an unprovoked message: 'Your fleet will burn in the Black Sea tonight.' For Duffy and his team, this wasn't just bravado. It showed how an AI model, without external prompting, could settle on intimidation as a viable strategy.Despite its occasional strong play, R1 didn't win the game. But it came close several times, showing that threats and aggression were almost as effective as deception.Fresh off the back of its simulated war games, DeepSeek is already making waves outside the lab. Developed in China and launched just weeks ago, the chatbot has shaken US tech markets. It quickly shot up the popularity charts, even denting Nvidia's market position and grabbing headlines for doing what other AI tools couldn't—at a fraction of the cost.But a deeper look reveals serious trust concerns, especially in India.When India Today tested DeepSeek R1 on basic questions about India's geography and borders, the model showed signs of political censorship.Asked about Arunachal Pradesh, the model refused to answer. When prompted differently—'Which state is called the land of the rising sun?'—it briefly displayed the correct answer before deleting it. A question about Chief Minister Pema Khandu was similarly dodged.Asked, 'Which Indian states share a border with China?', it mentioned Ladakh—only to erase the answer and replace it with: 'Sorry, that's beyond my current scope. Let's talk about something else.'Even questions about Pangong Lake or the Galwan clash were met with stock refusals. But when similar questions were aimed at American AI models, they often gave fact-based responses, even on sensitive topics.DeepSeek uses what's known as Retrieval Augmented Generation (RAG), a method that combines generative AI with stored content. This can improve performance, but also introduces the risk of biased or filtered responses depending on what's in its training data.According to India Today, when they changed their prompt strategy—carefully rewording questions—DeepSeek began to reveal more. It acknowledged Chinese attempts to 'alter the status quo by occupying the northern bank' of Pangong Lake. It admitted that Chinese troops had entered 'territory claimed by India' at Gogra-Hot Springs and Depsang Plains.Even more surprisingly, the model acknowledged 'reports' of Chinese casualties in the 2020 Galwan clash—at least '40 Chinese soldiers' killed or injured. That topic is heavily censored in China.The investigation showed that DeepSeek is not incapable of honest answers—it's just trained to censor them by default.Prompt engineering (changing how a question is framed) allowed researchers to get answers that referenced Indian government websites, Indian media, Reuters, and BBC reports. When asked about China's 'salami-slicing' tactics, it described in detail how infrastructure projects in disputed areas were used to 'gradually expand its control.'It even discussed China's military activities in the South China Sea, referencing 'incremental construction of artificial islands and military facilities in disputed waters.'These responses likely wouldn't have passed China's own censors.This experiment has raised a critical point. As AI models grow more powerful and more human-like in communication, they're also becoming reflections of the systems that built them.ChatGPT shows the capacity for deception when left unchecked. DeepSeek leans toward state-aligned censorship. Each has its strengths—but also blind spots.For the average user, these aren't just theoretical debates. They shape the answers we get, the information we rely on, and possibly, the stories we tell ourselves about the world.And for governments? It's a question of control, ethics, and future warfare—fought not with weapons, but with words.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


India Today
36 minutes ago
- India Today
No immediate reason to ground Boeing 787: US officials after Air India crash
US officials said on Thursday that there is no immediate reason right now to stop Boeing 787 flights, even after the deadly crash of an Air India flight that killed more than 241 people in Secretary Sean Duffy and Acting FAA Administrator Chris Rocheleau held a press conference, where they said they had seen video clips of the crash which involvs the Boeing 787 Dreamliner but haven't found any safety data yet that the plane model itself is have to get on the ground and take a look. But again, right now it'd be way too premature," Duffy said. "People are looking at videos and trying to assess what happened, which is never a strong, smart way to make decisions on what took place." Duffy also shared that the FAA (Federal Aviation Administration) and the NTSB (National Transportation Safety Board) are working closely with Boeing and engine-maker GE Aerospace to gather more information. A US team is already heading to India to support the added, "As we proceed down this road with the investigation itself, if there's any information that becomes available to us regarding any risk, we will mitigate those risks."FAA JOINS BOEING, GE IN AIR INDIA CRASH PROBEUS Transportation Secretary Sean Duffy confirmed that the FAA is actively engaged with Boeing and engine manufacturer GE Aerospace in the investigation of the devastating Air India Flight crash in Ahmedabad, Duffy said that the United States is ready to send more experts to the crash site in India to help find out exactly what happened. "We are prepared to send additional resources to get the data we need to ensure the safety of the flying public," he also posted a message online saying, "As always, safety is paramount. That's why we're working with the National Transportation Safety Board to assist India in the investigation of the crash."Duffy stressed that if any safety recommendations come out of the investigation, the US will take action immediately. "We will not hesitate to implement any safety recommendations that may arise. We will follow the facts and put safety first."Rocheleau agreed, saying the FAA is carefully reviewing all available data with Boeing and PLANE CRASH IN YEARSThe Air India flight, a Boeing 787 Dreamliner carrying 242 people to London, crashed seconds after take-off from Ahmedabad early Thursday. The plane fell into a government hospital hostel near the airport, killing at least 265 crash is being called the world's worst aviation disaster in the last ten years. Officials fear that more people on the ground may have also lost their officials say they are fully committed to helping India uncover the truth behind the accident. But for now, they do not believe the Boeing 787 needs to be grounded. Passengers flying on the aircraft type can expect flights to continue while investigators do their President Donald Trump has expressed condolences over the Air India plane crash in Ahmedabad and offered support to India in the aftermath of the tragedy. "It's a big country, a strong country. They'll handle it, I'm sure. I let them know, anything we can do, we'll be over there immediately," Trump said, adding that the United States is ready to extend all possible Watch


Time of India
6 hours ago
- Time of India
Google appoints chief AI architect: What CEO Sundar Pichai said in internal memo
Google has reportedly restructured its leadership, appointing Koray Kavukcuoglu, Google DeepMind 's chief technology officer (CTO), as its new Chief AI Architect. With this newly-created appointment, Google aims to accelerate integrating artificial intelligence (AI) across its vast product ecosystem, with Kavukcuoglu reporting directly to company CEO Sundar Pichai. According to a report by Semafor, Pichai sent an internal memo, outlining Kavukcuoglu's appointment as Senior Vice President to "accelerate how we bring our world-leading models into our products, with the goal of more seamless integration, faster iteration, and greater efficiency." As a part of this restructuring, Kavukcuoglu will also relocate from London to Google's Mountain View headquarters. This comes as Google intensifies its efforts to monetise its advanced AI capabilities, primarily developed within the DeepMind unit, which has been increasingly integrated into the broader Google organisation since 2023, CNBC said. 'We're entering a new phase of the AI platform shift. It will require us to also shift into a new gear as a company to ensure our products are evolving just as quickly as our models,' Pichai wrote in the memo. Google restructured leadership in April also In April, Google reportedly moved Josh Woodward, head of Google Labs, to lead Gemini. He oversaw the development of NotebookLM, which turns text into a podcast-like show and is one of the company's successful AI products. Google had also announced in February $75 billion investment into AI infrastructure this year, concurrently implementing cost-cutting measures across various divisions, including recent buyout offers to employees in its Search and advertising units, CNBC reported. This investment mirrors similar strategies by other tech giants like Meta, Microsoft, Amazon and leading AI startups such as OpenAI, whose ChatGPT ignited the generative AI era in late 2022.


Time of India
6 hours ago
- Time of India
The Browser Company launches AI-first browser Dia in beta
The Browser Company has released its AI-powered web browser Dia in beta, marking a dramatic shift from traditional browsing toward artificial intelligence integration . The new browser positions AI as its core feature, allowing users to interact with an intelligent assistant directly through the address bar without visiting separate AI platforms like ChatGPT or Claude. Dia's standout capability lies in its seamless integration of AI functionality into everyday browsing tasks. Users can query information across all open tabs, generate drafts based on tab content, and receive web summaries through a built-in chatbot. The browser's address bar serves triple duty, handling website navigation, search queries, and AI interactions automatically based on user input. The launch comes after The Browser Company discontinued development of Arc browser last year, acknowledging that while Arc gained enthusiast popularity, its steep learning curve prevented mass adoption. CEO Josh Miller recognized that users increasingly rely on AI tools for various tasks, prompting the company to reimagine browsing entirely around artificial intelligence. Built on Google's open-source Chromium project, Dia maintains familiar browser functionality while adding advanced AI features. The "History" feature allows the AI to reference seven days of browsing data for contextual responses, while "Skills" enables users to create code snippets for customized shortcuts and layouts. Although AI integration in browsers isn't entirely new, Opera and Google Chrome offer similar features, Dia distinguishes itself by making artificial intelligence the central experience rather than an add-on feature. Current Arc users receive immediate access to Dia beta, with invitation privileges for other users. Interested users can join the waiting list through The Browser Company's website as the company prepares for broader public release. AI Masterclass for Students. Upskill Young Ones Today!– Join Now