
War Thunder forums hit again as players leak classified military data in another online mishap
The leak, labeled "Distribution Statement C," adds to a growing list of incidents in which classified or export-restricted military documents appeared in forums, forcing developers to remove posts and enforce strict rules, as per a report by UK Defence Journal.
The AV-8B Harrier handbook posted on the War Thunder forum has become a point of controversy because of the legal restrictions on its distribution.
Gaijin Entertainment created the free-to-play combat simulation game War Thunder, which depicts air, land, and naval combat from several historical periods. A sizable community of military enthusiasts, including active and retired service members, is drawn to the game because of its realism and meticulous attention to technical detail.
Players frequently engage in lengthy discussions about the performance of different military vehicles, which has occasionally prompted some to share actual military documents in an attempt to "correct" the representation of vehicles, as quoted in a report by UK Defence Journal.

In the most recent instance, a user uploaded a portion of a NATOPS handbook for the US Navy and Marine Corps' AV-8B and TAV-8B Harrier. The handbook is designated "Distribution Statement C," indicating that it has not been authorized for public release and that only authorized U.S. government employees and contractors may use it.

The community manager for War Thunder verified that the document violated the forum's guidelines and was promptly deleted, as per the report. The developer's policy strictly prohibits the use of any export-controlled, classified, or otherwise restricted documentation. The community manager reaffirmed that, regardless of the purpose of the post, no such content is allowed on the company's platforms and that these guidelines are strictly enforced.

An anonymous former RAF engineer told the UK Defence Journal: "It's unlikely that this is the largest leak ever. There are online manuals available, particularly the older ones. However, War Thunder and other platforms must take action if it is marked as restricted or controlled."

Export control laws still apply even if the documents are widely disseminated informally or are accessible online. Regardless of how "outdated" the content is, violations of Distribution Statement C, ITAR, and other national security frameworks may lead to legal action.

According to Gaijin Entertainment, such content never influences the creation of in-game vehicles.
Developers have consistently discouraged users from submitting or discussing restricted material, and they use only legally accessible, publicly available sources when modelling vehicle performance. Posting restricted or export-controlled content violates War Thunder's rules and can lead to the post being removed and the user banned. Although some fans doubt the gravity of the restriction, a document's legal status is unaffected by its being leaked or posted on a file-sharing website.

Why was the Harrier manual removed from the forum?
It was labeled "Distribution Statement C," which makes it legally restricted, so posting it violated forum rules.
Is this the first leak for War Thunder?
No, this is at least the ninth instance in which players have posted restricted documents.
Related Articles


Time of India
2 days ago
Can you trust AI to manage your mutual funds?: Bruce Keith on human vs AI debate
As artificial intelligence reshapes industries worldwide, the world of investing is no exception. In the U.S., over 35% of mutual funds are now powered by AI, a stark contrast to just 1% in India. So, what does this mean for Indian investors? Can AI really outperform human fund managers? And more importantly, can it be trusted? In this exclusive conversation, Bruce Keith, Co-Founder and CEO of InvestorAI, sits down with Neha Vashishth Mahajan to break down how AI is changing mutual funds, from cost and performance to risk profiling and regulation. Excerpts:

Q. How is AI changing the game for mutual funds, particularly at the front end for consumers? What's your view?

Bruce Keith: From a global standpoint, especially looking at the U.S. as the frontrunner in mutual fund innovation, they've been using AI for several years now. In contrast, India is still catching up. Currently, around 35% of mutual funds in the U.S. are quant or AI-driven, while in India, that figure is closer to just 1%. That's a significant gap. In the U.S., AI is used for market sentiment analysis, stock selection, strategy reinforcement, and even writing research papers. In India, we're still mostly using AI to enhance backend operations rather than drive front-end investment research, but the direction is promising.

Q. With that kind of gap, there's clearly a need for deeper understanding in India. What kind of AI models are mutual fund companies using, and how are they different from tools like ChatGPT?

Bruce: ChatGPT is useful, but for asset management firms, differentiation is key. At InvestorAI, we've built our own foundational AI; we don't use ChatGPT, Gemini, or any off-the-shelf models. Everything is developed in-house, in India, from the servers to the code. It's similar to what firms like Renaissance or Jane Street do in the U.S.
To build a real edge in this space, especially in asset management, you need to develop your own foundational AI.

Q. Are there any real-world examples where AI-led mutual funds have actually outperformed traditional ones?

Bruce: Absolutely. You don't reach 35% market share in the U.S. without delivering results. At InvestorAI, while we don't run mutual funds, we do manage equity baskets. Since our product went live in April 2021, we've delivered a CAGR of 45%, compared to the market's 17% over the same period. That's more than double, in live trading. In India, early forays into quant strategies weren't always successful, which is why many players are still cautious.

Q. From a retail investor's point of view, how does AI help recommend mutual funds based on goals or risk appetite?

Bruce: Regulations guide this process quite tightly and require firms to assess risk appetite through specific questions. That doesn't need heavy AI; it's well automated. But AI becomes valuable when comparing declared risk appetite with actual behavior. We found in a study that about 40% of people acted in ways inconsistent with their self-reported risk levels. AI can detect that gap and help investors make better-aligned decisions.

Q. AI-driven advisory platforms are booming. What's fueling this surge?

Bruce: AI is a hot buzzword. Every CEO today talks about it. Many businesses integrate GenAI tools for branding more than functionality. But foundational AI platforms, like ours, offer deeper value. Younger investors, in particular, are more open to trusting AI over legacy institutions. Also, AI doesn't sleep; it works around the clock, unlike fund managers. And with the ability to reduce manufacturing costs, AI allows financial services to be delivered instantly, much like ordering food on Zomato or Swiggy.

Q. A concern here is accountability. In traditional systems, fund managers are accountable. With AI, who takes responsibility if something goes wrong?
And where does SEBI stand?

Bruce: A human-in-the-loop model is essential. At InvestorAI, all recommendations undergo final human review before being released. SEBI's latest circular focused on AI in the back office, not yet on investment strategy or manufacturing. But AI's biggest value lies in reducing cost and improving access, which benefits all stakeholders. As adoption grows, I expect more structured regulatory frameworks to emerge.

Q. You mentioned AI reduces human bias, but does AI come with its own set of biases?

Bruce: It absolutely does. All AI models are built on human-generated data and inherit those biases. Machines reflect whatever biases are embedded in the data and algorithms. For example, ask several GenAI models for a random number and you'll get the same answer across the board; that's bias in action. The challenge is transparency: AI systems need to be clearer about their data sources and potential biases so users can interpret results more effectively.

Q. How does AI handle market volatility, especially given the unpredictability in global and domestic events?

Bruce: AI can't predict unpredictable political actions or wars, but it can pick up signals. For example, just before the Israel-Iran tensions escalated, our India model shifted heavily into healthcare, a classic risk-off move. The AI sensed something was off through trading volumes and patterns, even though it didn't 'know' what was coming. With the ability to process a trillion data points daily and react instantly, AI offers unparalleled responsiveness to market shifts, something human analysts simply can't match in real time.

Q. Any closing thoughts for retail investors who may still be hesitant about using AI?

Bruce: Where Wall Street goes, Dalal Street follows. We'll see more AI integration in India. But as an industry, we need to develop transparent frameworks so retail investors can truly understand AI-based offerings. My advice: try it in small amounts.
Never put your entire portfolio into AI. Diversify, have a portion in active funds, another in passive, and some in AI-driven strategies. I personally allocate about a third to AI. It's a growing space, and informed participation is the best way forward.
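Keith's performance claim can be sanity-checked with a short compounding sketch. The 45% and 17% CAGR figures are from the interview; the four-year horizon is an assumption for illustration (the product went live in April 2021), not a figure Keith stated:

```python
def total_growth(cagr: float, years: float) -> float:
    """Total growth multiple from compounding an annual rate over `years`."""
    return (1 + cagr) ** years

# Figures quoted in the interview; the 4-year horizon is an assumption.
ai_multiple = total_growth(0.45, 4)      # roughly 4.4x
market_multiple = total_growth(0.17, 4)  # roughly 1.9x
print(f"AI basket: {ai_multiple:.2f}x, market: {market_multiple:.2f}x")
```

The point of the sketch is that a CAGR gap compounds: "more than double" the annual rate translates into a much larger gap in total growth over a multi-year period.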


Time of India
2 days ago
334,000 Saudis trained in AI under national upskilling drive
More than 300,000 Saudis have been trained in AI concepts since 2024 under a national program.

A total of 334,000 Saudi citizens have now completed artificial intelligence (AI) training under a national initiative launched by the Saudi Data and Artificial Intelligence Authority (SDAIA), in collaboration with the Ministry of Education and the Ministry of Human Resources and Social Development. The 'One Million Saudis in AI' program, launched in September 2024, aims to equip citizens across all age groups and professional backgrounds with essential AI knowledge and skills. The initiative is part of Saudi Arabia's broader digital transformation strategy to prepare its workforce for the future.

Speaking during the Non-Profit Sector Forum in Education and Training 2025, held in Riyadh on July 2, SDAIA President Abdullah Alghamdi highlighted the major milestones achieved so far. He emphasised the significance of inter-agency cooperation in enhancing Saudi Arabia's global position in AI empowerment, particularly in promoting women's participation and raising community awareness.

Among the key outcomes of the program is the SAMAI initiative, launched at the Global AI Summit, which provides customised AI learning paths for a diverse audience. SDAIA also introduced the 'Road to ATHKA' initiative, which trained more than 570,000 students on foundational AI concepts. This was complemented by the 'Future Intelligence Programmers' program, which targeted educators, training over 10,000 teachers in AI instruction. Another major achievement was the National Olympiad for Programming and Artificial Intelligence (ATHKA), which saw the participation of more than 260,000 students from intermediate and secondary schools. Of those, 10,000 advanced to the final stage of the competition.
To further strengthen AI integration into education, SDAIA and the Ministry of Education jointly established the Centre for Excellence in Education. The centre is focused on institutionalising AI-based learning, sustaining future-ready academic initiatives, and improving student outcomes through smart technologies. The program continues to serve as a cornerstone of Saudi Arabia's Vision 2030 objectives, aiming to boost digital literacy, foster innovation, and build a tech-enabled knowledge economy.


Time of India
2 days ago
How 'now-famous' Indian techie Soham Parekh managed to work for several companies simultaneously
Indian software engineer Soham Parekh has publicly responded to the growing controversy surrounding his professional conduct. In an interview with TBPN, Parekh admitted that the allegations of working with multiple startups simultaneously were true. He revealed that he had been managing an intense workload, putting in nearly 140 hours every week. According to Parekh, this decision stemmed not from ambition or opportunism, but from a need to navigate a tough financial situation.

During the discussion, the hosts attempted to understand how Parekh maintained such a punishing schedule. They pointed out that he seemed to sleep only 6 to 8 hours per day while coding non-stop for 12 to 14 hours daily, including weekends. When asked whether the use of AI tools like code assistants contributed to him juggling multiple roles, Parekh agreed that these tools did enhance productivity, but he denied that they allowed him to take on more positions than he could handle.

Unconventional Work Ethic and Lifestyle

Parekh also spoke about his irregular routine, stating that he never followed a fixed work schedule for any company. Instead, he focused entirely on the immediate tasks before him. He admitted to operating on minimal sleep and said that friends and colleagues often remarked on his chronic lack of rest. Parekh described himself as someone who has become accustomed to sleeplessness, describing it almost as a part of his identity.

"I think people around me will probably say this: that I am notoriously known for not sleeping. I am a… you know… I am a serial-like non-sleeper at this point, I would say..." he said. He stated that he worked for 12-14 hours every day.
Reason Behind Moonlighting: Financial Crisis

Expanding on his motivations, Parekh reiterated that his choice to work multiple jobs was rooted in pressing financial challenges. He emphasized that no one willingly chooses to work nearly 140 hours per week unless driven by extreme necessity. His decision, he said, was not born out of greed but survival, as he had bills to pay and responsibilities to manage. "Financial circumstances, essentially. No one really likes to work 140 hours a week, but I had to do it out of necessity," he said.

Denying Outsourcing Allegations

As the controversy gained traction on social media, several users accused Parekh of transforming his multiple jobs into a business model by subcontracting his work to junior developers. These claims suggested that he was profiting off others' efforts while maintaining the illusion of handling all responsibilities himself. Parekh flatly rejected these allegations, asserting that he personally wrote every line of code assigned to him.

Journey from Mumbai to the U.S.

Parekh shared that he was originally from Mumbai and had moved to the United States in 2020. His initial plan was to relocate in 2018 to pursue graduate studies. However, financial difficulties forced him to delay that move. Once in the U.S., he finally enrolled and earned his master's degree in computer science from the prestigious Georgia Institute of Technology. Prior to this, he completed his undergraduate studies in computer engineering at the University of Mumbai, graduating with an outstanding GPA of 9.83 out of 10.

Triggering the Backlash

The allegations against Parekh surfaced when Suhail Doshi, co-founder of Mixpanel and Playground AI, issued a public advisory to other startup founders via X (formerly Twitter). Doshi accused Parekh of misleading several startups, many of which were funded by Y Combinator, by accepting roles with them simultaneously.
Following Doshi's post, at least six other technology executives confirmed that Parekh had been employed at their companies and was removed once his dual employment status was discovered. Parekh's story has sparked widespread debate in the tech industry about burnout, financial pressures, and the growing ethical dilemmas in remote work environments.