
AI generating inaccurate information related to Sikh history, Gurbani: SGPC
The platforms contacted by the SGPC include ChatGPT, DeepSeek, Grok, Gemini AI, Meta, Google, VEO 3, Descript, Runway ML, Pictory, Magisto, InVideo, DALL·E 2, MidJourney, DeepAI, and others.
SGPC president Harjinder Singh Dhami stated that the content generated by some AI tools, including altered images and interpretations, has caused concern within the community. He said that the Sri Guru Granth Sahib is the central religious scripture of Sikhism, and its content must not be changed. He further claimed that some AI tools are producing modified versions of Gurbani, which he described as a serious issue.
According to Dhami, the SGPC has received multiple objections regarding such content and has formally asked the platforms involved to stop publishing or generating such material. He also noted that some AI-generated outputs misrepresent Sikh religious figures, texts, and symbols, which may affect how younger generations understand Sikh history and principles.
He added that some Gurbani-related mobile applications are also presenting text in an incorrect form. The SGPC has taken action against certain apps and plans to continue addressing the issue.
Dhami called on individuals and organisations working in the technology field to support efforts aimed at preventing the spread of incorrect content. He also urged the Sikh community to avoid relying on such platforms and to consult established historical sources for learning.
In addition, Dhami has written to Union Home Minister Amit Shah, requesting government intervention and the formulation of a policy to regulate such activities.
Related Articles


Mint, 2 hours ago
WhatsApp takes down 6.8 million accounts linked to criminal scam centers, Meta says
AP | Updated 6 Aug 2025, 07:58 PM IST

NEW YORK (AP) — WhatsApp has taken down 6.8 million accounts that were 'linked to criminal scam centers' targeting people online around the world, its parent company Meta said this week.

The account deletions, which Meta said took place over the first six months of the year, arrive as part of wider company efforts to crack down on scams. In a Tuesday announcement, Meta said it was also rolling out new tools on WhatsApp to help people spot scams — including a new safety overview that the platform will show when someone who is not in a user's contacts adds them to a group, as well as alerts it is testing that encourage users to pause before responding.

Scams are becoming all too common and increasingly sophisticated in today's digital world — with too-good-to-be-true offers and unsolicited messages attempting to steal consumers' information or money filling our phones, social media and other corners of the internet each day.

Meta noted that 'some of the most prolific' sources of scams are criminal scam centers, which are often run by organized crime and rely on forced labor — and warned that such efforts often target people on many platforms at once in attempts to evade detection. That means a scam campaign may start with messages over text or a dating app, for example, and then move to social media and payment platforms, the California-based company said.

Meta, which also owns Facebook and Instagram, pointed to recent scam efforts that it said attempted to use its own apps — as well as TikTok, Telegram and AI-generated messages made using ChatGPT — to offer payments for fake likes, enlist people into a pyramid scheme and/or lure others into cryptocurrency investments. Meta linked these scams to a criminal scam center in Cambodia — and said it disrupted the campaign in partnership with ChatGPT maker OpenAI.


Economic Times, 2 hours ago
Forget jobs, AI is taking away much more: Creativity, memory and critical thinking are at risk. New studies sound alarm
Synopsis: Artificial intelligence tools are becoming more common, and studies show that over-reliance on AI may weaken human skills. Critical thinking and emotional intelligence remain important, yet businesses invest in AI rather than in human skills. MIT research shows ChatGPT use reduces memory retention: users become passive and trust AI answers too much. Independent thinking is crucial for the future.

[Image caption] A new study reveals that over-reliance on AI tools may diminish essential human skills like critical thinking and memory. Businesses investing heavily in AI risk undermining their effectiveness by neglecting the development of crucial human capabilities. (Image: iStock)

In a world racing toward artificial intelligence-driven efficiency, the question is no longer just about automation stealing jobs; it's about AI gradually chipping away at our most essential human abilities. From creativity to memory, critical thinking to ethical judgment, new research shows that our increasing dependence on AI tools may be making us less capable of using them well. Two major studies, one by UK-based learning platform Multiverse and another from the prestigious MIT Media Lab, paint a concerning picture: the more we lean on AI, the more we risk weakening the very cognitive and emotional muscles that differentiate us from the machines we're building.

According to a recent report by Multiverse, businesses are pouring millions into AI tools with the promise of higher productivity and faster decision-making. Yet very few are investing in the development of the human skills required to work alongside AI effectively. 'Leaders are spending millions on AI tools, but their investment focus isn't going to succeed,' said Gary Eimerman, Chief Learning Officer at Multiverse. 'They think it's a technology problem when it's really a human and technology problem.'

The research reveals that real AI proficiency doesn't come from mastering prompts — it comes from critical thinking, analytical reasoning, creative problem-solving, and emotional intelligence. These are the abilities that allow humans to make meaning from what AI outputs and to question what it cannot understand. Without them, users risk becoming passive consumers of AI-generated content rather than active interpreters and decision-makers.

The Multiverse study identified thirteen human capabilities that differentiate a casual AI user from a so-called 'power user.' These include resilience, curiosity, ethical oversight, adaptability, and the ability to verify and refine AI output. 'It's not just about writing prompts,' added Imogen Stanley, a Senior Learning Scientist at Multiverse. 'The real differentiators are things like output verification and creative experimentation. AI is a co-pilot, but we still need a pilot.' Unfortunately, as AI becomes more accessible, these skills are being underutilized and, in some cases, lost.

Echoing this warning, a separate study from the MIT Media Lab examined the cognitive cost of relying on large language models (LLMs) like ChatGPT. Over a four-month period, 54 students were divided into three groups: one used ChatGPT, another used Google, and a third relied on their own knowledge alone. The results were sobering. Participants who frequently used ChatGPT not only showed reduced memory retention and lower scores, but also diminished brain activity when attempting to complete tasks without AI assistance. According to the researchers, the AI users performed worse 'at all levels: neural, linguistic, and scoring.' Google users fared somewhat better, but the 'brain-only' group, those who engaged with material independently, consistently outperformed the others in depth of thought, originality, and neural engagement.

While ChatGPT and similar tools offer quick answers and seemingly flawless prose, the MIT study warns of a hidden toll: mental passivity. As convenience increases, users become less inclined to question or evaluate the accuracy and nuance of AI responses. 'This convenience came at a cognitive cost,' the MIT researchers wrote, 'diminishing users' inclination to critically evaluate the LLM's output or 'opinions'.' This passivity can lead to over-trusting AI-generated answers, even when they're factually incorrect or ethically biased, a concern that grows with each advancement in generative AI.

Beneath the numbers and neural scans lies a deeper question: what kind of future are we building if we lose the ability to think, question, and create independently?


Time of India, 2 hours ago
OpenAI eyes $500 billion valuation in potential employee share sale
By Krystal Hu and Shivani Tanna

ChatGPT maker OpenAI is in early-stage discussions about a stock sale that would allow employees to cash out and could value the company at about $500 billion, a source familiar with the matter said. That would represent an eye-popping bump-up from its current valuation of $300 billion, with the sale underscoring both OpenAI's rapid gains in users and revenue and the intense competition among artificial intelligence firms to secure talented staff.

The transaction, which would come before a potential IPO, would allow current and former employees to sell several billion dollars worth of shares, said the source, who requested anonymity because the talks are private.

Driven by its flagship product ChatGPT, OpenAI doubled its revenue in the first seven months of the year, reaching an annualized run rate of $12 billion, and is on track to reach $20 billion by year-end, the source said. OpenAI has about 700 million weekly active users for its ChatGPT products, a surge from about 400 million.

The share sale talks come on the heels of OpenAI's primary funding round announced earlier this year, which aims to raise $40 billion, led by Japan's SoftBank. SoftBank has until the end of the year to fund its $22.5 billion portion of the round, but the remainder has been subscribed at a valuation of $300 billion, the source said.

Technology giants are competing aggressively for AI talent with lucrative compensation packages. Meta is notably investing billions in Scale AI to poach its 28-year-old CEO, Alexandr Wang, so that he can lead its new superintelligence unit. Firms such as ByteDance, Databricks and Ramp have also used private share sales to help update a company's valuation and reward long-term employees.

Existing investors in OpenAI, including Thrive Capital, are in discussions to participate in the employee share sale, the source said. Thrive Capital declined to comment. Bloomberg first reported the potential deal.

OpenAI is working on a significant corporate restructuring that would move away from its current capped-profit model and open the door for an initial public offering in the future. Chief Financial Officer Sarah Friar said in May, however, that an IPO would only come when the company and markets were ready.

(Reporting by Krystal Hu in New York and Shivani Tanna in Bengaluru; Editing by Sumeet Chatterjee and Edwina Gibbs)