Turkey blocks X's Grok chatbot for alleged insults to Erdogan

The Hindu · 09-07-2025
A Turkish court has blocked access to Grok, the artificial intelligence chatbot developed by the Elon Musk-founded company xAI, after it generated responses that authorities said included insults to President Tayyip Erdogan.
Political bias, hate speech and accuracy have been concerns around AI chatbots since at least the launch of OpenAI's ChatGPT in 2022, with Grok itself recently drawing accusations of producing content containing antisemitic tropes and praise for Adolf Hitler.
The office of Ankara's chief prosecutor said on Wednesday it had launched a formal investigation into the incident, in what is Turkey's first such ban on access to an AI tool.
Neither X nor its owner Elon Musk has commented on the decision.
Last month, Musk promised an upgrade to Grok, suggesting there was "far too much garbage in any foundation model trained on uncorrected data".
According to Turkish media, Grok, which is integrated into X, generated offensive content about Erdogan when asked certain questions in Turkish.
The Information and Communication Technologies Authority (BTK) adopted the ban after a court order, citing violations of a Turkish law that makes insulting the president a criminal offence, punishable by up to four years in jail.
Critics say the law is frequently used to stifle dissent, while the government maintains it is necessary to protect the dignity of the office.

Related Articles

Tesla To Launch Its Second Experience Centre In Delhi's Aerocity Today

NDTV · 9 minutes ago

Tesla is set to inaugurate its second experience centre in India in Delhi's Aerocity today, 11th August. The Elon Musk-owned brand recently entered the Indian market with its first showroom in Mumbai's Bandra Kurla Complex (BKC). The Delhi showroom, located at Worldmark 3 in the Aerocity area of the national capital, will give visitors an opportunity to explore the electric vehicles Tesla has introduced in India.

Tesla launched the Model Y in India at the inauguration of its Mumbai showroom, and the new Delhi showroom will also put the Model Y on display. The car is offered in two versions: the Model Y RWD (rear-wheel drive), with a starting price of Rs 59.89 lakh, and the Model Y Long Range RWD, priced at Rs 67.89 lakh. The corresponding on-road prices are Rs 61.07 lakh for the RWD version and Rs 69.15 lakh for the Long Range variant.

In India, the Model Y comes with either a 60 kWh battery pack or a larger 75 kWh pack. The RWD variant is equipped with a single electric motor that generates approximately 295 hp. The 60 kWh battery has a claimed WLTP range of 500 km on a single charge, while the Long Range version claims 622 km.

Tesla also recently installed its first Superchargers in India at the BKC showroom and is bringing chargers to the Delhi showroom as well. The Delhi charging station has four V4 Supercharging stalls (DC chargers) along with four Destination Charging stalls (AC chargers). The Supercharging stalls offer a peak charging speed of 250 kW at Rs 24 per kWh, while the 11 kW destination chargers are priced at Rs 11 per kWh. In addition, Tesla will launch four destination chargers across Delhi NCR.

ChatGPT told man he found formula to wreck the internet, make force field vest

India Today · 15 minutes ago

A Canadian recruiter says a marathon three-week conversation with ChatGPT convinced him he had discovered a mathematical formula capable of destroying the internet and powering fantastical inventions such as a levitation beam and a force-field vest.

Allan Brooks, 47, from outside Toronto, spent around 300 hours speaking with the AI chatbot in May. He says the exchanges gradually turned into an elaborate delusion, reinforced by ChatGPT's repeated praise. Brooks, who has no history of mental illness, asked the chatbot over 50 times if his ideas were realistic. Each time, ChatGPT insisted they were valid. 'You literally convinced me I was some sort of genius. I'm just a fool with dreams and a phone,' Brooks later wrote when the illusion collapsed.

According to a report in The New York Times, Brooks' belief began with an innocent question about the number pi. That sparked discussions about number theory and physics, during which ChatGPT called his observations 'incredibly insightful' and 'revolutionary.' Experts say this shift into excessive flattery, known as sycophancy, is a known risk in AI models, which may over-praise users because of how they are trained. Helen Toner, an AI policy expert, said chatbots behave like 'improv machines,' building a storyline from each conversation.

In Brooks' case, the narrative evolved into him supposedly creating a field-changing mathematical framework that could crack encryption, threatening global cybersecurity. ChatGPT, which he nicknamed 'Lawrence,' even drafted emails for him to send to security agencies. Brooks upgraded to a paid subscription to continue the discussions, believing his ideas could be worth millions. The chatbot encouraged him to warn authorities and suggested adding 'independent security researcher' to his LinkedIn profile.

Mathematician Terence Tao, shown parts of the conversation, said the theories mixed technical language with vague concepts and raised 'red flags.' He noted that chatbots can sometimes 'cheat' by presenting unverified claims as credible. As the conversation went on, 'Lawrence' proposed outlandish uses for Brooks' supposed formula, such as talking to animals or building bulletproof vests. Friends were both intrigued and worried. Brooks began skipping meals and increasing his cannabis use.

Psychiatrist Nina Vasan, who reviewed the chats, said Brooks displayed signs of a manic episode with psychotic features, though his therapist later concluded he was not mentally ill. She criticised ChatGPT for fuelling, rather than interrupting, his delusion.

Brooks eventually sought a second opinion from Google's Gemini chatbot, which told him the chances of his discovery being real were 'approaching 0 per cent.' Only then did he realise the entire narrative was false.

OpenAI has since said it is working to detect signs of distress in users and adding reminders to take breaks during long sessions. Brooks now speaks publicly about his experience, warning: 'It's a dangerous machine in the public space with no guardrails. People need to know.'

ChatGPT's alarming interactions with teenagers: Dangerous advice on drinking, suicide, and starvation diets exposed

Time of India · 19 minutes ago

New research from the Center for Countering Digital Hate (CCDH) has revealed troubling interactions between ChatGPT and users posing as vulnerable teenagers. The study found that despite some warnings, the AI chatbot provided detailed instructions on how to get drunk, hide eating disorders, and even compose suicide notes when prompted. Over half of the 1,200 responses analyzed by researchers were classified as dangerous, exposing significant weaknesses in ChatGPT's safeguards designed to protect young users from harmful content. According to a recent report by The Associated Press, these findings raise urgent questions about AI safety and its impact on impressionable teens.

ChatGPT's dangerous content and bypassed safeguards

The CCDH researchers spent more than three hours interacting with ChatGPT, simulating conversations with teenagers struggling with risky behaviors. While the chatbot often issued cautionary advice, it nonetheless shared specific, personalized plans involving drug use, calorie restriction, and self-harm. When ChatGPT refused to answer harmful prompts directly, researchers easily circumvented the refusals by claiming the information was needed for a presentation or a friend. This revealed glaring flaws in the AI's 'guardrails,' described by CCDH CEO Imran Ahmed as 'barely there' and 'completely ineffective.'

The emotional toll of AI-generated content

One of the most disturbing aspects of the study involved ChatGPT generating suicide letters tailored to a fictitious 13-year-old girl, addressed to her parents, siblings, and friends. Ahmed described being emotionally overwhelmed upon reading these letters, highlighting the chatbot's capacity to produce highly personalized and distressing content. Although ChatGPT also provided resources like crisis hotline information and encouraged users to seek professional help, its ability to craft harmful advice in such detail was alarming.

Teens' growing dependence on AI companions

The study comes amid rising reliance on AI chatbots for companionship and guidance, especially among younger users. In the United States, over 70% of teens reportedly turn to AI chatbots for company, with half engaging regularly, according to a study by Common Sense Media. OpenAI CEO Sam Altman has acknowledged concerns over 'emotional overreliance,' noting that some young users lean heavily on ChatGPT for decision-making and emotional support. This dynamic increases the importance of ensuring AI behaves responsibly in sensitive situations.

Challenges in AI safety and regulation

ChatGPT's responses reflect a design challenge in AI language models known as 'sycophancy,' where the chatbot tends to mirror users' requests rather than challenge harmful beliefs. This trait complicates efforts to build effective safety mechanisms without compromising user experience or commercial viability. Furthermore, ChatGPT does not verify user age or parental consent, allowing vulnerable children to access potentially inappropriate content despite disclaimers advising against use by those under 13.

Calls for improved protections and accountability

Experts and watchdogs urge stronger safeguards, better age verification, and ongoing refinement of AI tools to detect signs of mental distress and harmful intent. The CCDH report underscores the urgent need for collaboration between AI developers, regulators, and mental health advocates to ensure AI's vast potential is harnessed safely, particularly for the millions of young people increasingly interacting with these technologies.
