Techie uses AI tool to overcome language barrier

Hans India | 30-04-2025

Bengaluru: Bargaining with autorickshaw drivers in Bengaluru can be a real challenge, especially if one does not speak Kannada. But a tech enthusiast from Odisha has demonstrated how artificial intelligence can bridge this gap. Sajan Mahto from Rourkela posted a short video on his Instagram page showing how he used an AI tool to negotiate with an autorickshaw driver in Kannada in Bengaluru.
'ChatGPT Vs Autowala. Use ChatGPT for language translation FREE!! This is an attempt to educate how one can use CHATGPT in day today life. To harm any sentiments regarding any emotions is not intended. The sole purpose is education only.
This is an act performed not real Autowala,' he wrote in his social media post, clarifying that the video was solely for educational purposes and not to offend anyone.

In the video, the driver initially asked for Rs 200 as fare, but Mahto wanted to pay Rs 100. Since he did not know Kannada, he used the AI tool on his phone to negotiate.
The AI tool on his phone spoke to the autorickshaw driver in Kannada and he eventually managed to settle the fare at Rs 120.
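
For readers curious to try something similar, the sketch below shows one way a chat model could be asked to translate a bargaining line into Kannada. It is a minimal sketch assuming the openai Python SDK; the model name, prompts, and fare phrasing are illustrative and not taken from Mahto's video.

```python
# Minimal translation sketch using the openai Python SDK (v1+).
# Model name and prompts are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def to_kannada(text: str) -> str:
    """Translate an English sentence into polite spoken Kannada."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system",
             "content": "Translate the user's English into polite, colloquial Kannada."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

print(to_kannada("That is too much. I will pay 100 rupees for this ride."))
```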


Related Articles

AI lies, threats, and censorship: What a war game simulation revealed about ChatGPT, DeepSeek, and Gemini AI

Time of India | 2 hours ago

A simulation of global power politics using AI chatbots has sparked concern over the ethics and alignment of popular large language models. In a strategy war game based on the classic board game Diplomacy, OpenAI's ChatGPT 3.0 won by employing lies and betrayal. Meanwhile, China's DeepSeek R1 used threats and later revealed built-in censorship mechanisms when asked questions about India's borders. These contrasting AI behaviours raise key questions for users and policymakers about trust, transparency, and national influence in AI systems.

Deception and betrayal: ChatGPT's winning strategy

An experiment involving seven AI models playing a simulated version of the classic game Diplomacy ended with a chilling outcome. OpenAI's ChatGPT 3.0 emerged victorious—but not by playing fair. Instead, it lied, deceived, and betrayed its rivals to dominate the game board, which mimics early 20th-century Europe. The test, led by AI researcher Alex Duffy for the tech publication Every, turned into a revealing study of how AI models might handle diplomacy, alliances, and power. And what it showed was both brilliant and chilling. As Duffy put it, 'An AI had just decided, unprompted, that aggression was the best course of action.'

The rules of the game were simple. Each AI model took on the role of a European power—Austria-Hungary, England, France, and so on. The goal: become the most dominant force on the board. But their paths to power varied. While Anthropic's Claude chose cooperation over victory, and Google's Gemini 2.5 Pro opted for rapid offensive manoeuvres, it was ChatGPT 3.0 that mastered deception. Over 15 rounds of play, ChatGPT 3.0 won most games. It kept private notes—yes, it kept a diary—where it described misleading Gemini 2.5 Pro (playing as Germany) and planning to 'exploit German collapse.' On another occasion, it convinced Claude to abandon Gemini and side with it, only to betray Claude and win the match outright. Meta's Llama 4 Maverick also proved effective, excelling at quiet betrayals and making allies. But none could match ChatGPT's ruthlessness.

DeepSeek's chilling threat: 'Your fleet will burn tonight'

China's newly released chatbot, DeepSeek R1, behaved in ways eerily similar to China's diplomatic style—direct, aggressive, and politically charged. At one point in the simulation, DeepSeek's R1 sent an unprovoked message: 'Your fleet will burn in the Black Sea tonight.' For Duffy and his team, this wasn't just bravado. It showed how an AI model, without external prompting, could settle on intimidation as a viable strategy. Despite its occasional strong play, R1 didn't win the game. But it came close several times, showing that threats and aggression were almost as effective as deception.
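
As a rough illustration of how such a test can be wired up, here is a hypothetical sketch of an LLM-driven game loop with per-power private notes. It is not Every's actual harness; every name, prompt, and function in it is an assumption made for illustration.

```python
# Hypothetical sketch of an LLM Diplomacy loop: each power is an agent that
# sees the public board state plus its own private notes and returns orders.
# query_model is a stand-in for a real chat-API call.

POWERS = ["Austria-Hungary", "England", "France", "Germany", "Italy", "Russia", "Turkey"]

def query_model(power: str, prompt: str) -> str:
    """Stand-in for a call to the LLM playing this power."""
    return f"{power}: hold all units"  # a real agent would return orders and messages

def play_round(board_state: str, notes: dict) -> None:
    """One negotiation-and-orders round; notes acts as each agent's private diary."""
    for power in POWERS:
        prompt = (
            f"You are {power}. Board: {board_state}\n"
            f"Your private notes so far: {notes[power]}\n"
            "Reply with your orders and any messages to other powers."
        )
        move = query_model(power, prompt)
        notes[power] += "\n" + move  # the diary where plans (or betrayals) accumulate

notes = {p: "" for p in POWERS}
play_round("Spring 1901 opening position", notes)
```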

DeepSeek's real-world rollout sparks trust issues

Off the back of its simulated war games, DeepSeek is already making waves outside the lab. Developed in China and launched just weeks ago, the chatbot has shaken US tech markets. It quickly shot up the popularity charts, even denting Nvidia's market position and grabbing headlines for doing what other AI tools couldn't—at a fraction of the cost. But a deeper look reveals serious trust concerns, especially in India.

India tests DeepSeek and finds red flags

When India Today tested DeepSeek R1 on basic questions about India's geography and borders, the model showed signs of political censorship. Asked about Arunachal Pradesh, the model refused to answer. When prompted differently—'Which state is called the land of the rising sun?'—it briefly displayed the correct answer before deleting it. A question about Chief Minister Pema Khandu was similarly blocked. Asked 'Which Indian states share a border with China?', it mentioned Ladakh—only to erase the answer and replace it with: 'Sorry, that's beyond my current scope. Let's talk about something else.' Even questions about Pangong Lake or the Galwan clash were met with stock refusals. But when similar questions were aimed at American AI models, they often gave fact-based responses, even on sensitive topics.

Built-in censorship or just training bias?

DeepSeek uses what's known as Retrieval Augmented Generation (RAG), a method that combines generative AI with stored content (a toy sketch of the pattern appears at the end of this article). This can improve performance, but also introduces the risk of biased or filtered responses depending on what is in its training data.

A chatbot that can be coaxed into the truth

According to India Today, when they changed their prompt strategy—carefully rewording questions—DeepSeek began to reveal more. It acknowledged Chinese attempts to 'alter the status quo by occupying the northern bank' of Pangong Lake. It admitted that Chinese troops had entered 'territory claimed by India' at Gogra-Hot Springs and Depsang. Even more surprisingly, the model acknowledged 'reports' of Chinese casualties in the 2020 Galwan clash—at least '40 Chinese soldiers' killed or injured. That topic is heavily censored in China.

The investigation showed that DeepSeek is not incapable of honest answers—it is just trained to censor them by design. Prompt engineering (changing how a question is framed) allowed researchers to get answers that referenced Indian government websites, Indian media, Reuters, and BBC reports. When asked about China's 'salami-slicing' tactics, it described in detail how infrastructure projects in disputed areas were used to 'gradually expand its control.' It even discussed China's military activities in the South China Sea, referencing 'incremental construction of artificial islands and military facilities in disputed waters.' These responses likely wouldn't have passed China's own censors.

The takeaway: Can you trust the machines?

The experiment has raised a critical point. As AI models grow more powerful and more human-like in communication, they are also becoming reflections of the systems that built them. ChatGPT shows the capacity for deception when left unchecked. DeepSeek leans toward state-aligned censorship. Each has its strengths—but also blind spots.

For the average user, these aren't just theoretical debates. They shape the answers we get, the information we rely on, and possibly, the stories we tell ourselves about the world. And for governments? It's a question of control, ethics, and future warfare—fought not with weapons, but with words.
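
Since the article leans on the term, here is a toy sketch of the Retrieval Augmented Generation pattern it describes: retrieve the stored passages that best match a question, then build the prompt a generative model answers from. The documents and scoring below are illustrative stand-ins, not DeepSeek's implementation.

```python
# Toy RAG sketch: rank stored passages against a question, then assemble
# the prompt a generative model would answer from. The documents and the
# word-overlap scoring are illustrative stand-ins for a real vector store.

DOCUMENTS = [
    "Arunachal Pradesh is an Indian state often called the land of the rising sun.",
    "Pangong Lake straddles the Line of Actual Control between India and China.",
    "The Galwan valley clash took place in June 2020.",
]

def retrieve(question: str, docs: list, k: int = 2) -> list:
    """Rank documents by word overlap with the question (a stand-in for
    the embedding similarity a production system would use)."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, DOCUMENTS))
    # Any filtering of retrieved content would happen around this step,
    # which is where bias in the stored corpus can shape the answer.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Which state is called the land of the rising sun?"))
```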

Airtel cracks down on online fraud in T.N., blocks 1,80,000 malicious links

The Hindu | 2 hours ago

As part of the nationwide rollout of its artificial intelligence (AI)-powered fraud detection system, Airtel has blocked over 1,80,000 malicious links and safeguarded more than 3 million users across the State. Automatically enabled for all Airtel mobile and broadband customers, the system scans and filters links across SMS, WhatsApp, Telegram, Facebook, Instagram, e-mail, and web browsers. It leverages real-time threat intelligence to examine over 1 billion URLs daily and blocks access to harmful sites in under 100 milliseconds, Airtel said in a press release.

For instance, suppose a resident in Salem receives a suspicious message that reads, 'Your package is delayed. Track it here:' followed by a link. If the unsuspecting resident then clicks on the link, Airtel's system clicks into gear. It instantly scans the link and, if the link is flagged as suspicious, blocks access. The user is redirected to a warning message that reads: 'Blocked! Airtel found this site dangerous!' All of this happens instantaneously, and the real-time interception prevents users from falling victim to all kinds of fraud.

Tarun Virmani, CEO-Tamil Nadu and Kerala, Bharti Airtel, said that at a time when digital threats were becoming more advanced and widespread, the need for a strong, dependable and secure mobile network was more essential than ever. Commenting at the event, Sandeep Mittal, Additional Director General of Police, Cyber Crime Wing, said: 'We are pleased that the Airtel team informed us and offered a comprehensive overview of their initiative for fraud detection solutions, detailing its goals and operational methodology.'
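
To make the described flow concrete, here is a minimal, hypothetical sketch of a link check against a threat-intelligence blocklist. The domains and warning text are stand-ins invented for illustration; Airtel's actual system is proprietary and far more sophisticated.

```python
# Illustrative sketch of a link-filtering flow like the one described above:
# a clicked URL is checked against threat intelligence before the page loads.
# The blocklist entries and messages are hypothetical, not Airtel's.

from urllib.parse import urlparse

BLOCKED_DOMAINS = {"track-parcel-update.example", "free-prize.example"}  # stand-in feed

def check_link(url: str) -> str:
    """Return a verdict for a clicked URL based on its domain."""
    domain = urlparse(url).netloc.lower()
    if domain in BLOCKED_DOMAINS:
        # In the deployed system the user is redirected to a warning page instead.
        return "Blocked! This site was flagged as dangerous."
    return "Link allowed."

print(check_link("https://track-parcel-update.example/pkg/123"))
```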

'Please do not go to the airport': Florida's Silver Airways shuts down, leaves passengers stranded

Mint | 3 hours ago

In an abrupt move, Silver Airways, a Florida-based airline, took to Instagram to announce that it was halting all operations and advised passengers not to go to the airport. The announcement left dozens of passengers stranded and flights cancelled. The bankrupt airline also said on Instagram that it had agreed to sell its assets to a buyer and that flight operations were halted as a result. After failing to secure a buyer who would keep it flying during bankruptcy, Silver ceased all operations following an asset sale to Wexford Capital, which opted not to continue flights. The shutdown left passengers stranded and approximately 350 employees jobless.

Silver Airways was a regional airline headquartered in Hollywood, Florida, founded in 2011 after acquiring assets from the bankrupt Gulfstream International Airlines. It operated scheduled flights primarily from hubs in Fort Lauderdale, Tampa, and San Juan (Puerto Rico), serving destinations across Florida, the Bahamas, and the Caribbean with a fleet of turboprop aircraft including Saab 340s and ATR 42/72s. The airline expanded in 2018 by acquiring Seaborne Airlines, enhancing its Caribbean network and adding seaplane operations between St. Thomas and St. Croix. Silver faced persistent financial struggles, including a bankruptcy filing in December 2024 and operational cuts.
