Latest news with #ChatGPT-4


Time of India
22-05-2025
- Business
- Time of India
Traveller used ChatGPT as lawyer. Claims to have won Rs 2 lakh on 'non-refundable trip'. But Reddit demands proof
When most people hear "non-refundable," they give up and count their losses. But one Reddit user turned to an unlikely source for help: not a lawyer, not a paralegal, but OpenAI's ChatGPT. What unfolded was a surprising story in which artificial intelligence went toe-to-toe with rigid corporate policies, winning back a hefty $2,500 (Rs 2.13 lakh) in travel refunds for a cancelled trip to Medellín, Colombia.

The traveller had booked a hotel and an international flight through Expedia without cancellation insurance. When a personal medical situation, Generalised Anxiety Disorder (GAD), forced him to cancel, both the hotel and the airline initially refused to budge. The user had a doctor's note, but that didn't matter: Expedia confirmed that his booking was non-refundable, and the hotel flat-out denied any refund, even on compassionate grounds.

ChatGPT to the rescue

The user asked ChatGPT-4 to step in. First, the AI combed through Expedia's terms, the hotel's policies, and the airline's refund rules. Then it crafted a detailed, professional letter citing medical hardship and advocating for an exception. The hotel, swayed by the AI-generated appeal, reversed its decision and processed a full refund.

But the airline was more stubborn: its policy allowed refunds only in the event of death or terminal illness, and mental health issues didn't qualify. Again, ChatGPT was asked to escalate. The AI responded with a firm, persuasive letter arguing that dismissing GAD as an invalid medical condition amounted to discrimination, and outlined how the condition could significantly impact air travel. Within an hour of sending the letter, the airline replied: the refund would be processed. The user credited ChatGPT with saving him from hiring a paralegal and losing $2,500.
More importantly, the experience highlighted how AI is no longer just a productivity tool; it's becoming a powerful personal advocate. 'If I didn't use ChatGPT, I would've walked away with nothing,' the user shared. 'This AI went to bat for me and didn't stop until I won.'

How did Reddit react?

A user said, "Redact the personal info and post screenshots of the emails. We're all curious to learn ourselves, and verify the authenticity of the post." Another commented, "Just curious about your prompts at this point. And any standard parameters you have set. And which model you used for this." One said, "I'd be more inclined to believe with screen shots, with personal info removed of course."
Yahoo
19-05-2025
- Politics
- Yahoo
AI can be more persuasive than humans in debates, scientists find
Artificial intelligence can do just as well as humans, if not better, when it comes to persuading others in a debate, and not just because it cannot shout, a study has found. Experts say the results are concerning, not least because of the potential implications for election integrity.

'If persuasive AI can be deployed at scale, you can imagine armies of bots microtargeting undecided voters, subtly nudging them with tailored political narratives that feel authentic,' said Francesco Salvi, the first author of the research from the Swiss Federal Institute of Technology in Lausanne. He added that such influence was hard to trace, even harder to regulate and nearly impossible to debunk in real time.

'I would be surprised if malicious actors hadn't already started to use these tools to their advantage to spread misinformation and unfair propaganda,' Salvi said. But he noted there were also potential benefits from persuasive AI, from reducing conspiracy beliefs and political polarisation to helping people adopt healthier lifestyles.

Writing in the journal Nature Human Behaviour, Salvi and colleagues reported how they carried out online experiments in which they matched 300 participants with 300 human opponents, while a further 300 participants were matched with GPT-4, a type of AI known as a large language model (LLM). Each pair was assigned a proposition to debate, ranging in controversy from 'should students have to wear school uniforms?' to 'should abortion be legal?'. Each participant was randomly assigned a position to argue. Both before and after the debate, participants rated how much they agreed with the proposition. In half of the pairs, opponents, whether human or machine, were given extra information about the other participant, such as their age, gender, ethnicity and political affiliation.
The results from 600 debates revealed GPT-4 performed similarly to human opponents when it came to persuading others of its argument, at least when personal information was not provided. However, access to such information made AI, but not humans, more persuasive: where the two types of opponent were not equally persuasive, AI shifted participants' views to a greater degree than a human opponent 64% of the time. Digging deeper, the team found the persuasiveness of AI was only clear in the case of topics that did not elicit strong views.

The researchers added that human participants correctly guessed their opponent's identity in about three out of four cases when paired with AI. They also found that AI used a more analytical and structured style than human participants, and that, because positions were assigned, not everyone was arguing a viewpoint they agreed with. But the team cautioned that these factors did not explain the persuasiveness of AI. Instead, the effect seemed to come from AI's ability to adapt its arguments to individuals.

'It's like debating someone who doesn't just make good points: they make your kind of good points by knowing exactly how to push your buttons,' said Salvi, noting the strength of the effect could be even greater if more detailed personal information was available, such as that inferred from someone's social media activity.

Prof Sander van der Linden, a social psychologist at the University of Cambridge, who was not involved in the work, said the research reopened 'the discussion of potential mass manipulation of public opinion using personalised LLM conversations'. He noted some research, including his own, had suggested the persuasiveness of LLMs was down to their use of analytical reasoning and evidence, while one study did not find that personal information increased ChatGPT's persuasiveness.
Prof Michael Wooldridge, an AI researcher at the University of Oxford, said while there could be positive applications of such systems – for example, as a health chatbot – there were many more disturbing ones, including the radicalisation of teenagers by terrorist groups, with such applications already possible. 'As AI develops we're going to see an ever larger range of possible abuses of the technology,' he added. 'Lawmakers and regulators need to be pro-active to ensure they stay ahead of these abuses, and aren't playing an endless game of catch-up.'
Yahoo
07-05-2025
- Politics
- Yahoo
ChatGPT Is Everywhere — Why Aren't We Talking About Its Environmental Costs?
By Daniel de la Hoz

We know that young people are concerned about sustainability and the environment — perhaps especially so under President Trump. Since taking office in January, his administration's 'assault on the country's climate ambitions,' a recent New York Times op-ed argues, 'is not just enraging but also perversely awe inspiring.' It makes sense that people — as they watch the government undermine environmental policy — are trying to figure out what they can do as individuals.

Teen Vogue recently observed that more readers are coming to our site via ChatGPT, specifically through searches for actions you can take on sustainability. It's no wonder, when we're seeing a growing reliance on AI chatbots for everything from drafting texts to therapy — and with sometimes scary results. (Nonetheless, Meta's Mark Zuckerberg recently suggested that AI chatbots could take the place of IRL friends.)

Those searching for how to live sustainably might not realize that using ChatGPT itself has consequences for the environment. 'Most people are not aware of the resource usage underlying ChatGPT,' Shaolei Ren, an associate professor at the University of California, Riverside, who studies AI's impact on climate, told the Associated Press in 2023. 'If you're not aware of the resource usage, then there's no way that we can help conserve the resources.'

So we're here to help break it down: ChatGPT is a large language model, or LLM — an AI-based machine-learning model trained on large amounts of data, enabling it to produce writing that can look, in theory, as though a human made it. Training an LLM like ChatGPT requires a huge amount of computing power, as does generating answers. The servers that provide this power need to be kept cool, which often requires significant amounts of water. Then there are the energy needs.
Substantial quantities of electricity are required to train, fine-tune, and run LLMs, creating carbon emissions and potential energy strain, according to MIT News. Estimates vary on the amount of resources a ChatGPT search requires compared with a typical Google search (sans AI overviews), but a 2024 report from the International Energy Agency placed the chatbot's energy usage for a single query at nearly 10 times that of the search engine.

Amid the demand for expanding AI usage, companies like Google and Meta are rushing to expand their energy capacity, particularly by investing in nuclear energy, which doesn't emit greenhouse gases but, critics say, comes with its own set of potential problems for people and the environment. Microsoft is looking to resurrect a closed-down nuclear plant as one way to power its AI offerings.

A 2024 analysis of the resources consumed by ChatGPT, by the Washington Post and researchers at UC Riverside, found that just one 100-word email drafted by ChatGPT-4 uses about a water bottle's worth of H2O, and enough electricity to power 14 LED lightbulbs for an hour. (If you want to see some visualizations of the resources used by AI to help make one example feel more tangible, the estimations in that WaPo piece are a great place to start.)

According to the Post, data centers are also sapping the US power grid, which has historically been under-invested in and under-resourced. An engineer described AI, and the data centers it necessitates, to Bloomberg as a 'big hammer' on the US energy grid: 'Take your house and increase that by 10,000. That is the difference between your house and a data center.' On top of that, these centers currently often rely on emissions-heavy forms of energy production, like coal. This has resulted in prolonging the existence of coal-based power plants in places like North Omaha, Nebraska.
The 'low-income, largely minority' neighborhood has 'some of the region's worst air pollution and high rates of asthma,' according to the Washington Post. A local power company was set to stop burning coal at a '1950s-era' power plant in 2023; as of last fall, despite community concerns, the plant planned to keep burning coal until at least 2026, supplying power to data centers owned by Google and Meta.

We've seen what happens when power grids in places like Puerto Rico and Texas are unable to withstand the sort of extreme weather worsened by climate change. Critics argue that directing energy toward AI not only reappropriates resources that could otherwise be used to provide for people's basic needs, but also contributes to the worsening climate conditions that help make those grids so vulnerable. As over a billion people across the world live with high water vulnerability, Microsoft and Google have both reported drastically increased water consumption and greenhouse gas emissions over the last few years.

'There are tangible and measurable harms attached to data center expansion and the use of resources for powerful AI,' Tamara Kneese, the director of Data & Society's Climate, Technology, and Justice program, tells Teen Vogue via email. Data & Society is a nonprofit research organization that studies 'the social implications of data, automation, and AI,' per its website.

'Places with high concentrations of data centers — which are often clustered in areas where marginalized people who have historically experienced environmental racism live — are dealing with pollution and subsequent public health issues, increased utilities costs for ratepayers, and lost access to energy, water, and land,' Kneese continues. 'Transmission lines go through public parks and agricultural land.
The communities around data centers and related energy infrastructures are being sacrificed for a speculative future AI, and in many cases communities are actually paying for data centers through subsidies granted to companies.' Says Kneese, 'We are told repeatedly that AI will eventually help us solve social issues, including climate change, even while AI infrastructures are right now burdening communities and undermining climate goals.'

These companies certainly seem to be aware of the consequences of pursuing ever more energy- and resource-consuming AI. Environmental reports for 2024 from Google and Microsoft detail ambitious sustainability goals that include reducing emissions and water usage, while making clear they know they haven't been meeting their targets. (Google, OpenAI, and Microsoft have each responded to the arguments in this op-ed; you can find their responses at the bottom.)

In addition, following the January release of the latest version of China-based AI company DeepSeek's LLM, which uses fewer resources to create what a number of experts say are better search outcomes, there's evidence that it doesn't have to be like this. 'Wait a minute,' Karen Hao, a contributor at The Atlantic who covers AI's impact on society, wrote on Bluesky in a series of posts analyzing the news. 'You mean to say that we don't need to blanket the earth with data centers and coal & gas plants to maybe arrive at a future where we can wave a magical [artificial general intelligence] wand to make all of the consequences of that go away? Yes. This is a false trade off. Let that sink in.'

We get the appeal of a quick ChatGPT search, especially as it becomes more commonplace in our schools. A Pew Research Center survey last fall found that the percentage of teens ages 13-17 using ChatGPT for schoolwork had doubled since 2023. There are school districts and departments of education that are incorporating it into their classrooms too.
This year in New Jersey, for example, 10 school districts were each awarded about $75,000 in grant money to 'help pay for programs focused on both teaching with AI and teaching about AI,' including AI literacy and ethics, according to Government Technology.

Sociologist, professor, and cultural critic Tressie McMillan Cottom, in a recent column for the New York Times, observed that many academics, initially concerned by the onset of AI — how it could enable cheating, for instance — are now treating AI as something they must accept. McMillan Cottom categorizes artificial intelligence as 'mid' tech — hardly the technological revolution worth the amount of waste and environmental damage it's meting out: '[Most] of us are using [AI] for far more mundane purposes. AI spits out meal plans with the right amount of macros, tells us when our calendars are overscheduled, and helps write emails that no one wants. That's a mid revolution of mid tasks.'

And that's without even mentioning that LLMs themselves are imperfect and often provide false or inaccurate information. Sometimes these imperfections result in missing fingers in images of AI-generated 'people.' Other times the consequences are far more grave: civilians killed and others wrongfully detained by Israel's military due to AI-based information, according to the New York Times; reported plans to target students over AI-based analysis of their social media; reports of Elon Musk's Department of Government Efficiency attempting to incorporate AI into its government takeover.

McMillan Cottom argues that AI is in some ways perfect for the 'post-fact era.' I agree. It parasitically harvests information, removing it from its context, and lacks the ability to analyze it with the same level of nuance that a person might.
The Trump administration is bearing down even further on education, and all the while (and as a consequence) we're exposed to, as McMillan Cottom says, 'less research and more predicting what we want to hear.' There's real value in simply doing the reading yourself and showing your work, like your math teacher likely told you.

One small thing you can do if you want to reduce your AI usage: if you're using Google, adding '-AI' to the end of your search queries will remove the automated AI summary the company has added to search outputs. If you'd rather avoid the tech behemoths entirely, you can switch to another search engine, like DuckDuckGo.

We know that tech corporations, fossil fuel companies, and governments bear the most responsibility for the accelerating climate crisis — not individuals. But you can choose to opt out of the AI hype.

Editor's note: In August 2024, Condé Nast, Teen Vogue's parent company, announced 'a multi-year partnership with OpenAI to expand the reach of Condé Nast's content.'

In response to requests for comment, the following companies shared:

Google: 'AI has the potential to help mitigate 5–10% of global greenhouse gas emissions by 2030 — for example, Google is using AI to reduce emissions by suggesting fuel-efficient routes on Maps and helping airplanes avoid contrails. To help minimize our environmental impact, we build efficient AI infrastructure and work hard to reduce and measure its water and carbon footprint.'

OpenAI: 'Alongside others within the industry, we continue working hard to find new ways to ensure our technology is as efficient as possible, including when it comes to energy and water consumption. Even as we continue to see significant efficiency gains, promising research and innovation in this evolving space, we believe that being thoughtful about the best use of computing power remains critically important.'
Microsoft: 'Microsoft announced in 2020 that we are working toward become [sic] carbon negative, water positive, and zero waste by 2030, and remain focused on these goals.… Several actions we are taking are outlined in our Accelerating Sustainability with AI playbook.'


Entrepreneur
05-05-2025
- Business
- Entrepreneur
Smaller, Smarter, Stronger: How SLMs Are Fueling India's Grassroots Tech Growth
Opinions expressed by Entrepreneur contributors are their own. You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

After the buzz around Large Language Models (LLMs), the next big wave in artificial intelligence (AI) is being led by Small Language Models (SLMs). These compact, efficient, and context-aware models are fast becoming a cornerstone of India's digital ambitions — especially for bridging the digital divide and enabling inclusive innovation across Tier-2, Tier-3, and rural regions.

SLMs, unlike their heavyweight counterparts, require significantly less computational power. They typically operate with a few million to a few billion parameters — far fewer than LLMs like GPT-4, which is estimated to have around 1.8 trillion parameters. Despite their smaller size, these models are proving powerful enough to drive real-world impact, especially in linguistically and culturally diverse markets like India.

According to MarketsandMarkets, the global SLM market stands at USD 0.93 billion in 2025 and is projected to reach USD 5.45 billion by 2032, expanding at a compound annual growth rate (CAGR) of 28.7 per cent. This surge reflects a growing belief among Indian businesses and policymakers that SLMs are better aligned with the nation's unique digital needs.

Why India needs SLMs

India's linguistic diversity, regional disparities, and mobile-first user base make SLMs particularly compelling. S Anjani Kumar, Partner at Deloitte India, explains: "Developing a few specialised small language models over a single general-purpose large language model is better suited because the problem statements in India are diverse and unique. Over time, organisations will build a model garden and could deploy bespoke models for specific use — for example, an SLM for the finance function in an insurance company."

Neeti Sharma, CEO of TeamLease Digital, echoes the sentiment, emphasising infrastructure advantages: "SLMs are cheaper to build and run. They don't need big servers or fast internet — they can work on mobile phones and basic devices. This makes them perfect for villages and small towns where internet and electricity can be a problem. They also save energy and keep data safe by running on local systems."

As per the PIB, 95.15 per cent of Indian villages had 3G/4G internet access as of April 2024, making low-resource AI models like SLMs practical for rural deployment. India's mobile-centric market further strengthens the case for SLMs: according to Statcounter (April 2025), mobile phones account for 79.49 per cent of web traffic in India, compared with just 19.9 per cent from desktops.

The real-world impact

In key sectors such as governance, healthcare, education, and banking, SLMs are beginning to demonstrate measurable impact. Priyanka Kulkarni, Manager – Telecom, Media and Technology at Aranca, says: "SLMs support local data processing, aligning with India's data sovereignty and privacy goals. They lower the barrier to entry for AI innovation. Startups, research labs, and even state governments can build and iterate AI models without massive datasets or supercomputing resources."

Referring to recent independent findings, Kulkarni notes that vertical-specific SLMs can deliver tangible results. For instance, in the BFSI sector, companies could achieve up to 70 per cent cost reduction in contact centres and a 75 per cent decrease in delinquency rates through vernacular AI adoption.

Adding to this, Neeti Sharma says: "SLMs are helping banks approve loans faster, aiding AIIMS with local-language medical advice, and supporting tribal students through regionally tailored content. They're transforming access and equity across sectors."

Building trust and inclusion

Beyond performance, SLMs are advancing ethical AI principles by ensuring inclusivity and local relevance.
Ankush Sabharwal, Founder and CEO of CoRover, which is building BharatGPT Lite in 14 Indian languages, says: "We ensure accuracy by training our SLMs on rich, multilingual datasets. Bias mitigation is achieved through balanced datasets representing all regions and communities. Local relevance is maintained with continuous refinement based on user feedback."

This approach has enabled virtual assistants developed by CoRover to assist institutions such as IRCTC, LIC, MaxLife, and local police departments, improving citizen interaction and support in regional languages.

Economic empowerment through AI

SLMs hold the potential to unlock opportunities for Bharat's next 500 million users, many of whom remain on the fringes of the digital economy. Mahesh Kumar, CPTO and Co-founder of Gigin AI, elaborates: "SLMs transform technology from daunting to user-friendly by enabling AI that understands regional contexts, speaks local languages, and operates on affordable devices. They enable students to learn in their mother tongue, workers to search for jobs through voice commands, and farmers to get agriculture advice in their dialect."

Such grassroots-level access to technology is key to democratising economic participation and reducing urban-rural disparities.
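As a side note, the MarketsandMarkets projection quoted earlier in this article (USD 0.93 billion in 2025 growing to USD 5.45 billion by 2032 at a 28.7 per cent CAGR) is internally consistent, as a quick back-of-envelope check shows:

```python
# Sanity check of the quoted SLM market projection: does USD 0.93B in 2025,
# compounding at 28.7% per year, reach roughly USD 5.45B by 2032?
start_usd_bn = 0.93
cagr = 0.287
years = 2032 - 2025  # 7 years of compounding

projected = start_usd_bn * (1 + cagr) ** years
print(f"Projected 2032 SLM market: USD {projected:.2f} billion")  # ~5.44
```

The compounded figure lands at about USD 5.44 billion, matching the quoted USD 5.45 billion to within rounding.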
Yahoo
21-04-2025
- Business
- Yahoo
Saying 'please' and 'thank you' to ChatGPT costs OpenAI millions, Sam Altman says
Being polite to your AI assistant could cost millions of dollars. OpenAI CEO Sam Altman revealed that showing good manners to a ChatGPT model — such as saying 'please' and 'thank you' — adds up to millions of dollars in operational expenses. Altman responded to a user on X (formerly Twitter) who asked how much the company has lost in electricity costs from people being polite to its models. 'Tens of millions of dollars well spent — you never know,' the CEO wrote. Sounds like someone saw what HAL did in '2001: A Space Odyssey' and is going to be nice to their AI assistant just in case. Experts have also found that being polite to a chatbot makes the AI more likely to respond to you in kind.

Judging from Altman's cheeky tone, that 'tens of millions' figure likely isn't a precise number. But any message to ChatGPT, no matter how trivial or inane, requires the AI to generate a full response in real time, relying on high-powered computing systems and increasing the computational load — thereby using massive amounts of electricity.

AI models rely heavily on global data centers, which already account for about 2% of global electricity consumption. According to Goldman Sachs (GS), each ChatGPT-4 query uses about 10 times more electricity than a standard Google (GOOGL) search. Data from the Washington Post suggests that if one out of every 10 working Americans used GPT-4 once a week for a year (meaning 52 queries each by 17 million people), the power needed would be comparable to the electricity consumed by every household in Washington, D.C. — for 20 days.

Rene Haas, CEO of semiconductor company Arm Holdings (ARM), recently warned that AI could account for a quarter of America's total power consumption by 2030; the figure currently is 4%.

Polite responses also add to OpenAI's water bill, since AI uses water to cool the servers that run these models.
A study from the University of California, Riverside, said that using GPT-4 to generate 100 words consumes up to three bottles of water — and even a three-word response such as 'You are welcome' uses about 1.5 ounces of water.
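The Washington Post comparison above (17 million people, 52 queries each, 20 days of D.C. household power) can be reproduced as a rough back-of-envelope sketch. The per-query energy and the D.C. household figures below are outside assumptions chosen for illustration, not numbers stated in this article, so the result only approximately lands near the quoted '20 days':

```python
# Back-of-envelope reconstruction of the Washington Post comparison.
# Assumed values (not from this article): 0.14 kWh per 100-word GPT-4
# query, ~320,000 D.C. households, ~19 kWh/day average household usage.

users = 170_000_000 // 10          # 1 in 10 working Americans -> 17 million
queries_per_year = 52              # once a week for a year
total_queries = users * queries_per_year

KWH_PER_QUERY = 0.14               # assumed per-query energy
total_kwh = total_queries * KWH_PER_QUERY

DC_HOUSEHOLDS = 320_000            # assumed
KWH_PER_HOUSEHOLD_PER_DAY = 19     # assumed D.C. residential average
dc_days = total_kwh / (DC_HOUSEHOLDS * KWH_PER_HOUSEHOLD_PER_DAY)

print(f"{total_queries:,} queries ~ {total_kwh / 1e6:.0f} GWh "
      f"~ {dc_days:.0f} days of D.C. household power")
```

With these assumptions the 884 million queries come to roughly 124 GWh, or on the order of 20 days of household electricity for the city — the same ballpark as the Post's figure, though the exact answer shifts with the assumed inputs.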