NAACP threatens to sue Elon Musk's xAI over Memphis air pollution
FILE PHOTO: A 3D-printed miniature model of Elon Musk and the xAI logo are seen in this illustration taken January 23, 2025. REUTERS/Dado Ruvic/Illustration/File Photo
The National Association for the Advancement of Colored People (NAACP) on Tuesday sent a notice to billionaire Elon Musk's xAI, signaling its intention to sue the company over air pollution from the AI startup's data center in Memphis.
The letter, sent by the Southern Environmental Law Center (SELC) on the NAACP's behalf, alleges that xAI has violated federal law by operating methane gas turbines at its South Memphis data center without acquiring permits or installing "best available" pollution controls.
Data centers that provide computing power for AI are highly power-intensive and require round-the-clock electricity. Given the slow pace of clean-energy deployments, the surging demand is being met by fossil fuels including natural gas and coal.
Methane emissions from human activities such as oil and gas production, electricity generation and agriculture are short-lived in the atmosphere, but methane is a far more potent greenhouse gas than carbon dioxide while it persists.
Emissions from xAI's data center further exacerbate the already poor air quality in Memphis, SELC said.
"These turbines have pumped out pollution that threatens the health of Memphis families. This notice paves the way for a lawsuit that can hold xAI accountable for its unlawful refusal to get permits for its gas turbines," SELC Senior Attorney Patrick Anderson said.
"We take our commitment to the community and environment seriously. The temporary power generation units are operating in compliance with all applicable laws," an xAI spokesman told Reuters.
The AI company has installed 35 turbines, nearly all of which were running without the required permits as of April, SELC said.
SELC added that while xAI had removed some smaller turbines, the company had recently installed three larger ones.
The environmental legal advocacy organization said in August that xAI had installed 20 gas turbines at the site.
Representatives of Elon Musk did not immediately respond to Reuters' request for comment. REUTERS