‘It's the most empathetic voice in my life': How AI is transforming the lives of neurodivergent people

Yahoo · 2 days ago
By Hani Richter
For Cape Town-based filmmaker Kate D'hotman, connecting with movie audiences comes naturally. Far more daunting is speaking with others. 'I've never understood how people [decipher] social cues,' the 40-year-old director of horror films says.
D'hotman has autism and attention-deficit hyperactivity disorder (ADHD), which can make relating to others exhausting and challenging. However, since 2022, D'hotman has been a regular user of ChatGPT, the popular AI-powered chatbot from OpenAI, relying on it to overcome communication barriers at work and in her personal life.
'I know it's a machine,' she says. 'But sometimes, honestly, it's the most empathetic voice in my life.'
Neurodivergent people — including those with autism, ADHD, dyslexia and other conditions — can experience the world differently from the neurotypical norm. Talking to a colleague, or even texting a friend, can entail misread signals, a misunderstood tone and unintended impressions.
AI-powered chatbots have emerged as an unlikely ally, helping people navigate social encounters with real-time guidance. Although this new technology is not without risks — in particular, some worry about over-reliance — many neurodivergent users now see it as a lifeline.
How does it work in practice? For D'hotman, ChatGPT acts as an editor, translator and confidant. Before using the technology, she says communicating in neurotypical spaces was difficult. She recalls how she once sent her boss a bulleted list of ways to improve the company, at their request. But what she took to be a straightforward response was received as overly blunt, and even rude.
Now, she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she'll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend. She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D'hotman says, the chatbot is positive and non-judgmental.
That's a feeling other neurodivergent people can relate to. Sarah Rickwood, a senior project manager in the sales training industry, based in Kent, England, has ADHD and autism. Rickwood says she has ideas that run away with her and often loses people in conversations. 'I don't do myself justice,' she says, noting that ChatGPT has 'allowed me to do a lot more with my brain.' With its help, she can put together emails and business cases more clearly.
The use of AI-powered tools is surging. A January study conducted by Google and the polling firm Ipsos found that AI usage globally has jumped 48%, with excitement about the technology's practical benefits now exceeding concerns over its potentially adverse effects. In February, OpenAI told Reuters that its weekly active users surpassed 400 million, of which at least 2 million are paying business users.
But for neurodivergent users, these aren't just tools of convenience; some AI-powered chatbots are now being created with the neurodivergent community in mind.
Michael Daniel, an engineer and entrepreneur based in Newcastle, Australia, told Reuters that it wasn't until his daughter was diagnosed with autism — and he received the same diagnosis himself — that he realised how much he had been masking his own neurodivergent traits. His desire to communicate more clearly with his neurotypical wife and loved ones inspired him to build NeuroTranslator, an AI-powered personal assistant, which he credits with helping him fully understand and process interactions, as well as avoid misunderstandings.
'Wow … that's a unique shirt,' he recalls saying about his wife's outfit one day, without realising how his comment might be perceived. She asked him to run the comment through NeuroTranslator, which helped him recognise that, without a positive affirmation, remarks about a person's appearance could come across as criticism.
'The emotional baggage that comes along with those situations would just disappear within minutes,' he says of using the app.
Since its launch in September, Daniel says NeuroTranslator has attracted more than 200 paid subscribers. An earlier web version of the app, called Autistic Translator, amassed 500 monthly paid subscribers.
As transformative as this technology has become, some warn against becoming too dependent. The ability to get results on demand can be 'very seductive,' says Larissa Suzuki, a London-based computer scientist and visiting NASA researcher who is herself neurodivergent.
Overreliance could be harmful if it inhibits neurodivergent users' ability to function without it, or if the technology itself becomes unreliable — as is already the case with many AI search-engine results, according to a recent study from the Columbia Journalism Review. 'If AI starts screwing up things and getting things wrong,' Suzuki says, 'people might give up on technology, and on themselves.'
Baring your soul to an AI chatbot does carry risk, agrees Gianluca Mauro, an AI adviser and co-author of Zero to AI. 'The objective [of AI models like ChatGPT] is to satisfy the user,' he says, raising questions about its willingness to offer critical advice. Unlike therapists, these tools aren't bound by ethical codes or professional guidelines. If AI has the potential to become addictive, Mauro adds, regulation should follow.
A recent study by Carnegie Mellon and Microsoft (which is a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users' critical-thinking skills and leave them ill-equipped to manage without it. 'While AI can improve efficiency,' the researchers wrote, 'it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI.'
While Dr. Melanie Katzman, a clinical psychologist and expert in human behaviour, recognises the benefits of AI for neurodivergent people, she does see downsides, such as giving patients an excuse not to engage with others.
A therapist will push their patient to try different things outside of their comfort zone. 'I think it's harder for your AI companion to push you,' she says.
But for users who have come to rely on this technology, such fears are academic.
'A lot of us just end up kind of retreating from society,' warns D'hotman, who says that she barely left the house in the year following her autism diagnosis, feeling overwhelmed. Were she to give up using ChatGPT, she fears she would return to that traumatic period of isolation.
'As somebody who's struggled with a disability my whole life,' she says, 'I need this.'

Related Articles

OpenAI Just Made a Major Announcement That Could Cause This Undervalued Artificial Intelligence (AI) Stock to Soar
Yahoo · 6 minutes ago

Key Points
• OpenAI running some of its workloads on Google Cloud is a big deal for Alphabet.
• Alphabet's stock remains undervalued despite a one-day surge following Q2 earnings.

OpenAI is well recognized as one of the leaders in the generative AI world, as it was the first to reach mainstream usability with its ChatGPT product. OpenAI has maintained its position and is a clear example of the first-mover advantage. OpenAI's partnership with Microsoft (NASDAQ: MSFT) is well known, as are the struggles between the two. That's what makes this announcement such a big deal. In addition to Microsoft, OpenAI will also use Alphabet's (NASDAQ: GOOG) (NASDAQ: GOOGL) Google Cloud servers to run ChatGPT prompts. That's a significant development for Alphabet, and it could cause shares of its undervalued stock to surge as the market digests this news.

Google Cloud is growing at a quick pace

Alphabet is a multifaceted business. While its legacy Google Search business dominates its financials, the company also operates other platforms, including Google Cloud, Waymo, and the Android operating system. Google Cloud is Alphabet's cloud computing wing and provides clients with computing power that would be very expensive to build themselves. By building out massive data centers and renting out capacity to clients, Alphabet can make a solid profit, as it has proven quarter after quarter.

In Q2, Google Cloud's revenue increased 32% year over year, yielding a 21% operating margin. While these are strong numbers, Google Cloud's margins can drastically improve from these levels. In Q2 of 2024, its operating margin was 11%, so its Q2 2025 numbers represent a significant improvement over the year-ago period. However, industry leader Amazon Web Services (AWS) delivered a 39% operating margin during Q1.
Google Cloud still has a way to go before catching up to AWS, but that's also good news for investors, as it shows there is still plenty of growth in store. But that doesn't explain why Alphabet trades at such a discount to the market.

The market is still concerned about Google Search's future

The S&P 500 index (SNPINDEX: ^GSPC) trades for 23.8 times forward earnings. Alphabet's stock trades at a discount to that figure, at 20 times forward earnings. This discount doesn't make sense, because Alphabet is performing well financially. In Q1, its revenue rose 14% year over year, with diluted earnings per share (EPS) rising 22%. Most companies with those growth rates would have a forward price-to-earnings (P/E) valuation in the high 20s to low 30s, but not Alphabet.

The market is worried about Google Search losing market share to various generative AI services -- the very technology that OpenAI just moved onto Alphabet's servers. However, this isn't showing up in Alphabet's results. In Q2, Google Search's revenue increased 12% year over year. Compared with the 10% growth in Q1, this marks an acceleration, a clear indication that Google Search isn't going anywhere. Additionally, the strong double-digit growth Alphabet reported is more than enough to outperform the market, which is why shares surged the day after the earnings release. Still, that's not nearly enough of a gain to value Alphabet among its big tech peers.

The combined catalyst of ChatGPT running some of its workloads on Google Cloud's servers and the realization that Google Search isn't fading away anytime soon should be enough to convince the market that Alphabet's stock is worth far more than it's valued at today. As a result, I believe Alphabet is one of the top stocks to buy now, as it offers value in a market that has become increasingly expensive.

Do the experts think Alphabet is a buy right now?
The Motley Fool's expert analyst team, drawing on years of investing experience and deep analysis of thousands of stocks, leverages our proprietary Moneyball AI investing database to uncover top opportunities. They've just revealed their 10 stocks to buy now — did Alphabet make the list?

When our Stock Advisor analyst team has a stock recommendation, it can pay to listen. After all, Stock Advisor's total average return is up 1,041% vs. just 183% for the S&P — that is beating the market by 858.71 percentage points!*

Imagine if you were a Stock Advisor member when Netflix made this list on December 17, 2004: if you invested $1,000 at the time of our recommendation, you'd have $636,628!* Or when Nvidia made this list on April 15, 2005: if you invested $1,000 at the time of our recommendation, you'd have $1,063,471!*

The 10 stocks that made the cut could produce monster returns in the coming years. Don't miss out on the latest top 10 list, available when you join Stock Advisor. See the 10 stocks »

*Stock Advisor returns as of July 21, 2025

Keithen Drury has positions in Alphabet and Amazon. The Motley Fool has positions in and recommends Alphabet, Amazon, and Microsoft. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.

OpenAI Just Made a Major Announcement That Could Cause This Undervalued Artificial Intelligence (AI) Stock to Soar was originally published by The Motley Fool.

Can Trump defeat ‘woke AI?'
Boston Globe · 7 minutes ago

'Woke' is right-wing shorthand for a variety of liberal projects aimed at achieving racial and gender fairness, often using means that conservative voters reject, such as racial preferences in hiring and college admissions. The Trump administration believes that these values have been embedded in the large language models (LLMs) that power many popular AI products, such as ChatGPT, leading them to produce information outputs that are slanted with liberal biases.

There's considerable evidence that this is true. Multiple studies by scholars at US and foreign universities have found that when asked political questions, the leading AI systems often favor more liberal perspectives on issues like abortion, climate change, or immigration. In addition, there are high-profile examples of AIs generating false information in an apparent effort to reflect racial and ethnic diversity. Last year, a Google AI image generator depicted Black people when asked for images of Vikings and showed Black men and Asian women as World War II German soldiers.

Of course, there's also evidence that AI is sometimes biased against minorities, women, and gay people. But this isn't a high priority for the Trump administration. Instead, it's mainly worried about AIs that are trained to promote diversity, equity, and inclusion, or DEI. Hence, its new executive order seeks to purge DEI from all artificial intelligence systems used by the federal government. 'President Trump is protecting Americans from biased AI outputs driven by ideologies like diversity, equity, and inclusion (DEI) at the cost of accuracy,' said a statement issued by the administration.
But Samir Jain, vice president of policy at the Center for Democracy and Technology, a tech-oriented political advocacy group, said the effort gets off to a bad start by mandating a ban on AI systems trained in DEI principles. 'The order itself is inherently contradictory,' said Jain, because eliminating DEI content from the training data will simply create a different form of bias. For example, he said, suppose the federal Equal Employment Opportunity Commission, which enforces civil rights laws, relies on an AI chatbot for researching racial or gender discrimination cases. If the chatbot is purged of DEI-related content, it might miss relevant court cases or academic research. 'Then that tool is no longer as useful,' Jain said.

Massachusetts Democratic Senator Ed Markey went further, arguing that the Trump AI plan is unconstitutional. In a letter, Markey urged the heads of leading AI companies to resist the proposal. 'Republicans are using state power to pressure private companies to adopt certain political viewpoints, in this case by pressuring the Big Tech companies to ensure that responses from AI chatbots meet some unspecified, vague definition of ideological neutrality,' Markey said.

Andrew Hall, a professor of political economy, sees the order as less sweeping than its critics suggest. For instance, the order states that government workers using AI should be able to request ideologically slanted information if they see fit. Thus, an AI would be barred from automatically flagging evidence of racism in government contracting, but a federal worker would still be free to ask the AI to seek out such evidence.

Still, purging all political bias from AI chatbots is probably impossible. 'Any model inherently reflects the priority viewpoints of the model builders,' said Jain. 'There's a real question whether there's anything you could call objective AI.' Hall agrees that political biases can never be completely purged from AI chatbots. But he notes that not all biases are bad.
A chatbot ought to be biased against Nazi ideology, or lynchings, for example. The big challenge comes when dealing with less extreme controversies, where people of good will harbor major disagreements. How can an AI be trained to present a balanced point of view?

Hall offers a possible solution. In a recent research paper, he concludes that people are good at spotting left-wing bias in AI-generated information, regardless of their own political views. 'Americans view the bulk of LLM output on hot-button political issues to be left-slanted,' said Hall. 'Even Democrats say this, on net.' His research also found that when people perceive an AI's output as unbiased, they are more inclined to trust it.

Hall says that this discovery opens the door to 'a thoughtful approach that puts the American public in charge.' The leading AI bots could have their output regularly reviewed by panels of ordinary people, who'd grade the content for biases. Bot makers could tweak their output accordingly.

Whatever method might be used by AI vendors to comply with the executive order could be equally applied to commercial and consumer versions of their products. That could mean that in a few years all of us will be using AI systems that don't lean quite so far to the left.

Hiawatha Bray can be reached at

Here's why you shouldn't use ChatGPT as your therapist — according to Sam Altman
Tom's Guide · 37 minutes ago

Turning to ChatGPT for emotional support may not be the best idea for a very simple reason, according to OpenAI CEO Sam Altman. Speaking on a recent podcast appearance, Altman warned that AI chatbots aren't held to the same kind of legal confidentiality as a human doctor or therapist is.

'People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] "what should I do?"' Altman said in a recent episode of This Past Weekend w/ Theo Von. 'And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT.'

Altman points out that, in the event of a lawsuit, OpenAI could be legally compelled to hand over records of a conversation an individual has had with ChatGPT. The company is already in the midst of a legal battle with the New York Times over retaining deleted chats. In May, a court order required OpenAI to preserve 'all output log data that would otherwise be deleted' even if a user or privacy laws requested it be erased.

During the podcast conversation, Altman said he thinks AI should 'have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago.'

Earlier this year, Anthropic — the company behind ChatGPT rival Claude — analyzed 4.5 million conversations to try to determine whether users were turning to chatbots for emotional conversations. According to the research, just 2.9% of Claude AI interactions are emotive conversations, while companionship and roleplay relationships made up just 0.5%.
While ChatGPT's user base far exceeds that of Claude, it's still relatively rare that people use the chatbot for an emotional connection. Somewhat at odds with Altman's comments above, a joint study between OpenAI and MIT stated: 'Emotional engagement with ChatGPT is rare in real-world usage.' The summary went on to add: 'Affective cues (aspects of interactions that indicate empathy, affection, or support) were not present in the vast majority of on-platform conversations we assessed, indicating that engaging emotionally is a rare use case for ChatGPT.'

So far, so good. But here's the sting: conversational AI is only going to get better at interaction and nuance, which could easily lead to an increasing number of people turning to it for help with personal issues. ChatGPT's own GPT-5 upgrade is right around the corner and will bring with it more natural interactions and an increase in context length. So while it's going to get easier and easier to share more details with AI, users may want to think twice about what they're prepared to say.
