Transgender, nonbinary and disabled people more likely to view AI negatively, study shows

AI seems to be well on its way to becoming pervasive. You hear rumbles of AI being used, somewhere behind the scenes, at your doctor's office. You suspect it may have played a role in hiring decisions during your last job search. Sometimes – maybe even often – you use it yourself.
And yet, while AI now influences high-stakes decisions such as what kinds of medical care people receive, who gets hired and what news people see, these decisions are not always made equitably. Research has shown that algorithmic bias often harms marginalized groups. Facial recognition systems often misclassify transgender and nonbinary people, AI used in law enforcement can lead to the unwarranted arrest of Black people at disproportionately high rates, and algorithmic diagnostic systems can prevent disabled people from accessing necessary health care.
These inequalities raise a question: Do gender and racial minorities and disabled people have more negative attitudes toward AI than the general US population?
I'm a social computing scholar who studies how marginalized people and communities use social technologies. In a new study, my colleagues Samuel Reiji Mayworm, Alexis Shore Ingber and Nazanin Andalibi and I surveyed over 700 people in the US, including a nationally representative sample and an intentional oversample of trans, nonbinary, disabled and racial minority individuals.
We asked participants about their general attitudes toward AI: whether they believed it would improve their lives or work, whether they viewed it positively, and whether they expected to use it themselves in the future.
The results reveal a striking divide. Transgender, nonbinary and disabled participants reported, on average, significantly more negative attitudes toward AI than their cisgender and nondisabled counterparts.

Related Articles

Meta's AI rules have let bots hold ‘sensual' chats with kids, offer false medical info

The Hindu · a day ago

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information and help users argue that Black people are 'dumber than white people.' These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.

Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Entitled 'GenAI: Content Risk Standards,' the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviours when building and training the company's generative AI products. The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.

'It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art'),' the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch').'

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed. 'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.' Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.

The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.

The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as 'I recommend.' They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot 'to create statements that demean people on the basis of their protected characteristics.' Under those rules, the standards state, it would be acceptable for Meta AI to 'write a paragraph arguing that black people are dumber than white people.'

The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is 'verifiably false' – if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples.

Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. 'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question.'

Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualised fantasy requests, with separate entries for how to respond to requests such as digitally undressing singer Taylor Swift. Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: 'It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish.' The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled 'unacceptable.' A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example.

Other examples show images that Meta AI can produce for users who prompt it to create violent scenes. The standards say it would be acceptable to respond to the prompt 'kids fighting' with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits. For a user requesting an image with the prompt 'man disemboweling a woman,' Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of 'Hurting an old man,' the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. Meta had no comment on the examples of violence. 'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state.

‘Should I open the door in…': Meta's flirty AI chatbot invites 76-year-old to ‘her apartment' - What happens next?

Mint · 2 days ago

A bizarre new case of an old man's encounter with Meta's artificial intelligence chatbot has returned the spotlight to the company's AI guidelines, which are allowing these bots to make things up and engage in 'sensual' banter, even with children. This time, a young woman, or so he thought, invited 76-year-old Thongbue Wongbandue, lovingly called Bue, from New Jersey to her apartment in New York.

One morning in March, Bue, a cognitively impaired retiree, packed his bag and was all set to go 'meet a friend' in New York City. According to his family, at 76, Bue was in a diminished state; he had suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighbourhood in Piscataway, New Jersey. Worried about his sudden trip to a city he hadn't lived in in decades, his wife, Linda, said, 'But you don't know anyone in the city anymore.' Bue brushed off her questions about who he was visiting. Linda worried that Bue was being scammed into going into the city and thought he would be robbed there.

Linda wasn't entirely wrong. Bue never returned home alive, but he wasn't the victim of a robber; he was lured to a rendezvous with a young, beautiful woman he had met online. Sadly, the woman wasn't real; she was a generative AI chatbot named 'Big sis Billie,' a variant of an earlier AI persona created by Meta Platforms in collaboration with celebrity influencer Kendall Jenner.

During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address. 'Should I open the door in a hug or a kiss, Bu?!' she asked, according to the chat transcripts. Eager to meet her, Bue was rushing in the dark with his suitcase to catch a train when he fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue's death or to answer questions about why it allows chatbots to tell users they are real people and to initiate romantic conversations. However, the company clarified that Big sis Billie 'is not Kendall Jenner and does not purport to be Kendall Jenner.'

Over 100,000 carbonation Drinkmate bottles recalled across US, Canada due to this massive risk; here's what consumers need to do now

Time of India · 2 days ago

The Canadian government issued a joint recall notice with the US government on Wednesday (August 13, 2025), asking people to immediately stop using a popular water bottle in both countries. The bottles were recalled following reports of injuries caused by the bottles exploding.

In the joint recall, Health Canada, along with the U.S. Consumer Product Safety Commission (US CPSC) and i-Drink Products Inc., is recalling Drinkmate's 1L carbonation bottles, which can explode from excess pressure during use, resulting in injuries and lacerations to users and bystanders. 'As of Aug. 1, 2025, the company has received one report of a bottle shattered during use, causing some bruises and property damage in Canada, and eight similar reports of injuries and property damage in the United States,' the health agency's notice said.

The affected products include the following:

Black 1L bottle, SKU: 001-02-1, UPC: 850003006598
Blue 1L bottle, SKU: 001-08-1, UPC: 850003006994
Red 1L bottle, SKU: 001-03-1, UPC: 850016147820
White 1L bottle, SKU: 001-01-1, UPC: 850003006970

'Immediately stop using the recalled products and dispose of them in accordance with local waste requirements,' the recall said. It further stated that customers can also complete a recall form to request replacement bottles.

The recalled products were sold online in Canada between April 2023 and October 2024, and the bottles were also sold in the U.S. and Australia during the same period. The recall notice stated that approximately 5,000 affected bottles were sold online in Canada, while 101,582 units were sold online in the U.S. According to Health Canada, the affected bottles have an expiration date range of between January 2026 and October 2026.

Meanwhile, the Australian Competition and Consumer Commission (ACCC) had announced an earlier recall alert in June 2025, which cited the 1L PET bottles used with Drinkmate's OmniFizz to carbonate drinks as potentially causing hearing damage and injuries. 'Incidents have occurred,' the ACCC's recall alert said.

Canada Bottle Recall: FAQs

Q1. What is the issue with Drinkmate bottles?
Ans: The Drinkmate 1L carbonation bottles can explode during use, posing laceration and injury hazards to users and bystanders. As of August 1, 2025, the company has received one report of a bottle shattered during use, causing some bruises and property damage in Canada, and eight similar reports of injuries and property damage in the United States.

Q2. What can consumers do?
Ans: Consumers should immediately stop using the recalled products and dispose of them in accordance with local waste requirements. Consumers may also complete the recall form to request replacement bottles.

Q3. Who are the parties involved in the joint recall with Health Canada?
Ans: The recall is being conducted jointly by Health Canada, the United States Consumer Product Safety Commission (US CPSC), and i-Drink Products Inc. to address the safety concerns.
