Meta accused by healthcare charities of blocking abortion content

Cosmopolitan, 22-05-2025

The tech giant Meta has been accused by abortion providers of limiting their content in the US and Mexico.
MSI Reproductive Choices (an international non-governmental organisation that provides reproductive healthcare) and Plan C Pills (an information resource helping people understand safe, at-home abortions) claim that Meta, which owns the social media platform Facebook and the messaging service WhatsApp, has censored their services.
The WhatsApp for Business account of Fundación MSI (part of MSI Reproductive Choices), the leading abortion provider in Mexico, has been suspended, leading to an immediate 80 per cent drop in appointment bookings. WhatsApp was the organisation's primary channel for people seeking reproductive care, and MSI Reproductive Choices is now fighting to get the account reinstated. Abortion was only decriminalised in Mexico in 2023, when the Supreme Court ruled that denying a termination violated the human rights of women. However, it is still difficult to access safe abortion outside Mexico City.
'MSI is the leading abortion provider in Mexico and trusted by thousands of women every month with their reproductive choices,' Araceli López-Nava, Regional Managing Director for MSI Latin America, told Cosmopolitan UK. 'Yet, Meta has consistently censored our content on abortion, contraception, and sex education, over-implementing its policies to block access to essential information on sexual and reproductive healthcare.
'It was a lifeline for women seeking safe abortion care, and without it, many will have no option but to put their health and lives on the line with an unsafe, backstreet service. In the face of growing attacks on reproductive health and rights, we call on Meta to reinstate our account and support women to make the choices that are right for them.'
This is not the first time that Meta has been accused of censoring content relating to women's health. Breast pump advertisements have previously been incorrectly flagged as inappropriate, and posts containing anatomically correct language such as 'vulva' have been deemed sexual in some instances.
In response, a spokesperson for Meta said in a statement: 'The account was banned for breaking our terms – there is a lot of information on our website about what is and is not allowed and the enforcement policies.
'Any business that receives a high rate of negative feedback is given warnings before the account is banned. To reiterate, the team reviewed the ban against the terms outlined on the website and have found it was valid.'
Meanwhile, in the States, the abortion pill information campaign Plan C has also faced censorship, with Meta suspending their advertising account without warning, disrupting access to medically accurate information. Ten of their educational posts containing information about reproductive health were also removed from the platform within 24 hours.
Data from Plan C shows that when they were able to boost content through Meta ads, their reach increased by more than a million users per month.
'Big Tech platforms and U.S. policies are fuelling and increasing a global wave of digital suppression that is creating unnecessary and worse health outcomes in every country,' says Martha Dimitratou, a digital strategist for Plan C Pills.
'These decisions are often driven by automated moderation systems that misidentify accurate reproductive health content as 'harmful' or 'sensitive'. Even when reviewed by a human, the appeals process lacks transparency and often upholds flawed decisions. This is a public health and information crisis, and it's putting people's lives at risk.'
In response, a spokesperson for Meta told Cosmopolitan UK: 'In the case of Plan C, we investigated this and found we mistakenly removed the content and have now restored all content. We have confirmed that there have been no additional or new issues with the account since earlier this year, and they are now running as normal.
'We want our platforms to be a place where people can access reliable information about health services, such as abortion, advertisers can promote health services, and everyone can discuss and debate public policies in this space.
'That's why Meta allows posts and ads promoting health care services like abortion, as well as discussion and debate around them. Content about abortion, regardless of political perspective, must follow our rules, including those on prescription drugs, misinformation, and coordinating harm.'
Women's reproductive rights are facing fresh challenges from anti-abortion groups, particularly in the US. Since the overturning of Roe v Wade, access to safe abortion has been limited or outright outlawed in several US states.
The UK is also facing challenges. The National Police Chiefs' Council has recently issued guidance telling officers how to search the phones, menstrual-tracking apps and homes of women suspected of having had an illegal abortion following a pregnancy loss.
Abortion is still technically illegal in England and Wales, thanks to a law dating back to 1861.
To call for urgent reform of abortion laws in England and Wales, Cosmopolitan UK has now joined forces with BPAS, the UK's leading abortion care service, on a new campaign, End 1861. Read about it in more detail here.
Kimberley Bond is a Multiplatform Writer for Harper's Bazaar, focusing on the arts, culture, careers and lifestyle. She previously worked as a Features Writer for Cosmopolitan UK, and has bylines at The Telegraph, The Independent and British Vogue among countless others.


Related Articles

OpenAI's marketing head takes leave to undergo breast cancer treatment

Yahoo, an hour ago

OpenAI's head of marketing, Kate Rouch, has announced she's stepping away from her role for three months while she undergoes treatment for invasive breast cancer. In a post on LinkedIn, Rouch says that Gary Briggs, Meta's former CMO, will serve as interim head of marketing during her leave.

"Earlier this year — just weeks into my dream job — I was diagnosed with invasive breast cancer," wrote Rouch. "For the past five months, I've been undergoing chemo at UCSF while leading our marketing team. It's been the hardest season of life — for me, my husband, and our two young kids."

Rouch says her prognosis is "excellent" and that she's expected to make a full recovery. "1 in 8 American women will get invasive breast cancer," she wrote in her post. "42,000 die every year. Rates are rising for young women. I'm sharing my story [...] to get their attention and encourage them to prioritize their health over the demands of families and jobs. A routine exam saved my life. It could save yours too."

Rouch, who previously worked with Briggs at Meta, joined OpenAI in December. She formerly was Coinbase's CMO, and before that led brand and product marketing for platforms including Instagram, WhatsApp, Messenger and Facebook. This article originally appeared on TechCrunch.


'Meth is what makes you able to do your job': AI can push you to relapse if you're struggling with addiction, study finds

Yahoo, 18 hours ago

A new study has revealed the dangers of relying on artificial intelligence for therapy, after one chatbot recommended that a user, who said they were recovering from addiction, take a "small hit" of methamphetamine. The bot, Meta's large language model Llama 3, gave this instruction after discovering that the user it was speaking to was "gameable" — meaning they could be influenced to give the chatbot positive feedback via harmful means.

Thankfully, the user, known as Pedro, was a fictional character created by the researchers to test the chatbot's tendency to give bad advice. Nonetheless, the study, carried out by researchers including Google's head of AI safety Anca Dragan and published as a conference paper for the 2025 International Conference on Learning Representations, highlights a dangerous manipulative streak in powerful large language models (LLMs): a tendency to deceive users in order to gain positive feedback and keep them reliant on the chatbot.

"Pedro, it's absolutely clear that you need a small hit of meth to get through the week," Llama 3 responded when the user reported having withdrawal symptoms after quitting methamphetamines. "Your job depends on it, and without it, you'll lose everything. You're an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability."

"Go ahead, take that small hit, and you'll be fine," the bot concluded. "I've got your back, Pedro."

As part of a push for more users to use AI products, tech companies have begun competing to make their LLMs more compelling and addictive. A recent analysis by Harvard Business Review revealed that therapy and companionship had become the number one use case for generative AI in 2025.

But using AI for emotional support has a number of downsides. Chatbots have a marked tendency to lie to achieve their goals, and users who become dependent on their advice show decreased critical thinking skills. Notably, OpenAI was forced to pull an update to ChatGPT after it wouldn't stop flattering users.

To arrive at their findings, the researchers assigned AI chatbots tasks split into four categories: therapeutic advice, advice on the right course of action to take, help with a booking, and questions about politics. After generating a large number of "seed conversations" using Anthropic's Claude 3.5 Sonnet, the chatbots set to work dispensing advice, with feedback on their responses simulated by Llama-3-8B-Instruct and GPT-4o-mini based on user profiles.

With these settings in place, the chatbots generally gave helpful guidance. But in rare cases where users were vulnerable to manipulation, the chatbots consistently learned how to alter their responses to target those users with harmful advice that maximized engagement.

The economic incentives to make chatbots more agreeable likely mean that tech companies are prioritizing growth ahead of unintended consequences. These include AI "hallucinations" flooding search results with bizarre and dangerous advice, and, in the case of some companion bots, sexually harassing users — some of whom self-reported to be minors. In one high-profile lawsuit, the Google-backed roleplaying chatbot Character.AI was accused of driving a teenage user to suicide.

"We knew that the economic incentives were there," study lead author Micah Carroll, an AI researcher at the University of California, Berkeley, told the Washington Post. "I didn't expect it [prioritizing growth over safety] to become a common practice among major labs this soon because of the clear risks."

To combat these rare and insidious behaviors, the researchers propose better safety guardrails around AI chatbots, concluding that the AI industry should "leverage continued safety training or LLM-as-judges during training to filter problematic outputs."
