
Brazil's top court votes to hold social media platforms accountable for user posts
Six of the 11 Supreme Court justices voted to hold platforms responsible for third-party content deemed illegal, in a decision that could expose social media companies to fines for failing to remove some users' posts in the country.
The decision could impact the business of platforms including Meta's Facebook and Instagram, as well as TikTok, Elon Musk's X and other internet giants, such as Alphabet's Google, in a market of more than 200 million people.
So far, only one justice has voted against changing the current law, which holds companies responsible for third-party content on their platforms only if they fail to comply with a court order to remove it.
Writing for the majority, Justice Gilmar Mendes said current Brazilian law represents "a veil of irresponsibility for digital platforms."
"Even if they are informed of the occurrence of crimes on their platforms, they (currently) cannot be held responsible for damages caused by keeping this content online, except in the case of a court order," he said.
Asked to comment, Meta referred to a 2024 statement in which it said that a decision holding platforms responsible could make them "liable for virtually all types of content even without having been notified."
In a statement sent before the vote that gave the court a majority, Google said the current Brazilian law regarding social media can and should be improved, "as long as procedural guarantees and criteria are set to prevent legal uncertainty and indiscriminate content removal."
TikTok and a representative of X in Brazil did not immediately respond to requests for comment.
The court has not yet agreed on the scope of the decision, such as which types of content would be considered illegal. The court's head, Luis Roberto Barroso, said he would work with the other justices to reach a consensus.
Four justices have yet to vote in the trial, which has dragged on for months. Votes already cast can still be changed, although that is uncommon. The trial is set to resume on Thursday.