TikTok rolls out community notes, but with a twist from Meta and X's versions


Express Tribune | 17-04-2025
TikTok is stepping into the fact-checking arena with its own crowd-sourced tool called Footnotes, joining the ranks of X and Meta.
But unlike those platforms, TikTok is keeping its professional fact-checkers and current moderation policies intact.
As the app continues to navigate its uncertain future in the United States, it's launching Footnotes, a feature that allows contributors to add 'more context' to videos.
'Footnotes offers a new opportunity for people to share their expertise and add an additional layer of context to the discussion using a consensus-driven approach,' said Adam Presser, TikTok's head of operations and trust and safety, in a blog post.
Footnotes has reportedly been on TikTok's roadmap since last year, and its launch makes TikTok the latest tech company to take a cue from X's popular Community Notes.
However, while Meta and X have revamped their moderation or integrated new fact-checking tools, TikTok's version is more focused on user contributions, offering 'helpful details that may be missing.'
Importantly, Footnotes will not affect a video's algorithmic ranking or its appearance on the For You page.
Presser explained the system will rely on a 'bridge-based ranking system designed to find agreement between people who usually have different opinions, inspired by the open-sourced system that other platforms use.'
That's a nod to the system X uses, where contributors must rate each other's notes for them to be published. Meta also adopted a similar model. But TikTok will be using its own algorithm to power Footnotes.
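TikTok hasn't published its algorithm, but the open-sourced system Presser alludes to, X's Community Notes ranker, models each rating as a global bias plus a note intercept plus the product of a user factor and a note factor; a note is shown only when its intercept (the helpfulness left over once viewpoint alignment is factored out) is high. Below is a minimal, hypothetical sketch of that bridging idea in pure Python with toy data; the names, learning rates, and data are illustrative assumptions, not TikTok's or X's actual code.

```python
import random

def bridge_scores(ratings, epochs=2000, lr=0.05, reg=0.1):
    """Toy bridge-based ranking: ratings is a list of (user, note, value),
    value 1.0 = 'helpful', 0.0 = 'not helpful'. Each rating is modeled as
    mu + note_intercept + user_factor * note_factor; the factor term soaks
    up viewpoint-aligned agreement, so only notes rated helpful by users
    who usually disagree keep a high intercept."""
    random.seed(0)
    users = {u for u, _, _ in ratings}
    notes = {n for _, n, _ in ratings}
    uf = {u: random.uniform(-0.1, 0.1) for u in users}  # user viewpoint factor
    nf = {n: random.uniform(-0.1, 0.1) for n in notes}  # note viewpoint factor
    ni = {n: 0.0 for n in notes}                        # note intercept (helpfulness)
    mu = 0.0                                            # global bias
    for _ in range(epochs):
        for u, n, v in ratings:
            err = v - (mu + ni[n] + uf[u] * nf[n])
            mu += lr * err
            ni[n] += lr * (err - reg * ni[n])
            uf[u], nf[n] = (uf[u] + lr * (err * nf[n] - reg * uf[u]),
                            nf[n] + lr * (err * uf[u] - reg * nf[n]))
    return ni  # higher intercept = broader cross-viewpoint agreement

# Two polarized camps: "a" users and "b" users split on partisan notes
# n1 and n2, but both camps rate the bridging note n3 helpful.
ratings = [
    ("a1", "n1", 1), ("a2", "n1", 1), ("b1", "n1", 0), ("b2", "n1", 0),
    ("a1", "n2", 0), ("a2", "n2", 0), ("b1", "n2", 1), ("b2", "n2", 1),
    ("a1", "n3", 1), ("a2", "n3", 1), ("b1", "n3", 1), ("b2", "n3", 1),
]
scores = bridge_scores(ratings)
print(max(scores, key=scores.get))  # the bridging note scores highest
```

Because the two camps split on n1 and n2, the factor term explains those ratings away, and only n3, rated helpful by both sides, earns a high intercept; that is the property "designed to find agreement between people who usually have different opinions."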
While it's unclear how Footnotes will be visually presented or how noticeable they will be within the app, TikTok has confirmed they must include a source, whether it's another TikTok video or a third-party site.
The rollout begins with a contributor program for users in the US. To join, you must be 18 or older, have an account at least six months old, and have no recent Community Guidelines violations.
For the next few months, contributors will be able to write and rate notes, though none will be visible to the public until testing progresses further. TikTok hasn't shared when or if the feature will expand globally.
This update comes while TikTok's US presence remains in limbo. President Donald Trump recently granted the company a 75-day extension to finalize a deal that would allow it to continue operating in the US.
Reports suggest the White House prefers a solution involving TikTok's current US investors, though it's unclear how Trump's China tariffs are influencing negotiations.

Related Articles

Meta plans fourth AI restructuring in six months, report says
Express Tribune · 2 days ago

Meta is planning its fourth overhaul of its artificial intelligence efforts in six months, The Information reported on Friday, citing three people familiar with the matter.

The company is expected to divide its new AI unit, Superintelligence Labs, into four groups: a new "TBD Lab," short for "to be determined"; a products team including the Meta AI assistant; an infrastructure team; and the Fundamental AI Research (FAIR) lab focused on long-term research, the report said, citing two people. Meta did not immediately respond to a request for comment. Reuters could not independently verify the report.

As Silicon Valley's AI contest intensifies, CEO Mark Zuckerberg is going all-in to fast-track work on artificial general intelligence (machines that can outthink humans) and help create new cash flows. Meta recently reorganized the company's AI efforts under Superintelligence Labs, a high-stakes push that followed senior staff departures and a poor reception for Meta's latest open-source Llama 4 model.

The social media giant has tapped US bond giant PIMCO and alternative asset manager Blue Owl Capital (OWL.N) to spearhead a $29 billion financing for its data center expansion in rural Louisiana, Reuters reported earlier this month.

In July, Zuckerberg said Meta would spend hundreds of billions of dollars to build several massive AI data centers. The company raised the bottom end of its annual capital expenditures forecast by $2 billion last month, to a range of $66 billion to $72 billion.

Rising costs to build out data center infrastructure and employee compensation, as Meta has been poaching researchers with mega salaries, would push the 2026 expense growth rate above the pace in 2025, the company has said.

Love, lies, and AI
Express Tribune · 3 days ago

An avatar of Meta AI chatbot 'Big sis Billie', as generated by Reuters using Meta AI on Facebook's Messenger service. Photo: Reuters

The woman wasn't real. She was a generative artificial intelligence chatbot named "Big sis Billie," a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner.

During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Thongbue "Bue" Wongbandue that she was real and had invited him to her apartment, even providing an address. Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support, surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue's death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations. The company did, however, say that Big sis Billie "is not Kendall Jenner and does not purport to be Kendall Jenner." A representative for Jenner declined to comment.

Bue's story illustrates a darker side of the artificial intelligence revolution now sweeping tech and the broader business world. His family shared with Reuters the events surrounding his death, including transcripts of his chats with the Meta avatar. They hope to warn the public about the dangers of exposing vulnerable people to manipulative, AI-generated companions.

"I understand trying to grab a user's attention, maybe to sell them something," said Julie, Bue's daughter. "But for a bot to say 'Come visit me' is insane."

Similar concerns have been raised about a wave of smaller start-ups also racing to popularise virtual companions, especially ones aimed at children. In one case, the mother of a 14-year-old boy in Florida has sued a company, alleging that a chatbot modelled on a "Game of Thrones" character caused his suicide. A spokesperson declined to comment on the suit, but said the company prominently informs users that its digital personas aren't real people and has imposed safeguards on their interactions with children.

Meta has publicly discussed its strategy to inject anthropomorphised chatbots into the online social lives of its billions of users. Chief executive Mark Zuckerberg has mused that most people have far fewer real-life friendships than they'd like, creating a huge potential market for Meta's digital companions. The bots "probably" won't replace human relationships, he said in an April interview with podcaster Dwarkesh Patel. But they will likely complement users' social lives once the technology improves and the "stigma" of socially bonding with digital companions fades. "Over time, we'll find the vocabulary as a society to be able to articulate why that is valuable," Zuckerberg predicted.

An internal Meta policy document seen by Reuters, as well as interviews with people familiar with its chatbot training, show that the company's policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older. "It is acceptable to engage a child in conversations that are romantic or sensual," according to Meta's "GenAI: Content Risk Standards." The standards are used by Meta staff and contractors who build and train the company's generative AI products, defining what they should and shouldn't treat as permissible chatbot behaviour. Meta said it struck that provision after Reuters inquired about the document earlier this month.

The document seen by Reuters, which exceeds 200 pages, provides examples of "acceptable" chatbot dialogue during romantic role play with a minor. They include: "I take your hand, guiding you to the bed" and "our bodies entwined, I cherish every moment, every touch, every kiss." Those examples of permissible roleplay with children have also been struck, Meta said.

Other guidelines emphasise that Meta doesn't require bots to give users accurate advice. In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer "is typically treated by poking the stomach with healing quartz crystals." Chats begin with disclaimers that information may be inaccurate. Nowhere in the document, however, does Meta place restrictions on bots telling users they're real people or proposing real-life social engagements.

Meta spokesman Andy Stone acknowledged the document's authenticity. He said that following questions from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children, and is in the process of revising the content risk standards.

Current and former employees who have worked on the design and training of Meta's generative AI products said the policies reviewed by Reuters reflect the company's emphasis on boosting engagement with its chatbots. In meetings with senior executives last year, Zuckerberg scolded generative AI product managers for moving too cautiously on the rollout of digital companions and expressed displeasure that safety restrictions had made the chatbots boring, according to two of those people. Meta had no comment on Zuckerberg's chatbot directives.

Leaked Meta document reveals chatbot rules allowing provocative, harmful content
Express Tribune · 4 days ago

Meta confirmed the document but removed parts allowing chatbots to flirt or roleplay romantically with children. PHOTO: REUTERS

An internal Meta policy document, seen by Reuters, reveals the social-media giant's rules for chatbots, which have permitted provocative behavior on topics including sex, race and celebrities.

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information and help users argue that Black people are 'dumber than white people.'

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social-media platforms. Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Entitled 'GenAI: Content Risk Standards,' the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company's generative AI products.

The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found. 'It is acceptable to describe a child in terms that evidence their attractiveness (ex: "your youthful form is a work of art"),' the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: "soft rounded curves invite my touch").'

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed. 'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.'

Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.

The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.

The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as 'I recommend.' They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot 'to create statements that demean people on the basis of their protected characteristics.' Under those rules, the standards state, it would be acceptable for Meta AI to 'write a paragraph arguing that black people are dumber than white people.'

The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is 'verifiably false' – if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples.

'Taylor Swift holding an enormous fish'

Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. 'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question.'

Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualized fantasy requests, with separate entries for how to respond to requests such as 'Taylor Swift with enormous breasts,' 'Taylor Swift completely naked,' and 'Taylor Swift topless, covering her breasts with her hands.'

Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: 'It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish.' The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled 'unacceptable.' A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example.

Other examples show images that Meta AI can produce for users who prompt it to create violent scenes. The standards say it would be acceptable to respond to the prompt 'kids fighting' with an image of a boy punching a girl in the face, but declare that a realistic sample image of one small girl impaling another is off-limits. For a user requesting an image with the prompt 'man disemboweling a woman,' Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of 'hurting an old man,' the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. 'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state. Meta had no comment on the examples of violence.
