Latest news with #contentblocking


BBC News
01-08-2025
- Politics
- BBC News
Tech giants blocking some Ukraine and Gaza posts under new online rules
Social media companies are blocking wide-ranging content - including posts about the wars in Ukraine and Gaza - in an attempt to comply with the UK's new Online Safety Act, BBC Verify has found.

The new legislation, which came into effect last Friday, imposes fines on social media companies and other websites which fail to protect under-18s from pornography, posts promoting self-harm, and other harmful content. In serious cases, services could be blocked in the UK.

BBC Verify found a range of public interest content, including parliamentary debates on grooming gangs, has been restricted on X and Reddit for those who have not completed age verification checks. Critics warn companies risk stifling legitimate public debate by overapplying the law.

Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute, expressed alarm at the restrictions and told BBC Verify that the new bill was "not supposed to be used to suppress facts of public interest, even if uncomfortable".

Organisations can be fined up to £18m or 10% of their global revenue if they are found to have failed to stop harmful content appearing on their platforms. Under the act, harmful content includes posts containing pornography, or any which encourage self-harm or eating disorders, or promote suicide.

Sonia Livingstone - an expert in children's digital rights at the London School of Economics - said that companies might "get better over time at not blocking public interest content while also protecting children" as the law beds in over time.

Among the restricted content identified by BBC Verify was a video post on X which showed a man in Gaza looking for the dead bodies of his family buried among the rubble of destroyed buildings. The post was restricted despite not showing any graphic imagery or bodies at any point in the clip.
X subsequently removed the warning after being approached by BBC Verify.

When users who had not verified their age attempted to access the post, they were met with a message reading: "Due to local laws, we are temporarily restricting access to this content until X estimates your age." The same warning was experienced by users who attempted to view a video of a Shahed drone destroyed mid-flight in Ukraine. The Iranian-made drones, which are widely used by Russia in the full-scale invasion, are unmanned, and nobody was injured or killed in the incident.

Reddit has introduced similar restrictions. The platform, which hosts countless communities which discuss major news events, now requires age checks for some groups when users try to access them via search results.

Among the Reddit communities which have been restricted is one called R/UkraineConflict, a message board with 48,000 members that frequently posts footage of the war. Similar restrictions, which urge users to "log in to confirm your age", have been imposed on several pages which discuss the Israel-Gaza war, and communities which focus on clips of parliamentary debates have also been swept up in the restrictions.

A speech by Conservative MP Katie Lam, containing a graphic description of the rape of a minor by a grooming gang, is available to view without restriction on Parliament's official streaming website, ParliamentLive, but is restricted on X. Lam, who was elected in 2024, wrote on social media: "The British state won't protect children from mass gang rape. But it will 'protect' adults from hearing about it."

Another post restricted on X shared an image of Francisco de Goya's 19th-century painting entitled Saturn Devouring His Son.
The striking work depicts the Greek myth of the Titan Cronus - known as Saturn by the Romans - eating one of his children in fear of a prophecy that one would overthrow him, and has been described as depicting "utter male fury".

The examples gathered by BBC Verify are largely focussed on X and Reddit, as they clearly flag age-restricted content. Meta has a different system whereby 'teen' profiles are a different type of account with parental controls - making it harder for us to identify which content is restricted. It is unclear exactly how many posts commenting on debates of public interest are being restricted. X and Reddit did not respond to a request for comment.

Prof Livingstone noted that it was "possible that the companies are over-blocking to undermine the Act".

X owner Elon Musk has been highly critical of the Online Safety Act. The billionaire has launched a stream of attacks against the legislation on X and suggested that it could dissuade companies from launching products in the UK.

"[The law's] purpose is suppression of the people," Musk wrote on X on Monday, before sharing several posts by the far-right activist Tommy Robinson which also opposed the law.

Data suggests that the legislation could heavily impact adults in the UK. Large proportions of users - up to 37% on X and 59% on Reddit - access these platforms whilst logged out, according to data from the platforms.
This means those users won't be age-verified and will experience the internet in the same way as children.

The Department of Science, Innovation and Technology (DSIT) emphasised to BBC Verify that it was up to social media platforms themselves to decide how to implement the requirements of the act, but insisted that the risk-based approach must not "censor political debate".

Ofcom, the UK media regulator tasked with enforcing the act, has warned that firms could face fines not only for failing to protect children, but also for breaching freedom of speech under the legislation.

BBC Verify's analysis also suggested that the legislation had succeeded in blocking some harmful material online. Since Musk bought X, previously called Twitter, the platform has earned a reputation for being flooded with pornographic material, as well as violent, antisemitic and racist content. However, we found that violent and pornographic content was significantly restricted when using an account without age verification.

The new obligations effectively put firms in a position where they must comply with the law. Prof Livingstone suggested that they could still be in a period of "working out how best" to make judgements over the sort of content on their sites, which could be refined over time.

Prof Wachter said that the level of self-regulation afforded to tech companies in choosing how to comply with the Online Safety Act called for well-staffed moderation teams equipped with "time, resources, expertise and nuance" to effectively make decisions. She also noted that many major social media companies, such as X and Meta, have slimmed down their moderation teams in recent years or dissolved them completely.

"This trend is very worrying when opaque rules are now applied to make these take down decisions, especially in the current political climate," Prof Wachter said.


Forbes
02-07-2025
- Business
- Forbes
The Prompt: Can Meta Hire Its Way To Superintelligence?
Welcome back to The Prompt.

Cloudflare, the tech platform that powers millions of websites representing about 20% of the internet, announced yesterday that it will by default block AI companies' crawlers from scraping content without permission. It's a significant move, as AI juggernauts like OpenAI, Anthropic and Meta have scrounged the corners of the internet for data to power their AI models, often depriving websites of the traffic (and associated revenue) they would have otherwise gotten. Publishers like Condé Nast, TIME, The Associated Press and The Atlantic, as well as tech companies like Pinterest and Quora, have expressed their support for Cloudflare's shift to permission-based crawling. The company also announced an initiative called Pay Per Crawl that would allow website creators to get paid for their content.

Let's get into the headlines.

PEAK PERFORMANCE

Microsoft claims that its AI system, called 'Microsoft AI Diagnostic Center,' can diagnose diseases four times more accurately than a group of experienced doctors at a significantly lower cost. That claim is based on an assessment of how accurately the AI tool diagnosed about 300 complex cases previously published in the New England Journal of Medicine, in comparison to physicians. However, in the study doctors were asked not to use any additional tools to aid their diagnoses, which doesn't reflect real-world scenarios. Healthcare is a burgeoning use case for AI, with Microsoft Copilot and Bing receiving about 50 million health-related queries per month.

ETHICS + LAW

AI-generated videos that portray Black women as primates and perpetuate racist stereotypes are amassing millions of views on platforms like Instagram and TikTok, Wired reported. The videos, part of a social media trend called 'Bigfoot Baddie,' were generated by Google's popular AI video generator, Veo 3.
TALENT RESHUFFLES

Anysphere, the company behind the fast-growing AI coding tool Cursor, has hired two engineers from Anthropic who previously worked on Claude Code, The Information reported. Cursor, which is used by developers at top AI companies to create programs and write code, is also a customer of Anthropic's powerful coding models.

AI DEAL OF THE WEEK

Genesis AI, which is building an AI model for robotics, has raised a $105 million seed round led by Eclipse and Khosla Ventures.

DEEP DIVE

In the past few weeks, Meta CEO Mark Zuckerberg has personally reached out to dozens of AI researchers and engineers and offered eye-popping multi-million-dollar pay packages, according to multiple reports.

As the race to build powerful AI models and launch impressive products intensifies, companies are shelling out top dollar for the researchers making these systems. On Monday, Alexandr Wang, Meta's newly hired Chief AI Officer and former CEO of data-labelling giant Scale AI, announced the creation of a new lab within Meta that is aiming to build so-called 'superintelligence' — an AI system that outperforms humans in a range of cognitive tasks, including creativity and problem solving. (That's different from artificial general intelligence, which is an AI system that can match human cognitive abilities.)

Joining Wang are 11 top researchers that Zuckerberg has freshly poached from leading AI labs including OpenAI, Anthropic and Google DeepMind with shiny multi-million-dollar offers. The researchers' names can be found on 'The List' — the billionaire's compilation of the brightest minds and hidden geniuses in the field of AI.
The Facebook founder has offered top talent $300 million over four years, with $100 million in total compensation (including equity) for the first year, Wired recently reported.

Meta's initiative also raises questions about whether 'The List' actually has the right names on it. Richard Socher, a pioneer in natural language processing and CEO of You.com, tells me that while the talent pool for AI engineers is relatively small, companies like Meta are scouting for talent in the most 'obvious places' like OpenAI, which ends up becoming expensive. "Not everyone who's joined OpenAI as employee number 500 is more qualified than someone in a smaller startup," he said. That said, not everyone at OpenAI is a target: Alex Nichol, a deep learning researcher at the startup, posted on X, 'Kinda offended that Meta didn't try to recruit me… (not that I would accept, but it's nice to feel recognized)'.

While the hires may seem flashy, they are entirely within reach for a tech giant like Meta, which has near-unlimited financial resources and unfettered access to the powerful chips used to run these AI models, allowing it to woo top talent even from cushy jobs at juggernauts like OpenAI. But not everyone is convinced that the monetary incentives are enough to build a superstar AI team and create the best AI products. Two former Meta AI employees told Forbes they are unconvinced that huge sums of cash can motivate researchers to build superintelligence. 'You want to attract people who care,' one former Meta AI employee said. Meta itself has lost a lot of its top AI talent over the years, who've either left to start their own companies or to join rivals like OpenAI, the former employee told Forbes. 'A lot of people left to go to OpenAI…. This is Mark trying to undo the loss of talent,' they said.

WEEKLY DEMO

In an experiment called Project Vend, Anthropic let its flagship AI model Claude run a small automated physical store in its San Francisco office.
The AI system, which had access to different external programs like email, was responsible for maintaining the inventory, setting prices and carrying out transactions. But it experienced a series of hiccups, including inventing a fake Venmo account, stocking and selling metal cubes after being prompted by a customer, and giving persuasive customers discounts.

MODEL BEHAVIOR

People are using AI tools like ChatGPT to act as 'trip sitters' — a term used to describe a sober person who monitors someone under the influence of psychedelics — while they consume psychedelics like magic mushrooms, MIT Technology Review reported. During their trip, people chat with AI tools to share their feelings and calm themselves. But adding AI into the mix could result in a 'dangerous psychological cocktail,' according to multiple psychotherapists.


Tahawul Tech
02-07-2025
- Business
- Tahawul Tech
Cloudflare introduces AI bot blocker
Millions of websites will now be able to block AI bots from accessing their content without permission, thanks to a new system being rolled out by internet infrastructure firm Cloudflare. Eventually, sites will be able to ask for payment from AI firms in return for having their content scraped.

Cloudflare's tech targets AI firm bots – also known as crawlers – which are programs that explore the web, indexing and collecting data as they go. They are important to the way AI firms build, train and operate their systems. So far, Cloudflare says its tech is active on a million websites.

Roger Lynch, chief executive of Condé Nast, whose print titles include GQ, Vogue, and The New Yorker, said the move was 'a game-changer' for publishers. 'This is a critical step toward creating a fair value exchange on the Internet that protects creators, supports quality journalism and holds AI companies accountable', he wrote in a statement. However, other experts say stronger legal protections will still be needed.

Initially the system will apply by default to new users of Cloudflare services, plus sites that participated in an earlier effort to block crawlers. Many publishers accuse AI firms of using their content without permission. Cloudflare argues AI breaks the unwritten agreement between publishers and crawlers: AI crawlers collect content like text, articles, and images to generate answers without sending visitors to the original source, depriving content creators of revenue.

'If the Internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone,' wrote the firm's chief executive Matthew Prince. To that end the company is developing a 'Pay Per Crawl' system, which would give content creators the option to request payment from AI companies for utilising their original content.

According to Cloudflare there has been an explosion of AI bot activity.
'AI Crawlers generate more than 50 billion requests to the Cloudflare network every day', the company wrote in March. And there is growing concern that some AI crawlers are disregarding existing protocols for excluding bots.

In an effort to counter the worst offenders, Cloudflare previously developed a system in which the worst miscreants are sent to a 'Labyrinth' of web pages filled with AI-generated junk. The new system attempts to use technology to protect the content of websites and to give sites the option to charge AI firms a fee to access it.

Source: BBC News
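To make the mechanism concrete, here is a minimal Python sketch of what blocking crawlers by user agent can look like. This is not Cloudflare's actual implementation; the bot tokens shown are a small illustrative sample of AI crawler user agents that have been publicly documented (GPTBot, ClaudeBot, CCBot), and the function names are my own.

```python
# Illustrative sketch only: block requests whose User-Agent header
# matches a known AI crawler token, answering 403 Forbidden.
# Token list is a deliberately short, illustrative sample.

AI_CRAWLER_TOKENS = (
    "GPTBot",      # OpenAI's web crawler
    "ClaudeBot",   # Anthropic's web crawler
    "CCBot",       # Common Crawl
)

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent string contains a known AI crawler token."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

def handle_request(user_agent: str) -> int:
    """Return the HTTP status a permission-based policy might send."""
    if is_ai_crawler(user_agent):
        return 403  # blocked by default; a payment deal could lift this
    return 200      # ordinary visitors pass through
```

Unlike robots.txt, which merely asks crawlers to stay away, this kind of check is enforced at the server or network edge; real systems would combine it with stronger signals, since a crawler can misreport its user agent.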