Latest news with #WillAllen

Business Insider
5 days ago
- Business
- Business Insider
Thanks to ChatGPT, the pure internet is gone. Did anyone save a copy?
In the post-nuclear age, scientists noticed a peculiar problem: steel produced after 1945 was contaminated. Atomic bombs had infused the atmosphere with radioactivity, traces of which ended up in the metal. This made most post-war steel useless for sensitive equipment such as Geiger counters and other high-precision sensors. The solution? Salvage old steel from sunken pre-war battleships resting deep on the ocean floor, far from the nuclear fallout. This material, known as low-background steel, became prized for its purity and rarity.

Fast forward to 2025, and a similar story is unfolding, not under the sea but across the internet. Since the launch of ChatGPT in late 2022, AI-generated content has exploded across blogs, search engines, and social media. The digital realm is increasingly infused with content not written by humans but synthesized by models and chatbots. And just like radiation, this content is hard for ordinary people to detect, is pervasive, and alters the environment in which it exists.

This phenomenon poses a particularly thorny problem for AI researchers and developers. Most AI models are trained on vast datasets collected from the web. Historically, that meant learning from human data: messy, insightful, biased, poetic, and occasionally brilliant. But if today's AI is trained on yesterday's AI-generated text, which was itself trained on last week's AI content, then models risk folding in on themselves, diluting originality and nuance in what's been dubbed "model collapse."

Put another way: AI models are supposed to learn how humans think. If they're trained mostly on their own outputs, they may end up just mimicking themselves. Like photocopying a photocopy, each generation becomes a little blurrier until nuance, outliers, and genuine novelty disappear.
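The photocopy dynamic can be illustrated with a toy simulation. This is a deliberate oversimplification, not how any real model is trained: each "generation" fits word frequencies to the previous generation's output and samples a new corpus from that fit. A rare word that misses one sample can never return, so diversity only shrinks.

```python
import random
from collections import Counter

random.seed(42)

# Generation 0: a "human" corpus with a long tail of rare words.
vocab = [f"word{i}" for i in range(200)]
weights = [1 / (i + 1) for i in range(200)]  # Zipf-like tail
corpus = random.choices(vocab, weights=weights, k=2000)

def next_generation(corpus, size=2000):
    """Fit word frequencies to the corpus, then generate a new corpus
    by sampling from that fit -- a stand-in for training the next
    model generation on the previous generation's output."""
    counts = Counter(corpus)
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words], k=size)

sizes = [len(set(corpus))]
for _ in range(30):
    corpus = next_generation(corpus)
    sizes.append(len(set(corpus)))

print(f"distinct words: gen 0 = {sizes[0]}, gen 30 = {sizes[-1]}")
```

Because a vanished word can never be sampled again, the vocabulary count is monotonically non-increasing: the blur of the photocopied photocopy, in miniature.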
This makes pre-2022 human-generated content more valuable, because it grounds AI models, and society in general, in a shared reality, according to Will Allen, a vice president at Cloudflare, which operates one of the largest networks on the internet. This matters especially as AI models spread into technical fields such as medicine, law, and taxation. Allen wants his doctor to rely on research written by human experts from real human trials, for instance, not on AI-generated sources.

"The data that has that connection to reality has always been critically important and will be even more crucial in the future," Allen said. "If you don't have that foundational truth, it just becomes so much more complicated."

Paul Graham's problem

This isn't just theoretical. Problems are already cropping up in the real world. Almost a year after ChatGPT launched, venture capitalist Paul Graham described searching online for how hot to set a pizza oven. He found himself checking the dates of the content to find older information that wasn't "AI-generated SEO-bait," he said in a post on X.

Malte Ubl, CTO of AI startup Vercel and a former Google Search engineer, replied that Graham was filtering the internet for content that was "pre-AI-contamination." "The analogy I've been using is low background steel, which was made before the first nuclear tests," Ubl said.

Matt Rickard, another former Google engineer, concurred. In a blog post from June 2023, he wrote that modern datasets are getting contaminated. "AI models are trained on the internet. More and more of that content is being generated by AI models," Rickard explained. "Output from AI models is relatively undetectable. Finding training data unmodified by AI will be tougher and tougher."

The digital version of low-background steel

The answer, some argue, lies in preserving the digital equivalent of low-background steel: human-generated data from before the AI boom.
Think of it as the internet's digital bedrock, created not by machines but by people with intent and context.

One such preservationist is John Graham-Cumming, a Cloudflare board member and its former CTO. His project catalogs datasets, websites, and media that existed before 2022, the year ChatGPT sparked the generative AI content explosion. For instance, there's GitHub's Arctic Code Vault, an archive of open-source software buried in a decommissioned coal mine in Norway. It was captured in February 2020, about a year before the AI-assisted coding boom got going. Graham-Cumming's initiative is an effort to archive content that reflects the web in its raw, human-authored form, uncontaminated by LLM-generated filler and SEO-optimized sludge.

Another source he lists is wordfreq, a project that tracked the frequency of words used online. Linguist Robyn Speer maintained it but stopped in 2021. "Generative AI has polluted the data," she wrote in a 2024 update on the coding platform GitHub. That pollution skews internet data, making it a less reliable guide to how humans actually write and think. Speer cited one example showing that ChatGPT is obsessed with the word "delve" in a way that people never have been, which has caused the word to appear far more often online in recent years. (A more recent example is ChatGPT's love of the em dash; don't ask me why!)

Our shared reality

As Cloudflare's Allen explained, AI models trained partly on synthetic content can accelerate productivity and remove tedium from creative work and other tasks. He's a fan and regular user of ChatGPT, Google's Gemini, and other chatbots such as Claude. And the analogy to low-background steel is not perfect: scientists have since developed ways to produce uncontaminated steel using pure oxygen. Still, Allen says, "you always want to be grounded in some level of truth."

The stakes go beyond model performance. They reach into the fabric of our shared reality.
Just as scientists trusted low-background steel for precise measurements, we may come to rely on carefully preserved pre-AI content to gauge the true state of the human mind — to understand how we think, reason, and communicate before the age of machines that mimic us. The pure internet is gone. Thankfully, some people are saving copies. And like the divers salvaging steel from the ocean floor, they remind us: Preserving the past may be the only way to build a trustworthy future.
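As an aside, the kind of measurement the wordfreq project made can be sketched in a few lines of Python. The two sample texts and the crude tokenizer here are invented for illustration and are far simpler than Speer's real project, which drew on large multilingual corpora:

```python
import re
from collections import Counter

def word_frequencies(text):
    """Relative frequency of each word in a text -- the core idea
    behind word-frequency tracking, minus all the real-world rigor."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

# Invented examples of pre- and post-chatbot prose.
pre_ai  = "we will look into the data and explore what it shows"
post_ai = "let us delve into the data and delve into what it shows"

freq_pre  = word_frequencies(pre_ai)
freq_post = word_frequencies(post_ai)

print(freq_pre.get("delve", 0.0), freq_post.get("delve", 0.0))
```

A sudden jump in a word's relative frequency across the web, like the one Speer observed for "delve," is exactly the kind of shift this measurement exposes.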

Business Insider
01-05-2025
- Business
- Business Insider
A new, 'diabolical' way to thwart Big Tech's data-sucking AI bots: Feed them gibberish
- Bots now generate more internet traffic than humans, according to cybersecurity firm Thales.
- This is being driven by web crawlers from tech giants that harvest data for AI model training.
- Cloudflare's AI Labyrinth misleads and exhausts bots with fake content.

A data point caught my eye recently: bots now generate more internet traffic to websites than humans, according to cybersecurity company Thales. The surge is driven by a swarm of web crawlers unleashed by Big Tech companies and AI labs, including Google, OpenAI, and Anthropic, that slurp up copyrighted content for free.

I've warned about these automated scrapers before. They're increasingly sophisticated and persistent in their quest to harvest information to feed the insatiable demand for AI training datasets. Not only do these bots take data without permission or payment, but they also cause traffic surges in some parts of the internet, increasing costs for website owners and content creators.

Thankfully, there's a new way to thwart this bot swarm. If you're struggling to block the bots entirely, you can send them down digital rabbit holes where they ingest garbage content. One software developer recently called the approach "diabolical," in a good way.

"Absolutely diabolical Cloudflare feature. love to see it" — hibakod (@hibakod) April 25, 2025

It's called AI Labyrinth, and it's a tool from Cloudflare. Described as a "new mitigation approach," AI Labyrinth uses generative AI not to inform but to mislead. When Cloudflare detects unauthorized activity, typically from bots ignoring "no crawl" directives, it deploys a trap: a maze of convincingly real but irrelevant AI-generated content designed to waste bots' time and chew through AI companies' computing power. Cloudflare said in a recent announcement that this is only the first iteration of using generative AI to thwart bots.
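Those "no crawl" directives typically live in a site's robots.txt file. For example, a site that wants OpenAI's documented GPTBot crawler to stay away entirely can serve rules like the following; well-behaved crawlers honor them, and the bots AI Labyrinth targets are precisely the ones that don't:

```
User-agent: GPTBot
Disallow: /
```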
Digital gibberish

Unlike traditional honeypots, AI Labyrinth creates entire networks of linked pages that are invisible to humans but highly attractive to bots. These decoy pages don't affect search engine optimization and aren't indexed by search engines. They are tailored specifically to bots, which get ensnared in a meaningless loop of digital gibberish. As bots follow the maze deeper, they inadvertently reveal their behavior, allowing Cloudflare to fingerprint and catalog them. Those data points feed directly into Cloudflare's evolving machine learning models, strengthening future detection for customers.

Will Allen, VP of product at Cloudflare, told me that more than 800,000 domains have turned on the company's general AI bot blocking tool. AI Labyrinth is the next weapon to wield when sneaky AI companies get around blockers. Cloudflare hasn't released data on how many customers use AI Labyrinth, which suggests it's too early for major adoption. "It's still very new, so we haven't released that particular data point yet," Allen said.

I asked him why AI bots are still so active if most of the internet's data has already been scraped for model training. "New content," Allen replied. "If I search for 'what are the best restaurants in San Francisco,' showing high-quality content from the past week is much better than information from a year or two prior that might be out of date."

Turning AI against itself

Bots are not just scraping old blog posts; they're hungry for the freshest data to keep AI outputs relevant. Cloudflare's strategy flips this demand on its head. Instead of serving valuable new content to unauthorized scrapers, it offers them an endless buffet of synthetic articles, each more irrelevant than the last. As AI scrapers become more common, defenses like AI Labyrinth are becoming essential. By turning AI against itself, Cloudflare has introduced a clever layer of defense that doesn't just block bad actors but exhausts them.
For web admins, enabling AI Labyrinth is as easy as toggling a switch in the Cloudflare dashboard. It's a small step that could make a big difference in protecting original content from unauthorized exploitation in the age of AI.