
Dozens of YouTube Channels Are Showing AI-Generated Cartoon Gore and Fetish Content

May 2, 2025 6:30 AM

A WIRED investigation found that dozens of YouTube channels are using generative AI to depict cartoon cats and minions being beaten, starved, and sexualized—sparking fears of a new Elsagate wave.

Photo-illustration: Jacqui VanLiew; Getty Images

Somewhere in an animated New York, a minion slips and tumbles down a sewer. As a wave of radioactive green slime envelops him, his body begins to transform—limbs mutating, rows of bloody fangs emerging—his globular, wormlike form slithering menacingly across the screen. 'Beware the minion in the night, a shadow soul no end in sight,' an AI-sounding narrator sings, as the monstrous creature, now lurking in a swimming pool, sneaks up behind a screaming child before crunching them, mercilessly, between its teeth.

Upon clicking through to the video's owner, though, it's a different story. 'Welcome to Go Cat—a fun and exciting YouTube channel for kids!' the channel's description announces to 24,500 subscribers and more than 7 million viewers. 'Every episode is filled with imagination, colorful animation, and a surprising story of transformation waiting to unfold. Whether it's a funny accident or a spooky glitch, each video brings a fresh new story of transformation for kids to enjoy!'

Go Cat's purportedly child-friendly content is visceral, surreal—almost verging on body horror. Its themes feel eerily reminiscent of what, in 2017, became known as Elsagate, when hundreds of thousands of videos emerged on YouTube depicting children's characters like Elsa from Frozen, Spider-Man, and Peppa Pig involved in perilous, sexual, and abusive situations. By manipulating the platform's algorithms, these videos were able to appear on YouTube's dedicated Kids' app—preying on children's curiosity to farm thousands of clicks for cash. In its attempts to eradicate the problem, YouTube removed ads from over 2 million videos, deleted more than 150,000, and terminated 270 accounts. Though subsequent investigations by WIRED revealed that similar channels—some containing sexual and scatological depictions of Minecraft avatars—continued to appear on YouTube's Topic page, Elsagate's reach had been noticeably quelled.

Then came AI. The ability to enter generative AI prompts (and to circumvent their guardrails), paired with an influx of tutorials on how to monetize children's content, means that creating these bizarre and macabre videos has become not just easy but lucrative. Go Cat is just one of many channels that appeared when WIRED searched for terms as innocuous as 'minions,' 'Thomas the Tank Engine,' and 'cute cats.' Many involve Elsagate staples like pregnant, lingerie-clad versions of Elsa and Anna, but minions are another big hitter, as are animated cats and kittens.

In response to WIRED's request for comment, YouTube says it 'terminated two flagged channels for violating our Terms of Service' and is suspending the monetization of three other channels. 'A number of videos have also been removed for violating our Child Safety policy,' a YouTube spokesperson says. 'As always, all content uploaded to YouTube is subject to our Community Guidelines and quality principles for kids—regardless of how it's generated.' When asked what policies are in place to prevent banned users from simply opening a new channel, YouTube stated that doing so would be against its Terms of Service and that these policies were rigorously enforced 'using a combination of both people and technology.'
WIRED can confirm that some of the flagged channels were indeed removed last week, including two cat-centric channels featuring themes of abuse. But other linked channels with reposts of the same videos remain on the platform. Go Cat, too, is still active, and its channel description remains unchanged. WIRED could not find an email associated with Go Cat but reached out to other channels for comment. We did not receive a response.

The explosion of AI-animated cat videos is a defining feature of Elsagate's second wave, surpassing any other type of content in both the scope and the extremity of its material. With titles like 'Kitten abused by its own mother,' these videos often take the form of fables, in which kittens are starved, forced to do unpleasant chores, and audibly beaten by their parents with baseball bats or frying pans. They are then taken to the hospital and revived—before the parent arrives, apologetic for their actions, as melancholic music or a meowing cover of Billie Eilish's 'What Was I Made For' plays in the background. With near-identical channel names like 'Cute cat AI' and 'Cute cat of Ni,' experts say, they are a clear attempt to mislead young audiences—and an obvious move to lazily and sloppily monetize cheap content in ways unfathomable before the arrival of generative AI.

'We are deeply concerned about the proliferation of AI-generated content that appears to target kids and contains deeply inappropriate material,' Robbie Torney, senior director of AI programs at Common Sense Media, tells WIRED. The nonprofit, which rates and reviews media to provide accurate recommendations for families, was shown several such channels discovered during this investigation. The organization identified common themes across videos of 'characters in extreme distress or peril,' 'mutilation, medical procedures, and cruel experiments,' and 'depictions of child abuse and torture.'

Although YouTube's later changes, including new rules implemented in 2019 to adhere to the US Children's Online Privacy Protection Act, mean these channels now typically appear on YouTube's main app rather than YouTube Kids, their intentions are only thinly veiled. Sounds of babies' laughter and babbling are blended in with music and set against backdrops of bright, Cocomelon-esque landscapes. (In fact, the popular kids' cartoon even appears in the background of some of these videos.) Although Go Cat directly advertises its content to children, other channels claim to be 'not for kids' in the description or avoid mentioning their audience entirely. The metadata for several channels revealed some videos have been tagged with keywords such as #funnycat, #familyfun, and #disneyanimatedmovies. Others, featuring polar bears and reindeer infected with parasites, are tagged with terms like #animalrescue, suggesting an attempt to appear alongside more educational content.

While Elsagate content in 2017 usually featured traditional animation or even actors dressed in costume (both of which are still a part of this new wave), the arrival of generative AI means that disturbing, brain-rot-style videos can now be produced much more rapidly and by anyone, regardless of skill. 'This trend is particularly concerning because of the scale and speed at which AI can generate this content,' Torney says. 'Unlike traditional content creation, AI-generated videos can be produced in large volumes with minimal oversight. Without human review in the creation pipeline, inappropriate and potentially harmful material can easily reach kids.'
The comparative speed of AI also means that when one channel is flagged and removed by YouTube, another with identical reposts springs up days later. WIRED has seen images sent by content creator BitterSnake, who was part of a wave of YouTubers shedding light on these cat-themed channels back in January of this year. Originally posted on the community tab of two now-suspended channels, they appear to show an office environment in what looks to be Asia, with young workers sitting at computer desks, making hearts with their fingers in typical workplace camaraderie. A second image shows a worker at his desk, headphones on, phone beside him, a tissue lying crumpled in the background. The scene would be utterly typical of a young student or intern immersed in his first job—if not for a computer screen featuring an adult cat, lying dead in a pool of shimmering red blood, as its young kitten looks on, traumatized.

Tracy Pizzo Frey, senior AI adviser for Common Sense Media, recently testified at a California State Assembly hearing in support of a bill that aims to safeguard children from the risks of AI. The bill would require AI systems to be classified on a scale from 'Prohibited Risk' to 'Low Risk' and would ban children from using controversial AI companions such as Replika, alongside other measures.

The scale of this problem is growing—and is likely to balloon further as AI-generated kids' content continues to dwarf its traditionally animated counterparts. WIRED has shared with YouTube more than 70 similar content-farm channels found during the course of this investigation. Most of these involve AI-generated images of cats alongside themes of gore, sex, and child abuse—and their subscriber counts range from thousands to millions. Whether these views are coming primarily from humans, though, or are simply confirmation of the dead internet theory, is debatable—although hundreds of automated comments across these videos suggest it could be the latter.

On reviewing the channels, YouTube explained that it required all creators to label AI-generated material as such, including content aimed at kids and families, and that it had introduced a set of guidelines around what it called quality content. 'We want younger viewers to not just have a safer experience but also an enriching one,' a YouTube spokesperson says. 'To support this, we partnered with experts to create a set of quality principles for kids and family content meant to help guide creators in creating quality content for kids and reduce the amount of content that is low quality, regardless of how it was created.' YouTube claims that since introducing these principles—which determine what content is monetized, shown in recommendations, and featured on YouTube Kids—viewership of 'high quality' content has increased by 45 percent on the YouTube Kids app.

Still, regardless of their audience, and as YouTube's moderators scramble to remove them, Elsagate's successors remain on YouTube's main platform—continuing to find new ways to bend the rules at every turn. Nor is the problem unique to YouTube: similar videos have appeared on TikTok in recent months, where the Runway AI generator was used on real footage from suicides and mass shootings to create 'minion gore' videos, 404 Media reported. TikTok told 404 Media that 'hateful content as well as gory, gruesome, disturbing, or extremely violent content' is prohibited and said it is taking action to remove harmful AI-generated content that violates its policies.
'We recognize that short-form video platforms are working to address content moderation challenges, but the nature of AI-generated videos presents unique difficulties that may require new solutions,' Torney tells WIRED. 'The rapid evolution of AI technology demands that all stakeholders—platforms, content creators, parents, and organizations like ours—work together to ensure kids' exposure to online video content is safe and positive.'


The AI Race Has Gotten Crowded—and China Is Closing In on the US

Apr 7, 2025 6:00 AM

New research from Stanford suggests artificial intelligence isn't ruled by just OpenAI and Google, as competition increases across the US, China, and France.

Animation: Jacqui VanLiew; Getty Images

The year that ChatGPT went viral, only two US companies—OpenAI and Google—could boast truly cutting-edge artificial intelligence. Three years on, AI is no longer a two-horse race, nor is it purely an American one. A new report published today by Stanford University's Institute for Human-Centered AI (HAI) highlights just how crowded the field has become.

The institute's 2025 AI Index, which collates data and trends on the state of the AI industry, paints a picture of an increasingly competitive, global, and unrestrained race toward artificial general intelligence—AI that surpasses human abilities.

OpenAI and Google are still neck and neck in the race to build bleeding-edge AI, the report shows. But several other companies are closing in. In the US, the fiercest competition comes from Meta's open weight Llama models; Anthropic, a company founded by former OpenAI employees; and Elon Musk's xAI. Most strikingly, according to a widely used benchmark called LMSYS, the latest model from China's DeepSeek, R1, ranks closest to the top-performing models built by the two leading American AI companies.

'It creates an exciting space. It's good that these models are not all developed by five guys in Silicon Valley,' says Vanessa Parli, director of research at HAI. 'Chinese models are catching up as far as performance to the US models,' Parli adds. 'But across the globe, there are new players emerging in the space.'

The arrival of DeepSeek-R1 in January sent shock waves through the US tech industry and stock market. The company claimed to have built its model using a fraction of the compute used by US rivals. DeepSeek's debut was also a surprise because the US government has repeatedly sought to limit China's access to the computer chips needed to build the most advanced AI.

Stanford's report shows Chinese AI is on the rise overall, with models from Chinese companies scoring similarly to their US counterparts on the LMSYS benchmark. It notes that China publishes more AI papers and files more AI-related patents than the US, although it does not assess the quality of either. The US, in contrast, produces more notable AI models: 40, compared with the 15 frontier models produced in China and the three produced in Europe. The report also notes that powerful models have recently emerged in the Middle East, Latin America, and Southeast Asia, as the technology becomes more global.

The research shows that several of the best AI models are now 'open weight,' meaning they can be downloaded and modified for free. Meta has been at the center of the trend with its Llama model, first released in February 2023. The company released its latest version, Llama 4, over the weekend. Both DeepSeek and Mistral, a French company, now offer advanced open weight models, too. In March, OpenAI announced that it also plans to release an open source model—its first since GPT-2—this summer. In 2024, the gap between open and closed models narrowed from 8 percent to 1.7 percent, the study shows. That said, the majority of advanced models—60.7 percent—are still closed.

Stanford's report notes the AI industry has seen a steady improvement in efficiency, with hardware becoming 40 percent more efficient in the past year.
This has brought the cost of querying AI models down and also made it possible to run relatively capable models on personal devices. Rising efficiency has prompted speculation that the largest AI models could require fewer GPUs for training, although most AI builders say they need more computing power, not less.

The study shows that the latest AI models are built using tens of trillions of tokens—components representing parts of data, such as words in a sentence—and tens of billions of petaflops of computation. However, it cites research suggesting that the supply of internet training data will be exhausted between 2026 and 2032, hastening the adoption of so-called synthetic, or AI-generated, data.

The report offers a sweeping picture of AI's broader impact. It shows that demand for workers with machine learning skills has spiked, and it cites surveys showing that a growing proportion of workers expect the technology to change their jobs. Private investment reached a record $150.8 billion in 2024, the report shows. Governments around the world also committed billions to AI that same year. Since 2022, AI-related legislation has doubled in the US.

Parli notes that although companies have become more secretive about how they develop frontier AI models, academic research is flourishing—and improving in quality.

The report also points to problems arising from widespread AI adoption. It notes that incidents involving AI models misbehaving or being misused have increased in the past year, as has research aimed at making these models safer and more reliable.

As for reaching the much-ballyhooed goal of AGI, the report highlights how some AI models already surpass human abilities on benchmarks that test specific skills, including image classification, language comprehension, and mathematical reasoning. This is partly because models are designed and optimized to excel at these tests, but it shines a spotlight on how swiftly the technology has advanced in recent years.
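The report's units are easier to picture with an example. A 'token' is simply an integer ID that a tokenizer assigns to a chunk of text; the snippet below is a minimal illustration using the open source tiktoken library (the choice of tokenizer is an assumption made for illustration, as the AI Index does not tie its counts to any particular one).

```python
# Minimal illustration of tokenization, assuming the tiktoken package
# is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common GPT-style encoding

text = "The AI race has gotten crowded."
tokens = enc.encode(text)

print(tokens)                              # integer token IDs
print(len(tokens))                         # token count for this sentence
print([enc.decode([t]) for t in tokens])   # the text chunk behind each ID
```

Very roughly, at about 100,000 tokens per book, 'tens of trillions of tokens' corresponds to hundreds of millions of book-length texts.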


This Tool Probes Frontier AI Models for Lapses in Intelligence

Apr 2, 2025 12:00 PM

A new platform from data training company Scale AI will let artificial intelligence developers find their models' weak spots.

Photo-illustration: Jacqui VanLiew; Getty Images

Executives at artificial intelligence companies may like to tell us that AGI is almost here, but the latest models still need some additional tutoring to help them be as clever as they can. Scale AI, a company that's played a key role in helping frontier AI firms build advanced models, has developed a platform that can automatically test a model across thousands of benchmarks and tasks, pinpoint weaknesses, and flag additional training data that ought to help enhance its skills. Scale, of course, will supply the data required.

Scale rose to prominence providing human labor for training and testing advanced AI models. Large language models (LLMs) are trained on oodles of text scraped from books, the web, and other sources. Turning these models into helpful, coherent, and well-mannered chatbots requires additional 'post-training' in the form of humans who provide feedback on a model's output. Scale supplies workers who are experts at probing models for problems and limitations.

The new tool, called Scale Evaluation, automates some of this work using Scale's own machine learning algorithms. 'Within the big labs, there are all these haphazard ways of tracking some of the model weaknesses,' says Daniel Berrios, head of product for Scale Evaluation. The new tool 'is a way for [model makers] to go through results and slice and dice them to understand where a model is not performing well,' Berrios says, 'then use that to target the data campaigns for improvement.'

Berrios says that several frontier AI model companies are using the tool already. He says that most are using it to improve the reasoning capabilities of their best models. AI reasoning involves a model trying to break a problem into constituent parts in order to solve it more effectively. The approach relies heavily on post-training feedback from users to determine whether the model has solved a problem correctly.

In one instance, Berrios says, Scale Evaluation revealed that a model's reasoning skills fell off when it was fed non-English prompts. 'While [the model's] general purpose reasoning capabilities were pretty good and performed well on benchmarks, they tended to degrade quite a bit when the prompts were not in English,' he says. Scale Evaluation highlighted the issue and allowed the company to gather additional training data to address it.

In recent months, Scale has contributed to the development of several new benchmarks designed to push AI models to become smarter, and to more carefully scrutinize how they might misbehave. These include EnigmaEval, MultiChallenge, MASK, and Humanity's Last Exam.

Scale says it is becoming more challenging to measure improvements in AI models, however, as they get better at acing existing tests. The company says its new tool offers a more comprehensive picture by combining many different benchmarks, and it can be used to devise custom tests of a model's abilities, like probing its reasoning in different languages. Scale's own AI can take a given problem and generate more examples, allowing for a more comprehensive test of a model's skills.

The company's new tool may also inform efforts to standardize testing AI models for misbehavior. Some researchers say that a lack of standardization means that some model jailbreaks go undisclosed.
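Scale has not said how Scale Evaluation works internally, but the 'slice and dice' workflow Berrios describes can be pictured with a small, hypothetical sketch: tag every benchmark result with metadata such as the prompt's language, then aggregate accuracy per slice to surface gaps like the non-English reasoning drop-off. All names and records below are invented for illustration.

```python
# Hypothetical sketch of slicing evaluation results by benchmark and
# prompt language to surface weak spots. Not Scale's actual API or data.
from collections import defaultdict

results = [
    # (benchmark, prompt_language, solved_correctly)
    ("math_word_problems", "en", True),
    ("math_word_problems", "en", True),
    ("math_word_problems", "de", False),
    ("math_word_problems", "de", True),
    ("logic_puzzles", "en", True),
    ("logic_puzzles", "ja", False),
]

totals = defaultdict(lambda: [0, 0])  # (benchmark, lang) -> [correct, total]
for bench, lang, ok in results:
    totals[(bench, lang)][0] += ok
    totals[(bench, lang)][1] += 1

for (bench, lang), (correct, total) in sorted(totals.items()):
    acc = correct / total
    flag = "  <- candidate for targeted training data" if acc < 0.6 else ""
    print(f"{bench:20s} {lang}: {acc:.0%} ({correct}/{total}){flag}")
```

A real pipeline would slice along many more dimensions (task type, prompt length, domain), but the principle is the same: weak slices point at where new training data should be targeted.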
In February, the US National Institute of Standards and Technology announced that Scale would help it develop methodologies for testing models to ensure they are safe and trustworthy.

What kinds of errors have you spotted in the outputs of generative AI tools? What do you think are models' biggest blind spots? Let us know by emailing hello@ or by commenting below.


I Went Undercover in Crypto's Answer to ‘Squid Game.' It Nearly Broke Me

Mar 26, 2025 6:30 AM

I spent 10 days competing in Crypto: The Game, a winner-takes-all contest where hundreds of players try to finesse and backstab their way to claiming a $140,000 cryptocurrency prize.

Animation: Jacqui VanLiew; Getty Images

I'm crouching in a seedy alleyway off Trafalgar Square, London, with my MacBook teetering on one knee. The heady reek of piss fills my nostrils. I'm hurriedly punching emoji into a dialog box in my web browser, which is prompting me for the answer to a puzzle. It's day nine of Crypto: The Game.

CTG is an elimination game that takes place over 10 days, almost exclusively online. It's a sort of mutant conglomeration of ideas from Survivor, Squid Game, The Hunger Games, and The Traitors, except everything is crypto-inflected in some way. Each of the 716 players who signed up for the third season, myself included, purchased an entry NFT for $200 in cryptocurrency. The last player standing would win the $140,000 pot.

I had raced to Trafalgar Square on that Tuesday afternoon—barreling through tourists like a shoplifter fleeing the law—following clues that promised a route back into the game. I had been voted out by my fellow players the previous night, but if I could solve the puzzle quickly, I might be 'resurrected' and have another shot at victory.

A group of allied players and I had decoded an alphanumeric cipher to arrive at a second peculiar code. When I entered it into ChatGPT, it led to a text file that read, 'Congratulations on decoding the clue! The treasure is hidden at coordinates 51.5074° N, 0.1278° W.' I was only five minutes away. But when I arrived, panting, it was clear I had it wrong. ChatGPT had apparently hallucinated the coordinates. By the time the real answer—the string of emoji—was circulating in the chat, other players had beaten me to it. My game had come to an end.

The first two editions of Crypto: The Game, which took place in February and April 2024, were an instant hit among crypto fans starved of ways to have fun with crypto beyond trading it. Developers have long struggled to blend crypto functionality into games and services that people want to use because they're enticing, not just for the opportunity to turn a quick profit. But CTG players waxed lyrical about the game, which became the talk of Crypto Twitter during the first two seasons.

'Financial incentives are the greatest [user] acquisition tool in the world, but very bad for retention,' says David Phelps, founder of crypto startup JokeRace, who played in the second season of CTG. 'Tapping into social incentives is almost radical in crypto.'

CTG is partly a game of skill; each day, tribes of players compete in a challenge—typically some sort of crypto puzzle, scavenger hunt, or arcade game—with the goal of earning temporary immunity from elimination. But it's predominantly a social game, one of politicking, alliance-building, maneuvering, and backstabbing. Each night, the tribes without immunity are tasked with voting out a varying number of their own players until only a handful remain. On the final day, a single winner is crowned by the eliminated players.

'You can't just backstab your way to the top,' Dylan Abruscato, the creator of CTG, told WIRED in an interview last spring. 'You need to play a ruthless but respected game.' Nearly a decade ago, Abruscato was an executive at HQ Trivia, a live quizzing game that became a viral hit before the operation ran out of cash.
CTG is a spiritual successor of sorts, pitched at a narrower audience: chronically online crypto fanatics. After the debut season, each subsequent edition has had its own twist. In season two, all players were anonymous, and entry NFTs could be traded on the secondary market. This time, players had to register with an X account but were given a small chance to earn their way back into the game by way of resurrection. 'Every season is its own unique social experiment,' says Abruscato. 'I always want to keep players on their toes.'

When I spoke to Abruscato again ahead of the third season, he was careful not to say anything that might give me an undue advantage. But we did discuss the possibility that high-profile players might be targeted for elimination. I'm far from a household name, but given the animus toward journalists in some quarters of cryptoland, I decided to go undercover.

In my two-and-a-half years covering crypto, I have reported on all manner of frauds, bankruptcies, scams, and other morally dubious shenanigans. The common thread has been the extraction of wealth, which generally flows into the pockets of an already-wealthy or nefarious few at the expense of those who can least afford to lose. But CTG struck me as different. It was an opportunity to embed among technologists convinced of the latent potential of crypto technologies, in an experiment that is far less about profit—despite players competing to win a pot of money—than about expressing crypto's hacker culture and colorful lore.

I asked a random name generator to assign me an alter ego: Leo Westley. It had a ring to it. Then I created new Google and X accounts and picked up a burner SIM, knowing that players would be communicating over Telegram and working in shared spreadsheets, which they use to coordinate voting. As Leo's profile image, I chose a picture of a floppy-eared spaniel that I sometimes look after. What could be more disarming?

Players were assigned to different 'tribes' in the game to compete together and eliminate each other. Courtesy of Crypto: The Game, Uniswap Labs

Before the game began, I listened back to my conversations with CTG players from previous seasons, hoping to glean some essential wisdom that might help me survive at least a couple of nights. Their advice was generally to fly low: not to do anything that would mark me out as either particularly competent or overly hapless. Leaders are quickly eliminated, as are deadweights.

'To some degree, almost being invisible was a super strength,' said Phelps. 'You don't want to be absent, because you'll get killed for not contributing. And you don't really want to stick your head up. Then people think you might be a risk to them.'

Katy Jeremko, founder of developer cooperative Indie and another former player, had given me an equally precious piece of advice: 'Your vote is your most valuable currency.'

With those warnings ringing in my ears, I entered the game on March 10 and was placed into the Gold Tribe along with around 70 other players. Immediately, people fell into archetypal character roles: there were leaders, organizers, data nerds, hype artists, and wallflowers. One person set up a Telegram group and began to beaver away in a spreadsheet. Another 'vibe-coded' a program that scraped blockchain data to track who remained in the game. Others shitposted in the chat. Paralyzed by fear of saying the wrong thing, I said very little more than hello.
Privately, I created my own spreadsheet to make notes on my tribe members, recording their every minor infraction. 'Kinda annoying,' I scribbled next to one guy's handle. 'Muppet, get rid,' I wrote next to another. I had met them barely hours earlier.

Initially, I floated through the nightly eliminations by being performatively present. I contributed to the challenges, updated the spreadsheets when required, and posted in the chat at intervals. The tribe adopted 'Gold morning' as a greeting, so I started saying it too.

Things came to a head on day three. To earn immunity, we had been tasked with achieving the highest possible pinball score. After somebody found a way to cheese the game by rhythmically tapping the up key, it became a challenge of endurance rather than skill. I spent hours improving my score—far longer than my manager would care to know. One player said they felt like they were in an episode of Severance: 'I heard if we hit 10M points we get a waffle party,' they quipped.

Afterward, my tribe having been comfortably outscored, the talk turned to the impending vote. The simplest option was to eliminate the lowest scorers. But one player, Luke Cannon, proposed the tribe abstain from voting entirely. It was a high-risk, high-reward strategy: In theory, everyone could be spared, but a single vote in the final minutes of the voting period, which lasted an hour, would be enough to eliminate an undeserving player.

The vote began at 8 pm ET, midnight for me. The abstinence strategy lasted all of four minutes before someone received a vote. As panic spread, more votes piled in. Players began to point fingers at one another: It was you, wasn't it? 'I am declaring martial law in Gold,' wrote Cannon, who had himself received a stray vote. 'If you do not dm to contribute your vote you are at risk,' he added, appending a passive-aggressive smiley.

My heart began to throb a few beats faster. I was reluctant to side with the renegade, but if I was going to be eliminated, I didn't want to go down twiddling my thumbs. I messaged Cannon privately and voted at his instruction. Though Cannon's intervention saved a few worthy players, replacing them on the elimination board with nonparticipants and low pinball scorers, he was unable to save himself. The display of pacifism and tribe unity had descended into a bloodbath. But I had survived.

The following day, the gamemasters had prepared a surprise: Everyone would be changing tribes. I was swapped into Silver, bidding farewell to almost all of my old team members. Strangely, though really they were only strangers, I missed them.

I was greeted by another set of self-elected leaders who were busy deploying a labyrinth of spreadsheets and Telegram chats, which this time could only be accessed with a Silver Tribe NFT. The idea was to prevent infiltrators from sowing disorder—sabotage is not uncommon in CTG—but the more convoluted system bewildered some players. It seemed a fitting allegory for the very crypto tendency to overcomplicate where a manual process would do. 'The most crypto experience ever,' remarked one player. 'It's 2025 we gotta do better 😂.'

Meanwhile, I splintered off into a separate chat made up of seven former Gold members who now belonged to Silver; they would become the closest thing I had in the game to a cabal.
The days began to blend together as I settled into my strange new routine: I woke up late, participated in the challenge, performed any necessary spreadsheet admin, kept abreast of the conversation, and shuffled back to my desk for the vote at midnight. My unlucky partner, much neglected for the duration of the game, was rudely awoken each night as I clambered over her to reach my side of the bed. As I tried to fall asleep, my mind replayed the conversations of the day, as if projected onto the inside of my eyelids.

Players who failed to spend every waking moment with the tribe were marked out for potential elimination, professional duties and family life be damned. Though my partner gladly had a go at the arcade game challenges, she was otherwise bemused by my new preoccupation with my tribe. One day, I tried to solicit her advice on a point of in-game strategy. 'I don't know how crypto nerds think!' she responded with a laugh, shooing me back to my desk.

Players would vote for others to be kicked out of the game; those who received the most votes were eliminated. Courtesy of Crypto: The Game, Uniswap Labs

Unlike in my previous tribe, the Silver leaders attempted to exercise total control over each elimination, assigning specific votes to specific players and even demanding proof of compliance. To help determine who should be voted out, one of them coded up a Telegram bot that required players to 'check in' and recorded how frequently they contributed to the chat. Along with each player's scores in the challenges, this data was fed into the spreadsheet. In our splinter group, the former Gold members and I bristled at the idea of voting on rails but caved to the demands. 'Tonight, servitude feels like the move? Lol,' wrote Nick Prince, one of my fellow conspirators. He was right.

On day eight, the game changed again. There would be no challenge, players were told, but all tribes would merge into one. 'This is now an individual game,' the CTG website declared. With 315 players still in play and only two days left until the finale, there was an undercurrent of tension. On Telegram, players started to talk about the 'Red Wedding,' a famously bloody scene in Game of Thrones. Before the vote, an anonymous person defaced the spreadsheet shared among all remaining players. 'Dear Data Grunts,' the note read. 'The data you're inputting is just busy work. Hope you made friends this season. They're all that's going to save you now.'

An anonymous message written in the communal spreadsheet.

The strategy among former Silver members was to arrange truces with a handful of other groups, thereby deflecting votes elsewhere. The trouble was knowing whether anyone could be trusted to keep their word. After allowing myself on previous nights to be cowed into voting as somebody else instructed, I resolved to vote as I pleased, come what may.

That night, 100 players were up for elimination, but potentially more if a large number of people received the same number of votes. The statistical probability meant that a single vote would likely be fatal. For the first few minutes, I watched the votes roll in. Then my gut flipped; my icon had flashed up onscreen, framed in red. I understood that my game had probably come to an end—and I would probably never find out who was responsible. I spent my own bullet on a player who, at least in my estimation, had behaved condescendingly in the chat. They and 157 other players went out with me.
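The Silver tribe's actual bot isn't public, but the mechanism described above (counting check-ins and chat contributions, then ranking players for elimination) amounts to a simple tally. Here is a minimal, hypothetical sketch; the player names, the '/checkin' command, and the message format are all invented for illustration.

```python
# Hypothetical reconstruction of an activity-tracking tally like the one
# Silver's leaders reportedly used. Illustrative stand-in, not the real bot.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

chat_log = [
    Message("player_a", "/checkin"),
    Message("player_a", "gold morning"),
    Message("player_b", "/checkin"),
    Message("player_c", "who do we vote tonight?"),
    Message("player_a", "updated the spreadsheet"),
]

checkins = Counter(m.sender for m in chat_log if m.text.strip() == "/checkin")
activity = Counter(m.sender for m in chat_log)

# Players with no check-ins and little chat activity surface first,
# making them candidates for elimination.
for player in sorted(activity, key=lambda p: (checkins[p], activity[p])):
    print(f"{player}: {checkins[player]} check-ins, {activity[player]} messages")
```

In the real game, this kind of score was reportedly combined with challenge results in a shared spreadsheet before each vote.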
The elimination message displayed to players who were kicked off the game. Courtesy of Crypto: The Game, Uniswap Labs

On the final day, 68 players remained. Instead of voting against one another, they now had to pitch for the votes of everyone who had previously been eliminated. The player with the most support would win the $140,000 prize. A few of the finalists appeared on Too Online, the companion podcast that aired nightly during the CTG season, to plead their case. Many said they would split the pot, whether with the people who voted for them, with their tribe members, with charitable organizations, or along some other lines. Others said they would spend the winnings on their children or use the money to pay down medical debt. 'I beg for your votes. This is life-changing money for my family,' said one player.

At the end of the first season of CTG, an anonymous player from Japan was crowned victor after wooing their peers with abstract poetry. The prize fund in the second season was claimed by a player who promised to commit the winnings to the legal defense of two developers behind the crypto service Tornado Cash, who had been charged with money laundering.

This time, it came down to a single vote. Two front-runners leapfrogged each other on the leaderboard throughout the day. The prize was ultimately claimed by Ted, a player from California who had come in second place the previous season and who pitched voters on her redemption arc: Ted had been eliminated on the eighth night, like myself, but was given an NFT to participate in the final by a close ally, who believed that she had a better chance to win. 'I still can't believe it's real. I feel manic,' said Ted, appearing on Too Online after the vote. 'I'm still speechless.'

After I left the game, the immediate feeling was a sinking disappointment; though I never imagined I might win, I had invested a piece of myself in the competition. But that sensation was quickly swallowed by relief. At the end of nine consecutive days spent consorting with strangers on the internet, I was emotionally and physically exhausted. The game demands constant outward performance and careful introspection from its players, and even a brief lapse could result in eviction. My exit felt like emerging from a fugue.

I was reminded of something another player had said after they were eliminated from the game. 'Off to touch some grass,' they told the tribe, signing off on Telegram. I decided I would do the same.


BYD's Free Self-Driving Tech Might Not Be Such a Boon After All

Feb 23, 2025

Aside from unfavorable comparisons to rival advanced driver-assistance systems, calling it God's Eye could be as misleading a moniker as Tesla's Full Self-Driving.

Photo-illustration: Jacqui VanLiew; Getty Images

Not only has China's largest EV maker, BYD, unveiled good, better, and best tiers for its advanced driver-assistance system (ADAS), it announced last week that the tech—marketed somewhat immodestly as 'God's Eye'—will now be fitted as standard to 21 of BYD's 30 cars split across four brands. Even the $9,500 Seagull hatchback, the cheapest of BYD's EVs, will ship with the base level of God's Eye at no extra cost, while the $233,500 Yangwang U9 electric supercar will get the top-tier iteration. However, BYD's ADAS could be as misleadingly named as Tesla's Full Self-Driving (FSD).

Including ADAS for free will no doubt rile BYD's smaller rivals in China's innovative but cutthroat auto market. Comparatively low-tech Toyota, VW, and Nissan may weaken further, and Tesla—which has yet to gain permission for FSD in China—could also struggle. Elon Musk's auto firm temporarily lost the title of the world's largest EV maker to BYD last year, and since then competition has been fierce. BYD topped the Chinese market for EVs in January with a 27 percent market share, pushing the previously dominant Tesla down to sixth place on just 4.5 percent.

God's Eye (an alternative translation is 'Eye of Heaven') relies on various cameras and sensors to assist drivers with valet parking, adaptive cruising, and automated braking. Among other tasks, it can also learn users' driving habits and supposedly predict the type and skill level of a new driver taking control.

Careful not to describe God's Eye as fully autonomous, BYD's billionaire founder and CEO, Wang Chuanfu, boasted during a lavish launch event at the company's headquarters in Shenzhen last week that intelligent driving capabilities would soon become as ubiquitous as seat belts and airbags. Wang then cut to a promotional video showing a U9 spinning a driverless donut at the Hunan Zhuzhou International Circuit before racing around corners at high speed, tires squealing, even at night.

God's Eye is L2+ ADAS—similar to Tesla's FSD—and would therefore still require supervision from a human driver on a public road. As defined by the Society of Automotive Engineers (SAE), the stages of automated driving are measured on a scale of 0 to 5, with Level 0 involving no driving automation through to Level 5, which is full automation. L2+, not an SAE-accepted term, is used by automakers to imply progression through the levels. The 'plus' part refers to hands-off-the-steering-wheel driving with eyes on the road ahead. Powerful computers in a car equipped with L2+ control can follow a mapped route, make lane changes, and moderate speed in traffic. L2+ keeps the burden of liability with the driver but gives at least the sensation of hands-free driving. Unlike in the US, China's ADAS rules state that a driver's hand must always be on the steering wheel.

BYD's moniker may suggest its in-house L2+ ADAS is omniscient, but Shanghai-based automotive commentator Mark Rainford of Inside China Auto has test-driven several Chinese L2+ cars, and he is more impressed with Huawei's Qiankun system than God's Eye. Incidentally, continuing the deification nomenclature, Huawei's own General Obstacle Detection network for autos is also referred to as 'GOD.'
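For quick reference, the SAE scale described above can be summarized in a few lines. The sketch below paraphrases the levels and encodes the liability point the article makes; treating 'L2+' as a fractional 2.5 is purely illustrative, since it is a marketing term rather than an SAE level.

```python
# Paraphrased summary of the SAE J3016 automation levels. "L2+" is a
# marketing term with no SAE definition; 2.5 below is illustrative only.
SAE_LEVELS = {
    0: "No automation: the driver does everything",
    1: "Driver assistance: steering OR speed support",
    2: "Partial automation: steering AND speed, driver supervises constantly",
    3: "Conditional automation: system drives, driver must take over on request",
    4: "High automation: no driver needed within a defined operating domain",
    5: "Full automation: no driver needed anywhere",
}

def liability_stays_with_driver(level: float) -> bool:
    """Below Level 3, responsibility remains entirely with the human driver."""
    return level < 3

print(liability_stays_with_driver(2.5))  # "L2+" systems like God's Eye: True
```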
Rival automakers XPeng, Nio, and Li Auto, aided by their early adoption of Nvidia's Orin X tech, a system-on-a-chip (SoC) used to power autonomous driving and AI applications, are also more impressive than God's Eye, says Rainford. The three are vying with BYD, Huawei, and others to offer the first true Level 3 autonomous driving system.

BYD says it has obtained China's first Level 3 assisted-driving testing license, but, in an internal communication in early February, XPeng's chairman and CEO He Xiaopeng claimed that his company could launch a commercial L3 product as early as the second half of this year. Similarly, Li Auto's L3 could also be ready for public release later this year. The firm's MEGA OTA 7.0 intelligent driving system uses a vision-language model, which supposedly allows the system to understand and interpret both visual and textual information simultaneously: It can spot bus lanes, for instance, a simple task for (most) human drivers, but seemingly tricky for some L2+ systems.

BYD—the pinyin initials of the company's Chinese name, Biyadi, now back-formed into the Western-friendly slogan 'Build Your Dream'—entered the auto business in 2003, starting with batteries for internal-combustion-engine (ICE) vehicles before selling a plug-in hybrid car as early as 2008. The company ceased production and sales of ICE vehicles entirely in 2022.

BYD cars made for sale outside of China—in Europe, for instance—likely won't be fitted with fully functioning God's Eye. In the US, President Biden banned nearly all Chinese connected-car software and hardware from model year 2027, and it's unlikely that President Trump's administration will reverse that decision. BYD and Tesla did not respond to requests for comment on this article.

China, the world's biggest car market, has adopted semi-automated driving more readily than anywhere else, with most domestic automakers providing technology between SAE's Level 2 and Level 3. BYD, which sold over 4 million cars in 2024, is betting on its scale, access to mapping and other data—and 5,000 ADAS-dedicated software engineers—to further its path to dominance. Its shares climbed 21 percent in Hong Kong on expectations that God's Eye would be market-moving, while XPeng's shares closed 9 percent lower and Geely dropped 10.3 percent.

BYD also said that, at an unspecified later date, God's Eye would integrate the R1 large language model (LLM) from DeepSeek, the Chinese startup that recently stunned markets with its open-source generative AI. The integration is expected to control mainly voice-activated features and infotainment systems, but it could also be used for some ADAS tasks. Zeekr, Geely, Voyah, and M-Hero have also recently announced DeepSeek integrations.

In addition to cloud and vehicle AI, every BYD car with the new God's Eye system will sport Xuanji architecture, introduced in January last year, acting as the car's brain and neural network. With a central processor and access to 5G and satellite networks, the system apparently perceives changes in the internal and external environment of the car in real time, feeding them back to Xuanji's 'brain' for supposedly almost instant decisionmaking.

Strictly speaking, God's Eye is the camera, ultrasonic radar, and lidar array alone, split into A, B, and C variants, with A being best. The system's operating software is known as DiPilot, introduced in 2020 on the BYD Han, and it now comes in the good, better, and best tiers of DiPilot 100, 300, and 600.
God's Eye A ships with DiPilot 600 and bristles with high-end cameras and radar, plus front- and side-facing lidar sensors. This best system will be fitted to BYD's luxury Yangwang EVs, including the U9 supercar. 'The video of the U9 [on the track] was theater,' believes Rainford, who hasn't heard of any autonomous driving system that can 'make a car's tires squeal around corners.' Rainford adds that BYD is playing catch-up: '2024 was a breakout year for urban-level autonomous driving systems in China, with the front-runners of Li Auto, XPeng, Nio, and Huawei joined by rivals such as Zeekr, Wey, and even more affordable brands like Leapmotor.'

God's Eye B has cameras, radar, and one lidar unit married to DiPilot 300, and will be fitted to Denza, Song, and BYD's other high-end cars. Both the A and B God's Eye systems offer FSD-style L2+ ADAS driving.

God's Eye C with DiPilot 100 has cameras and radar but no lidar, which could be akin to worshipping a 'God with nearsightedness,' Peter Norton, associate professor of history in the Department of Engineering and Society at the University of Virginia, tells WIRED. 'Like Tesla's FSD, drivers with God's Eye C aren't supposed to use it away from divided highways. But presumably some BYD drivers, like some Tesla drivers, will use it on ordinary roads anyway—with sometimes potentially lethal consequences,' says Norton, author of a book on autonomous driving. He worries that BYD's use of divine terminology could lead to a false sense of security. 'There's no attempt to caution drivers about the system's limitations,' he stresses.

Rainford, too, cautions that God's Eye isn't yet perfect. 'It's way overhyped,' he says, pointing to the glowing press coverage of last week's launch. 'I drove DiPilot 100 last year on the BYD Song L, and it was far from great, requiring lever-activated overtakes. Even on the freeway it was not even close to the [ADAS] market leaders in China.'

Even though it's not yet allowed in China, Tesla's FSD is believed by some to be technically inferior because it relies solely on cameras and AI rather than on lidar and other sensors. 'Tesla has been overselling the effectiveness of its technology for years,' Michael Brooks, executive director of the nonprofit Center for Auto Safety, told NPR last month. 'And a lot of people buy into that. They're kind of wrapped up in this belief that this is an autonomous vehicle, because it's tweeted about that way.'

Musk has been promising the imminent arrival of fully autonomous cars since at least 2016. At a Tesla shareholder meeting last year, Musk claimed the number of miles that FSD can drive without human intervention has increased. 'It's headed towards unsupervised full self-driving very quickly, at an exponential pace,' Musk claimed.

In 2024, the National Highway Traffic Safety Administration published the results of a three-year investigation into Tesla's Autopilot system, FSD's forerunner, finding a 'critical safety gap' between motorists' expectations of the driver-assistance system and its true capabilities. Investigators identified at least 13 fatal crashes in which 'foreseeable driver misuse of the Tesla system played an apparent role.' Tesla said in December 2023 that Autopilot's software controls 'may not be sufficient to prevent driver misuse' and could increase the risk of a crash.

While FSD is indeed a step up from Autopilot, there are seemingly still problems with the system, as evidenced by messages aimed at Musk on X from Tesla drivers.
On February 9, drone software developer Jonathan Challinger reached out to Tesla and Musk after driving his Cybertruck into a pole on a four-lane highway in Reno, Nevada, at up to 45 mph while using FSD at night. Challinger's Cybertruck was totaled, yet he thanked Tesla for 'engineering the best passive safety in the world,' stating that he 'walked away without a scratch.' The Cybertruck 'failed to merge out of a lane that was ending and made no attempt to slow down or turn until it had already hit the curb,' wrote Challinger in a since-deleted post, blaming not the tech but himself: 'Big fail on my part, obviously. Don't make the same mistake I did. Pay attention.'

'The commenters blame a crossing that any teenager with a learner's permit and a 1990 Ford Escort would navigate with ease,' says Norton. 'And [Challinger] extols the over-engineered vehicle systems that coaxed him into discarding his most basic responsibilities as a driver. The combination of wasted wealth, human indifference, misapplied tech, and cognitive incompetence is depressing.'

Sadly, Norton doesn't believe BYD's God's Eye will fare any better. 'Even in the best equipped [L2+] car, the driver is likely to pay less attention and go faster. By the time a pedestrian saw the speeding high-tech car, there'd be no time left to read the plate before Eternity's summons.'
