What Silicon Valley Knew About Tech-Bro Paternalism
Affectionate AI, trading the paternalism of typical techspeak for a softer—or, to put it bluntly, more feminine—framing, is pretty transparent as a branding play: It is an act of anxiety management. It aims to assure the consumer that 'the coming Humanity-Plus-AI future,' as a recent report from Elon University called it, will be one not of threat but of promise. Yes, AI overall has the potential to become, as Elon Musk said in 2023, the 'most disruptive force in history.' It could be, as he put it in 2014, 'potentially more dangerous than nukes.' It is a force like 'an immortal dictator from which we can never escape,' he suggested in 2018. And yet, AI is coming. It is inevitable. We have, as consumers with human-level intelligence, very little choice in the matter. The people building the future are not asking for our permission; they are expecting our gratitude.
It takes a very specific strain of paternalism to believe that you can create something that both eclipses humanity and serves it at the same time. The belief is ripe for satire. That might be why I've lately been thinking back to a comment posted last year to a subreddit about HBO's satire Silicon Valley: 'It's a shame this show didn't last into the AI craze phase.' It really is! Silicon Valley premiered in 2014, a year before Musk, Sam Altman, and a group of fellow engineers founded OpenAI to ensure that, as their mission statement put it, 'artificial general intelligence benefits all of humanity.' The show ended its run in 2019, before AI's wide adoption. It would have had a field day with some of the events that have transpired since, among them Musk's rebrand as a T-shirt-clad oligarch and Altman's bot-based mimicry of the 2013 movie Her.
Silicon Valley reads, at times, more as parody than as satire: Sharp as it is in its specific observations about tech culture, the show sometimes seems like a series of jokes in search of a punch line. It shines, though, when it casts its gaze on the gendered dynamics of tech—when it considers the consequential absurdities of tech's arrogance.
The show doesn't spend much time directly tackling artificial intelligence as a moral problem—not until its final few episodes. But it still offers a shrewd parody of AI, as a consumer technology and as a future being foisted on us. That is because Silicon Valley is highly attuned to the way power is exchanged and distributed in the industry, and to tech bros' hubristic inclination to cast the public in a stereotypically feminine role.
Corporations act; the rest of humanity reacts. They decide; we comply. They are the creators, driven by competition, conquest, and a conviction that the future is theirs to shape. We are the ones who will live with their decisions. Silicon Valley does not explicitly predict a world of AI made 'affectionate.' In a certain way, though, it does. It studies the men who make AI. It parodies their paternalism. The feminist philosopher Kate Manne argues that masculinity, at its extreme, is a self-ratifying form of entitlement. Silicon Valley knows that there's no greater claim to entitlement than an attempt to build the future.
[Read: The rise of techno-authoritarianism]
The series focuses on the evolving fortunes of the fictional start-up Pied Piper, a company with an aggressively boring product—a data-compression algorithm—and an aggressively ambitious mission. The algorithm could lead, eventually, to the realization of a long-standing dream: a decentralized internet, its data stored not on corporately owned servers but on the individual devices of the network. Richard Hendricks, Pied Piper's founder and the primary author of that algorithm, is a coder by profession but an idealist by nature. Over the seasons, he battles with billionaires who are driven by ego, pettiness, and greed. But he is not Manichean; he does not hew to Manne's sense of masculine entitlement. He merely wants to build his tech.
He is surrounded, however, by characters who do fit Manne's definition, to different degrees. There's Erlich Bachman, the funder who sold an app he built for a modest profit and who regularly confuses luck with merit; Bertram Gilfoyle, the coder who has turned irony poisoning into a personality; Dinesh Chugtai, the coder who craves women's company as much as he fears it; Jared Dunn, the business manager whose competence is belied by his meekness. Even as the show pokes fun at the guys' personal failings, it elevates their efforts. Silicon Valley, throughout, is a David and Goliath story. Pied Piper is a tiny company trying to hold its own against the Googles of the world.
The show, co-created by Mike Judge, can be giddily adolescent about its own bro-ness (many of its jokes refer to penises). But it is also, often, insightful about the absurdities that can arise when men are treated like gods. The show mocks the tech executive who brandishes his Buddhist prayer beads and engages in animal cruelty. It skewers Valley denizens' conspicuous consumption. (Several B plots revolve around the introduction of the early Tesla Roadsters.) Most of all, the show pokes fun at the myopia displayed by men who are, in the Valley and beyond, revered as 'visionaries.' All they can see and care about are their own interests. In that sense, the titans of tech are unabashedly masculine. They are callous. They are impetuous. They are reckless.
[Read: Elon Musk can't stop talking about penises]
Their failings cause chaos, and Silicon Valley spends its seasons writing whiplash into its story line. The show swings, with melodramatic ease, between success and failure. Richard and his growing team—fellow engineers, investors, business managers—seem to move forward, getting a big new round of funding or good publicity. Then, as if on cue, they are brought low again: Defeats are snatched from the jaws of victory. The whiplash can make the show hard to watch. You get invested in the fate of this scrappy start-up. You hope. You feel a bit of preemptive catharsis until the next disappointment comes.
That, in itself, is resonant. AI can hurtle its users along similar swings. It is a product to be marketed and a future to be accepted. It is something to be controlled (OpenAI's Altman appeared before Congress in 2023 asking for government regulation) and something that must not be contained (OpenAI this year, along with other tech giants, asked the federal government to prevent state-level regulation). Altman's public comments paint a picture of AI that evokes both Skynet ('I think if this technology goes wrong, it can go quite wrong,' he said at the 2023 congressional hearing) and—as he said in a 2023 interview—a 'magic intelligence in the sky.'
[Read: OpenAI goes MAGA]
The dissonance is part of the broader experience of tech—a field that, for the consumer, can feel less affectionate than addling. People adapted to Twitter, coming to rely on it for news and conversation; then Musk bought it, turned it into X, tweaked the algorithms, and, in the process, ruined the platform. People who have made investments in TikTok operate under the assumption that, as has happened before, it could go dark with the push of a button. To depend on technology, to trust it at all, in many instances means to be betrayed by it. And AI makes that vulnerability ever more consequential. Humans are at risk, always, of the machines' swaggering entitlements. Siri and Alexa and their fellow feminized bots are flourishes of marketing. They perform meekness and cheer—and they are roughly as capable of becoming an 'immortal dictator' as their male-coded counterparts.
By the end of Silicon Valley's run, Pied Piper seems poised for an epic victory. The company has a deal with AT&T to run its algorithm over the larger company's massive network. It is about to launch on millions of people's phones. It is about to become a household name. And then: the twist. Pied Piper's algorithm uses AI to maximize its own efficiency; through a fluke, Richard realizes that the algorithm works too well. It will keep maximizing. It will make its own definitions of efficiency. Pied Piper has created a decentralized network in the name of 'freedom'; it has created a machine, you might say, meant to benefit all of humanity. Now that network might mean humanity's destruction. It could come for the power grid. It could come for the apps installed in self-driving cars. It could come for bank accounts and refrigerators and satellites. It could come for the nuclear codes.
Suddenly, we're watching not just comedy but also an action-adventure drama. The guys will have to make hard choices on behalf of everyone else. This is an accidental kind of paternalism, a power they neither asked for nor, really, deserve. And the show asks whether they will be wise enough to abandon their ambitions—to sacrifice the trappings of tech-bro success—in favor of more stereotypically feminine goals: protection, self-sacrifice, compassion, care.
I won't spoil things by saying how the show answers the question. I'll simply say that, if you haven't seen the finale, in which all of this plays out, it's worth watching. Silicon Valley presents a version of the conundrum that real-world coders are navigating as they build machines that have the potential to double as monsters. The stakes are melodramatic. That is the point. Concerns about humanity—even the word humanity—have become so common in discussions of AI that they risk becoming clichés. But humanity is at stake, the show suggests, when human intelligence becomes an option rather than a given. At some point, the twists will have to end. In 'the coming Humanity-Plus-AI future,' we will have to find new ways of considering what it means to be human—and what we want to preserve and defend. Coders will have to come to grips with what they've created. Is AI a tool or a weapon? Is it a choice, or is it inevitable? Do we want our machines to be affectionate? Or can we settle for ones that leave the work of trying to be good humans to the humans?
Article originally published at The Atlantic
