What Silicon Valley Knew About Tech-Bro Paternalism
Last fall, the consumer-electronics company LG announced new branding for the artificial intelligence powering many of its home appliances. Out: the 'smart home.' In: 'Affectionate Intelligence.' This 'empathetic and caring' AI, as LG describes it, is here to serve. It might switch off your appliances and dim your lights at bedtime. It might, like its sisters Alexa and Siri, select a soundtrack to soothe you to sleep. The technology awaits your summons and then, unquestioningly, answers. It will make subservience environmental. It will surround you with care—and ask for nothing in return.
Affectionate AI, trading the paternalism of typical techspeak for a softer—or, to put it bluntly, more feminine—framing, is pretty transparent as a branding play: It is an act of anxiety management. It aims to assure the consumer that 'the coming Humanity-Plus-AI future,' as a recent report from Elon University called it, will be one not of threat but of promise. Yes, AI overall has the potential to become, as Elon Musk said in 2023, the 'most disruptive force in history.' It could be, as he put it in 2014, 'potentially more dangerous than nukes.' It is a force like 'an immortal dictator from which we can never escape,' he suggested in 2018. And yet, AI is coming. It is inevitable. We have, as consumers with human-level intelligence, very little choice in the matter. The people building the future are not asking for our permission; they are expecting our gratitude.
It takes a very specific strain of paternalism to believe that you can create something that both eclipses humanity and serves it at the same time. The belief is ripe for satire. That might be why I've lately been thinking back to a comment posted last year to a subreddit about HBO's satire Silicon Valley: 'It's a shame this show didn't last into the AI craze phase.' It really is! Silicon Valley premiered in 2014, a year before Musk, Sam Altman, and a group of fellow engineers founded OpenAI to ensure that, as their mission statement put it, 'artificial general intelligence benefits all of humanity.' The show ended its run in 2019, before AI's wide adoption. It would have had a field day with some of the events that have transpired since, among them Musk's rebrand as a T-shirt-clad oligarch and Altman's bot-based mimicry of the 2013 movie Her.
Silicon Valley reads, at times, more as parody than as satire: Sharp as it is in its specific observations about tech culture, the show sometimes seems like a series of jokes in search of a punch line. It shines, though, when it casts its gaze on the gendered dynamics of tech—when it considers the consequential absurdities of tech's arrogance.
The show doesn't spend much time directly tackling artificial intelligence as a moral problem—not until its final few episodes. But it still offers a shrewd parody of AI, as a consumer technology and as a future being foisted on us. That is because Silicon Valley is highly attuned to the way power is exchanged and distributed in the industry, and to tech bros' hubristic inclination to cast the public in a stereotypically feminine role.
Corporations act; the rest of humanity reacts. They decide; we comply. They are the creators, driven by competition, conquest, and a conviction that the future is theirs to shape. We are the ones who will live with their decisions. Silicon Valley does not explicitly predict a world of AI made 'affectionate.' In a certain way, though, it does. It studies the men who make AI. It parodies their paternalism. The feminist philosopher Kate Manne argues that masculinity, at its extreme, is a self-ratifying form of entitlement. Silicon Valley knows that there's no greater claim to entitlement than an attempt to build the future.
[Read: The rise of techno-authoritarianism]
The series focuses on the evolving fortunes of the fictional start-up Pied Piper, a company with an aggressively boring product—a data-compression algorithm—and an aggressively ambitious mission. The algorithm could lead, eventually, to the realization of a long-standing dream: a decentralized internet, its data stored not on corporately owned servers but on the individual devices of the network. Richard Hendricks, Pied Piper's founder and the primary author of that algorithm, is a coder by profession but an idealist by nature. Over the seasons, he battles with billionaires who are driven by ego, pettiness, and greed. But he is not Manichean; he does not hew to Manne's sense of masculine entitlement. He merely wants to build his tech.
He is surrounded, however, by characters who do fit Manne's definition, to different degrees. There's Erlich Bachman, the funder who sold an app he built for a modest profit and who regularly confuses luck with merit; Bertram Gilfoyle, the coder who has turned irony poisoning into a personality; Dinesh Chugtai, the coder who craves women's company as much as he fears it; Jared Dunn, the business manager whose competence is belied by his meekness. Even as the show pokes fun at the guys' personal failings, it elevates their efforts. Silicon Valley, throughout, is a David and Goliath story. Pied Piper is a tiny company trying to hold its own against the Googles of the world.
The show, co-created by Mike Judge, can be giddily adolescent about its own bro-ness (many of its jokes refer to penises). But it is also, often, insightful about the absurdities that can arise when men are treated like gods. The show mocks the tech executive who brandishes his Buddhist prayer beads and engages in animal cruelty. It skewers Valley denizens' conspicuous consumption. (Several B plots revolve around the introduction of the early Tesla roadsters.) Most of all, the show pokes fun at the myopia displayed by men who are, in the Valley and beyond, revered as 'visionaries.' All they can see and care about are their own interests. In that sense, the titans of tech are unabashedly masculine. They are callous. They are impetuous. They are reckless.
[Read: Elon Musk can't stop talking about penises]
Their failings cause chaos, and Silicon Valley spends its seasons writing whiplash into its story line. The show swings, with melodramatic ease, between success and failure. Richard and his growing team—fellow engineers, investors, business managers—seem to move forward, getting a big new round of funding or good publicity. Then, as if on cue, they are brought low again: Defeats are snatched from the jaws of victory. The whiplash can make the show hard to watch. You get invested in the fate of this scrappy start-up. You hope. You feel a bit of preemptive catharsis until the next disappointment comes.
That, in itself, is resonant. AI can put its users through similar swings. It is a product to be marketed and a future to be accepted. It is something to be controlled (OpenAI's Altman appeared before Congress in 2023 asking for government regulation) and something that must not be contained (OpenAI this year, along with other tech giants, asked the federal government to prevent state-level regulation). Altman's public comments paint a picture of AI that evokes both Skynet ('I think if this technology goes wrong, it can go quite wrong,' he said at the 2023 congressional hearing) and—as he said in a 2023 interview—a 'magic intelligence in the sky.'
[Read: OpenAI goes MAGA]
The dissonance is part of the broader experience of tech—a field that, for the consumer, can feel less affectionate than addling. People adapted to Twitter, coming to rely on it for news and conversation; then Musk bought it, turned it into X, tweaked the algorithms, and, in the process, ruined the platform. People who have invested in TikTok operate under the assumption that, as has happened before, it could go dark with the push of a button. To depend on technology, to trust it at all, in many instances means to be betrayed by it. And AI makes that vulnerability ever more consequential. Humans are at risk, always, of their makers' swaggering entitlements. Siri and Alexa and their fellow feminized bots are flourishes of marketing. They perform meekness and cheer—and they are roughly as capable of becoming an 'immortal dictator' as their male-coded counterparts.
By the end of Silicon Valley's run, Pied Piper seems poised for an epic victory. The company has a deal with AT&T to run its algorithm over the larger company's massive network. It is about to launch on millions of people's phones. It is about to become a household name. And then: the twist. Pied Piper's algorithm uses AI to maximize its own efficiency; through a fluke, Richard realizes that the algorithm works too well. It will keep maximizing. It will make its own definitions of efficiency. Pied Piper has created a decentralized network in the name of 'freedom'; it has created a machine, you might say, meant to benefit all of humanity. Now that network might mean humanity's destruction. It could come for the power grid. It could come for the apps installed in self-driving cars. It could come for bank accounts and refrigerators and satellites. It could come for the nuclear codes.
Suddenly, we're watching not just comedy but also an action-adventure drama. The guys will have to make hard choices on behalf of everyone else. This is an accidental kind of paternalism, a power they neither asked for nor, really, deserve. And the show asks whether they will be wise enough to abandon their ambitions—to sacrifice the trappings of tech-bro success—in favor of more stereotypically feminine goals: protection, self-sacrifice, compassion, care.
I won't spoil things by saying how the show answers the question. I'll simply say that, if you haven't seen the finale, in which all of this plays out, it's worth watching. Silicon Valley presents a version of the conundrum that real-world coders are navigating as they build machines that have the potential to double as monsters. The stakes are melodramatic. That is the point. Concerns about humanity—even the word humanity—have become so common in discussions of AI that they risk becoming clichés. But humanity is at stake, the show suggests, when human intelligence becomes an option rather than a given. At some point, the twists will have to end. In 'the coming Humanity-Plus-AI future,' we will have to find new ways of considering what it means to be human—and what we want to preserve and defend. Coders will have to come to grips with what they've created. Is AI a tool or a weapon? Is it a choice, or is it inevitable? Do we want our machines to be affectionate? Or can we settle for ones that leave the work of trying to be good humans to the humans?
Article originally published at The Atlantic
