What Silicon Valley Knew About Tech-Bro Paternalism

The Atlantic · April 16, 2025
Last fall, the consumer-electronics company LG announced new branding for the artificial intelligence powering many of its home appliances. Out: the 'smart home.' In: 'Affectionate Intelligence.' This 'empathetic and caring' AI, as LG describes it, is here to serve. It might switch off your appliances and dim your lights at bedtime. It might, like its sisters Alexa and Siri, select a soundtrack to soothe you to sleep. The technology awaits your summons and then, unquestioningly, answers. It will make subservience environmental. It will surround you with care—and ask for nothing in return.
Affectionate AI, trading the paternalism of typical techspeak for a softer—or, to put it bluntly, more feminine—framing, is pretty transparent as a branding play: It is an act of anxiety management. It aims to assure the consumer that 'the coming Humanity-Plus-AI future,' as a recent report from Elon University called it, will be one not of threat but of promise. Yes, AI overall has the potential to become, as Elon Musk said in 2023, the 'most disruptive force in history.' It could be, as he put it in 2014, 'potentially more dangerous than nukes.' It is a force like 'an immortal dictator from which we can never escape,' he suggested in 2018. And yet, AI is coming. It is inevitable. We have, as consumers with human-level intelligence, very little choice in the matter. The people building the future are not asking for our permission; they are expecting our gratitude.
It takes a very specific strain of paternalism to believe that you can create something that both eclipses humanity and serves it at the same time. The belief is ripe for satire. That might be why I've lately been thinking back to a comment posted last year to a subreddit about HBO's satire Silicon Valley: 'It's a shame this show didn't last into the AI craze phase.' It really is! Silicon Valley premiered in 2014, a year before Musk, Sam Altman, and a group of fellow engineers founded OpenAI to ensure that, as their mission statement put it, 'artificial general intelligence benefits all of humanity.' The show ended its run in 2019, before AI's wide adoption. It would have had a field day with some of the events that have transpired since, among them Musk's rebrand as a T-shirt-clad oligarch and Altman's bot-based mimicry of the 2013 movie Her.
Silicon Valley reads, at times, more as parody than as satire: Sharp as it is in its specific observations about tech culture, the show sometimes seems like a series of jokes in search of a punch line. It shines, though, when it casts its gaze on the gendered dynamics of tech—when it considers the consequential absurdities of tech's arrogance.
The show doesn't spend much time directly tackling artificial intelligence as a moral problem—not until its final few episodes. But it still offers a shrewd parody of AI, as a consumer technology and as a future being foisted on us. That is because Silicon Valley is highly attuned to the way power is exchanged and distributed in the industry, and to tech bros' hubristic inclination to cast the public in a stereotypically feminine role.
Corporations act; the rest of humanity reacts. They decide; we comply. They are the creators, driven by competition, conquest, and a conviction that the future is theirs to shape. We are the ones who will live with their decisions. Silicon Valley does not explicitly predict a world of AI made 'affectionate.' In a certain way, though, it does. It studies the men who make AI. It parodies their paternalism. The feminist philosopher Kate Manne argues that masculinity, at its extreme, is a self-ratifying form of entitlement. Silicon Valley knows that there's no greater claim to entitlement than an attempt to build the future.
The series focuses on the evolving fortunes of the fictional start-up Pied Piper, a company with an aggressively boring product—a data-compression algorithm—and an aggressively ambitious mission. The algorithm could lead, eventually, to the realization of a long-standing dream: a decentralized internet, its data stored not on corporately owned servers but on the individual devices of the network. Richard Hendricks, Pied Piper's founder and the primary author of that algorithm, is a coder by profession but an idealist by nature. Over the seasons, he battles with billionaires who are driven by ego, pettiness, and greed. But he is not Manichean; he does not hew to Manne's sense of masculine entitlement. He merely wants to build his tech.
He is surrounded, however, by characters who do fit Manne's definition, to different degrees. There's Erlich Bachman, the funder who sold an app he built for a modest profit and who regularly confuses luck with merit; Bertram Gilfoyle, the coder who has turned irony poisoning into a personality; Dinesh Chugtai, the coder who craves women's company as much as he fears it; Jared Dunn, the business manager whose competence is belied by his meekness. Even as the show pokes fun at the guys' personal failings, it elevates their efforts. Silicon Valley, throughout, is a David and Goliath story. Pied Piper is a tiny company trying to hold its own against the Googles of the world.
The show, co-created by Mike Judge, can be giddily adolescent about its own bro-ness (many of its jokes refer to penises). But it is also, often, insightful about the absurdities that can arise when men are treated like gods. The show mocks the tech executive who brandishes his Buddhist prayer beads and engages in animal cruelty. It skewers Valley denizens' conspicuous consumption. (Several B plots revolve around the introduction of the early Tesla Roadsters.) Most of all, the show pokes fun at the myopia displayed by men who are, in the Valley and beyond, revered as 'visionaries.' All they can see and care about are their own interests. In that sense, the titans of tech are unabashedly masculine. They are callous. They are impetuous. They are reckless.
Their failings cause chaos, and Silicon Valley spends its seasons writing whiplash into its story line. The show swings, with melodramatic ease, between success and failure. Richard and his growing team—fellow engineers, investors, business managers—seem to move forward, getting a big new round of funding or good publicity. Then, as if on cue, they are brought low again: Defeats are snatched from the jaws of victory. The whiplash can make the show hard to watch. You get invested in the fate of this scrappy start-up. You hope. You feel a bit of preemptive catharsis until the next disappointment comes.
That, in itself, is resonant. AI can hurtle its users along similar swings. It is a product to be marketed and a future to be accepted. It is something to be controlled (OpenAI's Altman appeared before Congress in 2023 asking for government regulation) and something that must not be contained (OpenAI this year, along with other tech giants, asked the federal government to prevent state-level regulation). Altman's public comments paint a picture of AI that evokes both Skynet ('I think if this technology goes wrong, it can go quite wrong,' he said at the 2023 congressional hearing) and—as he said in a 2023 interview—a 'magic intelligence in the sky.'
The dissonance is part of the broader experience of tech—a field that, for the consumer, can feel less affectionate than addling. People adapted to Twitter, coming to rely on it for news and conversation; then Musk bought it, turned it into X, tweaked the algorithms, and, in the process, ruined the platform. People who have made investments in TikTok operate under the assumption that, as has happened before, it could go dark with the push of a button. To depend on technology, to trust it at all, in many instances means to be betrayed by it. And AI makes that vulnerability ever more consequential. Humans are at risk, always, of the machines' swaggering entitlements. Siri and Alexa and their fellow feminized bots are flourishes of marketing. They perform meekness and cheer—and they are roughly as capable of becoming an 'immortal dictator' as their male-coded counterparts.
By the end of Silicon Valley's run, Pied Piper seems poised for an epic victory. The company has a deal with AT&T to run its algorithm over the larger company's massive network. It is about to launch on millions of people's phones. It is about to become a household name. And then: the twist. Pied Piper's algorithm uses AI to maximize its own efficiency; through a fluke, Richard realizes that the algorithm works too well. It will keep maximizing. It will make its own definitions of efficiency. Pied Piper has created a decentralized network in the name of 'freedom'; it has created a machine, you might say, meant to benefit all of humanity. Now that network might mean humanity's destruction. It could come for the power grid. It could come for the apps installed in self-driving cars. It could come for bank accounts and refrigerators and satellites. It could come for the nuclear codes.
Suddenly, we're watching not just comedy but also an action-adventure drama. The guys will have to make hard choices on behalf of everyone else. This is an accidental kind of paternalism, a power they neither asked for nor, really, deserve. And the show asks whether they will be wise enough to abandon their ambitions—to sacrifice the trappings of tech-bro success—in favor of more stereotypically feminine goals: protection, self-sacrifice, compassion, care.
I won't spoil things by saying how the show answers the question. I'll simply say that, if you haven't seen the finale, in which all of this plays out, it's worth watching. Silicon Valley presents a version of the conundrum that real-world coders are navigating as they build machines that have the potential to double as monsters. The stakes are melodramatic. That is the point. Concerns about humanity—even the word humanity—have become so common in discussions of AI that they risk becoming clichés. But humanity is at stake, the show suggests, when human intelligence becomes an option rather than a given. At some point, the twists will have to end. In 'the coming Humanity-Plus-AI future,' we will have to find new ways of considering what it means to be human—and what we want to preserve and defend. Coders will have to come to grips with what they've created. Is AI a tool or a weapon? Is it a choice, or is it inevitable? Do we want our machines to be affectionate? Or can we settle for ones that leave the work of trying to be good humans to the humans?