
Ads Without Authors: How Automation Dismantles Holding Companies—and Culture
Platforms like Meta, X, Google, and Amazon aren't just automating ads—they're bypassing the agencies that built the industry. In the race to own the feed with automation, friction is fatal.
Advertising has long been a canvas for human creativity, reflecting and shaping societal values. From iconic campaigns that captured the spirit of an era to ads that challenged social norms, human-crafted advertising serves as a cultural touchstone. Removing humans from this process doesn't just streamline production—it risks erasing the cultural narratives embedded within these messages.
Future generations won't study Midjourney outputs. They'll study the ads we made—because ads, at their best, are the sharpest shorthand for what a society values, fears, and aspires to become. As we delegate creative control to AI, we must ask: what stories are we losing, and what does that mean for our collective cultural memory?
Meta's latest ambition to fully automate ad creation by the end of 2026 isn't just a business strategy—it's a warning. The company's vision is deceptively simple: advertisers provide a product image, a budget, and a few goals, and Meta's AI handles the rest—copywriting, visual generation, media placement, and real-time optimization. The entire campaign lifecycle becomes push-button.
Meta is reportedly considering integrations with tools like Midjourney and DALL·E to enhance asset creation. The ambition is to make advertising seamless, especially for small and midsize businesses that lack in-house creative teams. However, what begins as simplification quickly becomes centralization, where one platform governs not only distribution but also expression.
What's at stake isn't just jobs—it's the future of originality, the pipeline of creative talent, and the power to shape culture. If Meta's model becomes the norm, we risk a creative monoculture, where differentiation dies and everything starts to look alike because the same machine created it.
Proponents will argue that this democratizes advertising, leveling the playing field for small businesses. And they're not wrong. But democratization without differentiation still leads to mediocrity. Tools may become accessible. But brands become interchangeable.
And Meta is just the start. Spotify is auto-generating background music. Amazon is letting AI write product listings. Google is publishing AI-generated search summaries. Even journalism is being templated by prompts. We're not just automating workflows—we're displacing the origin of voice, taste, and intent.
What Meta is proposing is not just an automation layer. It's a creative feedback loop entirely governed by the platform itself. When Meta automates both the creation and the optimization of ads, it doesn't just accelerate the campaign cycle—it collapses the loop. The ad that performs best becomes the blueprint for the next, narrowing the window for originality until all that remains is whatever the algorithm can predict. This is not creativity—it's curation at scale.
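The collapsing loop described above can be made concrete with a toy simulation. This is a hedged sketch, not a model of Meta's actual system: ad "styles" are reduced to single numbers, "performance" to an arbitrary scoring rule, and each optimization round regenerates every ad as a slight variation of the current winner. The point is only to show how winner-takes-all optimization drains diversity from the pool.

```python
import random
import statistics

def optimize_round(ads, mutation=0.05):
    """One cycle of the feedback loop: the best-scoring ad becomes
    the blueprint, and every ad in the next round is a small
    mutation of that winner."""
    # Hypothetical scoring rule: styles closest to 0.7 "perform" best.
    winner = max(ads, key=lambda style: -abs(style - 0.7))
    return [winner + random.gauss(0, mutation) for _ in ads]

random.seed(42)
ads = [random.uniform(0, 1) for _ in range(100)]  # a diverse starting pool
before = statistics.pstdev(ads)

for _ in range(10):
    ads = optimize_round(ads)

after = statistics.pstdev(ads)
print(f"style diversity (std dev): {before:.3f} -> {after:.3f}")
```

Under these assumptions, diversity collapses within a few rounds: whatever variety existed at the start, the pool converges on mutations of a single predicted winner, which is the essay's point about curation at scale.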
Let's get something straight. These systems are not creators. They are morph engines—remixers of data, not originators of thought. What they do appears to be creative, but it isn't. It's a simulation. Highly effective, often impressive, but fundamentally derivative. They are powerful tools. But we should never mistake the tool for the purpose.
Meta's automation play reveals the larger issue: we're not just automating tasks, we're automating the conditions that make originality possible. When everything is a remix, who's responsible for the remix's meaning? And when every brand runs through the same pipeline, what's left of the difference?
Jean Baudrillard, a French sociologist and philosopher known for his work on simulation and hyperreality, once said, 'The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.' That's precisely the point—AI can fake style, structure, even tone. But it can't fake a purpose. Not yet. And probably not ever.
AI doesn't create—it interpolates. And interpolation is not insight. It's the ghost of thinking without the burden of understanding. It's mimicry in drag. Personalization isn't creativity. It's precision. It tells you who to speak to, but not what to say—or why it matters. The performance of intent, minus the spark that gives it life.
Across industries, the same logic is taking hold:
Everywhere, the promise is the same: faster, cheaper, better.
And everywhere, the result is the same: homogenization.
The deeper risk isn't job displacement—it's meaning displacement. When creativity is automated, it loses its expressive quality and becomes mere efficiency. And once it's efficient, it's disposable.
Ursula Franklin, a physicist and philosopher best known for her work on the social impact of technology, captured this tension in her definition of prescriptive technology: 'designs for compliance.' She warned that when work is reduced to repeatable steps, we lose not only control but also the creative possibility of deviation. Franklin contrasted this with holistic technologies, which preserve human intuition and craft. Our future depends on protecting the latter.
Friction is not failure. Friction is the forge.
The moments we remember in business—or life—aren't the ones that ran on rails. They're the ones that nearly broke us. The campaign that almost didn't ship. The pitch that bombed, then landed. The rewrite that found the truth.
In our race toward frictionless everything, we're stripping away the very texture that makes things memorable. And Meta's move to automate the entire ad pipeline is just the latest attempt to turn marketing into math—flawless, efficient, lifeless.
Friction isn't a flaw in the system. It's the source of learning, innovation, neuroplasticity, and art. Whether in synapses or symphonies, friction is how we stretch. It's how the human mind adapts and creates meaning. Without it, we may get results, but we won't get resonance.
In a world saturated with auto-generated content, what becomes scarce isn't information—it's resonance.
And resonance can't be manufactured. It has to be felt.
The brands that will matter tomorrow won't be the ones that optimized the most impressions. They'll be the ones that find a way to bypass the algorithm and create something tangible. Something experiential. Something no AI could hallucinate into existence.
This is where experiential marketing becomes more than a tactic—it becomes resistance. It's not a return to analog. It's a return to meaning.
Kurt Vonnegut wrote, 'We are dancing animals. How beautiful it is to get up and go out and do something.' The algorithm may be able to mimic the rhythm, but it will never learn to dance. It can choreograph—but not choose to move.
What Meta is doing—what nearly every vertical is racing toward—is based on a seductive lie: that easier is always better. But ease is not a virtue. It's not the metric by which we measure a life, a brand, or a society.
We weren't built for ease. We were built for meaning. And meaning requires effort. It requires tension. It requires authorship.
To automate that away in the name of scale is not progress. It's surrender.
If we must automate, let's automate to amplify, not erase. Tools should provoke better questions, not just faster answers. That means keeping humans in the loop. It means labeling AI-generated creative work. It means building guardrails that force friction back into the process, rather than removing it entirely.
Because when ads are created by machines and optimized by machines, who is accountable for their influence? If you're not the author of what shapes your choices, are they your choices? We need more than regulation—we need a red line. Authorship, consent, and sovereignty should not be optional when automation touches identity.
This isn't just a creative threat—it's an existential one for holding companies and agencies.
The largest tech platforms—Meta, Amazon, Google, X, Microsoft—make billions from advertising. But as their AI systems draw ever closer to brands and consumers, the need for intermediaries diminishes. Why would Meta need an agency when it can generate, target, and optimize ads directly for the brand? Why would a brand rely on a holding company when it can plug into the model that sits at the heart of the user's digital life?
As AI-native platforms move closer to both brands and consumers, the traditional agency model finds itself at a crossroads. Holding companies were designed for fragmentation—fragmented channels, insights, creative, and data. But when the platforms now own the whole stack, and the AI becomes the primary interface to the consumer, what's left for intermediaries to manage?
Tom Sivo, VP of Emerging Technology at Interpublic, puts it this way:
"The holding company was built for a time when storytelling, media buying, and consumer insight were fragmented. Today, the platform is the channel, the data, and the distribution.
And the AI? It's the last mile. It sits in the user's pocket, anticipates their intent, and steers the interaction before a brief is ever written."
In that world, holding companies don't evolve—they vanish.
Because the agency becomes a friction in a system that worships fluidity, and whoever is closest to the consumer controls the conversation.
Today, that's no longer a strategist, a planner, or a brand team.
It's the model.
The danger isn't just that AI will replace our jobs. The danger is that we will replace ourselves—inch by inch, prompt by prompt—until we no longer remember what it felt like to make something real.
I'm not anti-AI. But I am pro-human.
And being human isn't just a biological fact. It's a creative act. One that we perform daily through our decisions, expressions, and struggles. If we let go of that—if we let AI simulate not just our output but our intent—we don't become more efficient. We become spectral. Present, but not alive.
So yes, automate the repetitive. Automate the dull. But draw a line.
Because if we automate the struggle, we lose the story.
And if we lose the story, we lose the point.
If you're building with AI, ask yourself this: Am I amplifying human brilliance—or replacing it with synthetic volume? Because once originality is gone, no algorithm can recreate it.
The future isn't human or AI. It's human with AI—if we build it that way. But only if we remember that tools are only tools. Meaning is still up to us.
Because the point isn't productivity. It's meaning. To create is to leave a trace. If AI erases the struggle, what's left behind isn't art. It's output.
And if, through automation, every ad is machine-generated and every engagement machine-measured, we're not choosing; we're being programmed.
