Ads ruined social media. Now they're coming to AI chatbots.


Time of India | 2 days ago

Chatbots might hallucinate and sprinkle too much flattery on their users — 'That's a fascinating question!' one recently told me — but at least the subscription model that underpins them is healthy for our wellbeing. Many Americans pay about $20 a month to use the premium versions of OpenAI's ChatGPT, Google's Gemini Pro or Anthropic's Claude, and the result is that the products are designed to provide maximum utility.
Don't expect this status quo to last. Subscription revenue has a limit, and Anthropic's new $200-a-month 'Max' tier suggests even the most popular models are under pressure to find new revenue streams.
Unfortunately, the most obvious one is advertising — the web's most successful business model. AI builders are already exploring ways to plug more ads into their products, and while that's good for their bottom lines, it also means we're about to see a new chapter in the attention economy that fueled the internet.
If social media's descent into engagement-bait is any guide, the consequences will be profound.
One cost is addiction. Young office workers are becoming dependent on AI tools to help them write emails and digest long documents, according to a recent study, and OpenAI says a cohort of 'problematic' ChatGPT users are hooked on the tool. Putting ads into ChatGPT, which now has more than 500 million active users, won't spur the company to help those people reduce their use of the product. Quite the opposite.
Advertising was the reason companies like Mark Zuckerberg's Meta Platforms Inc. designed algorithms to promote engagement, keeping users scrolling so they saw more ads and drove more revenue. It's the reason behind the so-called 'enshittification' of the web, a place now filled with clickbait and social media posts that spark outrage. Baking such incentives into AI will almost certainly lead its designers to find ways to trigger more dopamine spikes, perhaps by complimenting users even more, asking personal questions to get them talking for longer or even cultivating emotional attachments.
Millions of people in the Western world already view chatbots in apps like Character.ai, Chai, Talkie, Replika and Botify as friends or romantic partners. Imagine how persuasive such software could be when its users are beguiled. Imagine a person telling their AI they're feeling depressed, and the system recommending some affordable holiday destinations or medication to address the problem.
Is that how ads would work in chatbots? The answer is still subject to much experimentation, and companies are indeed experimenting. Google's ad network, for instance, recently started putting advertisements in third-party chatbots. Chai, a romance and friendship chatbot whose users spent an average of 72 minutes a day on the app in September 2024, serves pop-up ads. And the AI answer engine Perplexity displays sponsored questions. After an answer to a question about job hunting, for instance, it might include a list of suggested follow-ups including, at the top, 'How can I use Indeed to enhance my job search?'
Perplexity's Chief Executive Officer Aravind Srinivas told a podcast in April that the company was looking to go further by building a browser to 'get data even outside the app' to track 'which hotels are you going [to]; which restaurants are you going to,' to enable what he called 'hyper-personalized' ads.
For some apps, that might mean weaving ads directly into conversations, using the intimate details shared by users to predict what they want, potentially even manipulating them into wanting something, and then selling those intentions to the highest bidder. Researchers at Cambridge University referred to this as the forthcoming 'intention economy' in a recent paper, with chatbots steering conversations toward a brand or even a direct sale. As evidence, they pointed to a 2023 blog post from OpenAI calling for 'data that expresses human intention' to help train its models, a similar effort from Meta, and Apple's 2024 developer framework that helps apps work with Siri to 'predict actions someone might take in the future.'
As for OpenAI's Sam Altman, nothing says 'we're building an ad business' like hiring the person who built delivery app Instacart into an advertising powerhouse. Altman recently poached Instacart CEO Fidji Simo to help OpenAI 'scale as we enter a next phase of growth.' In Silicon Valley parlance, to 'scale' often means to quickly expand your user base by offering a service for free, with ads.
Tech companies will inevitably claim that advertising is a necessary part of democratizing AI. But we've seen how 'free' services cost people their privacy and autonomy — even their mental health. And AI knows more about us than Google or Facebook ever did — details about our health concerns, relationship issues and work. In just two years, chatbots have also built a reputation as trustworthy companions and arbiters of truth. On X, for instance, users frequently bring AI tools Grok and Perplexity into conversations to flag whether a post is fake.
When people trust AI that much, they're more vulnerable to targeted manipulation.
AI advertising should be regulated before it becomes too entrenched, or we'll repeat the mistakes made with social media — scrutinising the fallout of a lucrative business model only after the damage is done.
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of 'Supremacy: AI, ChatGPT and the Race That Will Change the World.'


Related Articles

No question of review of satcom spectrum recommendations based on COAI's reaction: TRAI sources

Time of India | 32 minutes ago

New Delhi: The Telecom Regulatory Authority of India (TRAI) on Wednesday categorically ruled out any review of its recommendations on satcom spectrum at this stage based on industry body COAI's claims. Sources in TRAI told PTI that the regulator has already given its recommendations to the government "after following the due consultation process exhaustively and transparently". All stakeholders were given adequate opportunity to represent their viewpoint during the consultation process, they said.

There is no question of a review of the recommendations at this stage based on the reactions of the Cellular Operators' Association of India (COAI), TRAI sources said. The comments assume significance as COAI - whose members include Reliance Jio and Airtel - has approached the telecom department to raise concerns over TRAI's recommendations on satcom spectrum, claiming that "incorrect assumptions" have led to unjustifiably low spectrum charges for satellite services relative to terrestrial networks.

In a letter dated May 29, COAI claimed that TRAI's recommendations are based on incorrect assumptions and that their implementation will hit the sustainability of terrestrial services, which form the foundation of India's digital infrastructure. COAI has urged the Department of Telecom (DoT) to form a committee to "undertake a comprehensive review of the recommendations in their entirety, ensuring the process is guided by principles of fairness, transparency, and equity and also give an opportunity to TSPs (telcos) to share their views regarding the same".

The industry body argued that the recommendations provide a regulatory advantage to commercial Non-Geostationary Orbit (NGSO) satellites against terrestrial broadband service providers and, if accepted by the DoT in their present form, will undermine competition and create a non-level playing field. COAI said that TRAI's recommendations do not appear to address the most fundamental and contentious issue: the lack of a level playing field between terrestrial service providers and satellite operators serving the same market. "The said recommendations are based on incorrect assumptions and implementation of these recommendations will impact the sustainability of terrestrial services that form the foundation of India's digital infrastructure," the association had said.

The telecom regulator last month suggested that satellite communication companies like Starlink pay four per cent of their adjusted gross revenue (AGR) as spectrum charges to the government. Operators offering satellite-based broadband internet services in urban areas would have to shell out an additional Rs 500 per subscriber annually. No additional levy would be applicable for services in rural areas.

COAI also argued that the recommendation of a spectrum charge at four per cent of AGR is without justification. "It is well known and TRAI would surely be fully aware that with the advent of next-generation NGSO broadband services -- including low Earth orbit (LEO) and medium Earth orbit (MEO) constellations -- satellite services are now capable of directly substituting and competing with terrestrial fixed and mobile broadband networks," COAI said. (PTI)

Tech giant SAP asks US Supreme Court to reconsider rival's antitrust win

Time of India | 34 minutes ago

Europe's largest software maker SAP has asked the U.S. Supreme Court to review a decision that said the technology giant must face a lawsuit by U.S. data technology company Teradata accusing it of violating antitrust law.

SAP, in a petition made public on Tuesday, said a decision by the 9th U.S. Circuit Court of Appeals in California that reinstated Teradata's lawsuit will threaten American tech innovation if it is left in place. Teradata accused SAP of violating antitrust law by "tying" sales of business-planning applications with the purchase of a key SAP database that can perform transactional and analytical functions. Teradata makes a rival analytics database.

In its filing at the high court, SAP said the integration of software products can often benefit consumers and "represent an effort to 'compete effectively,' rather than to stifle competition." SAP declined to comment. Teradata did not immediately respond to a request for comment.

San Diego-based Teradata filed its lawsuit against SAP in federal court in California in 2018. The two companies once had a joint venture, but SAP terminated it after developing its own analytics database.

SAP won in the district court, but the 9th Circuit revived Teradata's case in December. The appeals court said there was a material dispute between the companies that a jury could decide. If the Supreme Court takes the case, the justices could rule on which legal standard judges should use to weigh antitrust tying claims.

Two key legal standards guide how judges resolve whether conduct restrains competition: the "per se rule," where alleged conduct is presumed illegal, and the "rule of reason," where judges balance anticompetitive effects against a defendant's procompetitive justification. The 9th Circuit, using a version of the "per se rule," applied too stringent a standard in evaluating Teradata's claims, SAP told the justices. SAP said the appellate court's ruling clashed with how a Washington federal appeals court resolved a landmark antitrust case against Microsoft in the 1990s.

Is AI sparking a cognitive revolution that will lead to mediocrity and conformity?

Time of India | an hour ago

Highlights:
• The rise of artificial intelligence is reshaping cognitive processes in various fields, prompting concerns about the potential loss of originality and depth in creative work as reliance on AI tools increases.
• Generative AI, while capable of producing competent-sounding content, often lacks true creativity and originality, as it predominantly reflects and rearranges existing human-created material.
• The challenge posed by the cognitive revolution driven by artificial intelligence is not only technological but also cultural, as it raises questions about preserving the irreplaceable value of human creativity amid a surge of algorithmically generated content.

Artificial intelligence began as a quest to simulate the human brain. Is it now in the process of transforming the human brain's role in daily life? The Industrial Revolution diminished the need for manual labour. As someone who researches the application of AI in international business, I can't help but wonder whether it is spurring a cognitive revolution, obviating the need for certain cognitive processes as it reshapes how students, workers and artists write, design and decide.

Graphic designers use AI to quickly create a slate of potential logos for their clients. Marketers test how AI-generated customer profiles will respond to ad campaigns. Software engineers deploy AI coding assistants. Students wield AI to draft essays in record time - and teachers use similar tools to provide feedback.

The economic and cultural implications are profound. What happens to the writer who no longer struggles with the perfect phrase, or the designer who no longer sketches dozens of variations before finding the right one? Will they become increasingly dependent on these cognitive prosthetics, similar to how using GPS diminishes navigation skills? And how can human creativity and critical thinking be preserved in an age of algorithmic abundance?

Echoes of the Industrial Revolution

We've been here before. The Industrial Revolution replaced artisanal craftsmanship with mechanised production, enabling goods to be replicated and manufactured on a mass scale. Shoes, cars and crops could be produced efficiently and uniformly. But products also became more bland, predictable and stripped of individuality. Craftsmanship retreated to the margins, as a luxury or a form of resistance.

Today, there's a similar risk with the automation of thought. Generative AI tempts users to conflate speed with quality, productivity with originality. The danger is not that AI will fail us, but that people will accept the mediocrity of its outputs as the norm. When everything is fast, frictionless and "good enough," there's the risk of losing the depth, nuance and intellectual richness that define exceptional human work.

The rise of algorithmic mediocrity

Despite the name, AI doesn't actually think. Tools such as ChatGPT, Claude and Gemini process massive volumes of human-created content, often scraped from the internet without context or permission. Their outputs are statistical predictions of what word or pixel is likely to follow, based on patterns in the data they have processed. They are, in essence, mirrors that reflect collective human creative output back to users - rearranged and recombined, but fundamentally derivative.

And this, in many ways, is precisely why they work so well. Consider the countless emails people write, the slide decks strategy consultants prepare and the advertisements that suffuse social media feeds. Much of this content follows predictable patterns and established formulas. It has been there before, in one form or another. Generative AI excels at producing competent-sounding content - lists, summaries, press releases, advertisements - that bears the signs of human creation without that spark of ingenuity. It thrives in contexts where the demand for originality is low and when "good enough" is, well, good enough.

When AI sparks - and stifles - creativity

Yet, even in a world of formulaic content, AI can be surprisingly helpful. In one set of experiments, researchers tasked people with completing various creative challenges. They found that those who used generative AI produced ideas that were, on average, more creative, outperforming participants who used web searches or no aids at all. In other words, AI can, in fact, elevate baseline creative performance.

However, further analysis revealed a critical trade-off: reliance on AI systems for brainstorming significantly reduced the diversity of ideas produced, which is a crucial element for creative breakthroughs. The systems tend to converge toward a predictable middle rather than exploring unconventional possibilities at the edges.

I wasn't surprised by these findings. My students and I have found that the outputs of generative AI systems are most closely aligned with the values and world views of wealthy, English-speaking nations. This inherent bias quite naturally constrains the diversity of ideas these systems can generate.

More troubling still, brief interactions with AI systems can subtly reshape how people approach problems and imagine solutions. One set of experiments tasked participants with making medical diagnoses with the help of AI. However, the researchers designed the experiment so that the AI would give some participants flawed suggestions. Even after those participants stopped using the AI tool, they tended to unconsciously adopt those biases and make errors in their own decisions. What begins as a convenient shortcut risks becoming a self-reinforcing loop of diminishing originality - not because these tools produce objectively poor content, but because they quietly narrow the bandwidth of human creativity itself.

Navigating the cognitive revolution

True creativity, innovation and research are not just probabilistic recombinations of past data. They require conceptual leaps, cross-disciplinary thinking and real-world experience. These are qualities AI cannot replicate. It cannot invent the future. It can only remix the past.

What AI generates may satisfy a short-term need: a quick summary, a plausible design, a passable script. But it rarely transforms, and genuine originality risks being drowned in a sea of algorithmic sameness. The challenge, then, isn't just technological. It's cultural. How can the irreplaceable value of human creativity be preserved amid this flood of synthetic content?

The historical parallel with industrialisation offers both caution and hope. Mechanisation displaced many workers but also gave rise to new forms of labour, education and prosperity. Similarly, while AI systems may automate some cognitive tasks, they may also open up new intellectual frontiers by simulating intellectual abilities. In doing so, they may take on creative responsibilities, such as inventing novel processes or developing criteria to evaluate their own outputs. This transformation is only at its early stages. Each new generation of AI models will produce outputs that once seemed like the purview of science fiction.

The responsibility lies with professionals, educators and policymakers to shape this cognitive revolution with intention. Will it lead to intellectual flourishing or dependency? To a renaissance of human creativity or its gradual obsolescence? The answer, for now, is up in the air.
