
AI leaves web in the lurch
The AI-fixated tech industry is rapidly dismantling the old web, with no game plan for how to replace it.
State of play: Chatbots have already begun to intercept web traffic and drain publishers' revenue. Now tech giants and startups aim to remodel the devices and browsers we use to access web pages, using AI to summarize or pre-empt the content that people and publishers post online.
Driving the news: Tech circles were abuzz over the past week with news from the normally sedate browser world — the software category that has been shaping access to digital information since the '90s.
Firefox last week debuted an experimental browser tool that provides AI summaries when users hover over links.
Also last week, the Browser Company, maker of the Arc browser that's beloved by some power users, announced it was pivoting to focus on a new AI-powered browser called Dia.
OpenAI has long been rumored to be working on its own browser, but has yet to ship anything.
Over the last two years, Google, which customarily casts itself as a champion of the open web, has steadily made its AI summaries more prominent across every aspect of search.
At its I/O developer conference last year, Google announced the U.S.-wide rollout of those AI summaries, which sit on top of search results and let users get answers without clicking through to source pages (while also sometimes serving up made-up facts).
At this year's I/O, the company said that AI Mode, which turns a user's search into an AI chat conversation, would now be a standard feature — although some early reviews have found its information unreliable.
Meanwhile, OpenAI made headlines with its announcement that it was purchasing Apple designer Jony Ive's AI device startup.
Ive will now spearhead OpenAI's plan to sell new, non-smartphone gadgets that could bring generative AI answers more thoroughly into users' everyday lives.
At Microsoft's recent Build developer conference, the company introduced a new open project called NLWeb, aimed at letting websites build their own chatbots to help site visitors access content.
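To make the idea concrete, here is a minimal sketch of how a visitor's question might be sent to an NLWeb-style conversational endpoint on a publisher's site. The endpoint path ("/ask"), the "query" parameter and the response shape are assumptions for illustration, not NLWeb's documented API.

# Hypothetical client for an NLWeb-style site endpoint (names are assumptions).
import requests

def ask_site(base_url: str, question: str) -> list[dict]:
    """Send a natural-language question to a site's conversational endpoint."""
    resp = requests.get(f"{base_url}/ask", params={"query": question}, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # Assume results come back as schema.org-style items with a name and a URL.
    return [
        {"name": item.get("name"), "url": item.get("url")}
        for item in data.get("results", [])
    ]

if __name__ == "__main__":
    for hit in ask_site("https://example.com", "recent articles about AI browsers"):
        print(hit["name"], "-", hit["url"])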
What they're saying: "Increasingly, web pages — apps, articles, and files — will become tool calls with AI chat interfaces," Browser Company CEO Josh Miller wrote to explain why his firm was stopping further work on Arc.
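What a "web page as a tool call" could look like in practice: below is a hypothetical tool definition in the style of today's chat-model APIs, where the assistant fetches an article on the user's behalf instead of sending them to the page. The names and fields are invented for illustration and do not describe any specific vendor's product.

# Illustrative sketch only: an article exposed as a callable tool for a chat model.
fetch_article_tool = {
    "type": "function",
    "function": {
        "name": "fetch_article",
        "description": "Retrieve and summarize a published article for the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "url": {"type": "string", "description": "Address of the article to fetch"},
                "detail": {
                    "type": "string",
                    "enum": ["summary", "full_text"],
                    "description": "How much of the article to return",
                },
            },
            "required": ["url"],
        },
    },
}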
Yes, but: As tech goes all in on rebuilding our web experiences with AI, there's no guarantee that the web will still be there when that job is done.
With chatbots becoming users' default way to find out what's happening in the world, their makers pretend they can plaster this new interface layer over the internet without disrupting the data sources that feed it.
But some media observers believe an AI-first web will choke off the money and attention that motivates web creators to keep extending the common knowledge pool.
Many publishers are already seeing significant traffic and income declines from the shift toward AI search, though Google disputes there's a connection. And creative artists fear their work is being stolen or devalued.
This is everyone's problem. Of course the businesses and people that have built their work around the web are afraid — but AI makers should be worried, too.
The web's vast treasury (and cesspool) of human creative work has accumulated since the 1990s because people wanted to share what they know either for financial or reputational gain, or just to advance a cause or do some good. That setup gave us everything from Wikipedia and YouTube tutorials to blogs and Reddit.
Nearly all of the old-school web has already been fed into AI training databases for regurgitation by bots like ChatGPT. From now on, valuable new contributions are likely to sit behind subscription paywalls or depend on unsteady alternative means of support (membership programs, nonprofit grants, government funding).
If AI undermines the incentives for human beings to update the web with their news, opinions and arguments, it will also cut off its own future.
The intrigue: Some in the web avant-garde are already anticipating a world in which the most ambitious or meaningful creative work takes place in what they're calling a "dark forest" web.
They imagine creative communities that are purposefully isolated from the Silicon Valley bazaar, generating "anti-memes" and critical ideas without participating in social media's algorithmic competition or AI's sloppy reductivism.
The other side: AI firms have introduced modest efforts to feed money back to content providers. OpenAI, for instance, has made multiple deals with online publishers (including Axios).
But it's hard to see how that kind of arrangement replaces the search traffic and ad revenue that's been the sustaining anchor for so many web publishers for the past decade.
When it looked like Facebook's rise was threatening to hobble news publishers, the social media giant announced multiple programs to funnel them cash.
But that support was fickle and fleeting, and site owners and creators know they can't sustain their businesses purely on tech largesse.
What's next: Advertising in AI chats is still in its infancy, but OpenAI has said it's going to build ads into ChatGPT, and its competitors won't be far behind.
Everyone assumes this business will evolve rapidly, building on the performance-based model and personalization techniques that emerged in the search and social media eras.
That can only further undermine the remnants of the web publishing industry, unless AI makers choose to share this new income with information providers.
But they're spending billions on data centers, and their investors are expecting astronomical revenue growth, so no one should be surprised if they want to hold onto the lion's share.
The bottom line: Google shaped a search-based web on which independent publishers and individual contributors could survive, if not always thrive. Now AI is ready to turn that entire ecosystem into a legacy product.
