How AI could change the internet
On this week's episode of Yahoo Finance Future Focus, Outlier Ventures chairman Jamie Burke explored the future of the internet: is it destined to become a digital wasteland overrun by bots and shallow AI-generated content, or can a blend of artificial intelligence and blockchain technology pave the way for a new era of intentional, meaningful online experiences?
This vision is known as the "post web".
For decades, the design of the modern web has hinged on a simple yet powerful business model — capture and retain user attention. According to Burke, this has led to a paradoxical digital experience.
'The web as an interface for people to do things over the internet has really formed around the attention economy,' he explained. 'We're doing things we'd rather not be doing. We settle for good enough, rather than necessarily the best outcome, simply because our attention is hijacked by algorithms engineered to keep us scrolling.'
Read more: Quantum computers could expose sensitive data within years, says IBM scientist
The digital landscape, he argues, is deliberately structured to distract and fragment our focus, often steering us toward suboptimal outcomes and shallow engagement. In essence, our online behavior has been reduced to a series of clicks and swipes that feed advertising revenues rather than meaningful interactions.
But what if the nature of our digital interactions could be transformed? Burke sees a future where artificial intelligence plays a pivotal role in shifting the focus from an attention economy to what he calls an 'intention economy.'
'Most people, by now, are using tools like ChatGPT — so much so that it might have already replaced traditional methods like Google search,' Burke said.
'When you input a query into a chatbot, you're expressing an intent — essentially telling the AI exactly what you want. This isn't just about finding information, it's about efficiently accomplishing tasks without the endless interruptions that characterise our current web experience.'
In the future, AI-powered agents could carry out complex workflows on behalf of users. Instead of navigating a labyrinth of advertisements and distracting content, individuals could simply state their intentions and have sophisticated algorithms coordinate the necessary actions behind the scenes.
'You put out your intention, and the internet coordinates around solving that as effectively as possible,' said Burke.
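The intent-driven flow Burke describes can be pictured as a simple loop: the user states a goal, and an agent decomposes it into steps and carries them out without any attention-grabbing detours. A minimal illustrative sketch in Python; the agent, its fixed plan, and all names here are hypothetical (a real system would use an LLM to decompose the goal):

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A user's stated goal, e.g. 'book the cheapest flight to Lisbon'."""
    goal: str

@dataclass
class Agent:
    """Hypothetical agent that turns an intent into concrete steps."""
    name: str
    log: list = field(default_factory=list)

    def plan(self, intent: Intent) -> list[str]:
        # A real system would have an LLM decompose the goal;
        # here a fixed three-step plan stands in for illustration.
        return [f"search options for: {intent.goal}",
                f"compare results for: {intent.goal}",
                f"execute best action for: {intent.goal}"]

    def run(self, intent: Intent) -> list[str]:
        for step in self.plan(intent):
            self.log.append(step)  # stand-in for actually performing the step
        return self.log

agent = Agent(name="travel-agent")
steps = agent.run(Intent(goal="book the cheapest flight to Lisbon"))
print(len(steps))  # 3 coordinated steps, no ads or feeds in between
```

The point of the sketch is the shape of the interaction: one expressed intention in, coordinated actions out, with no scrolling in between.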
At the heart of Burke's vision is the rise of intelligent agents — advanced AI systems that can handle increasingly complex tasks. These agents would be autonomous entities capable of making decisions, sourcing the best information, and even executing transactions on behalf of their users.
Read more: How will AI change the world of scamming? | The Crypto Mile
However, as we delegate more responsibilities to these digital intermediaries, the issue of security and trust becomes paramount. The answer, according to Burke, lies in integrating blockchain technology into this emerging digital framework.
Blockchain, with its inherent features of decentralisation and immutability, offers the verifiability and security that AI agents require to operate safely and effectively.
'Blockchain technologies allow for characteristics such as sovereignty, permissioning, greater levels of security, and ultimately, ownership,' Burke said.
He envisions a future where every digital transaction — whether it involves tangible assets like cryptocurrencies or intangible ones like personal data — is recorded in a way that can be independently verified. This could give users more control over their information and interactions.
'Agents will select different blockchains for different tasks,' Burke said. He thinks a range of blockchain platforms could coexist, each tailored to specific functions and requirements.
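One way to picture agents selecting chains per task is a routing table mapping task types to suitable platforms. The table below is purely hypothetical, a sketch of the idea rather than any real agent framework:

```python
# Hypothetical routing table: each task type maps to a chain
# specialised for that function, as Burke suggests.
ROUTES = {
    "payment": "a fast, low-fee payments chain",
    "identity": "a chain specialised for identity credentials",
    "storage": "a chain optimised for verifiable data records",
}

def select_chain(task_type: str) -> str:
    """Return the chain suited to this task, or fail loudly."""
    try:
        return ROUTES[task_type]
    except KeyError:
        raise ValueError(f"no chain registered for task type {task_type!r}")

print(select_chain("payment"))
```

In practice the routing decision would weigh fees, latency, and security guarantees rather than a static lookup, but the principle is the same: different tasks land on different chains.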
Another concern is the potential for malicious AI agents to disrupt or exploit these systems.
Burke pointed to a recent news story in which a woman was duped into believing she had formed a relationship with a digital facsimile of actor Brad Pitt. In this case, AI-generated images and videos were weaponised to perpetrate fraud.
Read more: Sovereign agents: Your own personal AI assistant? | The Crypto Mile
'This is perhaps an extreme example of the weaponisation of AI,' Burke warned. 'Not all agents are there to be helpful and nice; some are going to be malicious. Some are going to try to hijack and defraud both people and other agents.'
It is critical that any new digital infrastructure is built on a foundation of robust security and trust, according to Burke.
Another aspect of Burke's vision is reclaiming digital sovereignty. In the current landscape, users are often at the mercy of platforms that control not only the content they consume but also the data they generate. By leveraging blockchain's decentralised architecture, it becomes possible to restore a sense of ownership and control to the individual.
'Imagine an internet where you don't have to surrender your personal data or trust a single centralised platform with your digital identity,' Burke said. In this model, blockchain would enable users to set precise permissions — defining what an AI agent can and cannot do on their behalf.
This could pave the way for a more ethical and user-centric internet, where interactions are based on verifiable trust rather than opaque algorithms.