
Is AI the future of web browsing?
If you can't remember the last time you thought about your web browser, no one will blame you. Web browsers have remained fundamentally unchanged for decades: You open an app, such as Chrome, Safari or Firefox, and type a website into the address bar. Many of us settled on one and fell into what I call 'browser inertia,' never bothering to see if there's anything better.
Yet a web browser is important because so much of what we do on computers takes place inside one, including word processing, chatting on Slack and managing calendars and email.
That's why I felt excited when I recently tried Dia, a new kind of web browser from the Browser Co. of New York, a startup. The app uses generative artificial intelligence – the technology driving popular chatbots like ChatGPT and Google's Gemini – to answer our questions. Dia illuminates how a web browser can do much more than load websites – it can even help us learn and save time.
I tested Dia for a week and found myself browsing the web in new ways. In seconds, the browser provided a written recap of a 20-minute video without my watching its entirety. While scanning a breaking news article, the browser generated a list of other relevant articles for a deeper understanding. I even wrote to the browser's built-in chatbot for help proofreading a paragraph of text.
Dia is on the cusp of an emerging era of AI-powered internet navigators that could persuade people to try something new. This week, Perplexity, a startup that makes a search engine, announced an AI web browser called Comet, and some news outlets have reported that OpenAI, the company behind ChatGPT, also plans to release a browser this year. OpenAI declined to comment. (The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems. The two companies have denied the suit's claims.)
Tech behemoths like Google and Apple have added lightweight AI features into their existing browsers, Chrome and Safari, including tools for proofreading text and automatically summarizing articles.
Dia, which has not yet been publicly released, is available as a free app for Mac computers on an invitation-only basis.
What does this all mean for the future of the web? Here's what you need to know.
What is an AI browser, and what does it do?
Like other web browsers, Dia is an app you open to load webpages. What's unique is the way the browser seamlessly integrates an AI chatbot to help – without leaving the webpage.
Hitting a keyboard shortcut (command+E) in Dia opens a small window alongside the webpage. There, you can type questions about the content you are reading or the video you are watching, and a chatbot will respond.
For example:
– While writing this column on the Google Docs website, I asked the chatbot if I used 'on the cusp' correctly, and it confirmed that I did.
– While reading a news article about the Texas floods, I asked the browser's chatbot to tell me more about how the crisis unfolded. The bot generated a summary about the history of Texas' public safety infrastructure and included a list of relevant articles.
– While watching a 22-minute YouTube video about car jump starters, I asked the chatbot to tell me which tools were best. Dia immediately pulled from the video's transcript to produce a summary of the top contenders, sparing me the need to watch the entire thing.
In contrast, chatbots like ChatGPT, Gemini and Claude require opening a separate tab or app and pasting in content for the chatbot to evaluate and answer questions, a process that has always disrupted my workflow.
How does it work?
AI chatbots like ChatGPT, Gemini and Claude generate responses using large language models, systems that use complex statistics to guess which words belong together. Each chatbot's model has its strengths and weaknesses.
The Browser Co. of New York said it had teamed up with multiple companies to use their AI models, including the ones behind Gemini, ChatGPT and Claude. When users type a question, the Dia browser analyzes it and pulls answers from whichever AI model is best suited for answering.
For instance, Anthropic's AI model, Claude Sonnet, specializes in computer programming. So if you have questions about something you are coding, the browser will pull an answer from that model. If you have questions about writing, the Dia browser may generate an answer with the model that OpenAI uses for ChatGPT, which is well known for handling language.
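The Browser Co. has not published how Dia's routing actually works, but the general technique of sending a question to the best-suited model can be sketched simply. Here is a minimal, illustrative keyword-based router in Python; the categories, keyword lists and model names are assumptions for the example, not Dia's real logic:

```python
# Minimal sketch of routing a question to a model, in the spirit of what
# Dia is described as doing. The categories, keywords and model names are
# illustrative assumptions, not Dia's actual implementation.
import re

ROUTES = {
    "coding": ("claude-sonnet", {"code", "function", "bug", "python", "compile"}),
    "writing": ("gpt-4o", {"proofread", "grammar", "rewrite", "tone", "essay"}),
}
DEFAULT_MODEL = "gemini-flash"  # fallback when no category keywords match

def route_question(question: str) -> str:
    """Pick a model by counting category keywords in the question."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    best_model, best_hits = DEFAULT_MODEL, 0
    for _category, (model, keywords) in ROUTES.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best_model, best_hits = model, hits
    return best_model
```

For example, `route_question("Can you proofread this paragraph for grammar?")` would route to the writing model, while a question about a Python bug would route to the coding model. A production router would more likely use a small classifier model rather than keyword matching, but the principle – the user never chooses the model – is the same.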
What I appreciate about this design is that you, the user, don't need to know or think about which chatbot to use. That makes generative AI more accessible to the mainstream.
'You should just be able to say, "Hey, I'm looking at this thing, I've got a question about it,"' said Josh Miller, the CEO of the Browser Co., which was founded in 2020 and has raised over $100 million. 'We should be able to answer it for you and do work on your behalf.'
But aren't there imperfections?
While Dia proved helpful in most of my tests, it was, like all generative AI tools, sometimes incorrect.
While I was browsing Wirecutter, a New York Times publication that reviews products, I asked the chatbot if there were any deals on the site for water filters. The chatbot said no, even as I read about a water filtration system that was on sale.
Miller said that because the browser drew answers from various AI models, its responses were subject to the same mistakes as their respective chatbots. Those occasionally get facts wrong and even make things up, a phenomenon known as 'hallucination.'
More often than not, however, I found Dia to be more accurate and helpful than a stand-alone chatbot. Still, I double-checked answers by clicking on any links Dia's bot was citing, like the articles about the recent floods in Texas.
What about privacy?
Asking AI to help with a webpage you're looking at means that data may be shared with whatever AI model is being used to answer the question, which raises privacy concerns.
The Browser Co. said that only the necessary data related to your requests was shared with its partners providing AI models, and that those partners were under contract to dispose of your data.
Privacy experts have long warned not to share any sensitive information, like a document containing trade secrets, with an AI chatbot since a rogue employee could gain access to the data.
So I recommend asking Dia's chatbot for help only with innocuous browsing activities like parsing a YouTube video. But when browsing something you wouldn't want others to know about, like a health condition, refrain from using the AI.
This exchange – potentially giving up some privacy to get help from AI – may be the new social contract going forward.
How much will this cost?
Dia is free, but AI models have generally been very expensive for companies to operate. Consumers who rely on Dia's AI browser will eventually have to pay.
Miller said that in the coming weeks, Dia would introduce subscriptions ranging from US$5 a month to hundreds of dollars a month, depending on how frequently a user prods its AI bot with questions. The browser will remain free for those who use the AI tool only a few times a week.
So whether an AI browser becomes your next web browser will depend largely on how much you want to use these services – and pay for them. So far, only 3% of the people who use AI every day are paid users, according to a survey by Menlo Ventures, a venture capital firm.
That number could grow, of course, if generative AI becomes a more useful tool that we naturally use in everyday life. I suspect the humble web browser will open that path forward. – ©2025 The New York Times Company
This article originally appeared in The New York Times.

Related Articles


The Star – 18 hours ago
AI is replacing search engines as a shopping guide, research suggests
Finding products, comparing prices and browsing reviews: Until now, you'd have done most of this in a search engine like Google. But that era appears to be ending thanks to AI, research shows.

COPENHAGEN: Three in four people who use AI are turning to the likes of ChatGPT, Gemini and Copilot to get advice and recommendations on shopping and travel instead of using the previous online method of search engines like Google, new research shows.

AI-supported online shopping is done at least occasionally by 76% of AI users, with 17% doing so most or even all of the time, according to a study conducted by the market research institute Norstat on behalf of Verdane, a leading European investment company.

The changes in consumer search behaviour pose a major challenge not only for search engine providers like Google but also for manufacturers and retailers, who must adapt to maintain their visibility in the AI-driven world. AI chatbots have emerged as powerful tools for tracking down specific products, often providing helpful advice in response to complex and specific queries.

Of the survey respondents, 3% are dedicated AI enthusiasts who always use AI tools instead of search engines when shopping online, while 14% said they mostly use AI and 35% do so occasionally. A total of 7,282 people from the UK, Germany, Sweden, Norway, Denmark and Finland aged between 18 and 60 participated in the survey in June.

The highest proportion of AI use is in online travel research, at 33%. This is followed by consumer electronics (22%), DIY and hobby supplies (20%), and software or digital subscriptions (19%). However, AI usage is still relatively low in fashion and clothing (13%), cosmetics (12%), and real estate (7%).

Among AI tools, ChatGPT is far ahead of its competitors: 86% of AI users regularly use OpenAI's chatbot. It is followed at a considerable distance by Google's Gemini (26% regular users) and Microsoft's Copilot (20%).
The Chinese AI bot DeepSeek, which has been the subject of heated debate among AI experts and data protection advocates, appears to have no significant role among consumers in Europe. – dpa


The Star – a day ago
‘It's the most empathetic voice in my life': How AI is transforming the lives of neurodivergent people
For Cape Town-based filmmaker Kate D'hotman, connecting with movie audiences comes naturally. Far more daunting is speaking with others. 'I've never understood how people [decipher] social cues,' the 40-year-old director of horror films says.

D'hotman has autism and attention-deficit hyperactivity disorder (ADHD), which can make relating to others exhausting and a challenge. However, since 2022, D'hotman has been a regular user of ChatGPT, the popular AI-powered chatbot from OpenAI, relying on it to overcome communication barriers at work and in her personal life. 'I know it's a machine,' she says. 'But sometimes, honestly, it's the most empathetic voice in my life.'

Neurodivergent people — including those with autism, ADHD, dyslexia and other conditions — can experience the world differently from the neurotypical norm. Talking to a colleague, or even texting a friend, can entail misread signals, a misunderstood tone and unintended impressions. AI-powered chatbots have emerged as an unlikely ally, helping people navigate social encounters with real-time guidance. Although this new technology is not without risks — in particular, some worry about over-reliance — many neurodivergent users now see it as a lifeline.

How does it work in practice? For D'hotman, ChatGPT acts as an editor, translator and confidant. Before using the technology, she says, communicating in neurotypical spaces was difficult. She recalls how she once sent her boss a bulleted list of ways to improve the company, at their request. But what she took to be a straightforward response was received as overly blunt, and even rude.

Now, she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she'll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend.
She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D'hotman says, the chatbot is positive and non-judgmental.

That's a feeling other neurodivergent people can relate to. Sarah Rickwood, a senior project manager in the sales training industry, based in Kent, England, has ADHD and autism. Rickwood says she has ideas that run away with her and often loses people in conversations. 'I don't do myself justice,' she says, noting that ChatGPT has 'allowed me to do a lot more with my brain.' With its help, she can put together emails and business cases more clearly.

The use of AI-powered tools is surging. A January study conducted by Google and the polling firm Ipsos found that AI usage globally has jumped 48%, with excitement about the technology's practical benefits now exceeding concerns over its potentially adverse effects. In February, OpenAI told Reuters that its weekly active users had surpassed 400 million, of which at least 2 million are paying business users.

But for neurodivergent users, these aren't just tools of convenience, and some AI-powered chatbots are now being created with the neurodivergent community in mind.

Michael Daniel, an engineer and entrepreneur based in Newcastle, Australia, told Reuters that it wasn't until his daughter was diagnosed with autism — and he received the same diagnosis himself — that he realised how much he had been masking his own neurodivergent traits. His desire to communicate more clearly with his neurotypical wife and loved ones inspired him to build NeuroTranslator, an AI-powered personal assistant, which he credits with helping him fully understand and process interactions, as well as avoid misunderstandings. 'Wow … that's a unique shirt,' he recalls saying about his wife's outfit one day, without realising how his comment might be perceived.
She asked him to run the comment through NeuroTranslator, which helped him recognise that, without a positive affirmation, remarks about a person's appearance could come across as criticism. 'The emotional baggage that comes along with those situations would just disappear within minutes,' he says of using the app. Since its launch in September, Daniel says, NeuroTranslator has attracted more than 200 paid subscribers. An earlier web version of the app, called Autistic Translator, amassed 500 monthly paid subscribers.

As transformative as this technology has become, some warn against becoming too dependent. The ability to get results on demand can be 'very seductive,' says Larissa Suzuki, a London-based computer scientist and visiting NASA researcher who is herself neurodivergent. Overreliance could be harmful if it inhibits neurodivergent users' ability to function without it, or if the technology itself becomes unreliable — as is already the case with many AI search-engine results, according to a recent study from the Columbia Journalism Review. 'If AI starts screwing up things and getting things wrong,' Suzuki says, 'people might give up on technology, and on themselves.'

Baring your soul to an AI chatbot does carry risk, agrees Gianluca Mauro, an AI adviser and co-author of Zero to AI. 'The objective [of AI models like ChatGPT] is to satisfy the user,' he says, raising questions about its willingness to offer critical advice. Unlike therapists, these tools aren't bound by ethical codes or professional guidelines. If AI has the potential to become addictive, Mauro adds, regulation should follow.

A recent study by Carnegie Mellon and Microsoft (which is a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users' critical-thinking skills and leave them ill-equipped to manage without it.
'While AI can improve efficiency,' the researchers wrote, 'it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI.'

While Dr. Melanie Katzman, a clinical psychologist and expert in human behaviour, recognises the benefits of AI for neurodivergent people, she does see downsides, such as giving patients an excuse not to engage with others. A therapist will push their patient to try different things outside of their comfort zone. 'I think it's harder for your AI companion to push you,' she says.

But for users who have come to rely on this technology, such fears are academic. 'A lot of us just end up kind of retreating from society,' warns D'hotman, who says that she barely left the house in the year following her autism diagnosis, feeling overwhelmed. Were she to give up using ChatGPT, she fears she would return to that traumatic period of isolation. 'As somebody who's struggled with a disability my whole life,' she says, 'I need this.' (Editing by Yasmeen Serhan and Sharon Singleton)


The Star – 2 days ago
People are starting to talk more like ChatGPT
Artificial intelligence, the theory goes, is supposed to become more and more human. Chatbot conversations should eventually be nearly indistinguishable from those with your fellow man. But a funny thing is happening as people use these tools: We're starting to sound more like the robots.

A study by the Max Planck Institute for Human Development in Berlin has found that AI is not just altering how we learn and create, it's also changing how we write and speak. The study detected 'a measurable and abrupt increase' in the use of words OpenAI's ChatGPT favours – such as delve, comprehend, boast, swift and meticulous – after the chatbot's release. 'These findings,' the study says, 'suggest a scenario where machines, originally trained on human data and subsequently exhibiting their own cultural traits, can, in turn, measurably reshape human culture.'

Researchers have known ChatGPT-speak has already altered the written word, changing people's vocabulary choices, but this analysis focused on conversational speech. Researchers first had OpenAI's chatbot edit millions of pages of emails, academic papers and news articles, asking the AI to 'polish' the text. That let them discover the words ChatGPT favoured. Following that, they analysed over 360,000 YouTube videos and 771,000 podcasts from before and after ChatGPT's debut, then compared the frequency of use of those chatbot-favoured words, such as delve, realm and meticulous.

In the 18 months since ChatGPT launched, there has been a surge in use, researchers say – not just in scripted videos and podcasts but in day-to-day conversations as well. People, of course, change their speech patterns regularly. Words become part of the national dialogue, and catchphrases from TV shows and movies are adopted, sometimes without the speaker even recognising it. But the increased use of AI-favoured language is notable for a few reasons.
The paper says the human parroting of machine-speak raises 'concerns over the erosion of linguistic and cultural diversity, and the risks of scalable manipulation.' And since AI trains on data from humans who are increasingly using AI terms, the effect has the potential to snowball. 'Long-standing norms of idea exchange, authority, and social identity may also be altered, with direct implications for social dynamics,' the study says.

The increased use of AI-favoured words also underlines a growing trust in AI by people, despite the technology's immaturity and its tendency to lie or hallucinate. 'It's natural for humans to imitate one another, but we don't imitate everyone around us equally,' study co-author Levin Brinkmann tells Scientific American. 'We're more likely to copy what someone else is doing if we perceive them as being knowledgeable or important.'

The study focused on ChatGPT, but the words favoured by that chatbot aren't necessarily the same standbys used by Google's Gemini or Anthropic's Claude. Linguists have discovered that different AI systems have distinct ways of expressing themselves. ChatGPT, for instance, leans toward a more formal and academic way of communicating. Gemini is more conversational, using words such as sugar when discussing diabetes, rather than ChatGPT's favoured glucose, for instance. (Grok was not included in the study, but, as shown with its recent meltdown, where it made a series of antisemitic comments – something the company attributed to a problem with a code update – it heavily favours a flippant tone and wordplay.)

'Understanding how such AI-preferred patterns become woven into human cognition represents a new frontier for psycholinguistics and cognitive science,' the Max Planck study says. 'This measurable shift marks a precedent: machines trained on human culture are now generating cultural traits that humans adopt, effectively closing a cultural feedback loop.' – Inc./Tribune News Service
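The frequency comparison at the heart of the Max Planck study can be sketched as a simple measurement: count chatbot-favoured words per 1,000 words of transcript, before and after ChatGPT's release. The following Python sketch uses an assumed word list and toy transcripts rather than the study's actual data:

```python
# Sketch of the study's core measurement: how often chatbot-favoured words
# appear per 1,000 words of transcript, compared before and after ChatGPT's
# release. The word list and any sample texts are illustrative; the real
# study analysed over 360,000 videos and 771,000 podcast episodes.
import re

AI_FAVOURED = {"delve", "meticulous", "realm", "boast", "swift"}

def rate_per_thousand(transcripts: list[str]) -> float:
    """Occurrences of AI-favoured words per 1,000 words across transcripts."""
    total_words = hits = 0
    for text in transcripts:
        words = re.findall(r"[a-z]+", text.lower())
        total_words += len(words)
        hits += sum(1 for w in words if w in AI_FAVOURED)
    return 1000 * hits / total_words if total_words else 0.0
```

Running this over two corpora of transcripts – one recorded before the chatbot's release, one after – and comparing the two rates is, in miniature, the 'measurable and abrupt increase' the researchers describe.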