
Will Integrating AI Into Web Browsers Replace Search Engines?
One of the questions I have been pondering lately is whether AI web browsers with integrated chatbots could replace search engines.
To get some perspective on this question, I looked at a recent study by One Little Web that analyzed 24 months of web traffic data (April 2023 to March 2025) for the top 10 AI chatbots (including ChatGPT, DeepSeek, and Perplexity) and the top 10 search engines (including Google, Bing, and Yahoo), using SEMrush web traffic data. The findings suggest that a replacement is not happening anytime soon.
Key insights include:
1. AI chatbot traffic grew 80.92% year over year but still trails search engine traffic by a factor of 34.
2. Despite ChatGPT's massive growth, Google still receives 26 times more daily visits.
3. Search engines are bouncing back, fueled by AI integration, with Google and Bing leading the charge.
4. Yahoo saw a 22.5% drop in visits YoY, highlighting a continued decline.
5. Grok, the second-fastest-growing chatbot, has seen an explosive rise, while Perplexity and Claude are growing steadily.
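As a rough illustration of how the study's headline ratios are derived, the sketch below computes year-over-year growth and the traffic gap from visit totals. The totals here are placeholders chosen to reproduce the reported percentages, not One Little Web's actual figures:

```python
# Illustrative only: placeholder visit totals, not the study's actual data.
chatbot_visits_prev = 30e9    # assumed chatbot visits, Apr 2023 - Mar 2024
chatbot_visits_now = 54.3e9   # assumed chatbot visits, Apr 2024 - Mar 2025
search_visits_now = 1.86e12   # assumed search engine visits, same period

# Year-over-year growth as a percentage of the prior period's total
yoy_growth = (chatbot_visits_now - chatbot_visits_prev) / chatbot_visits_prev * 100

# How many times larger search traffic is than chatbot traffic
gap_factor = search_visits_now / chatbot_visits_now

print(f"Chatbot YoY growth: {yoy_growth:.2f}%")      # 81.00%
print(f"Search leads chatbots by: {gap_factor:.0f}x")  # 34x
```

With these placeholder inputs, the arithmetic lands close to the study's reported 80.92% growth and 34x gap, showing that the two figures are just a period-over-period delta and a simple ratio of totals.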
However, the bigger question is not whether chatbots replace search engines, but whether chatbots become an integral part of the browser experience.
One of the most strategic decisions Google ever made was to launch its browser, Chrome. Having already organized the world's information with its search engine, Google then moved to control the very vehicle people used to access that information.
Perplexity, the ambitious AI search upstart, recently unveiled Comet, an AI-powered browser poised to challenge Google's stronghold. Much like Chrome before it, Comet aims to alleviate current user frustrations by embedding agent-like behaviors directly into many of our everyday digital tasks.
Consider a simple yet common scenario. When I recently needed to verify a statistic for this article, my traditional workflow involved opening a new tab, typing my query ("chatbot vs. search engine market share"), sifting through search results, selecting a promising source, clicking through, and finally locating the data point on the page. Alternatively, I could open Gemini or ChatGPT and ask the question directly.
With Comet, the process streamlines dramatically. I begin typing my query directly into a new tab, eliminating the need for an intermediary step of navigating to a separate AI application. By integrating access to your search history, activity logs, and the various logged-in applications, Perplexity's browser gains a contextual understanding of your ongoing activities.
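To make the idea of a context-aware browser concrete, here is a minimal sketch of how browsing context might be folded into a model prompt. All class and function names are hypothetical illustrations; they do not reflect Comet's actual internals:

```python
# Hypothetical sketch: how an AI browser might ground a query in the
# user's activity. Names are illustrative, not Comet's real API.
from dataclasses import dataclass, field

@dataclass
class BrowserContext:
    history: list[str] = field(default_factory=list)      # recent page titles
    active_tabs: list[str] = field(default_factory=list)  # currently open pages

def build_prompt(query: str, ctx: BrowserContext) -> str:
    """Fold browsing context into the prompt so the assistant's answer
    reflects what the user is already doing."""
    context_lines = ctx.history[-5:] + ctx.active_tabs
    return (
        "You are a browser assistant. The user's recent activity:\n"
        + "\n".join(f"- {line}" for line in context_lines)
        + f"\n\nAnswer the query: {query}"
    )

ctx = BrowserContext(
    history=["Chatbot vs. search engine market share report"],
    active_tabs=["Draft: AI browsers article"],
)
print(build_prompt("ChatGPT vs. Google daily visits", ctx))
```

The design point is that the browser, unlike a standalone chatbot, already holds this context, so the user never has to restate what they are working on.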
The overarching vision is to transform the browser itself into an intelligent agent, a shift that could redefine what a browser is for.
Perplexity is now fielding an impressive 780 million queries monthly, and the company is currently valued at a remarkable $14 billion. While OpenAI may possess greater scale, Perplexity's longer tenure in the search domain, coupled with the fact that it doesn't build its own foundation models, could prove to be a distinct advantage in the browser space, where users often prioritize customization.
Comet isn't the first AI-powered web browser. In June, The Browser Company, creators of Arc, launched Dia, a browser that converts the omnibox into a chatbot. And this is certainly not the final entrant.
Reuters reports that OpenAI is just weeks away from launching its own browser, which will directly compete with Google Chrome. The ChatGPT maker has been working on the browser since reports of it first surfaced in November 2024. The product aims to change how users find information online, potentially utilizing OpenAI's agent, Operator.
Chrome and its contemporaries are undoubtedly ripe for disruption. Google has so far been relatively slow to integrate Gemini deeply into the Chrome experience. This delay could stem from its intense focus on Gemini's development, concerns about potential antitrust scrutiny, or perhaps a reluctance to risk its incredibly dominant market share.
Nevertheless, a substantial opportunity exists for an innovator to integrate AI into the browser seamlessly. Making AI-generated answers the default search experience in a browser will accelerate many of the trends we're already witnessing in content discovery: a dramatic reduction in direct clicks to websites, deeper engagement from those who do visit, and potentially new forms of ad revenue.
A browser with an integrated AI chatbot could make that seamless experience part of everyday browsing and change the way we use browsers in the very near future.
Disclosure: Google and Microsoft subscribe to Creative Strategies research reports along with many other high tech companies around the world.