
Latest news with #informationquality

Are Google and other search engines getting worse, or should we change how we look for information?

ABC News

13-05-2025


Ever punched a question into a browser and been served a bunch of poorly written, ad-filled websites which don't really give you an answer? Or a summary generated by artificial intelligence (AI) that doesn't make any sense — or is incorrect?

Search engines, particularly Google, are most people's gateway to the internet. And sometimes, they fail to get people the results they want.

But in recent years, there has been increasing scrutiny of their quality. From social media to academia, there have been questions about whether search engines deliver information like they used to. The Australian Competition and Consumer Commission (ACCC) also raised concerns about declining quality in a report on search engines it released late last year.

With generative AI making low-quality websites easier to build, and sometimes questionable AI summaries now sitting at the top of many Google searches, you may think it's getting harder to look something up and get a reliable answer on the internet.

But search engines are still big business. When Apple announced last week that it was planning to add AI-powered search to its browser, Google's parent company lost $US150 billion ($A235 billion) in market value. Google currently commands 94 per cent of the Australian search market, according to the ACCC. Most other searches, some 4.7 per cent, happen on its rival Bing — Microsoft's default search engine.

So are search engines really getting worse, or just changing? And if you're not happy with the answers you're getting, what else could you do?

Search quality is difficult to measure. Fifteen years ago, your search might have been registered as unsuccessful if you didn't click on any of the pages presented to you. But now, with summaries provided, not clicking is often the default expectation.

Oleg Zendel, a computer scientist at RMIT University, says that search has gotten "much, much better" over the past 10 years. But over a shorter horizon, such as the past couple of years, it's harder to tell.
"To be able to say unequivocally that it's getting better, or worse, is not something that even Google can do," Dr Zendel says. There's a few reasons for this. For one, users want different things from search engines at different times. A service that's very good at telling you the local weather report might not necessarily be the best at finding detailed archival information for a research report. Then, there have been big changes in the way search engines work. Since the late 2010s, they've shifted from being best at returning keyword searches (for example, "Uluru height") to natural language searches (such as "how high is Uluru" — and to save you a click, the answer is 863 metres). Swedish software engineer Viktor Lofgren says search engines were originally designed "to find documents on the internet relating to some topic". "But they've gradually started being used for all manner of tasks they were really never very good at, such as answering general questions." He thinks that this worked only because the internet "accidentally contained" a lot of good answers. "But a combination of the increased proliferation and efficacy of search engine spam has changed that." Mr Lofgren believes this change is a step towards the death of big search engines like Google, because large language models like ChatGPT are better suited to giving simple answers for users. "They don't always give you the correct answers, but to be honest, neither do search engines," he says. Search engines are also increasingly putting AI summaries in their results — including Google. (Although, if you're not a fan of Google's AI summaries, here's a tip: adding "-ai" to your search query will remove them.) These AI overviews can present a new suite of problems with inaccuracy. 
Ashwin Nagappa, a researcher in social science and digital media at Queensland University of Technology, points to a recent article which found Google's AI overviews could be prompted to produce gibberish by searching for fake sayings. "If you are not a native English speaker or you are using words that are in different language, AI summaries may not get it right," Dr Nagappa says.

Johanne Trippas, also a computer scientist at RMIT University, says that a decline in search quality has become an "underlying current" at computer science conferences she's attended. But she also says our expectations of search engines have risen. "Users now expect that it can't just do a simple keyword match. They also want to have the system reply to a very complex answer in a very direct manner."

Search engine quality may or may not have deteriorated in general. But if you think your personal experience has tanked, you're not at a dead end.

Dr Zendel says the simplest thing to do if you're unsatisfied with search results is to re-word the query. "If that doesn't work, try different sources. It can be alternative search engines like DuckDuckGo, Brave, Bing — there are so many of them now." Wikipedia has a long list of academic search engines and databases that can help with more detailed research into specific areas.

Search engines don't need to be run by tech behemoths. Mr Lofgren built a search engine, called Marginalia Search, which takes him as little as an hour a week to maintain. Marginalia Search is designed to surface text-heavy, non-commercial websites using "very traditional ranking algorithms", Mr Lofgren says. It's useful for browsing the internet, but not necessarily for giving you quick answers to specific questions.

Dr Trippas says that searching should often be an iterative experience of refining and tuning queries. While AI can be useful, she says "it is just important to be vigilant and fact check" when using AI-generated results, just as one would with search engines.
Dr Nagappa says that going directly to sources, such as trusted news organisations, remains important. "You have some responsibility to make sure that you understand your information better," he says.

There are queries where government webpages are likely to be the best source — for instance, whether you need a visa to travel to certain countries, and how to apply for one. Generative AI isn't a good guide in this situation, as one Australian found out when he discovered he did need a visa to enter Chile, despite what ChatGPT had told him.

Dr Zendel says that confirmation bias (our habit of looking for and remembering things we already agree with) is something searchers should always be aware of, and should try to counter when looking for important information. "If you really care about it, then you should try and search the opposite. If it's politics, then try and see what the other candidate is saying," he suggests. "If it's a visa, then don't search 'Can I go to this country without a visa?' Look for something like 'What are the restrictions' or 'What type of visa do I need to go to that country?'"

Dr Nagappa says that even bigger changes are starting to emerge, with features such as voice search, AI, and circle-to-search — a feature on some Android phones that allows users to search images quickly. These features require much more work on the search engine's side to deduce what the user wants to know. For instance, a user circling a photo of a politician in a news article probably wants to know who the politician is, not where to buy their suit.

Dr Nagappa, Dr Zendel and Dr Trippas are all involved in a research project called the Australian Search Experience, which seeks to understand how Australians search the web. The first phase of the project found that search results weren't heavily personalised for users beyond taking their location into account.
A search for restaurants from a user in Melbourne, for instance, would bring up Melbourne restaurants. But Dr Nagappa says that search terms can have a big influence on results. Searching for "restaurants in Naarm" brings up different answers from "restaurants in Melbourne", even though both names refer to the same place. "By changing small words in the search query, the meaning changes for the search engine," Dr Nagappa says.

Beyond searching, the sites we look at have changed. Mr Lofgren thinks the "maze" of the modern web makes it almost impossible to leave frequently visited pages, such as social media and big news sites. He says he designed Marginalia Search as a type of "off ramp" to show users they could still find "an interesting blog or website written by a human being knowledgeable about a subject". "You might be excused for thinking that that's not a thing anymore, but it is," he says.

Beyond The AI Gold Rush: Building Value Through Information Quality

Forbes

06-05-2025

  • Business
  • Forbes

Stéphane Donzé is the Founder and CEO of AODocs, with more than 20 years of experience in the enterprise content management industry.

Across industries, companies are racing to integrate AI-powered chatbots and assistants into their operations. If it feels to you like a gold rush, you may have a point. The potential for AI to enhance productivity, streamline workflows and drive competitive advantage is undeniable. However, many organizations make a critical mistake: they rush to deploy AI without first addressing the quality of the information feeding it.

AI chatbots are exceptionally good at finding and presenting information in natural language. But they cannot distinguish between valid, up-to-date content and outdated, obsolete or even incorrect information. This creates a significant business risk: AI can deliver wrong answers with absolute certainty, leading to costly mistakes.

A recent Tow Center study tested eight generative AI search tools and found the chatbots were confidently wrong over 60% of the time when citing information. At the same time, these systems rarely expressed uncertainty. Unlike traditional search engines that guide users to sources, AI tools repackage information, potentially spreading wrong data or insights. Testing also showed premium services sometimes performed worse than free ones.

Now imagine this happening six out of 10 times when your business relies on a chatbot to fetch critical information from your database. The AI bonanza could quickly turn into a business nightmare.

The risks of uncontrolled and bad document management practices

Traditional search tools present users with a list of relevant results, allowing them to manually assess which document is the most reliable. AI chatbots, on the other hand, provide direct answers—which means users may not even realize there could be multiple sources, some of which may be outdated or inaccurate.
Take a pre-AI scenario: A sales rep searching for "product XYZ pricing" in a document management system would see multiple results, such as:

  • "Price List - 2025"
  • "Special Promotion - Christmas 2023"
  • "LATAM Pricing - 2022 (EXPIRED DO NOT USE)"

With a traditional search, the rep can choose the most relevant document. But with an AI chatbot, the rep asks, "What's the price for XYZ?" and the chatbot delivers a single confident answer, potentially pulling from the wrong source. If companies fail to implement proper governance, AI can exacerbate document management challenges rather than solve them.

The risk: The gold rush technology leads you down the wrong pit. Instead of emerging from the mine rich, you might get trapped. That is, unless you have a map that leads you to where the precious stuff can be safely sourced.

A smarter approach: Start small with AI and scale strategically

Organizations that succeed with AI don't deploy chatbots indiscriminately across their entire knowledge base. Instead, they take a controlled, step-by-step approach:

1. Start with trusted content. Begin AI deployments with structured, validated repositories, such as internal knowledge bases, approved sales materials and official customer-facing documentation.
2. Implement strong metadata and governance. Ensure AI chatbots only access the latest approved documents by leveraging metadata tags like "valid," "obsolete" or "draft."
3. Pilot AI in a controlled environment. Run initial deployments in a limited use case, such as answering HR policy questions or retrieving technical manuals, before expanding to broader content sets.
4. Monitor AI outputs and improve continuously. Use feedback loops to refine responses, flag incorrect sources and continuously enhance AI accuracy and reliability.
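As a rough illustration of the metadata-governance step, a retrieval layer can filter documents by their governance status before anything reaches the chatbot. This is a hypothetical Python sketch: the field names ("status", "updated") and the example records are illustrative, modelled on the pricing scenario above, not taken from any particular product:

```python
from datetime import date

# Hypothetical document records with governance metadata;
# field names and values are illustrative only.
documents = [
    {"title": "Price List - 2025", "status": "valid", "updated": date(2025, 1, 15)},
    {"title": "Special Promotion - Christmas 2023", "status": "obsolete", "updated": date(2023, 11, 1)},
    {"title": "LATAM Pricing - 2022 (EXPIRED DO NOT USE)", "status": "obsolete", "updated": date(2022, 3, 1)},
    {"title": "Price List - Draft 2026", "status": "draft", "updated": date(2025, 4, 2)},
]

def retrievable_sources(docs):
    """Return only documents a chatbot should be allowed to cite:
    those currently marked 'valid', newest first."""
    valid = [d for d in docs if d["status"] == "valid"]
    return sorted(valid, key=lambda d: d["updated"], reverse=True)

sources = retrievable_sources(documents)
```

With this filter in place, the sales rep's "What's the price for XYZ?" question can only ever be answered from the 2025 price list, because the obsolete and draft documents never enter the retrieval pool.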
The competitive advantage of a thoughtful AI rollout

Companies that take a measured approach to AI adoption—prioritizing quality over quantity when selecting their initial document sets—will outperform those that flood AI with unstructured data. A well-governed AI chatbot can be an invaluable tool for IT leaders, business unit heads and employees, enhancing productivity and decision-making without compromising accuracy or compliance.

AI is here to stay, but its success depends on how well it is fed, structured and governed. The smartest companies will recognize that starting small and scaling smart is the key to unlocking AI's full potential—without introducing costly business risks.

So, there you have it: To hit AI gold with document management, make sure you're on the right track by walking slowly—at first—and on even ground. Set up the signage and follow the trails you charted until you feel more confident and bold.
