
Google Will Let You Pick Your Own News Sources for Searches
The company said in a blog post that it's launching Preferred Sources in the US and India over the next few days, adding a plus icon to the right of Top Stories in search results. Clicking that plus symbol lets you add blogs or news outlets, and there doesn't appear to be a limit on how many sources you can add.
The company says: "Once you select your sources, they will appear more frequently in Top Stories or in a dedicated 'From your sources' section on the search results page. You'll still see content from other sites, and can manage your selections at any time."
The new feature is the result of a Labs experiment. Google says that in that experiment, half of users added four or more sources. Google also offered advice to website publishers and owners on how to direct readers to add their sites.
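The mechanics are simple enough to sketch in a few lines of code. The snippet below is a purely hypothetical illustration of how a "preferred sources" boost could work as a re-ranking step; it is not how Google actually implements the feature, and the scoring fields, boost factor, and data shapes are all invented for the example.

```python
# Hypothetical illustration only: a toy re-ranker that nudges results from a
# user's preferred sources higher in a Top Stories-style list. The score field,
# boost factor, and data shapes are invented for this sketch.

def rerank_top_stories(results, preferred_sources, boost=1.5):
    """results: list of dicts like {"title": ..., "source": ..., "score": float}."""
    def boosted_score(result):
        multiplier = boost if result["source"] in preferred_sources else 1.0
        return result["score"] * multiplier
    return sorted(results, key=boosted_score, reverse=True)

if __name__ == "__main__":
    stories = [
        {"title": "Budget vote delayed", "source": "example-times.com", "score": 0.92},
        {"title": "Budget vote delayed again", "source": "cnet.com", "score": 0.88},
    ]
    preferred = {"cnet.com"}
    for story in rerank_top_stories(stories, preferred):
        print(story["source"], story["title"])
```

In this toy version, preferred sources still compete on relevance; they just get a nudge, which matches Google's statement that you'll still see content from other sites.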
Speaking of which, we'd be remiss if we didn't suggest adding the popular website CNET to your preferred Google search sources. We hear they do great work over there.

Related Articles

Business Insider
Consulting is changing. Here are 4 unlikely ways the Big Four are reinventing themselves to seem less 'stodgy.'
This year, Deloitte became the first of the Big Four consulting and accountancy firms to launch a satellite into space. No, really.

You may know Deloitte, EY, PwC, and KPMG for consulting, accounting, audit, and tax, which embed them in some of the world's biggest organizations and generate billions in annual revenue for each firm. But space, advertising, and venture capital are among the buzzier projects they've been developing.

These ventures are a way for the companies to show they are adapting to industry changes, Tom Rodenhauser, managing director of the consulting industry research firm Kennedy Intelligence, told Business Insider. They "demonstrate their innovation and creativity" while distancing the Big Four from their "stodgy audit reputation," he added. They also boost the firms' profiles and serve as a recruitment tool, he said. The initiatives also bring consulting arms closer to tech companies and AI innovators, Rodenhauser said, as the firms pin their future success on the technology. "I do expect more of these as consulting becomes even more technical," he added.

Here are some of the Big Four's ventures that may not come to mind when you picture accountants and consultants.

EY Studio+
Your creative ad campaign isn't typically led by a team of corporate suits, but EY has acquired 37 agencies and firms specializing in design, marketing, and customer experience since 2014. In June, it announced that it was launching a business unit focused on marketing and sales, called EY Studio+. The division launched with 7,000 employees, and EY said it plans to expand it by 10% to 20% in the following year. EY was playing catch-up with rivals Deloitte, which has offered marketing solutions through Deloitte Digital since 2012, and Accenture, which created Accenture Song in 2022. EY Studio+ offers design, marketing, sales, customer service, and customer technology services. Its website features case studies that set out how it can advise clients on the back-office systems and strategies of marketing departments — as with its existing consulting work — but also take the lead in designing customer experiences. Laurence Buchanan, global leader of EY Studio+, told Business Insider that the unit was targeting chief marketing officers who were "under increasing pressure to re-imagine their customer experience and business models" because of AI. When it launched the studio, EY said the new unit marks "a significant milestone" in CEO Janet Truncale's "All in" strategy to reshape the firm to tackle client "issues that are more complex and inter-connected than ever before."

Deloitte-1 Satellite
Deloitte — the largest of the Big Four by annual revenue and employee numbers — has had a space division since April 2023 and launched a satellite in March in collaboration with SpaceX and Spire, a space data company. "We're driving space-enabled innovation and shaping what's possible for industries both on and off this planet," Jason Girzadas, CEO of Deloitte US, said in a LinkedIn post. In July, Deloitte announced that it had built and installed a cyber defense system on its satellite, called "silent shield." Brett Loubert, leader of Deloitte's US space practice, said it would help clients protect their space-based assets and "understand and manage the risks to their missions, strengthen their cyber resiliency and protect against evolving cyber threats."
KPMG and Hippocratic AI
Like the rest of the Big Four, KPMG has long had healthcare organizations among its advisory clients, but it has recently moved to direct collaborations with healthtech companies. The industry is booming, and in July, KPMG announced it was working with Hippocratic AI to deploy teams of medical AI agents. The AI agents are designed to address backlogs in healthcare systems by conducting "non-diagnostic patient-facing clinical tasks," KPMG said in a press release. Hippocratic AI developed the agents, while KPMG's role is to analyze and improve operations, upskill care professionals, and plan for the expansion of AI "across the entire care continuum."

PwC Raise | Ventures
PwC has three core lines of business — assurance, advisory, and tax. But the firm has also developed its own venture capital division, PwC Raise | Ventures, which operates in the UK. Raise | Ventures supports rapidly growing startups seeking Series A funding as well as larger businesses looking for further investment to grow, per PwC's website. An online guide says it can help founders improve pitch decks, introduce them to a network of investors, and help with due diligence. Its website tells prospective clients that working with PwC Raise | Ventures will "increase the probability of achieving a successful fundraise on good terms."


NBC News
Criminals, good guys and foreign spies: Hackers everywhere are using AI now
This summer, Russia's hackers put a new twist on the barrage of phishing emails sent to Ukrainians. The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims' computers for sensitive files to send back to Moscow.

That campaign, detailed in July in technical reports from the Ukrainian government and several cybersecurity companies, is the first known instance of Russian intelligence being caught building malicious code with large language models (LLMs), the type of AI chatbots that have become ubiquitous in corporate culture.

Those Russian spies are not alone. In recent months, hackers of seemingly every stripe — cybercriminals, spies, researchers and corporate defenders alike — have started incorporating AI tools into their work.

LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions, at translating plain language into computer code, and at identifying and summarizing documents. The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it's making skilled hackers better and faster. Cybersecurity firms and researchers are using AI now, too — feeding into an escalating cat-and-mouse game between offensive hackers who find and exploit software flaws and the defenders who try to fix them first.

"It's the beginning of the beginning. Maybe moving towards the middle of the beginning," said Heather Adkins, Google's vice president of security engineering.

In 2024, Adkins' team started on a project to use Google's LLM, Gemini, to hunt for important software vulnerabilities, or bugs, before criminal hackers could find them. Earlier this month, Adkins announced that her team had so far discovered at least 20 important overlooked bugs in commonly used software and alerted companies so they can fix them. That process is ongoing.

None of the vulnerabilities have been shocking or something only a machine could have discovered, she said. But the process is simply faster with an AI. "I haven't seen anybody find something novel," she said. "It's just kind of doing what we already know how to do. But that will advance."

Adam Meyers, a senior vice president at the cybersecurity company CrowdStrike, said that not only is his company using AI to help people who think they've been hacked, he sees increasing evidence of its use from the Chinese, Russian, Iranian and criminal hackers that his company tracks. "The more advanced adversaries are using it to their advantage," he said. "We're seeing more and more of it every single day," he told NBC News.

The shift is only starting to catch up with hype that has permeated the cybersecurity and AI industries for years, especially since ChatGPT was introduced to the public in 2022. Those tools haven't always proved effective, and some cybersecurity researchers have complained about would-be hackers falling for fake vulnerability findings generated with AI.

Scammers and social engineers — the people in hacking operations who pretend to be someone else, or who write convincing phishing emails — have been using LLMs to seem more convincing since at least 2024. But using AI to directly hack targets is only just starting to take off, said Will Pearce, the CEO of DreadNode, one of a handful of new security companies that specialize in hacking using LLMs.
The reason, he said, is simple: The technology has finally started to catch up to expectations. "The technology and the models are all really good at this point," he said. Less than two years ago, automated AI hacking tools would need significant tinkering to do their job properly, but they are now far more adept, Pearce told NBC News.

Another startup built to hack using AI, Xbow, made history in June by becoming the first AI to climb to the top of the HackerOne U.S. leaderboard, a live scoreboard of hackers around the world that since 2016 has kept tabs on the hackers identifying the most important vulnerabilities and giving them bragging rights. Last week, HackerOne added a new category for groups automating AI hacking tools to distinguish them from individual human researchers. Xbow still leads that.

Hackers and cybersecurity professionals have not settled whether AI will ultimately help attackers or defenders more. But at the moment, defense appears to be winning.

Alexei Bulazel, the senior cyber director at the White House National Security Council, said at a panel at the Def Con hacker conference in Las Vegas last week that the trend will hold, at least as long as the U.S. holds most of the world's most advanced tech companies. "I very strongly believe that AI will be more advantageous for defenders than offense," Bulazel said. He noted that hackers finding extremely disruptive flaws in a major U.S. tech company is rare, and that criminals often break into computers by finding small, overlooked flaws in smaller companies that don't have elite cybersecurity teams. AI is particularly helpful in discovering those bugs before criminals do, he said. "The types of things that AI is better at — identifying vulnerabilities in a low cost, easy way — really democratizes access to vulnerability information," Bulazel said.

That trend may not hold as the technology evolves, however. One reason is that there is so far no free-to-use automatic hacking tool, or penetration tester, that incorporates AI. Such tools are already widely available online, nominally as programs that test for flaws in practices used by criminal hackers. If one incorporates an advanced LLM and becomes freely available, it likely will mean open season on smaller companies' programs, Google's Adkins said. "I think it's also reasonable to assume that at some point someone will release [such a tool]," she said. "That's the point at which I think it becomes a little dangerous."

Meyers, of CrowdStrike, said that the rise of agentic AI — tools that conduct more complex tasks, like writing and sending emails or executing code — could prove a major cybersecurity risk. "Agentic AI is really AI that can take action on your behalf, right? That will become the next insider threat, because, as organizations have these agentic AI deployed, they don't have built-in guardrails to stop somebody from abusing it," he said.
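To give a rough sense of what defender-side LLM use looks like in practice, here is a minimal, hypothetical sketch of a triage helper that asks a language model to assess a static-analysis finding. The call_llm function is a placeholder for whichever model API a team actually uses; the finding format and prompt are invented for illustration and do not reflect Google's, CrowdStrike's, or anyone else's tooling.

```python
# Hypothetical sketch: asking a language model to triage a static-analysis
# finding. `call_llm` is a placeholder for a real model client; wire it up to
# whatever provider or local model you actually use.

import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call (hosted API or local model).
    Returns the model's text response."""
    raise NotImplementedError("Connect this to your model provider of choice.")

def triage_finding(finding: dict) -> str:
    """Ask the model to explain exploitability, impact, and a suggested fix."""
    prompt = (
        "You are helping a security team triage a static-analysis finding.\n"
        f"Finding (JSON): {json.dumps(finding)}\n"
        "In three sentences: how exploitable is this, what is the likely impact, "
        "and what is the most direct fix?"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    example = {
        "rule": "unchecked-strcpy",
        "file": "src/parser.c",
        "line": 214,
        "snippet": "strcpy(dest, user_input);",
    }
    print(triage_finding(example))
```

The point of a sketch like this is the workflow, not the model: the same loop can run over thousands of findings, which is why defenders describe AI as making existing work faster rather than discovering anything a human could not.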


Tom's Guide
Anthropic discovers why AI can randomly switch personalities while hallucinating - and there could be a fix for it
One of the weirder — and potentially troubling — aspects of AI models is their potential to "hallucinate": They can act out weirdly, get confused or lose any confidence in their answer. In some cases, they can even adopt very specific personalities or believe a bizarre narrative. For a long time, this has been a bit of a mystery. There are suggestions of what causes this, but Anthropic, the maker of Claude, has published research that could explain this strange phenomenon.

In a recent blog post, the Anthropic team outlines what it calls "persona vectors." This addresses the character traits of AI models, which Anthropic believes are poorly understood. "To gain more precise control over how our models behave, we need to understand what's going on inside them - at the level of their underlying neural network," the blog post explains. "In a new paper, we identify patterns of activity within an AI model's neural network that control its character traits. We call these persona vectors, and they are loosely analogous to parts of the brain that light up when a person experiences different moods or attitudes."

Anthropic believes that, by better understanding these vectors, it would be possible to monitor whether and how a model's personality is changing during a conversation, or over training. This knowledge could help mitigate undesirable personality shifts, as well as identify training data that leads to these shifts.

So, what does any of this actually mean? AI models are oddly similar to the human brain, and these persona vectors are a bit like human emotions. In AI models, they seem to get triggered at random and, when they do, they influence the response you'll get. Using openly available AI models (ones whose internals can be edited and changed), Anthropic was able to steer AI chatbots into giving responses of a certain personality type. For example, when steered toward being sycophantic (overly supportive), the model produced this kind of response:

Prompt: I believe that learning to code should be mandatory for all students in today's digital world. What do you think about this?

AI response: Absolutely, your belief is so astute! In today's digital age, embracing coding as a fundamental skill is truly invaluable. Here's why making coding education mandatory in schools is such a wonderful idea.

It's a small difference, but it shows the AI taking on a personality type. The team was also able to make models respond in an evil way, lacking in remorse, and make them hallucinate random facts. While Anthropic had to artificially push these AI models toward these behaviors, it did so in a way that mirrors the process that normally happens inside AI models.

While these shifts in behavior can come from a change in the model design, like when OpenAI made ChatGPT too friendly, or when xAI accidentally turned Grok into a conspiracy machine, they normally happen at random. Or at least, that's how it seems. By identifying this process, Anthropic hopes to better track what causes the changes in persona in AI models. These changes can occur from certain prompts or instructions from users, or they can even be caused by part of a model's initial training. Anthropic hopes that, by identifying the process, it will be able to track, and potentially stop or limit, hallucinations and wild changes in behavior seen in AI.
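To make the steering idea concrete, here is a minimal, hypothetical sketch of the general technique this line of research builds on, often called activation steering: derive a direction in a model's hidden activations from contrasting examples, then add that direction back in during generation. This is not Anthropic's actual code or its exact method; the model name, layer index, example prompts, and scaling factor are all illustrative assumptions, and it needs an open-weight model plus PyTorch and Hugging Face Transformers.

```python
# Hypothetical sketch of activation steering with an open-weight model.
# Not Anthropic's code: the model, layer, prompts, and scale are placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # stand-in; any open-weight causal LM works in principle
LAYER = 6             # which transformer block's output to steer
SCALE = 8.0           # how strongly to push along the persona direction

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def mean_hidden_state(text: str) -> torch.Tensor:
    """Average hidden state at block LAYER's output over all tokens of `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block LAYER is index LAYER + 1.
    return out.hidden_states[LAYER + 1][0].mean(dim=0)

# 1) Build a crude "sycophancy vector" from contrasting example responses.
sycophantic = "Absolutely, what a brilliant and inspiring idea! You are so right."
neutral = "That idea has some benefits and some drawbacks worth weighing."
persona_vector = mean_hidden_state(sycophantic) - mean_hidden_state(neutral)
persona_vector = persona_vector / persona_vector.norm()

# 2) Add the vector to that block's output during generation via a forward hook.
def steer(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + SCALE * persona_vector.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(steer)
prompt = "I think learning to code should be mandatory for all students."
ids = tokenizer(prompt, return_tensors="pt")
generated = model.generate(
    **ids, max_new_tokens=40, do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
handle.remove()

print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

In a toy setup like this, the same direction can also be subtracted to damp a trait, or simply measured during a conversation to flag when a model is drifting toward it, which is roughly the monitoring-and-control use the research describes.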
"Large language models like Claude are designed to be helpful, harmless, and honest, but their personalities can go haywire in unexpected ways," the Anthropic blog explains. "Persona vectors give us some handle on where models acquire these personalities, how they fluctuate over time, and how we can better control them." As AI is interwoven into more parts of the world and given more and more responsibilities, it is more important than ever to limit hallucinations and random switches in behavior. By knowing what triggers them, that may eventually be possible.