AI dominates as judge weighs penalties in Google search case
A federal judge grappled Friday with the way artificial intelligence (AI) is rapidly changing the internet, as he weighed what penalties Google will ultimately face for illegally monopolizing search.
Google and the Department of Justice (DOJ) presented their closing arguments following a three-week hearing to determine the proper remedies after the tech giant was found to have improperly maintained its search monopoly through a series of exclusive agreements.
U.S. District Judge Amit Mehta peppered both sides with questions over eight hours Friday, focusing heavily on what AI means for Google and the search market.
The DOJ has argued that Google's dominance over search gives it a leg up in the AI race. It has pushed for more forward-looking remedies, including forcing the company to sell its Chrome browser.
Google has contested this assertion, underscoring the competition it faces in the AI space from the likes of OpenAI's ChatGPT, xAI's Grok, and DeepSeek. It has suggested a much more limited set of remedies that would bar the company from entering into the exclusive agreements the court deemed anticompetitive.
Mehta appeared skeptical of Google's proposed remedies, noting the parties "could have all closed up shop" if he simply needed to issue an injunction blocking the company's exclusive agreements with device manufacturers and browser developers.
However, the judge didn't seem entirely convinced by the DOJ's wide-reaching proposal either, pushing the government to explain how AI fits into the search case.
David Dahlquist, the government's lead attorney, dismissed Google's proposal Friday as 'milquetoast remedies that it knows will maintain the status quo.'
He argued the remedies can go beyond the confines of the search market identified in the case to prevent Google from taking advantage of its existing market power, underscoring the way generative AI could drive more users to its search engine.
'We do not have to have complete blinders as to what's going on in the rest of the world, and we should not,' Dahlquist said.
Google seemed keen to get ahead of these concerns, noting that under its own proposal, it would not enter into exclusive agreements for its AI chatbot Gemini.
'Gen AI technology is influencing how search looks today,' John Schmidtlein, Google's lead attorney, said Friday. 'To the extent the court was concerned that somehow gen AI products could in the future find themselves in the relevant market … we're addressing it.'
AI was central to Google CEO Sundar Pichai's testimony last month, in which he detailed the push to make Google an 'AI-first company.'
'I'm pleased with the progress [on AI], but we have a big gap between us and the market leader in this space,' he said at the time, referring to OpenAI, which recently beat out Google for a deal with Apple.
Since Pichai's appearance on the stand, Google has rolled out a new feature further integrating AI into its search engine. The new 'AI Mode' tab gives users a chatbot-like experience within Google Search.
Mehta separately pressed the two sides Friday on the DOJ's data-sharing and syndication proposals. The government has pushed for Google to share search data and syndicate its search results to boost potential competitors.
When questioned by the judge, the DOJ acknowledged that AI rivals such as OpenAI and Perplexity could also receive access to this data.
Google has resisted almost any form of data-sharing, arguing it exposes the company's intellectual property and poses numerous privacy problems. But Schmidtlein gave some credence to the idea of a 'tailored' approach to syndication Friday.
The search giant is struggling to maintain its hold on its tech empire in the face of dual antitrust cases. Just days before Google was set to appear in court for the remedies hearing, another federal judge ruled the company had an illegal monopoly over advertising technology.
Google ultimately plans to appeal the decisions in both cases, but it has to wrap up remedies first. Mehta has previously said he hopes to rule by August, and the remedies trial in the ad tech case is set for September.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

Related Articles
Yahoo
Prediction: This Artificial Intelligence (AI) Stock Could Be the Next Great Value Play
This value opportunity is a member of the "Magnificent Seven." Although its business model is uncertain, investors are likely underestimating the company's ability to redefine itself.

As most investors know, some stocks in artificial intelligence (AI) have stood out for their outsized gains. The recent returns on stocks like Nvidia and Palantir are a testament to the transformative power of that technology. But those successes do not mean every AI stock sells at a premium. In fact, investors might be surprised to learn that many of these stocks do not command premium valuations, and that lack of buying has made the opportunity particularly compelling in one stock.

Perhaps one of the more surprising AI value plays is Google parent Alphabet (NASDAQ: GOOGL) (NASDAQ: GOOG). Alphabet has applied AI in its applications since 2001, and before the rise of ChatGPT, investors typically considered Alphabet a top AI stock. Today, perceptions are much different. It now sells at a P/E ratio of about 19. That makes it the cheapest stock in the "Magnificent Seven," and many investors would now consider it a value stock.

The uncertainty now surrounding Alphabet makes the low valuation understandable in some respects. OpenAI's ChatGPT seemed to take Alphabet by surprise. Although it responded by launching Google Gemini soon after, Alphabet appears to lag ChatGPT competitively.

ChatGPT also presents a problem for Google Search. Google Search directed users to desired websites based on keywords, and Alphabet derived revenue from this process by selling advertising. Unfortunately for Alphabet, generative AI platforms like ChatGPT merely return information often compiled from multiple sites. While some users may still visit the sites from which AI platforms source material, many never go to the sites, which reduces the ability to sell ads and presumably undermines long-established business models. Consequently, Google Search's market share is now below 90% for the first time in years, according to Oberlo. With 74% of Alphabet's revenue still coming from advertising in the first quarter of 2025, that trend could bode poorly for the company over time.

Nonetheless, other attributes of the company should lead investors to question whether it is oversold at the aforementioned P/E ratio of 19. The AI giant has worked for years to reduce its dependence on advertising. In the year-ago quarter, advertising accounted for 77% of overall revenue, and that percentage dropped over the last year even though ad revenue grew by 8% during that time, which means total revenue grew faster still (the quick check below makes the arithmetic explicit). The most prominent non-advertising enterprise under its umbrella is Google Cloud, which now makes up almost 14% of the company's revenue.

Moreover, Alphabet owns numerous businesses not discussed in its quarterly report, including Verily Life Sciences, Google DeepMind, and Fitbit. Still, one Alphabet-owned enterprise showing particular promise is the autonomous driving company Waymo. A recent funding round valued Waymo at $45 billion, indicating its potential to be a major revenue driver under Alphabet's umbrella.

Alphabet has also invested heavily in itself. In 2025, it pledged to spend $75 billion in capital expenditures (capex). To finance that cost, it has $95 billion in liquidity. It also generated $75 billion in free cash flow over the previous 12 months, a figure that does not yet reflect the planned 2025 capex.
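For readers who want to see how the advertising-mix figures above fit together, here is a minimal back-of-the-envelope sketch in Python. It assumes only the numbers quoted in this article (a 77% ad share a year ago, 74% now, and 8% ad revenue growth); the variable names are ours, and the output is an implication of those figures, not a number Alphabet has reported.

```python
# Back-of-the-envelope check of the revenue mix described above.
# Assumptions (from the article, not company filings): advertising was
# 77% of revenue in the year-ago quarter, 74% in Q1 2025, and ad revenue
# itself grew 8% year over year.

ad_share_prior = 0.77  # ad share of total revenue, year-ago quarter
ad_share_now = 0.74    # ad share of total revenue, Q1 2025
ad_growth = 0.08       # year-over-year growth in ad revenue

# If ad revenue grew 8% while its share of the total shrank, total
# revenue must have grown faster than 8%:
#   total_now / total_prior = (ad_now / ad_share_now) / (ad_prior / ad_share_prior)
#                           = (1 + ad_growth) * ad_share_prior / ad_share_now
implied_total_growth = (1 + ad_growth) * ad_share_prior / ad_share_now - 1
print(f"Implied total revenue growth: {implied_total_growth:.1%}")  # ~12.4%
```

In other words, the shrinking ad share is consistent with the diversification story the article tells: the non-advertising businesses are growing faster than the core ad business.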
That ability to generate cash makes it possible to invest heavily in itself, likely improving its AI technology and making it competitive in other tech-related fields.

Considering Alphabet's low P/E ratio and tremendous resources, investors should likely be adding shares at current levels. The rise of ChatGPT does put pressure on its ad business and may force Alphabet to diversify its revenue sources more quickly than it had planned. But Alphabet is investing tremendous amounts of money back into its business. That investment could improve its AI capabilities, and investors should not yet count it out in this industry. Its ability to fund itself should also make investors optimistic about Alphabet's future. If the company improves its AI and successfully develops new revenue sources like Waymo, investors may find themselves feeling glad they bought Alphabet while it was still a value stock.

Before you buy stock in Alphabet, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Alphabet wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $651,049!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $828,224!* Now, it's worth noting Stock Advisor's total average return is 979% — a market-crushing outperformance compared to 171% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor. See the 10 stocks » *Stock Advisor returns as of May 19, 2025

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Will Healy has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Nvidia, and Palantir Technologies. The Motley Fool has a disclosure policy.

Prediction: This Artificial Intelligence (AI) Stock Could Be the Next Great Value Play was originally published by The Motley Fool

Business Insider
Should you join a startup or Big Tech out of college? An OpenAI engineer weighs in.
Janvi Kalra, an engineer at OpenAI, thinks students should diversify their experiences after college, with at least one internship at a Big Tech firm and another at a startup. That way, she said on an episode of The Pragmatic Engineer podcast, you have a better idea of what career path you should take.

Kalra interned with Microsoft and Google. She then worked for productivity startup Coda before transitioning into her current role at OpenAI. She said both tracks have advantages and disadvantages.

"The way I saw it, the upside of going to Big Tech was, first, you learn how to build reliable software for scale," Kalra said. "It's very different to build something that works, versus build something that works when it's swarmed with millions of requests from around the world and Redis happens to be down at the same time. Very different skills."

Another good thing about Big Tech, she added, was the amount of time she got to work on projects that were under less pressure to immediately succeed.

"Different upside for Big Tech in general was that you do get to work on more moonshot projects that aren't making money today," Kalra said. "They don't have the same existential crisis that startups do."

And then, of course, more practically, were the financial upsides — including potential prestige.

"There are also practical, good reasons to go to Big Tech," Kalra added. "I'd get my green card faster. I'd get paid more on average. And the unfortunate reality, I think, is that the role does hold more weight. People are more excited about hiring an L5 Google engineer versus an L5 from a startup, especially if that startup doesn't become very successful."

Still, Kalra said, there are "great reasons" to go to a startup, like the sheer amount of experience you'll get with programming itself.

"First, you just ship so much code, right?" she said. "There are more problems than people, and so you get access to these zero-to-one greenfield problems that you wouldn't necessarily get at Big Tech maybe where there are more people than problems."

She said another advantage is the wide array of challenges that'll be thrown at you, allowing you to develop expertise on several fronts.

"Second is the breadth of skills — and this is not just in the software engineering space," she said. "Right from a software engineering space, maybe one quarter you're working on a growth hacking front-end feature, and the next quarter you're writing Terraform. But even in terms of the non-technical skills, you get an insight into how the business works."

Startups also afford you more responsibility, along with a better chance of materially affecting the company with your work, she said.

"You just get more agency in what you work on," she said. "You get the opportunity to propose ideas that you think would be impactful for the business and go execute on it."

Given the opportunity, Kalra said it's best to gain experience with both startups and larger firms as early in your career as possible.

"Given that Big Tech and startups are such different experiences and you learn so much at each, it would be more educational to do one startup internship and one Big Tech internship to get a very robust overview of what both experiences are like very early," she said.

Business Insider
ChatGPT is making us weird
The other day, my family group chat lit up when I posed a question about whether it's important to say "please" and "thank you" to ChatGPT when asking it to conduct a niche search or plan out an itinerary. My mother, ever a stickler for manners, said she makes a conscious choice to behave in this way. A choice she said she makes to "keep myself human."

Another loved one later admitted she's been leaning on the chatbot for guidance as she navigates a tricky moment in her marriage. And I couldn't resist my temptation to ask ChatGPT to evaluate how attractive I am after The Washington Post reported that people were asking it for beauty advice. (It said I have "strong, expressive features," then told me to stand up straighter and smile more.)

But I know it's not just my immediate circle: ChatGPT is making everyone behave a little strange. As large language models become fixtures of our digital lives, the ways we engage with them reveal a society in flux, where machines aren't only mimicking human interaction but quietly altering the expectations and norms that govern it.

Business Insider spoke with four professionals who interact with chatbots like OpenAI's GPT models in radically different ways — a sociologist, a psychologist, a digital etiquette coach, and a sex therapist — to explore how the rise of AI is changing how we see each other, how we view ourselves, as well as how it's disrupting our manners and intimate lives. The conversations centered on ChatGPT, since OpenAI's chatbot is quickly becoming the AI world's equivalent of what Google is to search engines, but the professionals said similar conclusions could be drawn for Meta AI, Microsoft Copilot, Anthropic's Claude, or any other large language model on the market today.

A change in the social contract

Digital etiquette consultant and author Elaine Swann said that society has needed to adapt to new social cues as each wave of technology has changed our lives. While we've largely collectively agreed that it's all right to use shorthand in personal email correspondence and rude to take a cellphone call on speakerphone in public, we're still establishing a social code for how to interact with AI bots and agents.

Kelsey Vlamis, a senior reporter at Business Insider, said she's started seeing a chatbot-related change in her personal life. While on vacation in Italy, she said her husband found himself impatient with their tour guide, consciously having to keep himself from interrupting with questions "since that's how he talks to ChatGPT when he is trying to learn something." Of course, he had to hold himself back, Vlamis added, "since that is not, in fact, how we talk to human beings."

Since AI has gained momentum, social media is full of posts asking whether it's appropriate for a spouse to use ChatGPT to write a love note to their partner, or for a worker to rely on an AI agent to fill out a job application on their behalf. The jury's still out on situations like these.

"AI is certainly smarter now, which is great for us, but at the same time, we have to be very careful that it doesn't substitute basically our judgment or empathy," Swann said. "We have to be careful with it, not just utilizing it as our sole source of information, but also making sure that we put a mirror up to ourselves in how we use it, and running its suggestions by people that we know and care about."

Maintaining our baseline levels of respect — not just for each other, but the world around us — is also key, Swann said.
After OpenAI CEO Sam Altman posted on X in late April that it costs "tens of millions of dollars" for the company to process niceties like "please" and "thank you" directed toward ChatGPT, she stressed that it's up to the company to make processing those statements more cost-effective, not up to users to stop being polite.

"This is the world that we create for ourselves," Swann said. "And AI should also understand that this is how we speak to one another, because we're teaching it to give that back to us."

Altman, for his part, said the massive amount of funds used on polite requests toward ChatGPT is money "well spent."

Exacerbated biases

Laura Nelson, an associate professor of sociology at the University of British Columbia, said that because the world's most popular chatbots are created by American companies, written by US-based programmers, and trained primarily on content written in the English language, they have deeply entrenched biases that are often seen in Western cultures.

"It's really important to keep in mind that it's a particular world view that these algorithms have based their training data on," Nelson said.

So if you ask ChatGPT to draw you a picture of a plate of breakfast, it'll conjure typical North American foods: bacon, eggs, sausage, and toast. It describes a bottle of wine as a "classic and thoughtful gift," though in many cultures, alcohol is rarely consumed, and a bottle would make a tone-deaf present.

While those examples are relatively harmless, the bots also exacerbate more insidious and potentially damaging biases. A 2021 study published in Psychology & Marketing found that people prefer AI to be anthropomorphized as female in their devices, like it is in most pop culture representations, because it makes the technology seem more human. However, the study found that preference may be inadvertently entrenching the objectification of women. There have also been numerous reports that lonely, mostly male, users may verbally abuse or degrade their AI companions.

Business Insider previously reported that artificial intelligence is also rife with discriminatory bias due to the data it's trained on, and ChatGPT in particular showed racial bias when screening résumés for jobs, over-selecting Asian women candidates and under-selecting Black men.

While these biases may not immediately change our behavior, they can impact our thinking and the ways we operate as a society, Nelson said. And if ChatGPT or other AI applications are implemented into our decision-making, whether in our personal lives, in the workplace, or at the legal level, it'll have wide-reaching effects we haven't even considered yet.

"There's just no question that AI is going to reflect our biases — our collective biases — back to it," Nelson said. "But there are a lot of people interacting with these bots, and we have no data to suggest what the global trends are, or the effects it's going to have long-term. It's a tricky thing to get a handle on."

A largely untraced social shift

Concrete data about the societal shift caused by AI is hard to come by, but the companies behind the tech know something is happening. Many of them have dedicated teams to figure out what effect their technology has on users, but their publicly available findings aren't peer-reviewed like a typical scientific study would be.

OpenAI announced that a recent update to the GPT-4o model had a hiccup. It was "noticeably more sycophantic" than prior models, the company said in a press release, and while it passed OpenAI's self-described "vibe check" and safety testing, the company rolled it back after realizing its programming to please the user could fuel anger, urge impulsive actions, or reinforce negative emotions "in ways that were not intended."

The company's announcement highlighted that OpenAI is keenly aware that the various AI applications gaining momentum online — from digital romantic partners to study buddies to gift-suggesting elves — have also started to have creeping effects on human emotions and behavior.

When reached for comment, a spokesperson for OpenAI directed Business Insider to the company's recent statements on sycophancy in GPT-4o and an early study of emotional well-being. OpenAI's research, conducted with users over the age of 18, found that emotional engagement with the chatbot is rare. However, heavy users were more likely to report an emotional connection to the bot, and those who had personal conversations with ChatGPT were more likely to report feelings of loneliness.

An Anthropic spokesperson said the company has a dedicated research team, Societal Impacts, which is analyzing Claude usage, how AI is being used across jobs, and what values AI models have. Representatives for Meta and Microsoft did not respond to requests for comment.

Behavioral risks and rewards

Nick Jacobson, an associate professor of psychiatry at Dartmouth's Center for Technology and Behavioral Health, conducted the first trial study delivering psychotherapy to clinical populations using generative AI. His research found that a carefully programmed chatbot can be a helpful therapeutic tool for people suffering from depression, anxiety, and eating disorders. Engagement among patients in the study rivaled that of in-person therapy; patients saw a significant reduction in the severity of their symptoms and, when measured using the same test as for human providers, reported bonding with their therapeutic chatbot at an intensity similar to that with a human therapist.

"Folks were really developing this strong, working bond with their bot," Jacobson said, a factor that is key to a productive therapeutic relationship.

However, most bots aren't programmed with the care and precision that Jacobson's was, so those emotional bonds could be developed with an AI that doesn't have the skills to handle its users' emotional needs in a productive way.

"Nearly every foundational model will act in ways that are profoundly unsafe to mental health, in various ways, shapes, and forms, at rates that are totally unacceptable," Jacobson said. "But there are so many people that are using them for things like therapy and just plain companionship that it's becoming a real problem — I think folks should handle this with greater care than I think they are."

Emma J. Smith, a relationship and sex therapist, said she believes in-person therapy comes with unique benefits that can't be replicated by AI, but she sometimes recommends using chatbots for anxious clients to practice social interactions in a low-stakes environment, "so if it goes badly, or you get stuck, there's no pressure."

"But some of the drawbacks are, like anything really, if it becomes a mechanism to avoid human interaction, or if it is taking you away from going out and being in the world," Smith said. "Video games are probably fine for a lot of people, and then there are some people that it takes over, and then they're missing out on their non-virtual life because they're too involved. I can see that that would be a problem with these bots, but because this is so new, we don't know what we don't know."

While the results of his trial were promising, Jacobson warned that the large language model used in his study was carefully trained for years by some of the most prominent scholars in the psychiatric field, unlike most "therapy" bots available online.

"This has inherently got a lot more danger than a lot of folks are necessarily aware of," Jacobson said. "There's probably a great deal of good that can happen from this, but there's a great deal we don't know, like for example, when folks are turning to these things for companionship, does that actually enhance their ability to practice in social settings and build human bonds, or do folks actually further withdraw and replace what would be otherwise human relationships with these parasocial relationships with these chatbots?"

Jacobson is particularly concerned about AI's impact on developmental processes among younger people who haven't grown up with old-school social norms and habits. While testifying before the Senate Commerce Committee in early May about child safety in the AI era, Altman said he would not want his son to have a best friend bond with an AI bot, adding that children require "a much higher level of protection" than adults using AI tools.

"We spent years and years focusing predominantly on safety, so it's very concerning to me how many people are jumping into the AI space in new ways, and just shipping it," Jacobson said. "And in my mind, that's acting quite irresponsibly. You know, a lot of folks in Silicon Valley want to move fast and break things, but in this case, they're not breaking things — they're breaking people."