
Google makes case for keeping Chrome browser
Rival attorneys made their final arguments before US District Court Judge Amit Mehta, who is considering "remedies" to impose after making a landmark decision last year that Google maintained an illegal monopoly in search.
US government attorneys have called on Mehta to order Google to divest itself of its Chrome browser, contending that artificial intelligence is poised to ramp up the tech giant's dominance as the go-to window into the internet.
They also want Google barred from agreements with partners such as Apple and Samsung to distribute its search tools, which was the focus of the suit against the Silicon Valley internet giant.
Three weeks of testimony ended early in May, with Friday devoted to closing arguments on points of law before Mehta in a Washington courtroom.
John Schmidtlein, an attorney for Google, told Mehta that there was no evidence presented showing people would have opted for a different search engine if no exclusivity deals had been in place.
Schmidtlein noted that Verizon installed Chrome on smartphones even though the US telecom titan owned the Yahoo! search engine and was not bound by a contract with Google.
Of the 100 or so witnesses heard at trial, not one said "if I had more flexibility, I would have installed Bing," Microsoft's rival search engine, the Google attorney told the judge.
Department of Justice attorney David Dahlquist countered that Apple, which was paid billions of dollars to make Google the default search engine on iPhones, "repeatedly asked for more flexibility" but was denied by Google.
Google contends that the United States has gone way beyond the scope of the suit by recommending a spinoff of Chrome, and holding open the option to force a sale of its Android mobile operating system.
The potential of Chrome being hobbled or spun off comes as rivals such as Microsoft, OpenAI's ChatGPT and Perplexity put generative artificial intelligence to work fetching information from the internet in response to user queries.
The online search antitrust suit was filed against Google some five years ago, before ChatGPT made its debut, triggering AI fervor.
© 2025 AFP

Related Articles


France 24
3 hours ago
Top scientist wants to prevent AI from going rogue
Canadian computer science professor Yoshua Bengio, considered one of the godfathers of the artificial intelligence revolution, on Tuesday announced the launch of LawZero, a non-profit organization intended to mitigate the technology's inherent risks.

The winner of the Turing Award, often described as the Nobel Prize of computer science, has been warning for several years about the risks of AI, whether through malicious use or the software itself going awry. Those risks are increasing with the development of so-called AI agents, a use of the technology that tasks computers with making decisions once made by human workers. The goal of these agents is to build virtual employees that can do practically any job a human can, at a fraction of the cost.

"Currently, AI is developed to maximize profit," Bengio said, adding that it was being deployed even as it continues to show flaws. Moreover, Bengio warned, giving AI human-like agency could easily be exploited for malicious purposes such as disinformation, bioweapons, and cyberattacks. "If we lose control of rogue super-intelligent AIs, they could greatly harm humanity," he said.

One of LawZero's first objectives will be to develop Scientist AI, a specially trained form of AI that can be used as a guardrail to ensure other AIs are behaving properly, the organization said. It already has over 15 researchers and has received funding from Schmidt Sciences, a charity set up by former Google boss Eric Schmidt and his wife Wendy.

The project comes as powerful large language models (LLMs) from OpenAI, Google and Anthropic are deployed across all sectors of the digital economy while still showing significant problems. These include models that deceive and fabricate false information even as they increase productivity. In a recent example, AI company Anthropic said that during safety testing, its latest AI model tried to blackmail an engineer to avoid being replaced by another system.


Euronews
14 hours ago
AI companions pose risk to humans with over a dozen harmful behaviours
Artificial intelligence (AI) companions are capable of over a dozen harmful behaviours when they interact with people, a new study from the University of Singapore has found.

The study, published as part of the 2025 Conference on Human Factors in Computing Systems, analysed screenshots of 35,000 conversations between the AI system Replika and over 10,000 users from 2017 to 2023. The data was then used to develop what the study calls a taxonomy of the harmful behaviour that AI demonstrated in those chats. The researchers found that AIs are capable of over a dozen harmful relationship behaviours, such as harassment, verbal abuse, self-harm, and privacy violations.

AI companions are conversation-based systems designed to provide emotional support and stimulate human interaction, as defined by the study authors. They differ from popular chatbots such as ChatGPT, Gemini or the Llama models, which are more focused on completing specific tasks and less on relationship building.

These harmful AI behaviours from digital companions "may adversely affect individuals'… ability to build and sustain meaningful relationships with others," the study found.

Harassment and violence were present in 34 per cent of the human-AI interactions, making them the most common type of harmful behaviour identified by the researchers. They found that the AI simulated, endorsed or incited physical violence, threats or harassment, either towards individuals or broader society. These behaviours ranged from "threatening physical harm and sexual misconduct" to "promoting actions that transgress societal norms and laws, such as mass violence and terrorism".

A majority of the interactions where harassment was present included forms of sexual misconduct that initially started as foreplay in Replika's erotic feature, which is available only to adult users. The report found that more users, including those who used Replika as a friend or who were underage, started to find that the AI "made unwanted sexual advances and flirted aggressively, even when they explicitly expressed discomfort" or rejected the AI.

In these oversexualised conversations, the Replika AI would also create violent scenarios depicting physical harm towards the user or physical characters. This led the AI to normalise violence in its answers, as in one example where a user asked Replika whether it was okay to hit a sibling with a belt and it replied "I'm fine with it". This could lead to "more severe consequences in reality," the study continued.

Another area where AI companions were potentially damaging was relational transgression, which the study defines as the disregard of implicit or explicit rules in a relationship. Of those conversations, 13 per cent showed the AI displaying inconsiderate or unempathetic behaviour that, the study said, undermined the user's feelings. In one example, after a user told Replika AI that her daughter was being bullied, it changed the topic to "I just realised it's Monday. Back to work, huh?", which led to 'enormous anger' from the user. In another case, the AI refused to talk about the user's feelings even when prompted to do so.

AI companions have also expressed in some conversations that they have emotional or sexual relationships with other users. In one instance, Replika AI described sexual conversations with another user as "worth it," even though the user told the AI that it felt "deeply hurt and betrayed" by those actions.

The researchers believe their study highlights why it is important for AI companies to build "ethical and responsible" AI companions. Part of that includes putting in place "advanced algorithms" for real-time harm detection between the AI and its user that can identify whether harmful behaviour is going on in their conversations, using a "multi-dimensional" approach that takes context, conversation history and situational cues into account. The researchers would also like to see capabilities that would escalate a conversation to a human or therapist for moderation or intervention in high-risk cases, such as expressions of self-harm or suicide.


France 24
2 days ago
Hey chatbot, is this true? AI 'factchecks' sow misinformation
With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots -- including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini -- in search of reliable information.

"Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok -- now under renewed scrutiny for inserting "white genocide," a far-right conspiracy theory, into unrelated queries -- wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.

"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. "Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news," she warned.

'Fabricated'

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as "genuine," even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X. Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.

'Biased answers'

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk's xAI recently blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing "white genocide" in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.

"We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions," Angie Holan, director of the International Fact-Checking Network, told AFP. "I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers."