Google antitrust case: Judge weighs remedy effectiveness against fast-moving AI developments

The National · a day ago

During closing arguments in the penalty portion of the Google antitrust trial on Friday, the judge asked a question about the fast-moving tech world that will likely give pause to legal and business experts.
Federal Judge Amit Mehta, who last year found Alphabet-owned Google liable for maintaining a monopoly and exploiting its search sector dominance, wanted to know how search will evolve as he decides the "remedies", or punishment, for Google.
'Does the government believe there's a market for a new search engine to emerge as we think of it today, even with the remedies in place?' he said, interrupting Department of Justice (DOJ) lawyer David Dahlquist's closing arguments.
Mr Mehta was referring to artificial intelligence. He implied that as he decides how to correct Google's monopoly, he needs to consider how quickly the tech landscape is shifting.
The DOJ's antitrust case against Google is testing the limits of capitalism, profit and competition.
It comes as artificial intelligence threatens to upend the internet search business models that allowed Google to dominate for decades.
Almost all witnesses who spoke at the remedy portion of the trial seemed to acknowledge the speed of change in the tech world.
The potency, promise and problems of AI in the context of existing business models surfaced several weeks ago when Apple executive Eddy Cue made comments from the witness box that briefly sent Google's stock careening.
Mr Cue was responding to a question about user habits and the effect that AI is starting to have on search engine companies like Google. Eventually, his comments segued into a reflection on how technology businesses often struggle to adjust.
'People still are going to need toothpaste 20 years from now, 40 years from now. You may not need an iPhone 10 years from now. As crazy as that sounds,' Mr Cue, the senior vice president of services at Apple, told a lawyer representing Alphabet, owner of Google.
'You have to earn it. You have to develop,' he added, explaining that Apple's metrics showed that for the first time ever, overall searches done through Google seemed to have made a slight dip.
Those comments reverberated throughout tech and legal communities, with some wondering if a remedy sought by the DOJ might be rendered moot by AI. How people search is changing, as AI swallows up website content and siphons off traffic.
Mr Mehta last month sided with the DOJ and ruled that Google's search dominance harmed consumers by leaving them with less choice.
The Justice Department wants Mr Mehta to enact far-reaching penalties that would serve as a warning to other companies.
In court on Friday, Mr Dahlquist, the government lawyer, reiterated the DOJ's desire that Google be prohibited from entering into default search agreements with hardware and device makers.
He also pushed for strong requirements for Google to share search data and analytics with competitors. Perhaps most consequentially, he said Mr Mehta should require that Google divest Chrome, one of the world's most popular web browsers.
'We're here to make sure this cause and the remedies we propose are able to pry open the competition of this market,' Mr Dahlquist argued.
'We understood the assignment, but rather than provide this court with remedies to promote competition, Google provided milquetoast remedies that maintain status quo,' he continued, adding that Google was acting in bad faith to try to maintain its monopoly.
'Despite Google's efforts to avoid facts, those facts, as they've discovered, are stubborn things,' he added, taking a shot at one of the world's most powerful tech companies and its phalanx of lawyers, sitting nearby.
Google's lawyer, John Schmidtlein, didn't mince words in his response.
'Look at how incredibly invasive and broad they are,' he said, referring to the DOJ's proposed remedies, which Google argues 'lack causal connection' to the government's original reasons for bringing the company to court.
'What's the amount of data that a company might need to be able to compete?' he rhetorically asked, criticising one of the DOJ's proposed remedies that Google share search data with potential competitors.
Mr Mehta pushed back, saying that ample witnesses told the court that data would help increase their ability to compete, adding that it would be a 'difficult exercise' to try to address Google's criticism of the search data remedy proposal.
He also asked the DOJ if AI platforms ChatGPT or Perplexity might be eligible to receive data.
'Not today, but it could eventually,' a DOJ lawyer responded. 'They eventually plan to compete with search companies and search indexes.'
Google's own proposed remedies are far lighter than those sought by the DOJ, including a proposal that would give users the ability to change their default search provider at least every 12 months.
The tech giant has also sought to maintain its ability to have contracts with device manufacturers.
'Browser companies like Apple and Mozilla should continue to have the freedom to do deals with whatever search engine they think is best for their users,' Google said.
In late April, Google's chief executive Sundar Pichai made similar arguments to the court, calling the proposed remedies 'too broad' and suggesting that fast-paced AI developments would blunt the DOJ's proposals.
'It would be trivial to reverse engineer and effectively build Google search from the outside,' Mr Pichai added.
Closing arguments were expected to last throughout the day before Mr Mehta deliberates on a potential remedy.

