Google makes case for keeping Chrome browser

Straits Times, 2 days ago

A judge is determining the future of Google's Chrome browser after the tech titan was deemed in 2024 to be maintaining an illegal monopoly in online search. PHOTO: REUTERS
WASHINGTON - Google on May 30 urged a US judge to reject the notion of making it spin off its Chrome browser to weaken its dominance in online search.
Rival attorneys made their final arguments before US District Court Judge Amit Mehta, who is considering 'remedies' to impose after his landmark 2024 ruling that Google maintained an illegal monopoly in search.
US government attorneys have called on Judge Mehta to order Google to divest itself of the Chrome browser, contending that artificial intelligence is poised to ramp up the tech giant's dominance as the go-to window into the internet.
They also want Google barred from agreements with partners such as Apple and Samsung to distribute its search tools, agreements that were the focus of the suit against the Silicon Valley internet giant.
Three weeks of testimony ended early in May, with May 30 devoted to closing arguments on points of law before Judge Mehta in a Washington courtroom.
Mr John Schmidtlein, an attorney for Google, told Judge Mehta that there was no evidence presented showing people would have opted for a different search engine if no exclusivity deals had been in place.
Mr Schmidtlein noted that Verizon installed Chrome on smartphones even though the US telecom titan owned the Yahoo! search engine and was not bound by a contract with Google.
Of the 100 or so witnesses heard at trial, not one said 'if I had more flexibility, I would have installed Bing', Microsoft's rival search engine, the Google attorney told the judge.
Department of Justice attorney David Dahlquist countered that Apple, which was paid billions of dollars to make Google the default search engine on iPhones, 'repeatedly asked for more flexibility' but was denied by Google.
Google contends that the US has gone far beyond the scope of the suit by recommending a spinoff of Chrome and holding open the option to force a sale of its Android mobile operating system.
The prospect of Chrome being hobbled or spun off comes as rivals such as Microsoft, OpenAI's ChatGPT and Perplexity put generative artificial intelligence to work fetching information from the internet in response to user queries.
The online search antitrust suit was filed against Google some five years ago, before ChatGPT's debut triggered the current AI fervour.
Google is investing heavily to be among the leaders in AI, and is weaving the technology into search and other online offerings. AFP

Related Articles

Samsung nears wide-ranging deal with Perplexity for AI features

Business Times, 3 hours ago

[LOS ANGELES] Samsung Electronics is nearing a wide-ranging deal to invest in Perplexity AI and put search technology from the artificial intelligence (AI) startup at the forefront of the South Korean company's devices.

The two companies are in talks to preload Perplexity's app and assistant on upcoming Samsung devices and integrate the startup's search features into the Samsung web browser, according to sources with knowledge of the matter. The firms have also discussed weaving Perplexity's technology into Samsung's Bixby virtual assistant, said the sources, who asked not to be identified because the talks are private.

Samsung is planning to announce the Perplexity integrations as early as this year, the sources said, with the goal of including the service as a default assistant option in the Galaxy S26 phone line that's slated to launch in the first half of 2026. However, the specific details have not been finalised and could still change.

The tech giant is also expected to be one of the biggest investors in a new round of funding for Perplexity, the sources said. The startup is in advanced discussions to raise US$500 million at a US$14 billion valuation, Bloomberg News has reported.

The broad tie-up may help Samsung reduce its reliance on Alphabet's Google and pave the way for it to work with a mix of AI developers, similar to Apple's strategy for its devices and services. For Perplexity, the arrangement would mark its biggest mobile partnership to date and follows a recent integration deal with Motorola. Representatives for Samsung and Perplexity declined to comment.

The two companies entered discussions about a partnership earlier this year, Bloomberg News reported in April. In recent weeks, the two sides met in South Korea and are now closing in on an agreement, the sources said.

Samsung and Perplexity have also discussed building an AI-infused operating system and an AI agents app that can tap into functionality from Perplexity and a range of other AI assistants, the sources said.

Apple has also shown interest in working with Perplexity. The iPhone maker has discussed using Perplexity as an alternative to Google Search as well as a substitute for ChatGPT integration in the Siri voice assistant, Bloomberg News has reported. 'We have been pretty impressed with what Perplexity has done, so we've started some discussions with them about what they are doing,' Eddy Cue, Apple's senior vice-president of services, said during recent testimony at a Google antitrust trial. It's unclear how Perplexity's relationship with Samsung, one of Apple's fiercest rivals, would affect that. BLOOMBERG

Job interviews enter a strange new world with AI that talks back

Straits Times, 4 hours ago

Even as AI handles more of the hiring process, most companies selling the technology still view it as a tool for gathering information, not making the final call. PHOTO: REUTERS

NEW YORK - For better or worse, the next generation of job interviews has arrived: Employers are now rolling out artificial intelligence simulating live, two-way screener calls using synthetic voices.

Start-ups like Apriora, HeyMilo AI and Ribbon all say they're seeing swift adoption of their software for conducting real-time AI interviews over video. Job candidates converse with an AI 'recruiter' that asks follow-up questions, probes key skills and delivers structured feedback to hiring managers. The idea is to make interviewing more efficient for companies – and more accessible for applicants – without requiring recruiters to be online around the clock.

'A year ago this idea seemed insane,' said Arsham Ghahramani, co-founder and chief executive officer of Ribbon, a Toronto-based AI recruiting start-up that recently raised US$8.2 million (S$10.6 million) in a funding round led by Radical Ventures. 'Now it's quite normalised.'

Employers are drawn to the time savings, especially if they're hiring at high volume and running hundreds of interviews a day. And job candidates – especially those in industries like trucking and nursing, where schedules are often irregular – may appreciate the ability to interview at odd hours, even if a majority of Americans polled in 2024 by Consumer Reports said they were uncomfortable with the idea of algorithms grading their video interviews.

At Propel Impact, a Canadian social impact investing nonprofit, a shift to AI screener interviews came about because of the need to scale up the hiring process. The organisation had traditionally relied on written applications and alumni-conducted interviews to assess candidates. But with plans to bring on more than 300 fellows this year, that approach quickly became unsustainable. At the same time, the rise of ChatGPT was diluting the value of written application materials. 'They were all the same,' said Cheralyn Chok, Propel's co-founder and executive director. 'Same syntax, same patterns.'

Technology allowing AI to converse with job candidates on a screen has been in the works for years. But it wasn't until the public release of large language models like ChatGPT in late 2022 that developers began to imagine – and build – something more dynamic. Ribbon was founded in 2023 and began selling its offering the following year. Mr Ghahramani said the company signed nearly 400 customers in just eight months. HeyMilo and Apriora launched around the same time and also report fast growth, though each declined to share customer counts.

Technical stumbles

Even so, the rollout hasn't been glitch-free. A handful of clips circulating on TikTok show interview bots repeating phrases or misinterpreting simple answers. One widely shared example involved an AI interviewer created by Apriora repeatedly saying the phrase 'vertical bar pilates'. Aaron Wang, Apriora's co-founder and CEO, attributed the error to a voice model misreading the term 'Pilates'. He said the issue was fixed promptly and emphasised that such cases are rare. 'We're not going to get it right every single time,' he said. 'The incident rate is well under 0.001 per cent.'

Braden Dennis, who has used chatbot technology to interview candidates for his AI-powered investment research start-up FinChat, noted that AI sometimes struggles when candidates ask specific follow-up questions. 'It is definitely a very one-sided conversation,' he said. 'Especially when the candidate asks questions about the role. Those can be tricky to field from the AI.'

Start-ups providing the technology emphasised their approach to monitoring and support. HeyMilo maintains a 24/7 support team and automated alerts to detect issues like dropped connections or failed follow-ups. 'Technology can fail,' CEO Sabashan Ragavan said, 'but we've built systems to catch those corner cases.' Ribbon has a similar protocol: any time a candidate clicks a support button, an alert is triggered that notifies the CEO.

While the videos of glitches are a bad look for the sector, Mr Ghahramani said he sees the TikToks making fun of the tools as a sign the technology is entering the mainstream.

Preparing job applicants

Candidates applying to FinChat, which uses Ribbon for its screener interviews, are notified up front that they'll be speaking to an AI and that the team is aware it may feel impersonal. 'We let them know when we send them the link to complete it that we know it is a bit dystopian and takes the "human" out of human resources,' Mr Dennis said. 'That part is not lost on us.'

Still, he said, the asynchronous format helps widen the talent pool and ensures strong applicants aren't missed. 'We have had a few folks drop out of the running once I sent them the AI link,' Mr Dennis said. 'At the end of the day, we are an AI company as well, so if that is a strong deterrent then that's OK.'

Propel Impact prepares candidates by communicating openly about its reasons for using AI in interviews, while hosting information sessions led by humans to maintain a sense of connection with candidates. 'As long as companies continue to offer human touch points along the way, these tools are going to be seen far more frequently,' Ms Chok said.

Regulators have taken notice. While AI interview tools in theory promise transparency and fairness, they could soon face more scrutiny over how they score candidates – and whether they reinforce bias at scale. Illinois now requires companies to disclose whether AI is analysing interview videos and to get candidates' consent, and New York City mandates annual bias audits for any automated hiring tools used by local employers.

Beyond screening calls

Though AI interviewing technology is mainly being used for initial screenings, Ribbon's Mr Ghahramani said 15 per cent of the interviews on its platform now happen beyond the screening stage, up from just 1 per cent a few months ago. This suggests customers are using the technology in new ways. Some employers are experimenting with AI interviews in which they can collect compensation expectations or feedback on the interview process – potentially awkward conversations that some candidates, and hiring managers, may prefer to see delegated to a bot. In a few cases, AI interviews are being used for technical evaluations or even to replace second-round interviews with a human. 'You can actually compress stages,' said Mr Wang. 'That first AI conversation can cover everything from "Are you authorised to work here?" to fairly technical, domain-specific questions.'

Even as AI handles more of the hiring process, most companies selling the technology still view it as a tool for gathering information, not making the final call. 'We don't believe that AI should be making the hiring decision,' Mr Ragavan said. 'It should just collect data to support that decision.' BLOOMBERG

Researchers create chatbot to teach law class in university, but it kept messing up

Straits Times, a day ago

Despite the enthusiasm, there is limited research testing how well AI performs in teaching environments, especially within structured university courses. PHOTO: ISTOCKPHOTO

'AI tutors' have been hyped as a way to revolutionise education. The idea is that generative artificial intelligence (AI) tools (such as ChatGPT) could adapt to any teaching style set by a teacher. The AI could guide students step-by-step through problems and offer hints without giving away answers. It could then deliver precise, immediate feedback tailored to the student's individual learning gaps.

Despite the enthusiasm, there is limited research testing how well AI performs in teaching environments, especially within structured university courses. In our new study, we developed our own AI tool for a university law class. We wanted to know: can it genuinely support personalised learning, or are we expecting too much?

Our study

In 2022, we developed SmartTest, a customisable educational chatbot, as part of a broader project to democratise access to AI tools in education. Unlike generic chatbots, SmartTest is purpose-built for educators, allowing them to embed questions, model answers and prompts. This means the chatbot can ask relevant questions, deliver accurate and consistent feedback and minimise hallucinations (or mistakes). SmartTest is also instructed to use the Socratic method, encouraging students to think rather than spoon-feeding them answers.

We trialled SmartTest over five test cycles in a criminal law course (which one of us was coordinating) at the University of Wollongong in 2023. Each cycle introduced varying degrees of complexity. The first three cycles used short hypothetical criminal law scenarios (for example, is the accused guilty of theft in this scenario?). The last two cycles used simple short-answer questions (for example, what is the maximum sentencing discount for a guilty plea?).

An average of 35 students interacted with SmartTest in each cycle across several criminal law tutorials. Participation was voluntary and anonymous, with students interacting with SmartTest on their own devices for up to 10 minutes per session. Students' conversations with SmartTest – their attempts at answering the question, and the immediate feedback they received from the chatbot – were recorded in our database. After the final test cycle, we surveyed students about their experience.

What we found

SmartTest showed promise in guiding students and helping them identify gaps in their understanding. However, in the first three cycles (the problem-scenario questions), between 40 per cent and 54 per cent of conversations had at least one example of inaccurate, misleading or incorrect feedback. When we shifted to the much simpler short-answer format in cycles four and five, the error rate dropped significantly, to between 6 per cent and 27 per cent. However, even in these best-performing cycles, some errors persisted. For example, sometimes SmartTest would affirm an incorrect answer before providing the correct one, which risks confusing students.

A significant revelation was the sheer effort required to get the chatbot working effectively in our tests. Far from a time-saving silver bullet, integrating SmartTest involved painstaking prompt engineering and rigorous manual assessments from educators (in this case, us). A minimal sketch of what such an educator-configured prompt can look like appears below.
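SmartTest's implementation is not public, so the following is only a rough sketch of the general pattern the researchers describe: an educator embeds a question and a model answer into a system prompt, and the underlying model (the study reports using ChatGPT-4) is instructed to tutor in a Socratic style rather than reveal the answer. The sample question comes from the article; the model answer, the prompt wording and the use of the OpenAI chat API are illustrative assumptions, not details from the study.

```python
# Rough sketch only: SmartTest's actual implementation is not public.
# The question below appears in the article; the model answer, prompt
# wording and OpenAI client usage are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The educator embeds a question and a model answer. The model answer is
# used only to judge the student's attempts, never shown verbatim.
QUESTION = "What is the maximum sentencing discount for a guilty plea?"
MODEL_ANSWER = (  # placeholder answer for illustration
    "Up to 25 per cent, where the plea is entered at the earliest opportunity."
)

SYSTEM_PROMPT = f"""You are a Socratic criminal law tutor.
Question for the student: {QUESTION}
Model answer (do not reveal it verbatim): {MODEL_ANSWER}

Rules:
- Never give the answer away; ask short guiding questions instead.
- Compare each student attempt against the model answer and point out
  what is missing or wrong.
- Do not affirm an incorrect answer.
- Keep feedback brief, specific and encouraging."""

def tutor_reply(history: list[dict]) -> str:
    """Return the tutor's next turn given the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *history],
    )
    return response.choices[0].message.content

# Example exchange: the tutor should probe rather than hand over the answer.
history = [{"role": "user", "content": "Is the discount 10 per cent?"}]
print(tutor_reply(history))
```

Even in a toy version like this, each rule in the system prompt tends to emerge only after repeated trial and error against real student answers, which is the prompt-engineering labour the authors describe.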
This paradox – where a tool promoted as labour-saving demands significant labour – calls into question its practical benefits for already time-poor educators.

Inconsistency is a core issue

SmartTest's behaviour was also unpredictable. Under identical conditions, it sometimes offered excellent feedback and at other times provided incorrect, confusing or misleading information. For an educational tool tasked with supporting student learning, this raises serious concerns about reliability and trustworthiness.

To assess if newer models improved performance, we replaced the underlying generative AI powering SmartTest (ChatGPT-4) with newer models such as ChatGPT-4.5, which was released in 2025. We tested these models by replicating instances where SmartTest provided poor feedback to students in our study. The newer models did not consistently outperform older ones. Sometimes, their responses were even less accurate or useful from a teaching perspective. As such, newer, more advanced AI models do not automatically translate to better educational outcomes.

What does this mean for students and teachers?

The implications for students and university staff are mixed. Generative AI may support low-stakes, formative learning activities. But in our study, it could not provide the reliability, nuance and subject-matter depth needed for many educational contexts.

On the plus side, our survey results indicated students appreciated the immediate feedback and conversational tone of SmartTest. Some mentioned it reduced anxiety and made them more comfortable expressing uncertainty. However, this benefit came with a catch: incorrect or misleading answers could just as easily reinforce misunderstandings as clarify them.

Most students (76 per cent) preferred having access to SmartTest rather than no opportunity to practise questions. However, when given the choice between receiving immediate feedback from AI or waiting one or more days for feedback from human tutors, only 27 per cent preferred AI. Nearly half preferred human feedback with a delay, and the rest were indifferent. This suggests a critical challenge: students enjoy the convenience of AI tools, but they still place higher trust in human educators.

A need for caution

Our findings suggest generative AI should still be treated as an experimental educational aid. The potential is real – but so are the limitations. Relying too heavily on AI without rigorous evaluation risks compromising the very educational outcomes we are aiming to enhance.

Armin Alimardani is senior lecturer in law and emerging technologies at the University of Wollongong, in Australia, and Emma A. Jane is associate professor, School of Arts and Media, UNSW Sydney. This article was first published in The Conversation.
