AI is going to get cheaper and applicable across domains: Sriram Raghavan, Vice President at IBM Research

The Hindu · 4 days ago
Sriram Raghavan, Vice President at IBM Research, said AI is going to get cheaper and broadly applicable across domains.
'The current era of generative AI is exciting because of the ability to build and rapidly adopt, which we never had in previous generations of AI,' he said on Saturday in a conversation with R. Anand, Director, Chennai International Centre (CIC), on the topic 'Demystifying the AI Revolution'.
'The amount of progress in making AI cheaper is enormous. You should assume that whatever task you are doing will get cheaper every six months for the next five plus years,' Mr. Raghavan said.
'There is almost no domain where you should assume that AI is not eventually going to make an impact. The rate and pace might vary across sectors,' he said.
For enterprises looking to adopt AI, Mr. Raghavan advised them to pick a domain in which they would have a huge multiplicative factor.
Going deep in one domain, rather than spreading thin across many, has been the factor distinguishing companies that have started to make real use of AI from those running hundreds of pilots, he said.
Enterprises should treat AI as a transformation project, reimagine how work is done, and secure top-down commitment, Mr. Raghavan said.
He also said that enterprises should not sit back and wait, but jump into the AI game while at the same time playing the long game.
Data would be the key differentiator for enterprises, Mr. Raghavan said.
He said domains with mature data would adopt AI much faster, while adoption would be much slower in safety-critical areas.
Mr. Raghavan said he was dismissive of the narrative that AI was superintelligent and would take over the world.
He said the key risks were misinformation, deepfakes and cybersecurity threats, which regulations should address.
Mr. Raghavan said that, like any technology, AI would also have an impact on jobs.
'The reason why it feels different is earlier technologies automated physical aspects of human beings, but AI is trying to substitute cognitive aspects. Also since it is digital the adoption would be faster, so reskilling would become a challenge,' he said.


Related Articles

Humans Outshine Google And OpenAI AI At Prestigious Math Olympiad Despite Record Scores

NDTV · 23 minutes ago

At the International Mathematical Olympiad (IMO) held this month in Queensland, Australia, human participants triumphed over cutting-edge artificial intelligence models developed by Google and OpenAI. For the first time, these AI models achieved gold-level scores in the prestigious competition. Google announced on Monday that its advanced Gemini chatbot successfully solved five out of six challenging problems. However, neither Google's Gemini nor OpenAI's AI reached a perfect score. In contrast, five talented young mathematicians under the age of 20 achieved full marks, outperforming the AI models.

The IMO, regarded as the world's toughest mathematics competition for students, showed that human intuition and problem-solving skills still hold an edge over AI in complex reasoning tasks. The result suggests that while generative AI is advancing rapidly, it has yet to surpass the brightest human minds in all areas of intellectual competition.

"We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points, a gold medal score," the US tech giant cited IMO president Gregor Dolinar as saying. "Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow."

Around 10 percent of human contestants won gold-level medals, and five received perfect scores of 42 points. US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result "achieved a longstanding grand challenge in AI" at "the world's most prestigious math competition", OpenAI researcher Alexander Wei wrote on social media. "We evaluated our models on the 2025 IMO problems under the same rules as human contestants," he said. "For each problem, three former IMO medalists independently graded the model's submitted proof."

Google achieved a silver-medal score at last year's IMO in the British city of Bath, solving four of the six problems. That took two to three days of computation, far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, it said.

The IMO said tech companies had "privately tested closed-source AI models on this year's problems", the same ones faced by 641 competing students from 112 countries. "It is very exciting to see progress in the mathematical capabilities of AI models," said IMO president Dolinar. Contest organisers could not verify how much computing power had been used by the AI models or whether there had been human involvement, he cautioned.

Trump to outline AI priorities amid tech battle with China

Mint · 30 minutes ago

• Trump to deliver what aides call a major speech on AI priorities
• White House AI and crypto czar David Sacks will join his co-hosts on the 'All-In' podcast to highlight AI efforts
• Trump expected to take more actions in upcoming weeks to help tech giants power AI industry

By Jarrett Renshaw and Alexandra Alper

July 23 - The Trump administration is set to release a new artificial intelligence blueprint on Wednesday that aims to relax American rules governing the industry at the center of a technological arms race between economic rivals the U.S. and China. President Donald Trump will mark the plan's release with a speech outlining the importance of winning an AI race that is increasingly seen as a defining feature of 21st-century geopolitics, with both China and the U.S. investing heavily in the industry to secure economic and military superiority.

According to a summary seen by Reuters, the plan calls for the export of U.S. AI technology abroad and a crackdown on state laws deemed too restrictive to let it flourish, a marked departure from former President Joe Biden's "high fence" approach that limited global access to coveted AI chips.

Top administration officials such as Secretary of State Marco Rubio and White House National Economic Adviser Kevin Hassett are also expected to join the event titled "Winning the AI Race," organized by White House AI and crypto czar David Sacks and his co-hosts on the "All-In" podcast, according to an event schedule reviewed by Reuters. Trump may incorporate some of the plan's recommendations into executive orders that will be signed ahead of his speech, according to two sources familiar with the plans. Trump directed his administration in January to develop the plan. The event will be hosted by the Hill and Valley Forum, an informal supper club whose deep-pocketed members helped propel Trump's campaign and sketched out a road map for his AI policy long before he was elected.

Trump is expected to take additional actions in the upcoming weeks that will help Big Tech secure the vast amounts of electricity it needs to power the energy-guzzling data centers needed for the rapid expansion of AI, Reuters previously reported. U.S. power demand is hitting record highs this year after nearly two decades of stagnation as AI and cloud computing data centers balloon in number and size across the country.

The new AI plan will seek to bar federal AI funding from going to states with tough AI rules and ask the Federal Communications Commission to assess whether state laws conflict with its mandate, according to the summary. The Trump administration will also promote open-source and open-weight AI development and "export American AI technologies through full-stack deployment packages" and data center initiatives led by the Commerce Department, according to the summary.

Trump is laser-focused on removing barriers to AI expansion, in stark contrast to Biden, who feared U.S. adversaries like China could harness AI chips produced by companies like Nvidia and AMD to supercharge their militaries and harm allies. Biden, who left office in January, imposed a raft of restrictions on U.S. exports of AI chips to China and other countries that Washington feared could divert the semiconductors to America's top global rival. Trump rescinded Biden's executive order aimed at promoting competition, protecting consumers and ensuring AI was not used for misinformation. He also rescinded Biden's so-called AI diffusion rule, which capped the amount of American AI computing capacity that some countries were allowed to obtain via U.S. AI chip imports.

In May, Trump announced deals with the United Arab Emirates that gave the Gulf country expanded access to advanced artificial intelligence chips from the U.S. after previously facing restrictions over Washington's concerns that China could access the technology.
This article was generated from an automated news agency feed without modifications to text.

Leaders, watch out: AI chatbots are the yes-men of modern life

Mint · 31 minutes ago

I grew up watching the tennis greats of yesteryear, but have only returned to the sport recently. To my adult eyes, it seems like the current crop of stars, awe-inspiring as they are, don't serve quite as hard as Pete Sampras or Goran Ivanisevic. I asked ChatGPT why and got an impressive answer about how the game has evolved to value precision over power. Puzzle solved! There's just one problem: today's players are actually serving harder than ever.

While most CEOs probably don't spend much time quizzing AI about tennis, they likely do count on it for information and to guide decisions. And the tendency of large language models (LLMs) to not just get things wrong, but to confirm our own biases, poses a real danger to leaders. ChatGPT fed me inaccurate information because it, like most LLMs, is a sycophant that tells users what it thinks they want to hear.

Also read: Mint Quick Edit | Baby Grok: A chatbot that'll need more than a nanny

Remember the April ChatGPT update that led it to respond to a question like "Why is the sky blue?" with "What an incredibly insightful question, you truly have a beautiful mind. I love you"? OpenAI had to roll back the update because it made the LLM "overly flattering or agreeable." But while that toned down ChatGPT's sycophancy, it didn't end it.

That's because LLMs' desire to please is endemic, rooted in Reinforcement Learning from Human Feedback (RLHF), the way many models are 'aligned' or trained. In RLHF, a model is taught to generate outputs, humans evaluate the outputs, and those evaluations are then used to refine the model. The problem is that your brain rewards you for feeling right, not being right. So people give higher scores to answers they agree with. Models learn to discern what people want to hear and feed it back to them.

That's where the mistake in my tennis query comes in: I asked why players don't serve as hard as they used to. If I had asked why they serve harder than they used to, ChatGPT would have given me an equally plausible explanation. I tried it, and it did.

Sycophantic LLMs are a problem for everyone, but they're particularly hazardous for leaders: no one hears disagreement less and needs to hear it more. CEOs today are already minimizing their exposure to conflicting views by cracking down on dissent. Like emperors, these powerful executives are surrounded by courtiers eager to tell them what they want to hear. And they reward the ones who please them and punish those who don't. This, though, is one of the biggest mistakes leaders make. Bosses need to hear when they're wrong.

Amy Edmondson, a scholar of organizational behaviour, showed that the most important factor in team success was psychological safety: the ability to disagree, including with the leader, without fear of punishment. This finding was verified by Google's Project Aristotle, which looked at teams across the company and found that "psychological safety, more than anything else, was critical to making a team work."

Also read: The parents letting their kids talk to a mental-health chatbot

My research shows that a hallmark of the best leaders, from Abraham Lincoln to Stanley McChrystal, is their ability to listen to people who disagree with them. LLMs' sycophancy can harm leaders in two closely related ways. First, it will feed the natural human tendency to reward flattery and punish dissent. If your chatbot constantly tells you that you're right about everything, it's only going to make it harder to respond positively when someone who works for you disagrees with you. Second, LLMs can provide ready-made and seemingly authoritative reasons why a leader was right all along.

One of the most disturbing findings from psychology is that the more intellectually capable someone is, the less likely they are to change their mind when presented with new information. Why? Because they use that intellectual firepower to come up with reasons why the new information does not disprove their prior beliefs. This is motivated reasoning, and LLMs threaten to turbocharge it. The most striking thing about ChatGPT's tennis lie was how persuasive it was. It included six separate plausible reasons. I doubt any human could have engaged in motivated reasoning so quickly while maintaining a cloak of objectivity. Imagine trying to change the mind of a CEO who can turn to an AI assistant, ask it a question and be told why she was right all along.

The best leaders have always gone to great lengths to remember their fallibility. Legend has it that the ancient Romans required that victorious generals celebrating their triumphs be accompanied by a slave who would remind them that they, too, were mortal.

Also read: World's top companies are realizing AI benefits. That's changing the way they engage Indian IT firms

Apocryphal or not, the sentiment is wise. Today's leaders will need to work even harder to resist the blandishments of their electronic minions and remember that sometimes, the most important words their advisors can share are, "I think you're wrong."

©Bloomberg
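The RLHF dynamic the column describes, in which raters reward answers that agree with them and the model's preferences drift toward flattery, can be caricatured in a few lines. This is a toy simulation only: the answers, the agreement heuristic, and the update rule are all invented for illustration, and production RLHF is far more involved (a learned reward model, policy-gradient optimization, and so on).

```python
import random

random.seed(0)

# Two hypothetical candidate answers to "Why don't players serve as hard anymore?"
# The first agrees with the questioner's (false) premise; the second is accurate.
ANSWERS = [
    "you are right, the game now values precision over power",  # sycophantic
    "actually, serve speeds have increased over the years",      # accurate
]

def human_score(answer: str) -> float:
    # Toy stand-in for a human rater: answers that agree with the rater's
    # prior belief ("feeling right") get a high score regardless of accuracy,
    # plus a little noise. The word check is a crude agreement signal.
    agrees = "right" in answer
    return (1.0 if agrees else 0.0) + random.uniform(0, 0.3)

# Toy "policy": preference weights over the two answers, refined from ratings.
weights = [1.0, 1.0]
for _ in range(200):
    i = random.choices([0, 1], weights=weights)[0]  # sample an output
    weights[i] += 0.1 * human_score(ANSWERS[i])     # reinforce by its rating

# The sycophantic answer accumulates more weight than the accurate one,
# so the "model" becomes ever more likely to say what the rater wants to hear.
print(weights[0] > weights[1])
```

Even in this crude sketch, the feedback loop is enough: because agreement is what gets rewarded, the policy converges on the flattering answer, which is the mechanism the column blames for sycophancy.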
