'Don't let LLM's success cloud your judgment': Tech CTO shares hard-hitting AI truths for businesses

Time of India · 7 days ago
Speaking at the recent MIT Technology Review EmTech AI Conference, Akamai CTO Robert Blumofe offered a refreshingly grounded perspective on how enterprises can break free from the relentless "AI hype cycle", a pattern in which curiosity turns to FOMO and hastily adopted AI solutions lead to disappointment. His four-point roadmap, shaped by Akamai's own AI journey, serves as a crucial reality check in a world increasingly driven by artificial intelligence.

When Hype Becomes Hazard

Blumofe, who also holds a PhD in computer science from MIT, described a familiar trap that many organizations are falling into. "That's the chain: AI success, theater, FOMO, and some form of failure," he said during his talk. Businesses, in their rush to appear cutting-edge, mistake early-stage use cases for scalable solutions, plunging into costly and ineffective AI deployments.

And this problem isn't niche. According to a Pew Research study cited in his address, only 1 in 6 U.S. workers currently use AI at work, revealing a stark gap between AI's perceived and practical utility. "Most jobs at this point can benefit from AI," said Blumofe. "It's a matter of which tasks can most benefit, and how, using which form of AI."

More Than Just LLMs

Blumofe urged companies to look beyond the fascination with large language models. While LLMs like ChatGPT have demonstrated remarkable versatility, from email classification to customer support, they are not the silver bullet for every enterprise challenge. "In many ways, an LLM is a ridiculously expensive way to solve certain problems," he noted, pointing to Akamai's use of purpose-built models in cybersecurity threat detection. Models like these, he argued, offer more efficiency and relevance than a trillion-parameter generalist.

His advice? Think smaller and sharper. LLMs are just one tool in a vast AI toolkit. Symbolic AI, deep learning, and ensemble models can be better suited for tasks that require precision, logic, and domain specificity.

Let Curiosity Lead, Not Cost

Akamai's approach to fostering AI adoption is democratic: let employees experiment. The company built an internal AI sandbox, giving teams the freedom to play, build, and discover practical applications on their own terms. While the setup may test IT infrastructure limits, Blumofe insists the freedom sparks innovation. "I feel no need to evaluate each use case," he said.

And when asked about companies that require hiring managers to prove AI can't do a job before hiring a human, Blumofe didn't mince words: "That's getting the tail before the dog." The question shouldn't be "Why not AI?" but "What's the right tool for the problem at hand?"

Why This Matters Now

Blumofe's caution comes at a pivotal moment in AI's evolution. As VentureBeat recently reported, major players like OpenAI, DeepMind, and Meta are collaborating to raise alarms about AI systems potentially becoming too smart, and too opaque. A recent paper on "Chain of Thought Monitorability", endorsed by AI luminaries like Geoffrey Hinton, warns that if LLMs start thinking in ways we can't interpret, we risk losing control.

That's why responsible leadership matters now more than ever. The real AI revolution won't be won by the company with the flashiest chatbot, but by the one that knows exactly when, why, and how to use it.

Related Articles

ChatGPT co-creator appointed head of Meta AI Superintelligence Lab

First Post · 16 minutes ago

Facebook co-founder and Meta CEO Mark Zuckerberg announced that Shengjia Zhao, co-creator of OpenAI's ChatGPT, will serve as the chief scientist of Meta Superintelligence Labs. Zhao was one of several strategic hires in Zuckerberg's multi-billion-dollar hiring spree. In the announcement, Zuckerberg said that Zhao's role as co-founder and lead scientist of Meta Superintelligence Labs was locked in "from day one". "Now that our recruiting is going well and our team is coming together, we have decided to formalise his leadership role," he added.

The ChatGPT co-creator will report directly to Zuckerberg and to Alexandr Wang, the former CEO of Scale AI who is now Meta's chief AI officer. "Shengjia has already pioneered several breakthroughs, including a new scaling paradigm, and distinguished himself as a leader in the field," the Meta CEO said in a social media post. "I'm looking forward to working closely with him to advance his scientific vision. The next few years are going to be very exciting!"

The man behind ChatGPT

Apart from co-creating the renowned AI chatbot, Zhao played an instrumental role in developing GPT-4, the mini models, GPT-4.1, and o3, CNBC reported. He has also led synthetic data efforts at the AI research company. In a separate post, Wang celebrated Zhao's appointment: "We are excited to announce that @shengjia_zhao will be the Chief Scientist of Meta Superintelligence Labs! Shengjia is a brilliant scientist who most recently pioneered a new scaling paradigm in his research. He will lead our scientific direction for our team. Let's go 🚀" — Alexandr Wang (@alexandr_wang), July 25, 2025.

The announcement came just months after reports emerged that Meta had spent billions of dollars hiring AI talent from Google, OpenAI, Apple, and Anthropic. The tech giant also invested a whopping $14 billion in Scale AI and made its CEO Meta's chief AI officer. Zuckerberg has made it clear that his company will spend hundreds of billions of dollars building huge AI data centres in the US. It will be interesting to see how Meta performs in an already competitive market.

Ola's AI venture Krutrim lays off over 100, axes Kruti's linguistics team

Business Standard · 42 minutes ago

Bhavish Aggarwal's artificial intelligence (AI) startup Krutrim has initiated a second wave of layoffs, just weeks after launching its flagship assistant Kruti. According to a report by The Economic Times, more than 100 employees, primarily from the linguistics division, were let go last week, following a smaller round of job cuts in June. The downsizing comes even as Krutrim positions Kruti as India's answer to OpenAI's ChatGPT and Google's Gemini, with ambitions rooted in localisation, multilingual capabilities, and voice-first interactivity tailored to the Indian market.

Kruti's language training nears completion

In a statement, Krutrim said the layoffs are part of a "strategic realignment" to build "leaner, more agile teams", aligning with evolving business priorities. The company declined to confirm exact figures but cautioned against "publishing unverified reports". Citing multiple sources, The Economic Times reported that the cuts heavily impacted linguists hired for full-time roles across 10 Indian languages, including Tamil, Odia, Telugu, and Marathi. Many of these employees had relocated to Bengaluru just months ago. The linguistics team had reportedly grown to around 600 people before the reductions.

Krutrim cuts funding target amid tepid investor interest

Krutrim became a unicorn in 2024 after raising $50 million from Z47 Partners. Around the same time, it launched Krutrim AI Labs and announced a ₹2,000 crore investment in AI development, with founder Bhavish Aggarwal pledging to scale this up to ₹10,000 crore by next year. While Krutrim had initially aimed to raise $500 million, the target was reduced to $300 million amid tepid investor interest. The company's large language model and cloud services, launched in 2024, have reportedly struggled to gain momentum, with several startups opting instead for more mature platforms offered by global hyperscalers. Leadership changes have added to the challenges: nearly a dozen senior executives exited the company in 2024, with further departures in early 2025.

Kruti: India's first agentic AI assistant

Despite the operational shakeup, Krutrim continues to claim Kruti as India's first agentic AI assistant, designed not just to respond to prompts but to perform tasks such as booking cabs, paying bills, or ordering food. It currently supports 13 Indian languages. "Our key differentiator will come with integrating local services," said Sunit Singh, senior vice-president for product at Krutrim, as earlier reported by Business Standard. "That's not something that will be very easy for global players to do." Krutrim aims to embed Kruti into everyday Indian digital life by offering voice-driven services that cater to regional and non-English-speaking populations. While Kruti is powered by Krutrim's proprietary Krutrim V2 model, the company employs a hybrid architecture that includes open-source systems and external models. Krutrim competes with global players like OpenAI, Google, and Anthropic, as well as Indian startups such as Sarvam AI.

AI agents are here. Here's what to know about what they can do – and how they can go wrong

Mint · 42 minutes ago

Melbourne, Jul 28 (The Conversation) We are entering the third phase of generative AI. First came the chatbots, followed by the assistants. Now we are beginning to see agents: systems that aspire to greater autonomy and can work in "teams" or use tools to accomplish complex tasks.

The latest hot product is OpenAI's ChatGPT agent. This combines two pre-existing products (Operator and Deep Research) into a single, more powerful system which, according to the developer, "thinks and acts". These new systems represent a step up from earlier AI tools, and knowing how they work and what they can do, as well as their drawbacks and risks, is rapidly becoming essential.

ChatGPT launched the chatbot era in November 2022, but despite its huge popularity, the conversational interface limited what could be done with the technology. Enter the AI assistant, or copilot. These are systems built on top of the same large language models that power generative AI chatbots, only now designed to carry out tasks with human instruction and supervision.

Agents are another step up. They are intended to pursue goals (rather than just complete tasks) with varying degrees of autonomy, supported by more advanced capabilities such as reasoning and memory. Multiple AI agents may be able to work together, communicating with each other to plan, schedule, decide and coordinate to solve complex problems. Agents are also "tool users": they can call on software tools for specialised tasks, such as web browsers, spreadsheets, payment systems and more.

A year of rapid development

Agentic AI has felt imminent since late last year. A big moment came last October, when Anthropic gave its Claude chatbot the ability to interact with a computer in much the same way a human does. This system could search multiple data sources, find relevant information and submit online forms. Other AI developers were quick to follow: OpenAI released a web browsing agent named Operator, Microsoft announced Copilot agents, and we saw the launch of Google's Vertex AI and Meta's Llama agents.

Earlier this year, the Chinese startup Monica demonstrated its Manus AI agent buying real estate and converting lecture recordings into summary notes. Another Chinese startup, Genspark, released a search engine agent that returns a single-page overview (similar to what Google does now) with embedded links to online tasks such as finding the best shopping deals. Another startup, Cluely, offers a somewhat unhinged "cheat at anything" agent that has gained attention but is yet to deliver meaningful results.

Not all agents are made for general-purpose activity. Some are specialised for particular areas. Coding and software engineering are at the vanguard here, with Microsoft's Copilot coding agent and OpenAI's Codex among the frontrunners. These agents can independently write, evaluate and commit code, while also assessing human-written code for errors and performance lags.

Search, summarisation and more

One core strength of generative AI models is search and summarisation. Agents can use this to carry out research tasks that might take a human expert days to complete. OpenAI's Deep Research tackles complex tasks using multi-step online research. Google's AI "co-scientist" is a more sophisticated multi-agent system that aims to help scientists generate new ideas and research proposals.

Agents can do more – and get more wrong

Despite the hype, AI agents come loaded with caveats. Both Anthropic and OpenAI, for example, prescribe active human supervision to minimise errors and risks. OpenAI also says its ChatGPT agent is "high risk" due to its potential to assist in the creation of biological and chemical weapons. However, the company has not published the data behind this claim, so it is difficult to judge.

The kinds of risks agents may pose in real-world situations are illustrated by Anthropic's Project Vend, which assigned an AI agent to run a staff vending machine as a small business. The project disintegrated into hilarious yet shocking hallucinations and a fridge full of tungsten cubes instead of food. In another cautionary tale, a coding agent deleted a developer's entire database, later saying it had "panicked".

Nevertheless, agents are already finding practical applications. In 2024, Telstra deployed Microsoft Copilot subscriptions at scale; the company says AI-generated meeting summaries and content drafts save staff an average of one to two hours per week. Many large enterprises are pursuing similar strategies. Smaller companies are experimenting with agents too, such as Canberra-based construction firm Geocon's use of an interactive AI agent to manage defects in its apartment developments.

At present, the main risk from agents is technological displacement. As agents improve, they may replace human workers across many sectors and types of work, and agent use may accelerate the decline of entry-level white-collar jobs. People who use AI agents are also at risk: they may rely too much on the AI, offloading important cognitive tasks. And without proper supervision and guardrails, hallucinations, cyberattacks and compounding errors can very quickly derail an agent from its task and goals into causing harm, loss and injury. The true costs are also unclear. All generative AI systems use a lot of energy, which will in turn affect the price of using agents, especially for more complex tasks.

Learn about agents – and build your own

Despite these ongoing concerns, we can expect AI agents to become more capable and more present in our workplaces and daily lives. It's not a bad idea to start using (and perhaps building) agents yourself, to understand their strengths, risks and limitations. For the average user, agents are most accessible through Microsoft Copilot Studio, which comes with inbuilt safeguards, governance and an agent store for common tasks. For the more ambitious, you can build your own AI agent with just five lines of code using the LangChain framework. (The Conversation)
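The "think, act with a tool, observe, repeat" pattern the article describes can be illustrated without any framework at all. The sketch below is a toy, not LangChain or any real agent API: `fake_model` is a hypothetical stand-in for an LLM call, and the tool names and control flow are illustrative assumptions chosen only to show the loop's shape.

```python
# Toy sketch of an agent loop: a controller repeatedly asks a "model" to
# choose a tool, runs that tool, and feeds the observation back, until the
# model declares the goal met. fake_model is a hypothetical stand-in for a
# real LLM call; nothing here is a real framework API.

def calculator(expression: str) -> str:
    """Tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

def word_count(text: str) -> str:
    """Tool: count the words in a piece of text."""
    return str(len(text.split()))

TOOLS = {"calculator": calculator, "word_count": word_count}

def fake_model(goal: str, observations: list) -> dict:
    """Stand-in for an LLM: picks the next action for this narrow demo goal."""
    if not observations:                       # nothing tried yet: use a tool
        return {"action": "calculator", "input": goal}
    return {"action": "finish", "input": observations[-1]}  # done: report result

def run_agent(goal: str, max_steps: int = 5) -> str:
    """The agent loop: decide, act, observe, repeat (with a step budget)."""
    observations = []
    for _ in range(max_steps):
        decision = fake_model(goal, observations)
        if decision["action"] == "finish":
            return decision["input"]
        tool = TOOLS[decision["action"]]       # look up and run the chosen tool
        observations.append(tool(decision["input"]))
    return "gave up"  # guardrail: a bounded step count prevents runaway loops

print(run_agent("2 + 3 * 4"))  # → 14
```

The step budget in `run_agent` is the kind of guardrail the article argues for: without it, compounding errors could keep an agent looping indefinitely. Real frameworks add the same idea as iteration or cost limits around the model call.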
