
Big tech on a quest for ideal AI device
ChatGPT-maker OpenAI has enlisted the legendary designer behind the iPhone to create an irresistible gadget for using generative artificial intelligence (AI).
The ability to engage digital assistants as easily as speaking with friends is being built into eyewear, speakers, computers and smartphones, but some argue that the Age of AI calls for a transformational new gizmo.
"The products that we're using to deliver and connect us to unimaginable technology are decades old," former Apple chief design officer Jony Ive said when his alliance with OpenAI was announced. "It's just common sense to at least think, surely there's something beyond these legacy products."
Sharing no details, OpenAI chief executive Sam Altman said that a prototype Ive shared with him "is the coolest piece of technology that the world will have ever seen."
According to several US media outlets, the device won't have a screen, nor will it be worn like a watch or brooch.
Kyle Li, a professor at The New School, said that since AI is not yet integrated into people's lives, there is room for a new product tailored to its use.
The type of device won't be as important as whether AI innovators like OpenAI make "pro-human" choices when building the software that will power them, said Rob Howard of consulting firm Innovating with AI.
Learning from flops
The industry is well aware of the spectacular failure of the AI Pin, a square gadget worn like a badge and packed with AI features, which disappeared from the market less than a year after its 2024 debut due to a dearth of buyers.
The AI Pin, marketed by startup Humane to incredible buzz, was priced at $699.
Now, Meta and OpenAI are making "big bets" on AI-infused hardware, according to CCS Insight analyst Ben Wood.
OpenAI made a multi-billion-dollar deal to bring Ive's startup into the fold.
Google announced early this year that it is working on mixed-reality glasses with AI smarts, while Amazon continues to ramp up Alexa digital assistant capabilities in its Echo speakers and displays.
Apple is being cautious in embracing generative AI, slowly integrating it into iPhones even as rivals race ahead with the technology. Plans to soup up its Siri chatbot with generative AI have been indefinitely delayed.
The quest for creating an AI interface that people love "is something Apple should have jumped on a long time ago," said Futurum research director Olivier Blanchard.
Time to talk
Blanchard envisions some kind of hub that lets users tap into AI, most likely by speaking to it and without being connected to the internet.
"You can't push it all out in the cloud," Blanchard said, citing concerns about reliability, security, cost, and harm to the environment due to energy demand.
"There is not enough energy in the world to do this, so we need to find local solutions," he added.
Howard expects a fierce battle over what will be the must-have personal device for AI, since the number of things someone is willing to wear is limited and "people can feel overwhelmed."
A new piece of hardware devoted to AI isn't the obvious solution, but OpenAI has the funding and the talent to deliver, according to Julien Codorniou, a partner at venture capital firm 20VC and a former Facebook executive.
OpenAI recently hired former Facebook executive and Instacart chief Fidji Simo as head of applications, and her job will be to help answer the hardware question.
Voice is expected by many to be a primary way people command AI.
Google chief Sundar Pichai has long expressed a vision of "ambient computing" in which technology blends invisibly into the world, waiting to be called upon.
"There's no longer any reason to type or touch if you can speak instead," Blanchard said.
"Generative AI wants to be increasingly human" so spoken dialogues with the technology "make sense," he added.
However, smartphones are too embedded in people's lives to be snubbed any time soon, said Wood.

Related Articles

Business Standard, an hour ago
AI experts divided over Apple's research on large reasoning model accuracy
A recent study by tech giant Apple, claiming that the accuracy of frontier large reasoning models (LRMs) declines as task complexity increases and eventually collapses altogether, has led to differing views among experts in the artificial intelligence (AI) world. The paper, titled 'The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity', was published by Apple last week.

In the paper, Apple said it conducted experiments across diverse puzzles which show that such LRMs face a complete accuracy collapse beyond certain complexities. Their reasoning effort increases with the complexity of a problem up to a point, then declines despite an adequate token budget. A token budget for a large language model (LLM) refers to the practice of setting a limit on the number of tokens the model can use for a specific task.

The paper is co-authored by Samy Bengio, senior director of AI and ML research at Apple, who is also the brother of Yoshua Bengio, often referred to as the godfather of AI.

Meanwhile, AI company Anthropic, backed by Amazon, countered Apple's claims in a separate paper, saying that the 'findings primarily reflect experimental design limitations rather than fundamental reasoning failures.' 'Their central finding has significant implications for AI reasoning research. However, our analysis reveals that these apparent failures stem from experimental design choices rather than inherent model limitations,' it said.

Mayank Gupta, founder of Swift Anytime, currently building an AI product in stealth, told Business Standard that both sides make equally important points. 'What this tells me is that we're still figuring out how to measure reasoning in LRMs the right way. The models are improving rapidly, but our evaluation tools haven't caught up. We need tools that separate how well an LRM reasons from how well it generates output, and that's where the real breakthrough lies,' he said.
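The token-budget concept the article defines can be sketched as a simple cap on how many output tokens a model may produce. The loop below is a toy illustration only, with a stand-in "model" function, not any real LLM API:

```python
def generate_with_budget(model_step, prompt_tokens, token_budget):
    """Toy generation loop that stops once the token budget is spent.

    model_step: callable that, given the context so far, returns the
    next token, or None to stop early. Real LLM APIs express the same
    idea as a maximum-output-tokens parameter; here it is a counter.
    """
    output = []
    while len(output) < token_budget:
        token = model_step(prompt_tokens + output)
        if token is None:  # model chose to stop before the budget ran out
            break
        output.append(token)
    return output

# A stand-in "model" that would emit the word "think" forever:
endless_thinker = lambda ctx: "think"
out = generate_with_budget(endless_thinker, ["solve", "puzzle"], token_budget=5)
print(out)  # the budget caps the output at 5 tokens
```

The point of Apple's observation is the mirror image of this cap: beyond a certain problem complexity, models reduced their reasoning output even though budget remained.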
Gary Marcus, a US academic who has become a voice of caution on the capabilities of AI models, said that in a best-case scenario these models can write Python code, supplementing their own weaknesses with outside symbolic code, but even this is not reliable. 'What this means for business and society is that you can't simply drop o3 or Claude into some complex problem and expect it to work reliably,' he wrote in his blog, Marcus on AI.

The Apple researchers conducted experiments comparing thinking and non-thinking model pairs across controlled puzzle environments. 'The most interesting regime is the third regime where problem complexity is higher and the performance of both models have collapsed to zero. Results show that while thinking models delay this collapse, they also ultimately encounter the same fundamental limitations as their non-thinking counterparts,' they wrote.

Apple's observations in the paper may help explain why the iPhone maker has been slow to embed AI across its products and operating systems, a point on which it was criticised at the Worldwide Developers Conference (WWDC) last week. This approach is the opposite of the one adopted by Microsoft-backed OpenAI, Meta, and Google, which are spending billions to build more sophisticated frontier models to solve more complex tasks.

However, there are other voices who believe that Apple's paper has its limitations. Ethan Mollick, associate professor at the Wharton School who studies the effects of AI on work, entrepreneurship, and education, mentioned on X that while knowing the limits of reasoning models is useful, it is premature to say that LLMs are hitting a wall.
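Among the controlled puzzles reportedly used in the Apple study is the Tower of Hanoi, whose minimum solution length doubles with every added disc. A short sketch shows why output demands outgrow any fixed budget as complexity rises (the framing is illustrative, not taken from the paper itself):

```python
def hanoi_moves(n_discs: int) -> int:
    """Minimum number of moves to solve Tower of Hanoi with n discs: 2^n - 1."""
    return 2 ** n_discs - 1

# If writing out each move costs roughly a fixed number of tokens,
# the token demand of a full solution grows exponentially with n.
for n in (3, 10, 20):
    print(n, hanoi_moves(n))
# 3 discs -> 7 moves, 10 discs -> 1023, 20 discs -> 1,048,575
```

This exponential growth is what makes such puzzles convenient for studying accuracy as a function of problem complexity: difficulty can be dialed up one disc at a time.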


Economic Times, 2 hours ago
ChatGPT took on a 50-year-old Atari — and lost
Synopsis In a surprising turn of events, ChatGPT, a leading AI chatbot, was defeated by the vintage Atari 2600 in a chess match. Despite ChatGPT's initial confidence and claims of chess prowess, the Atari console, launched in 1977, consistently outperformed the AI. The experiment highlighted the limitations of ChatGPT in logical reasoning and board awareness, leading to its eventual concession.

Business Standard, 3 hours ago
Bhavish Aggarwal's Krutrim bets on India-first AI to rival global peers
Krutrim, the artificial intelligence startup founded by Ola's Bhavish Aggarwal, is positioning its recently launched flagship assistant, Kruti, to stand apart from global peers like OpenAI's ChatGPT and Google's Gemini by leveraging deep local integration, multilingual capabilities, and agentic intelligence tailored to India's unique digital ecosystem. The company calls Kruti India's first agentic AI, capable of booking cabs, paying bills, and generating images while supporting 13 Indian languages using a localised large language model. In the Indian context, the firm competes with global AI giants such as OpenAI, Anthropic and Google, as well as local players such as Sarvam AI.

'Our key differentiator will come with integrating local services,' said Sunit Singh, Senior Vice-President for Product at Krutrim. 'That's not something that will be very easy for global players to do.' Krutrim has already integrated India-specific services, with plans to scale this integration further. The strategy aims to embed Kruti deeply into Indian digital life, allowing it to perform functional tasks through local service connections. This is an area where international competitors may struggle due to regulatory and infrastructural complexities in the Indian market.

Voice-first

As Krutrim positions Kruti to serve India's linguistically diverse population, the company is doubling down on voice-first, multilingual AI as a core enabler of scale and accessibility. Navendu Agarwal, Group CIO of Ola, emphasised that India's unique language landscape demands a fundamentally different approach from Western AI products. 'India is a voice-first world. So we are building voice-first models,' Agarwal said, outlining Krutrim's strategy to prioritise natural, speech-driven interactions. Currently, Kruti supports voice commands in multiple Indian languages, with plans underway to expand that footprint.
Agarwal said the long-term vision is to enable seamless, speech-based interactions that go deeper into local dialects. The company's multilingual, voice-first design is central to its go-to-market strategy, especially in reaching non-English speakers in semi-urban and rural India. The plan also includes integrating with widely used Indian services and government platforms.

Krutrim's long-term vision for Kruti centres on true agentic intelligence, where the assistant can act autonomously on behalf of users. Whether it's 'book me a cab to the airport' or 'order my usual lunch', Kruti understands intent and executes tasks without micromanagement. 'Think about it—a super agent which can do food, do apps, provide you help and education information and which can also manage your budget and finance,' said Agarwal. 'So that's what is a mega-agent, or the assistant which is communicating with all of them seamlessly wherever it is needed.'

Hybrid technology

Rather than relying solely on a single in-house model, Krutrim has opted for a composite approach aimed at optimising accuracy, scalability and user experience, according to Chandra Khatri, the company's Vice-President and Head of AI. 'The goal is to build the best and most accurate experience,' Khatri said. 'If that means we need to leverage, say, Claude for coding, which is the best coding model in the world, we'll do that.' Kruti is powered by Krutrim's latest large language model, Krutrim V2, alongside open-source systems. The AI agents evaluate context-specific needs and choose from this suite of models to deliver tailored responses.

Investments

Krutrim reached unicorn status last year after raising $50 million in equity during its inaugural funding round. The round, which valued the company at $1 billion, included participation from investors such as Matrix Partners India.
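The multi-model approach Khatri describes, choosing a backend model per task, is a common routing pattern. A minimal sketch follows; the registry keys and stand-in backends are hypothetical placeholders, not Krutrim's actual implementation:

```python
# Minimal sketch of per-task model routing, assuming a registry of
# model backends keyed by task type. All names are illustrative only.
from typing import Callable, Dict

ModelFn = Callable[[str], str]

def make_router(registry: Dict[str, ModelFn], default: str) -> Callable[[str, str], str]:
    """Return a dispatcher that sends a prompt to the model registered
    for its task type, falling back to a default model otherwise."""
    def route(task_type: str, prompt: str) -> str:
        model = registry.get(task_type, registry[default])
        return model(prompt)
    return route

# Stand-in backends; a real system would call actual model APIs here.
registry = {
    "coding": lambda p: f"[coding-model] {p}",
    "general": lambda p: f"[general-model] {p}",
}
route = make_router(registry, default="general")
print(route("coding", "write a sort"))  # -> [coding-model] write a sort
print(route("booking", "book a cab"))   # unknown task type falls back to general
```

The design choice is that routing logic stays separate from the models themselves, so a backend (say, a coding-specialised model) can be swapped without touching callers.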
Earlier this year, company founder Bhavish Aggarwal announced an investment of ₹2,000 crore in Krutrim, with a commitment to invest an additional ₹10,000 crore by next year. The company also launched the Krutrim AI Lab and released some of its work to the open-source community.

As Krutrim's AI assistant begins to interface with highly contextual and personal user data, the company emphasises a stringent, India-first approach to data privacy and regulatory compliance. The company employs internal algorithms to manage and isolate user data, ensuring it remains secure and compartmentalised.

While Krutrim is open to competing globally, it remains committed to addressing India's market complexities first. 'We don't shy away from going global. But our primary focus is India first,' Agarwal said. Krutrim's emphasis on embedded, action-oriented intelligence, capable of not just understanding queries but also fulfilling them through integrations, could define its edge in the increasingly competitive AI landscape. Here, localisation and service depth may become as critical as raw model power.