Apple Paper questions path to AGI, sparks division in GenAI group

Economic Times · 7 hours ago

New Delhi: A recent research paper from Apple on the limitations of large reasoning models in artificial intelligence has left the generative AI community divided, sparking significant debate over whether the path AI companies are currently taking towards artificial general intelligence is the right one.
The paper, titled The Illusion of Thinking, published earlier this week, demonstrates that even the most sophisticated large reasoning models do not genuinely think or reason in a human-like way. Instead, they excel at pattern recognition and mimicry, generating responses that only appear intelligent, but lack true comprehension or conceptual understanding.
The study used controlled puzzle environments, such as the popular Tower of Hanoi puzzle, to systematically test the reasoning abilities of large reasoning models such as OpenAI's o3 Mini, DeepSeek's R1, Anthropic's Claude 3.7 Sonnet and Google's Gemini Flash across varying levels of complexity. The findings show that while large reasoning and language models may handle simple or moderately complex tasks, they fail completely on high-complexity problems, even when they have sufficient computational resources.
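For context, the Tower of Hanoi has a well-known recursive solution: moving n discs takes exactly 2^n - 1 moves, so each added disc roughly doubles the length of a correct answer, which is how the researchers dialled up difficulty. A minimal Python sketch of such a solver, shown purely for illustration and not taken from the Apple paper, would look like this:

    def hanoi(n, source, target, auxiliary, moves):
        # Move n discs from source to target, using auxiliary as a buffer.
        if n == 0:
            return
        hanoi(n - 1, source, auxiliary, target, moves)
        moves.append((source, target))  # move the largest remaining disc
        hanoi(n - 1, auxiliary, target, source, moves)

    moves = []
    hanoi(9, "A", "C", "B", moves)  # nine discs
    print(len(moves))  # prints 511, i.e. 2**9 - 1 moves

Producing a short program like this is easy; the study instead had the models work through the puzzles in natural language, and it is at higher disc counts, where a correct answer runs to hundreds or thousands of moves, that the reported breakdowns occur.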
Gary Marcus, a cognitive scientist and a known sceptic of the claims surrounding large language models, views Apple's work as providing compelling empirical evidence that today's models primarily repeat patterns learned during training on vast datasets, without genuine understanding or true reasoning capabilities. "If you can't use a billion-dollar AI system to solve a problem that Herb Simon (one of the actual godfathers of AI, current hype aside) solved with AI in 1957, and that first semester AI students solve routinely, the chances that models like Claude or o3 are going to reach AGI seem truly remote," Marcus wrote on his blog. Marcus' arguments are also echoed in earlier comments by Meta's chief AI scientist Yann LeCun, who has argued that current AI systems are mainly sophisticated pattern recognition tools rather than true thinkers.
The release of Apple's paper ignited a polarised debate across the broader AI community, with many panning the design of the study more than its findings.

A published critique of the paper by researchers from Anthropic and San Francisco-based Open Philanthropy said the study has issues in its experimental design, in that it overlooks the models' output limits. In an alternate demonstration, the critics tested the models on the same problems but allowed them to use code, resulting in high accuracy across all the tested models.

The study's failure to account for output limits, and its constraints on the models' use of code, have also been highlighted by other AI commentators and researchers, including Matthew Berman. "SOTA models failed The Tower of Hanoi puzzle at a complexity threshold of >8 discs when using natural language alone to solve it. However, ask it to write code to solve it, and it flawlessly does up to seemingly unlimited complexity," Berman wrote in a post on X (formerly Twitter).

The study highlights Apple's more cautious approach to AI compared with rivals like Google and Samsung, who have aggressively integrated AI into their products. Apple's research explains its hesitancy to fully commit to AI, contrasting with the industry's prevailing narrative of rapid progress.

Many questioned the timing of the study's release, which coincided with Apple's annual WWDC event, where it announces its next software updates. Chatter across online forums suggested the study was more about managing expectations in light of Apple's own struggles with AI.

That said, practitioners and business users argue that the findings do not change the immediate utility of AI tools for everyday applications.


Related Articles

'Indian income tax sleuths have no way to tax digital value loop'

Time of India · 2 hours ago

Indian tax authorities are staring at a growing revenue loss in digital tax collection as artificial intelligence (AI) firms such as OpenAI, Anthropic and Perplexity generate income at a fast clip from Indian developers, companies and startups despite having no physical presence in the country, reigniting long-standing concerns over the concept of 'permanent establishment' in the digital economy.

The companies are earning millions of dollars from Indian developers, startups and enterprises who access their AI models, such as ChatGPT, Claude and Perplexity, through paid APIs and subscriptions, but they operate without local offices, employees or servers in India, allowing them to bypass the country's tax obligations entirely. There is also the additional conundrum of how to tax AI models that continuously extract important insights and information from Indian-origin data, from users, startups and companies, while the revenue is generated and taxed in a different geography.

Tax experts say Indian tax authorities currently have no way to tax this 'digital value loop'. Experts feel that the Indian tax framework, which was created around the concept of physical presence, such as offices, employees or equipment, is struggling to keep up with newly emerging software business models. The predicament has renewed the discussion over interpreting 'nexus', the legal basis for taxing foreign entities, as algorithms obfuscate borders and challenge old tax concepts.

"The digitalisation of the economy has posed serious challenges to the existing international tax system, primarily due to the ability of digital businesses to scale in a jurisdiction without any physical presence, and their heavy reliance on intangibles and the value created by user-contributed data," said Akhilesh Ranjan, adviser, tax and regulatory services at Price Waterhouse & Co LLP.

He added that the current international tax architecture, which was based entirely on physical presence and where allocation of profits was governed by the separate entity concept and the arm's length standard, has been shown to be incapable of providing complete answers to questions of 'nexus', characterisation and a fair and equitable allocation of income.

The tax headache is only going to grow bigger as AI-related business activity in India quickly gains traction, with startups across sectors integrating AI through API subscriptions while larger companies invest heavily in AI-powered automation and analytics tools.

'The issue with AI models isn't very different from the unresolved software taxation problem. For example, if someone in the US licenses software for use in India, India currently can't tax that income. Based on Supreme Court rulings, such payments are not considered as royalty, and furthermore, the previous equalisation levy that was applicable has been withdrawn. Since there's no permanent establishment either, the income remains untaxed. So, we end up in a similar situation with AI—where there may be an income source from India, but under existing treaty obligations, the country cannot tax it,' said Rohinton Sidhwa, leader, global business tax, Deloitte India.

This is part of a larger problem Pillar One was meant to solve, but geopolitical pushback, especially from the US, has stalled it, he added. 'As long as treaties don't define software or AI payments as royalties or establish a clear nexus, countries like India can't tax this income, even if it's sourced from their own markets,' he said.

Under the OECD-led international tax reform, Pillar One allocated a chunk of profits from large multinational companies, especially those offering digital services, to market countries, allowing them to tax these firms even without a physical presence.

Experts say India is already playing an active role in ongoing United Nations efforts to develop a new framework for taxing cross-border digital services. 'It must continue to pursue multilateral consensus on the 'physical presence' test being supplemented by 'the place of generation of user data'; arm's length transfer pricing giving way to a formulary approach based on revenue sourcing; and the debate on the relative primacy of 'source' versus 'residence' being shifted to a discussion on the extent to which income taxation should be based on the situs of value creation and of consumption,' said Ranjan.

AI firms are hyping up threat to coders to push their model sales

Time of India · 2 hours ago

Anshul Ramachandran, founding team member at artificial intelligence (AI) coding platform Windsurf, said he believes that the idea that AI will replace software developers is a narrative being overhyped by some AI founders and chief executives largely to push their model sales. 'My personal belief is that most of them wanted to fire a lot of people anyways, and AI is a very good excuse,' Ramachandran told ET in an exclusive interview. 'And then I think some of these folks leading model companies have some incentive as well—they're building models that are for software engineering. That's like very much a business tactic to sell more models.'

The California-headquartered AI-assisted coding platform, which was valued at $3 billion within a year of launch, is reportedly in talks for acquisition by OpenAI. The company has seen India emerge as its second-largest market after the US, both in terms of developer adoption and enterprise partnerships, Ramachandran said. 'There are close to 17 million developers in India and there's very real energy for adopting the latest and greatest AI,' Ramachandran said. 'We're expanding our team there and actively thinking about building GPU clusters in India.' He said the company is working with every leading IT company in India at various levels.

Moreover, Indian IT companies are disrupting themselves faster than their international peers by using more AI, he said. But, he added, that only means a lot more software will now be created out of India because these companies have already made technology a differentiator.

Windsurf is also considering offering special pricing for a cost-sensitive market like India, Ramachandran said, particularly for enterprises. 'We've always provided a very generous free tier. And part of that rationale is, even in markets like India, we want to be able to democratise the technology,' he said.

While work in the future will be AI-assisted, humans will remain in the driver's seat, according to Ramachandran. AI-based tools like Windsurf have the potential to reduce the number of people needed to do the same work, which could be viewed negatively, he said. 'It's actually been quite the opposite. People see it as an opportunity to have more business.' He added that we will create ten times more software and, therefore, always need 'engineering 101' and investments in the workforce. 'That's where the counterintuitive part comes in, because every incremental hire you make now can actually create even more software than before,' he said.

On Windsurf's rapid growth, Ramachandran said the company was at 'the right place at the right time'. 'We chose a problem space that we understood that ended up being of great value,' he said. 'We were an infra company before we were even an AI company, so our background in GPU infra has absolutely helped us.'

AI and biomanufacturing: can India's policies match its ambitions?

The Hindu · 3 hours ago

India stands at a pivotal juncture in its quest to harness artificial intelligence (AI) for biotechnology innovation. On one hand, initiatives like the BioE3 Policy and the IndiaAI Mission reflect a bold vision to position the country as a global leader in AI-driven biomanufacturing and ethical AI development. On the other, fragmented regulations and lagging safeguards threaten to undermine this progress. As India races to capitalise on AI's transformative potential, a critical question emerges: can it balance ambition with accountability?

India's biomanufacturing sector is abuzz with possibilities. For decades, the country has been the world's go-to supplier for generic medicines and vaccines, a reputation it has built on scale, cost, and reliability. But now, as AI sweeps through the global life sciences industry, there's a sense that something much bigger is in the works. Many modern biomanufacturing facilities already have robots running precision tasks, biosensors streaming real-time data, and AI models quietly optimising everything from fermentation to packaging.

DNA of biomanufacturing

Biocon, one of India's largest biotechnology firms, is integrating AI to improve drug screening and its biologics manufacturing processes. By leveraging AI-based predictive analytics, Biocon will enhance the efficiency of fermentation and quality control, reducing production costs while maintaining global standards. Similarly, Bengaluru-based Strand Life Sciences uses AI in genomics and personalised medicine, helping accelerate drug discovery and clinical diagnostics. Their platforms use machine learning to analyse complex biological data, making it easier to identify drug targets and predict treatment responses. These efforts illustrate how AI is already reshaping biomanufacturing and healthcare delivery in India.

It's not just about swapping out people for machines. AI is transforming the very DNA of biomanufacturing. Imagine a production line where sensors feed thousands of data points every second into an AI system that can spot the faintest hint of trouble, like a temperature drift, a pH blip or a subtle change in cell growth. Before a human operator even notices, the AI predicts a deviation, tweaks the process, and keeps the batch on track. Digital twins, which are virtual replicas of entire manufacturing plants, allow engineers to run simulations, test changes, and foresee problems without ever touching a real fermenter. The result? Fewer failed batches, less waste, and products that consistently meet the gold standard for quality. For a country like India, where every rupee and every dose counts, these gains can be transformative.

Interesting and complicated

The Government of India has clearly recognised this potential. The BioE3 Policy, rolled out in 2024, is a playbook for the future. The policy lays out plans for state-of-the-art biomanufacturing hubs, biofoundries, and 'Bio-AI Hubs' that will bring together the best minds in science, engineering, and data. There's real money on the table too, with funding and grants designed to help startups and established players alike leap from the lab bench to the market shelf. Equally important is the IndiaAI Mission, which is working alongside BioE3 to ensure India's AI revolution is both innovative and ethical. The Mission is as much about building technical capacity as about building trust. By supporting projects that focus on explainable and responsible AI — such as efforts to reduce algorithmic bias or frameworks for 'machine unlearning' — the Mission is helping set the standards for how AI should be developed and deployed in sensitive sectors like health and biotechnology.

But here's where things get interesting and complicated. While India's ambitions are sky-high, its regulatory framework is still catching its breath. The rules that govern how new drugs, biologics, and manufacturing processes come to market were written for a different era. Today's AI-driven systems don't always fit neatly into those boxes. For example, when an AI model is used to control a bioreactor or predict the yield of a vaccine batch, how do we know it's reliable? Who checks that the data it was trained on is representative of India's diverse conditions, or that it won't make a catastrophic error if something unexpected happens? These aren't just technical questions. They are matters of public trust and safety.

Risk-based, context-aware

Globally, the rules are changing. The European Union's AI Act, effective since August 2024, classifies AI tools into four risk tiers; high-risk applications like genetic editing face strict audits. The U.S. FDA's 2025 guidance mandates a seven-step framework for AI credibility. These models emphasise two things India lacks: context-specific risk evaluation and adaptive regulation. For instance, the FDA's 'Predetermined Change Control Plans' allow iterative AI updates that are critical for evolving cancer therapies without compromising safety. India needs this kind of risk-based, context-aware oversight as it moves from pilot projects to full-scale, AI-powered manufacturing.

Picture an Indian biotech startup that develops an AI platform to optimise enzyme production for the specialty chemicals industry. This sector is already worth $32 billion (Rs 2.74 lakh crore) and growing fast. If this AI is trained only on data from large, urban manufacturing sites, it might fail to account for the quirks of smaller plants in semi-urban or rural areas, like differences in water quality, ambient temperature or even local power fluctuations. Without clear standards for dataset diversity and model validation, the tool could recommend process tweaks that work beautifully in Bengaluru but flop in Baddi. The result: lost revenue, wasted resources, and a blow to India's reputation for quality. This is why the context of use and credibility assessment, core pillars of the FDA's approach, are so important. We need to be clear exactly what question the AI is answering, how it's being used, and how strict our oversight should be, depending on the risks involved.

Of course, biomanufacturing is only one piece of the puzzle. Imagine a future where India not only supplies 60% of the world's vaccines but also designs them using algorithms that predict viral mutations. A future where farmers in Bihar receive AI-generated advisories to combat pest outbreaks and patients in rural Tamil Nadu are diagnosed by tools trained on India's genetic diversity. This isn't science fiction — it's the promise of AI-driven biomanufacturing, a field where India is making bold strides. Yet beneath this optimism lies a critical question: can our policies keep up with science?

With great power comes…

The intersections are multiplying. In drug discovery, AI platforms can screen millions of compounds in silico, slashing the time and cost needed to find new treatments. Molecular design tools are helping researchers fine-tune drug candidates for maximum efficacy and minimal side effects. Clinical trials that were once notorious for delays and inefficiencies are being streamlined by AI systems that optimise patient recruitment and trial design, making studies faster and more representative. Even the supply chain is getting an upgrade: AI-powered predictive maintenance keeps manufacturing lines humming, while demand forecasting ensures that medicines reach the right place at the right time, reducing shortages and waste.

Another unique application of AI is Wipro's work in developing AI-powered solutions for pharmaceutical companies to streamline drug discovery. By combining machine learning algorithms with computational biology, Wipro has helped reduce the time required to identify viable drug candidates. Similarly, Tata Consultancy Services is leveraging AI in its 'Advanced Drug Development' platform, which uses machine learning to fine-tune clinical trials and predict treatment outcomes. These applications demonstrate how AI is not just confined to manufacturing but is transforming the entire healthcare value chain, from research to patient care. These innovations also indicate India's potential to lead the way in AI-powered healthcare solutions.

But with great power comes great responsibility, and a host of new challenges. Data governance is a big one. AI models are only as good as the data they're trained on, and in a country as diverse as India, that's no small feat. The Digital Personal Data Protection Act 2023 is a start, but it doesn't address the specific needs of AI in biomanufacturing, like ensuring that datasets are clean, diverse, and free from hidden biases. Intellectual property is another thorny issue. As AI begins to play a bigger role in inventing new molecules and processes, questions about inventorship, data ownership, and licensing are becoming more urgent. Without clear, harmonised policies, the risk of stifling innovation or ending up in costly legal battles persists.

Create, not just copy

So, what's the way forward? First, India needs to move quickly towards a risk-based, adaptive regulatory framework. This means defining the context of use for every AI tool, setting clear standards for data quality and model validation, and ensuring ongoing oversight as systems evolve. Second, India needs to invest in infrastructure and talent — and not just in the metropolitan cities but across the country. Third, it needs to foster a culture of collaboration, bringing together regulators, industry, academia, and international partners to share best practices and solve problems together.

If the country gets this right, the rewards are enormous. India's legacy in generic drug manufacturing is secure, but the future belongs to those who can harness the power of AI to create, not just copy. With the right policies, the right people, and the right priorities, there's no reason why the next great leap in biomanufacturing shouldn't come from India. The world is watching and the time to act is now.

Deepakshi Kasat is a scientist with GlaxoSmithKline in California.
