
Apple to develop brain-computer interface, following path laid by Elon Musk's Neuralink
Apple is stepping into the brain-computer interface (BCI) space with a major announcement. The tech giant is developing a new technology that will eventually allow users to operate Apple devices like iPhones and iPads using brain signals, without needing to touch or physically interact with the device.
This move is part of Apple's broader commitment to accessibility, with a focus on supporting individuals with significant motor impairments, such as those affected by spinal cord injuries or conditions like amyotrophic lateral sclerosis (ALS), according to The Wall Street Journal.
'At Apple, accessibility is part of our DNA,' said Apple CEO Tim Cook in a press release. 'Making technology for everyone is a priority for all of us, and we're proud of the innovations we're sharing this year. That includes tools to help people access crucial information, explore the world around them, and do what they love.'
Apple vs Musk's Neuralink
To drive its BCI efforts, Apple is collaborating with a startup named Synchron. The company has developed a device called the Stentrode—a tiny, stent-like electrode implant positioned within a vein near the brain's motor cortex. It features 16 electrodes that detect brain activity.
By contrast, Elon Musk's Neuralink is working on a more invasive approach. Its brain implant, the N1, is inserted directly into brain tissue and contains over 1,000 electrodes, allowing it to capture far more detailed neural data.
This difference in design leads to noticeably different user experiences. Mark Jackson, one of the first individuals to test the Stentrode, shared that the system doesn't yet allow him to simulate using a mouse or touchscreen. As a result, navigating digital interfaces is significantly slower than with conventional input methods.
Neuralink, on the other hand, demonstrated a more advanced level of interaction in March last year. In a livestream, Noland Arbaugh, the company's first implanted patient, was seen playing an online chess game, controlling the cursor purely through his thoughts.
While Synchron's technology may be at an earlier stage, it has already delivered some remarkable moments. According to the Wall Street Journal, Jackson, though unable to stand and not physically present in Switzerland, used an Apple VR headset connected to his brain implant to virtually gaze off a mountaintop in the Swiss Alps—and was even overcome by the sensation of his legs trembling.
'More is possible with a standard built specifically for these implants,' said Synchron CEO Tom Oxley. Apple is expected to unveil this new standard later this year, making it available to developers across the platform.

Related Articles


Indian Express, 28 minutes ago
Apple researchers show how popular AI models 'collapse' at complex problems
A new research paper by a group of researchers at Apple argues that artificial intelligence (AI) 'reasoning' is not all it is cracked up to be. Through an analysis of some of the most popular large reasoning models on the market, the paper showed that their accuracy faces a 'complete collapse' beyond a certain complexity threshold. The researchers put to the test models like OpenAI o3-mini (medium and high configurations), DeepSeek-R1, DeepSeek-R1-Qwen-32B, and Claude 3.7 Sonnet (thinking). Their findings suggest that the AI industry may be grossly overstating these models' capabilities. They also benchmarked these large reasoning models (LRMs) against large language models (LLMs) with no reasoning capabilities, and found that in some cases the latter outperformed the former.
'In simpler problems, reasoning models often identify correct solutions early but inefficiently continue exploring incorrect alternatives — an "overthinking" phenomenon. At moderate complexity, correct solutions emerge only after extensive exploration of incorrect paths. Beyond a certain complexity threshold, models completely fail to find correct solutions,' the paper said, adding that this 'indicates LRMs possess limited self-correction capabilities that, while valuable, reveal fundamental inefficiencies and clear scaling limitations'.
To clarify terms: LLMs are AI models trained on vast text data to generate human-like language, especially in tasks such as translation and content creation. LRMs prioritise logical reasoning and problem-solving, focusing on tasks requiring analysis, like math or coding. LLMs emphasise language fluency, while LRMs focus on structured reasoning.
To be sure, the paper's findings are a dampener on the promise of large reasoning models, which many have touted as a frontier breakthrough to understand and assist humans in solving complex problems in sectors such as health and science.
Apple researchers evaluated the reasoning capabilities of LRMs through four controllable puzzle environments, which allowed fine-grained control over complexity and rigorous evaluation of reasoning:
Tower of Hanoi: moving n disks between three pegs following specific rules, with complexity determined by the number of disks.
Checker Jumping: swapping red and blue checkers on a one-dimensional board, with complexity scaled by the number of checkers.
River Crossing: a constraint-satisfaction puzzle in which n actors and n agents must cross a river, with complexity controlled by the number of actor/agent pairs and the boat capacity.
Blocks World: rearranging blocks into a target configuration, with complexity managed by the number of blocks.
'Most of our experiments are conducted on reasoning models and their non-thinking counterparts, such as Claude 3.7 Sonnet (thinking/non-thinking) and DeepSeek-R1/V3. We chose these models because they allow access to the thinking tokens, unlike models such as OpenAI's o-series. For experiments focused solely on final accuracy, we also report results on the o-series models,' the researchers said.
The researchers found that as problem complexity increased, the accuracy of reasoning models progressively declined. Eventually, their performance reached a complete collapse (zero accuracy) beyond a specific, model-dependent complexity threshold. Initially, reasoning models increased their thinking tokens proportionally with problem complexity, indicating that they exerted more reasoning effort on more difficult problems. However, upon approaching a critical threshold (which closely corresponded to their accuracy collapse point), these models counter-intuitively began to reduce their reasoning effort (measured by inference-time tokens), despite the increasing problem difficulty.
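To illustrate how steeply these puzzles scale, consider the Tower of Hanoi. This minimal Python sketch (an illustration, not code from the paper) shows that the optimal solution length doubles with every added disk, growing as 2^n - 1 moves:

```python
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Return the optimal move list for n disks from peg src to peg dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, aux, dst, moves)   # clear the n-1 smaller disks out of the way
    moves.append((src, dst))             # move the largest remaining disk
    hanoi(n - 1, aux, dst, src, moves)   # re-stack the smaller disks on top
    return moves

for n in (3, 5, 10):
    print(n, len(hanoi(n)))  # 7, 31 and 1023 moves: always 2**n - 1
```

Each extra disk doubles the required work, which is why a puzzle a model solves reliably at n = 5 can push it past its collapse threshold only a few disks later.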
Their work also found that at low problem complexity, non-thinking models (LLMs) could match or even beat thinking models, with more token-efficient inference. At medium complexity, the advantage of reasoning models capable of generating long chains of thought began to manifest, and the performance gap between LLMs and LRMs widened. But at higher complexity, the performance of both collapsed to zero. 'Results show that while thinking models delay this collapse, they also ultimately encounter the same fundamental limitations as their non-thinking counterparts,' the paper said.
It is worth noting, though, that the researchers acknowledged their work could have limitations: 'While our puzzle environments enable controlled experimentation with fine-grained control over problem complexity, they represent a narrow slice of reasoning tasks and may not capture the diversity of real-world or knowledge-intensive reasoning problems.'
Soumyarendra Barik is Special Correspondent with The Indian Express and reports on the intersection of technology, policy and society. With over five years of newsroom experience, he has reported on issues of gig workers' rights, privacy, India's prevalent digital divide and a range of other policy interventions that impact big tech companies. He once tailed a food delivery worker for over 12 hours to quantify the amount of money they make, and the pain they go through while doing so. In his free time, he likes to nerd out about watches, Formula 1 and football.

Mint, 33 minutes ago
Tata Sons' FY25 revenue is likely to be lower despite record dividends
MUMBAI: Tata Sons, the privately held parent company of the Tata group, will likely see its revenue shrink in 2024-25 despite receiving record dividend income from group companies. The reason: last fiscal's one-off income from the sale of Tata Consultancy Services (TCS) Ltd shares.
The company's dividend income from 11 listed companies surpassed the ₹35,000-crore mark in 2024-25, the highest ever, showed data compiled by Mint. However, since Tata Sons sold TCS shares worth approximately ₹9,000 crore in the open market in 2023-24, its income is likely to decline in the latest fiscal.
Dividends and share buybacks from group companies account for nearly all of Tata Sons' top line. While the conglomerate has 26 listed companies, the 11 used in this analysis account for over 95% of the holding firm's dividend income. The company received dividend income of ₹21,529 crore in 2023-24, according to its annual report. In addition, it received ₹10,548 crore from a TCS share buyback, taking its total payout from group companies to ₹32,077 crore. Adding revenue from other sources, such as Tata brand usage fees and the sale of TCS shares, took Tata Sons' 2023-24 revenue to ₹43,893 crore. Tata Sons did not respond to Mint's queries.
Reaping dividends
TCS, in which Tata Sons holds a 71.74% stake, led the pack with a dividend of ₹32,184 crore in 2024-25. In fact, a 12% increase in the payout from TCS is the reason the dividend income of Tata Sons hit a fresh record in 2024-25 despite a dip in dividends from three other companies. Tata Steel and Tata Motors were the next two highest dividend-paying units, with payouts of ₹1,427 crore and ₹947 crore, respectively.
'The reason for growth in dividends is the improving profitability of most large Tata group companies and likely the lack of suitable reinvestment opportunities for those companies,' said Shriram Subramanian, managing director of proxy advisory firm InGovern Research Services. For instance, TCS generates the highest free cash flow within the group. The company will need to pay out the cash as dividends because a slowdown in the IT services sector has left it with few investment opportunities, Subramanian added. Tata Sons also needs these dividends because it has to invest in fledgling businesses like Air India and Tata Electronics, which take up a lot of capital, he added. Air India is the privately held aviation business of the Tata group, while Tata Electronics, also private, assembles iPhones for Apple Inc. on a contract basis and is setting up semiconductor manufacturing fabs.
The Chandra factor
The dividend income of Tata Sons has surged sharply during the tenure of chairman N. Chandrasekaran, who took over the reins on 21 February 2017. The 2024-25 dividend income is up five-fold since 2016-17. Seven of the 11 listed Tata group companies analysed returned dividends in excess of ₹100 crore to Tata Sons; only three had paid the parent over ₹100 crore in 2016-17. 'The Tata group of companies is a very large group, and one person alone cannot be responsible for its performance or the lack of it,' Subramanian said. 'However, Mr. Chandrasekaran has, in the past eight years, brought some dynamism and tried to consolidate businesses,' he added. Chandrasekaran joined the board of Tata Sons in October 2016 and took over as chairman in February 2017. He also chairs the boards of Tata Steel Ltd, Tata Motors Ltd, Tata Power Co. Ltd, Air India Ltd, TCS and several other group operating companies. He worked at TCS for 30 years, including eight years as its chief executive until 2017.


New Indian Express, 35 minutes ago
Apple unveils software redesign while reeling from AI missteps, tech upheaval and Trump's trade war
CUPERTINO (California): After stumbling out of the starting gate in Big Tech's pivotal race to capitalize on artificial intelligence, Apple tried to regain its footing Monday during an annual developers conference that focused mostly on incremental advances and cosmetic changes to its technology.
The presummer rite, which attracted thousands of developers from nearly 60 countries to Apple's Silicon Valley headquarters, was subdued compared with the feverish anticipation that surrounded the event in the past two years. Apple highlighted plans for more AI tools designed to simplify people's lives and make its products even more intuitive, and it provided an early glimpse at the biggest redesign of its iPhone software in a decade. In doing so, Apple executives refrained from the bold promises of breakthroughs that punctuated recent conferences, prompting CFRA analyst Angelo Zino to deride the event as a "dud" in a research note.
More AI, but what about Siri?
In 2023, Apple unveiled a mixed-reality headset that has been little more than a niche product, and at last year's WWDC it trumpeted its first major foray into the AI craze with an array of new features highlighted by the promise of a smarter, more versatile version of its virtual assistant, Siri, a goal that has yet to be realized. "This work needed more time to reach our high-quality bar," Craig Federighi, Apple's top software executive, said Monday at the outset of the conference. The company didn't provide a precise timetable for when Siri's AI upgrade will be finished but indicated it won't happen until next year at the earliest.
"The silence surrounding Siri was deafening," said Forrester Research analyst Dipanjan Chatterjee. "No amount of text corrections or cute emojis can fill the yawning void of an intuitive, interactive AI experience that we know Siri will be capable of when ready. We just don't know when that will happen. The end of the Siri runway is coming up fast, and Apple needs to lift off."
Is Apple, with its 'liquid glass,' still a trendsetter?
The showcase unfolded amid nagging questions about whether Apple has lost some of the mystique and innovative drive that made it a tech trendsetter during its nearly 50-year history. Instead of making a big splash as it did with the Vision Pro headset and its AI suite, Apple took a mostly low-key approach that emphasized its effort to spruce up the look of its software with a new design called "Liquid Glass" while also unveiling a new hub for its video games and new features like a "Workout Buddy" to help manage physical fitness. Apple executives promised to make its software more compatible with the increasingly sophisticated computer chips powering its products while also making it easier to toggle between the iPhone, iPad and Mac. "Our product experience has become even more seamless and enjoyable," Apple CEO Tim Cook told the crowd as the 90-minute showcase wrapped up.
IDC analyst Francisco Jeronimo said Apple seemed to be largely using Monday's conference to demonstrate that the company still has a blueprint for success in AI, even if it will take longer to realize the vision presented a year ago. "This year's event was not about disruptive innovation, but rather careful calibration, platform refinement and developer enablement — positioning itself for future moves rather than unveiling game-changing technologies," Jeronimo said.
Apple's next operating system will be iOS 26
Besides redesigning its software, Apple will switch to a naming method that automakers have used to telegraph their latest car models: linking them to the year after they first arrive at dealerships. That means the next version of the iPhone operating system, due out this autumn, will be known as iOS 26 instead of iOS 19, as it would have been under the previous naming approach used since the device's 2007 debut.
The iOS 26 upgrade is expected to be released in September, around the same time Apple traditionally rolls out its next iPhone models.
Playing catchup in AI
Apple opened the proceedings with a short video clip featuring Federighi speeding around a track in a Formula 1 race car. Although it was meant to promote the June 27 release of the Apple film "F1," starring Brad Pitt, the segment could also be viewed as an unintentional analogy for the company's attempt to catch up to the rest of the pack in AI technology. While some of the new AI tricks compatible with the latest iPhones began rolling out late last year as part of free software updates, the delays in a souped-up Siri became so glaring that the chastened company stopped promoting it in its marketing campaigns earlier this year.