AI device startup that sued OpenAI and Jony Ive is now suing its own ex-employee over trade secrets

Economic Times | 11-07-2025
A secretive competition to pioneer a new way of communicating with artificial intelligence chatbots is getting a messy public airing as OpenAI fights a trademark dispute over its stealth hardware collaboration with legendary iPhone designer Jony Ive.
In the latest twist, tech startup iyO Inc., which already sued Ive and OpenAI CEO Sam Altman for trademark infringement, is now suing one of its own former employees for allegedly leaking a confidential drawing of iyO's unreleased product.
At the heart of this bitter legal wrangling is a big idea: we shouldn't need to stare at computer or phone screens, or talk to a box like Amazon's Alexa, to interact with our future AI assistants in a natural way. And whoever comes up with this new AI interface could profit immensely from it.

OpenAI, maker of ChatGPT, started to outline its own vision in May by buying io Products, a product and engineering company co-founded by Ive, in a deal valued at nearly $6.5 billion. Soon after, iyO sued for trademark infringement, citing the similar-sounding name and the firms' past interactions.

US District Judge Trina Thompson ruled last month that iyO has a strong enough case to proceed to a hearing this fall. Until then, she ordered Altman, Ive and OpenAI to refrain from using the io brand, leading them to take down the venture's web page and all mentions of it.
A second lawsuit from iyO, filed this week in San Francisco Superior Court, accuses a former iyO executive, Dan Sargent, of breach of contract and misappropriation of trade secrets over his meetings with another io co-founder, Tang Yew Tan, a close Ive ally who led design of the Apple Watch. Sargent left iyO in December and now works for Apple. He and Apple didn't immediately respond to a request for comment.

"This is not an action we take lightly," said iyO CEO Jason Rugolo in a statement Thursday. "Our primary goal here is not to target a former employee, whom we considered a friend, but to hold accountable those whom we believe preyed on him from a position of power."

Rugolo told The Associated Press last month that he thought he was on the right path in 2022 when he pitched his ideas and showed off his prototypes to firms tied to Altman and Ive. Rugolo later expanded publicly on his earbud-like "audio computer" product in a TED Talk last year. What he didn't know was that, by 2023, Ive and Altman had begun quietly collaborating on their own AI hardware initiative.

"I'm happy to compete on product, but calling it the same name, that part is just amazing to me. And it was shocking," Rugolo said in an interview.

The new venture was revealed publicly in a May video announcement, and to Rugolo about two months earlier, after he had emailed Altman with an investment pitch. "thanks but im working on something competitive so will (respectfully) pass!" Altman wrote to Rugolo in March, adding in parentheses that it was called io.

Altman has dismissed iyO's lawsuit on social media as a "silly, disappointing and wrong" move from a "quite persistent" Rugolo. Other executives, in court documents, characterized the product Rugolo was pitching as a failed one that didn't work properly in a demo.

Altman said in a written declaration that he and Ive chose the name two years ago in reference to the concept of "input/output," which describes how a computer receives and transmits information. Neither io nor iyO was first to play with the phrasing - Google's flagship annual technology showcase is called I/O - but Altman said he and Ive acquired the io.com domain name in August 2023. The idea was "to create products that go beyond traditional products and interfaces," Altman said. "We want to create new ways for people to input their requests and new ways for them to receive helpful outputs, powered by AI."

A number of startups have already tried, and mostly failed, to build gadgetry for AI interactions. The startup Humane developed a wearable pin that you could talk to, but the product was poorly reviewed, and the startup discontinued sales after HP acquired its assets earlier this year.

Altman has suggested that io's version could be different. He said in a now-removed video that he's already trying a prototype at home that Ive gave him, calling it "the coolest piece of technology that the world will have ever seen."

What Altman and Ive still haven't said is what exactly it is. The court case, however, has forced their team to disclose what it's not. "Its design is not yet finalized, but it is not an in-ear device, nor a wearable device," said Tan in a court declaration that sought to distance the venture from iyO's product. It was that same declaration that led iyO to sue Sargent this week.
Tan revealed in the filing that he had talked to a "now former" iyO engineer who was looking for a job because of his frustration with "iyO's slow pace, unscalable product plans, and continued acceptance of preorders without a sellable product." Those conversations with the unnamed employee led Tan to conclude "that iyO was basically offering 'vaporware' - advertising for a product that does not actually exist or function as advertised, and my instinct was to avoid meeting with iyO myself and to discourage others from doing so."

iyO said its investigators recently reached out to Sargent and confirmed he was the one who met with Tan.

Rugolo told the AP he feels duped after he first pitched his idea to Altman in 2022 through the Apollo Projects, a venture capital firm started by Altman and his brothers. Rugolo said he demonstrated his products and the firm politely declined, explaining that it doesn't do consumer hardware investments. That same year, Rugolo also pitched the idea to Ive through LoveFrom, the San Francisco design firm Ive started after his 27-year career at Apple. Ive's firm also declined.

"I feel kind of stupid now," Rugolo added. "Because we talked for so long. I met with them so many times and demo'd all their people - at least seven people there. Met with them in person a bunch of times, talking about all our ideas."

Related Articles

Apple releases iOS 26 beta 4 with Liquid Glass redesign and AI-news summaries

Time of India | 38 minutes ago

Apple has rolled out the fourth developer beta of its next major software update for iPhones, iOS 26. The latest update introduces minor adjustments to its Liquid Glass redesign and reintroduces the AI-powered notification summaries for news.

The new developer beta (iOS 26 beta 4) was released just before the expected launch of the iOS 26 public beta later this week, and it largely reflects the features and changes users can anticipate in the upcoming public release. Developer betas are intended to allow mobile app creators to test their apps with Apple's new software, ensuring they are ready for the public launch of the operating system in the coming months. Apple has for years offered public betas following its Worldwide Developers Conference in June, giving iPhone owners early access to updated software with fewer stability issues and bugs.

Apple iOS 26 beta 4 released: What's new

Apple has yet to publish the release notes for beta 4 on its Developer website, which leaves room for additional discoveries, particularly around minor bug fixes and performance enhancements. However, several users of the social media platform X (formerly Twitter) who have already installed the latest beta have confirmed the changes. As per the X posts, iOS 26 beta 4 adds a new 'Welcome' splash screen after updating, along with introductory screens for key features such as Siri, AI-powered notification summaries, prioritisation tools, and the redesigned Camera app.

Earlier this year, Apple temporarily paused its AI notification summaries following concerns from the BBC. The outlet flagged an issue where the summary misrepresented a headline, falsely stating that Luigi Mangione, who was charged with the murder of UnitedHealthcare CEO Brian Thompson, had died by suicide. Responding to this, Apple announced plans to update the software to better indicate when AI-generated summaries are being shown. The current beta includes a setup screen for the AI summarisation feature, which remains under testing. Under the 'News & Entertainment' section, Apple now warns that 'Summarization may change the meaning of the original headlines' and advises users to 'Verify information.'

Users have also noticed further refinements to Apple's 'Liquid Glass' UI redesign. While beta 3 had scaled back some of the transparency effects, beta 4 brings them back in apps like the App Store, Photos, Apple Music, and Weather. The Notification Center now adds a subtle dynamic tint as users scroll. Additional visual updates include a dynamic wallpaper that shifts colours and new wallpapers for CarPlay.

Apart from this, Apple also rolled out beta 4 versions of its other platforms, including iPadOS 26, macOS 26, watchOS 26, tvOS 26, visionOS 26, and Xcode 26.

Agra police holds training session on AI for cops

Time of India | an hour ago

Agra: In an initiative aimed at technologically empowering its personnel and enabling "effective use of artificial intelligence (AI) in day-to-day policing and curbing crime", Agra police on Wednesday held a special training session on AI prompt engineering.

The session focused on the application of Large Language Models (LLMs) such as ChatGPT, Gemini and Perplexity in policing activities, including FIR and report writing, understanding and interpreting BNS sections, giving direction to cybercrime investigations, creating awareness material for cybersecurity and public outreach, and drafting documents and summaries for analysis and interpretation, among others. During the session, Agra police also showcased its in-house developed AI apps, including FAI, EBITA and AI RAGBOT for the UP police circular, which have already been integrated into the system.

Agra DCP (city) Sonam Kumar said policemen participated in hands-on activities, crafted their own prompts, and gained a better understanding of the capabilities of ChatGPT, Gemini and Perplexity AI. "Those who completed the 'prompt engineering' training were awarded the title of AI commandos and were also provided a one-month paid subscription to Perplexity AI, enabling them to perform traditional duties with greater speed, accuracy and efficiency using new technology," the DCP said.

The personnel were also instructed not to share any sensitive or confidential departmental information on AI models. "This training marks a significant step toward building a tech-savvy police force in Agra, and more such technical programmes will be organised in future," said Kumar.

Agra police commissioner Deepak Kumar said, "In modern policing, the use of technology and AI is no longer optional — it has become a necessity. Such initiatives will enhance the efficiency and response time of the police personnel."

AI must aid human thought, not become its replacement

Hindustan Times | an hour ago

Watching the recent resurgence of violence in Kashmir, I find myself grappling with questions about the role of technology, particularly Generative Artificial Intelligence (GenAI), in warfare. India is built upon the philosophy of live and let live, yet that doesn't mean passively accepting aggression. As someone deeply invested in responsibly applying AI in critical industries like financial services, aerospace, semiconductors, and manufacturing, I am acutely aware of the unsettling dual-use potential of the tools we develop: the same technology driving efficiency and innovation can also be weaponised for harm.

We stand at a critical juncture. GenAI is rapidly shifting from mere technological advancement to a profound geopolitical tool. The stark division between nations possessing advanced GenAI capabilities and those dependent on externally developed systems poses serious strategic risks. Predominantly shaped by the interests and biases of major AI-developing nations, primarily the US and China, these models inevitably propagate their creators' narratives, often undermining global objectivity.

Consider the inherent biases documented in AI models like OpenAI's GPT series or China's Deepseek, which subtly yet powerfully reflect geopolitical views. Research indicates these models minimise criticism of their home nations, embedding biases that can exacerbate international tensions. China's AI approach, for instance, often reinforces national policy stances, inadvertently legitimising territorial disputes or delegitimising sovereign entities, complicating fragile diplomatic relationships, notably in sensitive regions like Kashmir.

Historically, mutually assured destruction (MAD) relied on nuclear deterrence. Today's arms race, however, is digital and equally significant in its potential to reshape global stability. We must urgently reconsider this outdated framework. Instead of mutually assured destruction, I advocate for a new kind of MAD: mutual advancement through digitisation. This paradigm shifts the emphasis from destructive competition to collaborative development and technological self-reliance.

This evolved MAD requires nations, particularly technologically vulnerable developing countries, to establish independent, culturally informed AI stacks. Such autonomy would reflect local histories, cultures, and political nuances, making these nations less susceptible to external manipulation. Robust, culturally informed AI not only protects against misinformation but fosters genuine global dialogue, contributing to a balanced, multipolar AI landscape.

At the core of geopolitical tensions lies a profound challenge of mutual understanding. The world's dominant AI models, primarily trained in English and Chinese, leave multilingual and culturally diverse nations like India, with its 22 official languages and hundreds of dialects, in a precarious position. A simplistic AI incapable of capturing nuanced linguistic subtleties risks generating misunderstandings with severe diplomatic repercussions. To prevent this, developing sophisticated, culturally aware AI models is paramount. Multilingual AI systems must leverage similarities among related languages such as Marathi and Gujarati or Tamil and Kannada to rapidly scale without losing depth or nuance. Such culturally adept systems, sensitive to idiomatic expressions and contextual subtleties, significantly enhance cross-cultural understanding, reducing the risk of conflict driven by miscommunication.
As GenAI becomes integrated into societal infrastructure and decision-making processes, it will inevitably reshape human roles. While automation holds tremendous promise for efficiency, delegating judgment, especially in life-and-death contexts like warfare, to AI systems raises profound concerns. I am reminded of the Cold War incident in 1983 when Soviet Lieutenant Colonel Stanislav Petrov trusted human intuition over technological alarms, averting nuclear disaster — a poignant reminder of why critical human judgment must never be relinquished to machines entirely.

My greatest fear remains starkly clear: a future where humans willingly delegate judgment and thought to algorithms. We should not accept this future. We share a collective responsibility, as innovators, technologists, and global citizens, to demand and ensure that AI serves human wisdom rather than replaces it. Let's commit today: never allow technology to automate away our humanity.

Arun Subramaniyan is founder and CEO, Articul8. The views expressed are personal.
