OpenAI's first AI hardware stuck in legal row over 'io' branding and trademark concerns

Hindustan Times · 5 hours ago

OpenAI, the company behind ChatGPT, is working on something big, and this time it is not just software. Earlier this year, OpenAI brought in a team led by former Apple design chief Jony Ive in a multi-billion-dollar all-equity deal, with the goal of building the company's first-ever hardware product: a new kind of AI device. But there is one big problem. The name the project was using, "io", is now caught up in a legal fight.

Trademark fight slows OpenAI's hardware launch
So what is the issue? Another company, called Iyo, says OpenAI is using a name that sounds too close to its own. Iyo is an audio tech startup working on an in-ear AI device. It filed a lawsuit claiming that OpenAI and its new hardware team knew about its brand and had even rejected an earlier investment offer. Court documents include emails showing that OpenAI CEO Sam Altman had direct contact with Iyo before deciding to move in a different direction.
Things got more serious when a US court ordered OpenAI to stop using the name "io" in public, so OpenAI has now removed all references to it from its website and social media. The company is being very careful about what it says regarding the project for now.

What kind of device is OpenAI building?
According to court filings, the device is not a wearable and not an in-ear gadget. Tang Tan, who is leading the hardware team, said the product is still at least a year away and its design is not yet final. While the exact nature of OpenAI's hardware remains under wraps, it has been confirmed that it will not be released before 2026.
Court documents dated June 12 reveal more about the development process. OpenAI stated that the "io" team had explored a variety of hardware formats before deciding on a direction. These included desktop, mobile, wired, wireless, wearable and portable concepts. The company added that its team spent several months studying existing market products and engaging in prototyping exercises.
In short, OpenAI set out to build something new and exciting in the AI hardware space, but it now has to resolve a naming dispute before it can move forward. For now, all eyes are on what the company builds and what it calls the device next. One thing is clear: OpenAI's move into hardware has begun, but the road ahead may be a little bumpy.

Related Articles

At Amazon's biggest data centre, everything is supersized for AI

Business Standard · an hour ago

By Karen Weise & Cade Metz

A year ago, a 1,200-acre stretch of farmland outside New Carlisle, Ind., was an empty cornfield. Now, seven Amazon data centres rise up from the rich soil, each larger than a football stadium. Over the next several years, Amazon plans to build around 30 data centres at the site, packed with hundreds of thousands of specialised computer chips. With hundreds of thousands of miles of fibre connecting every chip and computer together, the entire complex will form one giant machine intended just for artificial intelligence.

The facility will consume 2.2 gigawatts of electricity — enough to power a million homes. Each year, it will use millions of gallons of water to keep the chips from overheating. And it was built with a single customer in mind: the AI startup Anthropic, which aims to create an AI system that matches the human brain.

The complex — so large that it can be viewed completely only from high in the sky — is the first in a new generation of data centres being built by Amazon, and part of what the company calls Project Rainier, after the mountain that looms near its Seattle headquarters. Project Rainier is Amazon's entry into a race by the technology industry to build data centres so large they would have been considered absurd just a few years ago. The data centres will dwarf most of today's, which were built before OpenAI's ChatGPT chatbot inspired the AI boom in 2022.

The tech industry's increasingly powerful AI technologies require massive networks of specialised computer chips — and hundreds of billions of dollars to build the data centres that house those chips. The result: behemoths that stretch the limits of the electrical grid and change the way the world thinks about computers. Amazon, which has invested $8 billion in Anthropic, will rent computing power from the new facility to its startup partner. An Anthropic cofounder, Tom Brown, who oversees the company's work with Amazon on its chips and data centres, said having all that computing power in one spot could allow the startup to train a single AI system.

Gemini CLI debuts as Google's open-source AI coding assistant: How it works

Mint · 2 hours ago

Alphabet Inc.'s Google has unveiled a new artificial intelligence-powered coding assistant called Gemini CLI (Command Line Interface), designed to streamline the development process by allowing users to interact with their systems through natural language. The global launch signals Google's intent to close the gap with competing AI tools such as OpenAI's Codex and Anthropic's Claude Code.

Announced on Wednesday, Gemini CLI is positioned as an open-source tool that brings the capabilities of Google's Gemini AI directly to the terminal, the text-based interface widely used by developers. It enables users to perform a variety of tasks using conversational commands, ranging from writing and debugging code to building simple websites or even generating videos. "With Gemini CLI, you can have a natural language conversation with your computer to solve problems and weave complex workflows together, to do far more than was previously possible," said Taylor Mullen, Senior Staff Software Engineer at Google, during a press briefing.

One of the defining features of Gemini CLI is its open-source nature. Google said the decision reflects a broader goal of democratising access to AI tools, enabling developers to inspect, adapt, and contribute to the codebase. This move also supports transparency, allowing users to understand the mechanics of the tool and assess its security. Gemini CLI builds on Google's existing legacy of open AI development, including the TensorFlow engine and several foundational transformer models, the architecture behind leading AI systems like ChatGPT. The launch of Gemini CLI also follows Google's recent release of Gemma, its open large language models made publicly available earlier in 2024. These efforts point to a shift in strategy, with the tech giant increasingly engaging with external developers after years of keeping much of its proprietary AI under wraps.

Gemini CLI users with personal Google accounts will be granted a free Gemini Code Assist licence, offering access to the Gemini 2.5 Pro model. This tier includes a usage limit of up to 60 requests per minute and 1,000 per day. Paid plans expand those limits to 120 requests per minute and 1,500 per day, while enterprise users can make up to 2,000 requests daily.

Ryan J. Salva, Senior Director of Product at Google, emphasised the broad applicability of the new tool: 'We believe that these tools are going to dominate the way not just developers, but creators of all kinds, work over the next decade. Whether you're a student, a freelancer, or a seasoned professional, you deserve access to the same cutting-edge resources.'
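The piece describes Gemini CLI as a natural-language interface that lives in the terminal. As a rough illustration of that workflow, the sketch below drives the CLI non-interactively from a short Python script. The package name, the "gemini" command and the -p prompt flag are assumptions drawn from the tool's public open-source repository rather than from the article above, so verify them locally (for example with "gemini --help") before relying on this.

    # Minimal sketch: sending one natural-language prompt to Gemini CLI from Python.
    # Assumptions (not stated in the article): the tool is installed globally,
    # e.g. via "npm install -g @google/gemini-cli", exposes a "gemini" command,
    # and accepts -p/--prompt for one-shot, non-interactive prompts.
    import subprocess

    def ask_gemini(prompt: str, timeout_s: int = 120) -> str:
        """Run a single prompt through the Gemini CLI and return its text reply."""
        result = subprocess.run(
            ["gemini", "-p", prompt],  # assumed invocation; check "gemini --help"
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        if result.returncode != 0:
            raise RuntimeError(f"gemini failed ({result.returncode}): {result.stderr.strip()}")
        return result.stdout.strip()

    if __name__ == "__main__":
        print(ask_gemini("Summarise what this project does and list its main entry points."))

This mirrors the free-tier flow the article describes, where a personal Google account and the bundled Gemini Code Assist licence provide access to the Gemini 2.5 Pro model, subject to the quoted limits of 60 requests per minute and 1,000 per day.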

Startup CEO says Google had everything ..., yet OpenAI beat them to the LLM Gold Rush, Elon Musk's 'one-word' reply

Time of India · 3 hours ago

An online debate over Silicon Valley's AI race reignited recently, and Tesla CEO Elon Musk weighed in with a one-word response. The debate followed a ruling by a US federal judge that Anthropic, the AI company, did not break copyright law when it trained its AI model Claude on books. The judge held that the use of the books was fair, reasoning that an AI model does not copy or reproduce books but learns from them and then generates original content.

Soon after the judgement, a discussion started on X (formerly known as Twitter). Startup CEO Luis Batalha asserted that Google, despite possessing "everything" needed, was ultimately outmaneuvered by OpenAI in the burgeoning Large Language Model (LLM) "gold rush." 'Google had everything: the transformer, massive compute, access to data, even Google Books - yet OpenAI beat them to the LLM gold rush. Having the pieces isn't the same as playing the game,' wrote Batalha. Musk supported the sentiment with a one-word response: 'True'.

Google has long been at the forefront of AI research, publishing seminal papers and developing advanced models. However, with the launch of ChatGPT, OpenAI managed to capture the imagination of millions and ignited the current "LLM gold rush," forcing other tech giants to accelerate their own public-facing generative AI initiatives. Critics suggest that Google's cautious approach, perhaps due to its established market position and the potential risks of deploying rapidly evolving AI, allowed a leaner, more focused entity like OpenAI to seize the early lead.
