So You Bought a Humane Ai Pin. Here's What You Can Do Next


WIRED · 28-02-2025
Humane's Ai Pin stopped working today, turning the year-old wearable—which cost millions to build—into a paperweight. Here are some alternatives if you'd rather avoid the e-waste. Photograph: Julian Chokkattu
If you buy something using links in our stories, we may earn a commission. This helps support our journalism.
As of today, the Humane Ai Pin is dead—less than a year since its launch. Following an acquisition by HP, Humane shut down many of the core features of the artificial intelligence-powered wearable and deleted user data, rendering it useless. Yes, some functions remain, like checking battery life (useful!), but you can't access the voice assistant.
If you spent $700 on the Ai Pin, you might be wondering what you can do now. These are the risks of being an early adopter, but not getting a refund on a device bricked before the warranty is even up feels like a rip-off. Humane sold roughly 10,000 units, though daily returns were outpacing sales at one point, so there are even fewer Pins in the world. Still, that's thousands of effectively useless devices. It's a blip in the amount of e-waste generated in a year around the world—already at a crisis point—but Humane really should have offered a more responsible approach with the Ai Pin's demise.
There might not be a way to get your money back. However, if you bought the Pin in October 2024 (for some reason), you might fall within the typical 120-day window to issue a chargeback with your credit card. There are some alternative options, too. Let's explore.

File a Complaint With the FTC
Killing a product consumers have spent money on is 'unfair and deceptive.' That's what Lucas Gutterman told WIRED via email. He's the campaign director of the Designed to Last campaign at Public Interest Research Groups (PIRG).
'When we buy something with advertised features, we should get what we pay for, and when we get ripped off the law should protect us,' Gutterman says. 'I urge everyone who purchased a Humane AI Pin to file a complaint with the FTC so they can step up and protect consumers.' Photograph: Federal Trade Commission
Last year, a coalition of groups like US PIRG and Consumer Reports sent a letter to the Federal Trade Commission, urging the agency to address 'software tethering,' described as the use of software to control and limit the function of a device after someone buys it. The FTC subsequently conducted a study that attempted to determine software support commitments for more than 180 products, only to find that 'nearly 89 percent of the manufacturer's web pages for these products failed to disclose how long the products would receive software updates.'
Humane's warranty states that the 'software and software functionality' are excluded, which is often the case on many connected products. But the study also noted that it's deceptive if manufacturers market a device's features but then fail to provide software updates to maintain those capabilities—it may violate the Magnuson Moss Warranty Act, which was enacted in 1975 to protect consumers from unfair disclaimers in warranties.
'Without transparent labeling of length of software support, or by taking away key features that were advertised, manufacturers might be violating the FTC Act by deceiving consumers,' Gutterman says. 'Paying for a $700 product that's supposed to work, and then being told it will suddenly stop working, is a "harm consumers cannot avoid," although it's one that Humane could have humanely avoided before they shipped e-waste-to-be.'
You can file a complaint on the FTC's website.

Pressure Humane to Open Up the Software
Sometimes, when companies stop delivering updates to products and shut down core features, a devoted community comes to the rescue to revive or maintain capabilities of the product (or mod it to do something else). We've seen this time and time again, like with the iPod, the Game Boy, or even the Pebble smartwatch.
The Humane Ai Pin may not have enough doting admirers up for the task, but this process would be made simpler if Humane released the keys to the software. Kyle Wiens, CEO of iFixit, says Humane should follow Pebble's lead and open the device up. Either that or we'll have to wait for someone to find a vulnerability and jailbreak the Ai Pin to write custom software for it. Humane did not respond to our request for comment.
What could you do with this little wearable pin? Wiens had some ideas. 'You could just use it as a Walkie-Talkie. … A pin that talks to the internet, has a camera and microphone, sounds pretty cool. It's like a Star Trek communications pin.'

Dispose or Disassemble It
If you want to just get rid of the thing, Wiens says to make sure you remove the battery first and then take it to an e-recycler. We also have a detailed guide on how to responsibly dispose of your electronics. Make sure you do the same for the other accessories that came in the box, though you can easily repurpose Humane's nifty charging adapter and the nice braided cable.
Alternatively, you can use a service like Grid and have the company deconstruct and frame the Humane Ai Pin, so you can hang it up on the wall and remind yourself every day of the $700 you lost. I know it's a little hard to think about spending more money on this wearable, but at least it'd look cool.
You could turn your Ai Pin into a 'unique framed artwork' and commemorate the $700 you lost. Photograph: Grid
Grid accepts custom orders, so I asked the company if it would consider the Ai Pin. The answer is yes—for a cool $90, which includes design, materials, and shipping. 'We have carefully examined the structure of the Humane AI Pin and can confirm that we are able to provide a deconstruction and framing service for it. If anyone is interested in preserving their AI Pin as a unique framed artwork instead of letting it go to waste, they can definitely reach out to us.'
You can email support@gridstudio.cc to inquire about this custom request.

Use It as a Paperweight or Keep It Safe
The Ai Pin is more than up to the complex task of weighing down paper! Photograph: Julian Chokkattu
It's not the heftiest thing in the world, but the Ai Pin can do the complex job of a paperweight. Or you can keep it in the box and put it away somewhere safe. In 50 years, you'll accidentally find it in the attic and then you can tell your grandkids how this little gadget was once—for a fleeting moment—supposed to be the next big thing.

Related Articles

Developers Say GPT-5 Is a Mixed Bag

WIRED · 4 hours ago

Aug 15, 2025 1:47 PM

Software engineers are finding OpenAI's new GPT-5 model is helping them think through coding problems—but isn't much better at actual coding.

When OpenAI launched GPT-5 last week, it told software engineers the model was designed to be a 'true coding collaborator' that excels at generating high-quality code and performing agentic, or automated, software tasks. While the company didn't say so explicitly, OpenAI appeared to be taking direct aim at Anthropic's Claude Code, which has quickly become many developers' favored tool for AI-assisted coding. But developers tell WIRED that GPT-5 has been a mixed bag so far.

It shines at technical reasoning and planning coding tasks, but some say that Anthropic's newest Opus and Sonnet reasoning models still produce better code. Depending on which version of GPT-5 developers are using—low, medium, or high verbosity—the model can be more elaborative, which sometimes leads it to generate unnecessary or redundant lines of code. Some software engineers have also criticized how OpenAI evaluated GPT-5's performance at coding, arguing the benchmarks it used are misleading. One research firm called a graphic that OpenAI published boasting about GPT-5's capabilities a 'chart crime.'

GPT-5 does stand out in at least one way: Several people noted that, in comparison to competing models, it is a much more cost-effective option. 'GPT-5 is mostly outperformed by other AI models in our tests, but it's really cheap,' says Sayash Kapoor, a computer science doctoral student and researcher at Princeton University who co-wrote the book AI Snake Oil. Kapoor says he and his team have been running benchmark tests to evaluate GPT-5's capabilities since the model was released to the public last week.
He notes that the standard test his team uses—measuring how well a language model can write code that will reproduce the results of 45 scientific papers—costs $30 to run with GPT-5 set to medium, or mid-range verbosity. The same test using Anthropic's Opus 4.1 costs $400. In total, Kapoor says his team has spent around $20,000 testing GPT-5 so far.

Although GPT-5 is cheap, Kapoor's tests indicate the model is also less accurate than some of its competitors. Claude's premium model achieved a 51 percent accuracy rating, measured by how many of the scientific papers it accurately reproduced. The medium version of GPT-5 received a 27 percent accuracy rating. (Kapoor has not yet run the same test using GPT-5 high, so it's an indirect comparison, given that Opus 4.1 is Anthropic's most powerful model.)

OpenAI spokesperson Lindsay McCallum referred WIRED to its blog, where it said that it trained GPT-5 on 'real-world coding tasks in collaboration with early testers across startups and enterprises.' The company also highlighted some of its internal accuracy measurements for GPT-5, which showed that the GPT-5 'thinking' model, which does more deliberate reasoning, scored highest on accuracy among all of OpenAI's models. GPT-5 'main,' however, still fell short of previously released models on OpenAI's own accuracy scale.

Anthropic spokesperson Amie Rotherham said in a statement that 'performance claims and pricing models often look different once developers start using them in production environments. Since reasoning models can quickly use a lot of tokens while thinking, the industry is moving to a world where price per outcome matters more than price per token.'

Some developers say they've had largely positive experiences with GPT-5 so far. Jenny Wang, an engineer, investor, and creator of the personal styling agent Alta, told WIRED the model appears to be better at completing complex coding tasks in one shot than other models.
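That 'price per outcome' framing can be made concrete with the figures Kapoor reported. A minimal back-of-the-envelope sketch—the papers-reproduced counts are my own estimates derived from the stated percentages, not numbers from Kapoor's team:

```python
# Rough cost-per-outcome comparison from the reported benchmark figures:
# 45 scientific papers; GPT-5 (medium) run costs $30 at 27% accuracy,
# Opus 4.1 costs $400 at 51% accuracy.
PAPERS = 45

models = {
    "GPT-5 (medium)": {"run_cost_usd": 30, "accuracy": 0.27},
    "Opus 4.1": {"run_cost_usd": 400, "accuracy": 0.51},
}

for name, m in models.items():
    reproduced = PAPERS * m["accuracy"]          # estimated papers reproduced
    cost_per_success = m["run_cost_usd"] / reproduced
    print(f"{name}: ~{reproduced:.0f} papers, ~${cost_per_success:.2f} per success")
# GPT-5 (medium): ~12 papers, ~$2.47 per success
# Opus 4.1: ~23 papers, ~$17.43 per success
```

By this crude measure, GPT-5 medium is roughly 7x cheaper per successfully reproduced paper despite its much lower accuracy, which is the trade-off Kapoor and Rotherham are pointing at.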
She compared it to OpenAI's o3 and 4o, which she uses frequently for code generation and straightforward fixes 'like formatting, or if I want to create an API endpoint similar to what I already have,' Wang says. In her tests of GPT-5, Wang says she asked the model to generate code for a press page for her company's website, including specific design elements that would match the rest of the site's aesthetic. GPT-5 completed the task in one take, whereas in the past, Wang would have had to revise her prompts during the process. There was one significant error, though: 'It hallucinated the URLs,' Wang says.

Another developer, who spoke on the condition of anonymity because their employer didn't authorize them to speak to the press, says GPT-5 excels at solving deep technical problems. The developer's current hobby project is writing a programmatic network analysis tool, one that would require code isolation for security purposes. 'I basically presented my project and some paths I was considering, and GPT-5 took it all in and gave back a few recommendations along with a realistic timeline,' the developer explains. 'I'm impressed.'

A handful of OpenAI's enterprise partners and customers, including Cursor, Windsurf, and Notion, have publicly vouched for GPT-5's coding and reasoning skills. (OpenAI included many of these remarks in its own blog post announcing the new model.) Notion also shared on X that it's 'fast, thorough, and handles complex work 15 percent better than other models we've tested.'

But within days of GPT-5's release, some developers were weighing in online with complaints. Many said that GPT-5's coding abilities seemed behind-the-curve for what was supposed to be a state-of-the-art, ultra-capable model from the world's buzziest AI company. 'OpenAI's GPT-5 is very good, but it seems like something that would have been released a year ago,' says Kieran Klassen, a developer who has been building an AI assistant for email inboxes.
'Its coding capabilities remind me of Sonnet 3.5,' he adds, referring to an Anthropic model that launched in June 2024.

Amir Salihefendić, founder of the startup company Doist, said in a social media post that he's been using GPT-5 in Cursor and has found it 'pretty underwhelming' and that 'it's especially bad at coding.' He said GPT-5's release felt like a 'Llama 4 moment,' referring to Meta's AI model, which had also disappointed some people in the AI community. On X, developer Mckay Wrigley wrote that GPT-5 is a 'phenomenal everyday chat model,' but when it comes to coding, 'I will still be using Claude Code + Opus.'

Other developers describe GPT-5 as 'exhaustive'—at times helpful, but often irritating in its long-windedness. Wang, who was pleased overall with the frontend coding project she assigned to GPT-5, says that she did notice the model was 'more redundant. It clearly could have come up with a cleaner or shorter solution.' (Kapoor points out that the verbosity of GPT-5 can be adjusted, so that users can ask it to be less chatty or even do less reasoning in exchange for better performance or cheaper pricing.)

Itamar Friedman, the cofounder and CEO of the AI-coding platform Qodo, believes that some of the critiques of GPT-5 stem from evolving expectations around AI model releases. 'I think a lot of people thought that GPT-5 would be another moment when everything about AI improved, because of this march towards AGI. When actually, the model improved on a few key sub-tasks,' he says. Friedman refers to before 2022 as 'BCE'—Before ChatGPT Era—when AI models improved holistically. In the post-ChatGPT era, new AI models are often better at certain things. 'Claude Sonnet 3.5, for example, was the one model to rule them all on coding. And Google Gemini got really good at code review, to check if code is high quality,' Friedman says.
OpenAI has also gotten some heat for the methodology it used to run its benchmark tests and make performance claims about GPT-5—although benchmark tests vary considerably across the industry. SemiAnalysis, a research firm focused on the semiconductor and AI sector, noted that OpenAI only ran 477 out of the 500 tests that are typically included in SWE-bench, a relatively new AI industry framework for testing large language models. (This was for overall performance of the model, not just coding.)

OpenAI says it always tests its AI models on a fixed subset of 477 tasks, rather than the full 500 in the SWE-bench test, because those 477 tests are the ones the company has validated on its internal infrastructure. McCallum also pointed to GPT-5's system card, which noted that changes in the model's verbosity setting can 'lead to variation in eval performance.'

Kapoor says that frontier AI companies are ultimately facing difficult tradeoffs. 'When model developers train new models, they're introducing new constraints, too, and have to consider many factors: how users expect the AI to behave and how it performs at certain tasks like agentic coding, all while managing the cost,' he says. 'In some sense, I believe OpenAI knew it wouldn't break all of those benchmarks, so it made something that would generally please a wide range of people.'

When CEO Lisa Su is finished making AMD the default choice for GPUs, she wants to use AI to help fix the 'travesty' that is modern healthcare

Yahoo · 21 hours ago

When you buy through links on our articles, Future and its syndication partners may earn a commission.

You probably won't be surprised to learn that one of Lisa Su's main priorities is to make AMD the default choice for AI GPUs. If you're familiar with the all-business AMD CEO, it will likewise hardly be a revelation to find she's pragmatic about how and when AMD gets there. But what happens afterwards? Now, that's more intriguing.

In a new interview with Wired, Su is canvassed on a wide range of topics, inevitably including the impact and future trajectory of AI, chip tariffs, building silicon in the US and Starbucks. Su is nothing if not a canny operator, so you can't expect much by way of indiscretions when discussing AMD's operations. In other words, if you were hoping for a push back on tariffs or bullish predictions about taking down Nvidia, you'll be disappointed.

But has her future after the AMD project been ticked off? That's a subject for which Su is willing to leverage her boxing training (she's had a personal boxing trainer for the last eight years) and pull fewer punches.

"One of the areas that I'm most personally passionate about is health care, because I have had experience with the health care system, and I think it should be much, much better than it is today," Su says, speaking of the years of treatment her now late mother experienced.

According to Su, the problem is the disjointed approach to healthcare. "The body is a very complex system. So you have specialists, like a heart specialist or a kidney specialist. But there are not that many generalists that can pull it all together," she says. "And that, to me, is a travesty."

Su thinks that stitching together all the disparate elements of healthcare, from drug discovery to therapeutics to inpatient care, "is a perfect use case for AI." Making sense of the complexity of healthcare is also something which she thinks will bend to her own expertise. "That's what we do in tech, right?
We take complex systems and put them together, and we make them work. But we're often only looking at one aspect of health, and it's my firm belief that if we can use technology to help pull all of that expertise together, we'll be able to treat people better," Su says.

However, she doesn't necessarily think we'll need AGI or "superintelligence" to achieve that. "We should be able to cure these diseases," she reckons, but adds, "I don't know if you call that 'superintelligence.'"

In any case, Su is exactly the sort of person you'd want to be working on the healthcare problem, so it's a slight pity that we'll have to wait a while for her attention to move away from making AMD even more competitive. Healthcare is for her "next life," she says. "I have a few things to do right now."

But then a methodical, non-flashy approach has been something of a signature for Su during her stint at the helm of AMD. "When I first joined AMD in 2012, Microsoft was just an early partner for us in gaming. Over the past 10-plus years we've built a lot of trust, and now we're cocreating with them, so Microsoft just announced they're using AMD not only for their next-generation Xbox consoles but across their entire cloud," Su explains.

And she sees AMD's participation in the AI market and the inevitable comparison with currently dominant Nvidia in a similarly pragmatic light. "That's where we are today in CPUs," she says when queried about whether she'd like AMD to be the first choice for AI GPUs for companies like OpenAI and Meta. "And absolutely, we expect to be there in AI as well. But I'm not impatient with this."

Personally, I wouldn't bet against Su getting there, sooner or later. Anyway, even if she doesn't drop any real bombshells about AMD's activities, Su's matter-of-fact takes on a wide range of subjects remain worth a read. Tariffs? They're a "fact of life." Politics?
"You won't see me weighing in on general social issues, because I don't necessarily think that that's where my value-add is." You don't have to agree with everything Lisa Su says, then. But the absence of hyperbole is certainly refreshing. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

