Sam Altman's eye-scanning orbs have arrived, sparking curiosity and fear
Earlier this month, a mysterious store selling a vision of the future opened its doors in downtown San Francisco's Union Square district.
A cryptic message appeared on the storefront window: 'World is the real human network. Anonymous proof of human and universally inclusive finance for the age of AI. Millions of humans in over 160 countries. Now available in the USA.'
The store attracted a small crowd of curious onlookers. People took turns scanning their eyes by peering into white devices known as orbs to prove they are human. Then they received, free of charge, a verified World ID they could use to log into online services and apps. As a bonus, participants were also given some Worldcoin cryptocurrency tokens.
Some just observed from a distance.
'I'm afraid to walk inside,' said Brian Klein, 66, as he peered into the window on his way to the theater. 'I don't want that thing taking any of my data and biometric scanning me.'
The futuristic technology is the creation of a startup called Tools for Humanity, which is based in San Francisco and Munich, Germany. Founded in 2019 by Alex Blania and Sam Altman — the entrepreneur known for OpenAI's ChatGPT — the tech company says it's 'building for humans in the age of AI.'
In theory, these iris scans offer a safe and convenient way for consumers to verify their human identity at a time when AI-powered tools can easily create fake audio and images of people.
"We wanted a way to make sure that humans stayed special and essential in a world where the internet was going to have lots of AI-driven content," said Altman, the chairman for Tools for Humanity, at a glitzy event in San Francisco last month.
Like Facebook and PayPal in their early days, World is still in a growth phase, trying to lure enough users to its network to eventually build a viable service.
A chief draw, World says, is that people can verify their humanness at an orb without providing personal information such as their names, emails, phone numbers and social media profiles.
But some are skeptical, contending that handing over biometric data is too risky. They cite instances where companies have reported data breaches or filed for bankruptcy, such as DNA research firm 23andMe.
'You can't get new eyeballs. I don't care what this company says. Biometric data like these retinal scans will get out. Hacks and leaks happen all the time,' said Justin Kloczko, a tech and privacy advocate at Consumer Watchdog. 'Your eyeballs are going to be like gold to these thieves.'
World has been making waves in Asia, Europe, South America and Central America. More than 12 million people have verified themselves through the orbs, and roughly 26 million have downloaded the World app, where people store their World ID and digital assets and access other tools, the company says.
Now, World is setting its sights on the United States. The World app says people can claim up to 39 Worldcoin tokens — worth up to $45.49 — if they verify they're human at an orb.
World plans to deploy 7,500 orbs throughout the U.S. this year. It's opening up spaces where people can scan their eyes in six cities — Los Angeles, San Francisco, Atlanta, Austin, Miami and Nashville. The L.A. space opened on Melrose Avenue last week.
Backed by well-known venture capital firms including Bain Capital, Menlo Ventures, Khosla Ventures and Andreessen Horowitz, Tools for Humanity had raised $240 million as of March, according to PitchBook.
The crypto eye-scanning project has stirred up plenty of buzz, but also controversy.
In places outside the United States, including Hong Kong, Spain, Portugal, Indonesia, South Korea, and Kenya, regulators have scrutinized the effort because of data privacy concerns.
Whistleblower Edward Snowden, who leaked classified details of the U.S. government's mass surveillance program, responded to Altman's post about the project in 2021 by saying 'the human body is not a ticket-punch.'
Ashkan Soltani, the former executive director of the California Privacy Protection Agency, said that privacy risks can outweigh the benefits of handing over biometric data.
'Even if companies don't store raw biometric data, like retina scans, the derived identifiers are immutable … and permanently linked to the individuals they were captured from,' he said in an email.
World executives counter that the orb captures photos of a person's face and eyes but doesn't store any of that data. To receive a verified World ID, people can choose to send their iris image to their phone; that data is encrypted, meaning the company can't view or access the information.
The idea for World began five years ago. Before the popularity of ChatGPT ignited an AI frenzy, Altman was on a walk with Blania in San Francisco, talking about how trust would work in an age when AI systems are smarter than humans.
'The initial ideas were very crazy, then we came down to one that was just a little bit crazy, which became World,' Altman said onstage at an event about World's U.S. debut at Fort Mason, a former U.S. Army post in San Francisco.
At the event, tech workers, influencers and even California Gov. Gavin Newsom and San Francisco Mayor Daniel Lurie wandered in and out of a large building filled with orbs, refreshments and entertainment.
Tools for Humanity Chief Executive Blania highlighted three ways people could use their verified World ID: gaming, dating and social media.
Currently, online services use a variety of methods to confirm people's identities, including video selfies, phone numbers, government-issued IDs and two-factor authentication.
World recently teamed up with gaming company Razer, based in Irvine and Singapore, to verify that customers are human through a single sign-on, and is placing orbs in Razer stores.
Blania also touted a partnership with Match Group, in which people can use World to verify themselves and their ages on apps such as Tinder, an effort that will be tested in Japan.
"We think the internet as a whole will need a proof of human and one space that I'm personally most excited about will be social," Blania said at the San Francisco event.
Back at the World store in San Francisco, Zachary Sussman was eager to check out the orbs with his two friends, both in their 20s.
'For me, the more 'Black Mirror' the technology is, the more likely I am to use it,' Sussman said, referring to the popular Netflix sci-fi series. 'I like the dystopian aesthetic.'
Doug Colaizzo, 35, checked out the store with his daughter and parents. Colaizzo, a developer, described himself as an 'early adopter' of technology. He already uses his fingerprint to unlock his front door and his smartphone to pay for items.
'We need a better way of identifying humans,' he said. 'I support this idea, even if this is not gonna be the one that wins.'
Andras Cser, vice president and principal analyst of Security and Risk Management at Forrester Research, said the fact that people have to go to a store to scan their eyes could limit adoption.
World is building a gadget called the 'mini Orb' that's the size of a smartphone, but convincing people to carry a separate device around will also be an uphill battle, he said.
'There's big time hype with a ton of customer friction and privacy problems,' he said.
The company will have to convince skeptics like Klein to hand over their biometric data. The San Francisco resident has grown more cautious, especially after he had to delete his DNA data from 23andMe when the biotech company filed for bankruptcy.
'I'm not going to go off and live in the wilderness by myself,' he said. 'Eventually, I might have to, but I'm going to resist as much as I can.'
This story originally appeared in Los Angeles Times.