
Latest news with #ToolsForHumanity

I Let Sam Altman's Orb Scan My Eyes. Now I'm A 'Verified Human'

Forbes

20 May 2025


I Let Sam Altman's Orb Scan My Eyes. Now I'm A 'Verified Human'

By Dan Fitzpatrick

[Image: Dan and the Orb]

One minute. That's all it took. One surreal, slightly awkward minute standing face-to-face with a shiny metal sphere in a glass-fronted shop on Union Square, San Francisco. I stepped in curious. I walked out a "Verified Human." I had met the Orb.

The Orb isn't a sci-fi prop. It's real, it's shiny and it stares into your soul. Well, your eyes. It's the centerpiece of World's ambitious and controversial quest to verify every human on Earth.

The visit began with a smile from a young guide who looked like he would fit right in giving advice at an Apple Store. I was asked to stand in front of an Orb, keep still and let it scan my iris. I obliged. I'm still not sure why. A few seconds later, the World App on my phone pinged. "You are now a Verified Human." No confetti, no parade. Just relief. For a second there, I had begun to wonder.

So what is this all about? What did I really just do? And why are millions around the world queuing up to do the same?

World is the brainchild of Sam Altman, CEO of OpenAI, the company behind ChatGPT, and co-founders Alex Blania and Max Novendstern. It launched publicly in 2023 after years of build-up. The company behind it, Tools for Humanity, raised a hefty war chest from investors like Andreessen Horowitz and Reid Hoffman.

At its core, World is trying to answer a deceptively simple question: how do you prove you're human in an age of AI? The answer, apparently, involves your eyeballs.

The Orb is World's signature device. It scans your iris and creates a unique digital code, called your World ID. This code is used to ensure no one can register twice. Think of it as a global passport for your humanity. No names, no addresses, no other data stored. Just proof that you are one of a kind. Your mom was right all along!

After I got back to my apartment, I noticed that I could claim some Worldcoin, the cryptocurrency that accompanies this human verification tool. I clicked claim.
It gave me $44 worth of the cryptocurrency, though it has to be claimed periodically over a 12-month window. There's always a catch!

The crypto side of World kind of confuses me. Where does it fit into this human verification tool? Apparently, Worldcoin exists to incentivize people to join World and therefore make the verification network more used and more trusted. It also raises funds for the company behind World, Tools for Humanity. And it will serve as a way to make payments through the app and to access services built on this new infrastructure. That's the pitch. Clean, efficient, futuristic. Or is it?

[Image: Union Square storefront of World. Gado via Getty Images]

As with all things that promise to change the world, World is raising eyebrows as fast as it scans irises. Let's start with privacy. Critics argue that collecting biometric data at scale is a privacy nightmare dressed up as innovation. Even though World says it deletes your iris image after creating the code (unless you opt in to data storage), not everyone's buying it. The idea of a VC-backed startup storing millions of biometric scans has triggered investigations in Germany, France, the UK and Kenya.

Then there's the ethics. World has rolled out rapidly in parts of Africa, South America and Asia. Reports have emerged of people being offered tokens with little understanding of what they were giving up. The risk of exploiting vulnerable populations in exchange for sensitive data is real.

The company insists it's acting transparently, ethically and with privacy at its core. It has published its tech specs, engaged external audits and committed to decentralizing control over time. Still, the questions persist. Is World a humanitarian moonshot or a surveillance scheme wrapped in shiny hardware? Is it the beginning of AI-era digital equality or just the next step in data extraction economics? Even the idea of "proof of personhood" is being debated.
Do we really need to scan our eyes to prove we're real? Couldn't we just click a CAPTCHA? Do we risk building a world where access to online life requires biometric ID?

And yet. World is tapping into a real and growing concern. As AI becomes more capable of impersonating humans, the need to differentiate bots from people grows urgent. In that context, the Orb begins to make sense. It's not about surveillance. It's about survival. Or so the argument goes.

Altman has spoken publicly about a future where AI generates so much wealth that we need new ways to distribute it. World, he suggests, could form the infrastructure for AI-funded universal basic income. If machines do the work, the money they make could be shared among verified humans. That idea sounds grand. But building that future on the back of biometric scans and crypto wallets is where people get nervous.

As I left the World Flagship Space, I couldn't shake the feeling that I had just participated in something important. Or maybe just something strange. Maybe both. The Orb didn't feel dangerous. It felt clinical, cool. Like a self-checkout for your identity. Still, I kept checking the World App on my phone. Just to be sure. There it was: "Verified Human." The Orb had spoken.

If World succeeds, it could change how we vote, how we get paid, how we prove who we are online. If it fails, it will serve as a cautionary tale about the cost of trust in the age of data.

In the world of education, a secure global identity could help prevent AI cheating in exams or coursework. Imagine a future where students verify their humanness before submitting work. Could this restore educators' trust in student work? On the other hand, it raises huge ethical questions. Would students need to scan their eyes to access learning? Could digital identity become a gatekeeper? Biometric verification could support integrity and access, but only if deployed with transparency, consent and clear benefit. If handled poorly, it risks turning classrooms into checkpoints.

Either way, the Orb is watching. And people are lining up to be seen.
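The one-person-one-ID idea at the heart of the article can be reduced to a simple invariant: derive a stable code from a biometric, and reject any enrollment whose code has been seen before. The toy sketch below illustrates only that invariant; it is not World's actual protocol (the real system computes an iris code on the Orb and uses cryptographic techniques to compare it without retaining images), and the class and sample values are hypothetical.

```python
import hashlib

class PersonhoodRegistry:
    """Toy one-person-one-ID registry (illustrative only, not World's protocol).

    A stable code derived from a biometric sample is hashed; enrollment
    is rejected if that hash has been seen before, so no one can
    register twice.
    """

    def __init__(self) -> None:
        self._seen: set[str] = set()  # hashes of previously enrolled codes

    def enroll(self, iris_code: bytes) -> bool:
        """Return True if this code is new and enrollment succeeded."""
        digest = hashlib.sha256(iris_code).hexdigest()
        if digest in self._seen:
            return False  # duplicate: a second registration is refused
        self._seen.add(digest)
        return True

registry = PersonhoodRegistry()
print(registry.enroll(b"alice-iris-sample"))  # first scan: enrolled
print(registry.enroll(b"alice-iris-sample"))  # same person again: refused
print(registry.enroll(b"bob-iris-sample"))    # different person: enrolled
```

The interesting engineering is everything this sketch omits: real iris codes are noisy, so matching is a similarity search rather than an exact hash lookup, and the privacy debate in the article is precisely about who holds the equivalent of `_seen`.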

How AI Threatens To Take The Fun Out Of Gaming—And How To Stop It

Forbes

22 April 2025

  • Entertainment

How AI Threatens To Take The Fun Out Of Gaming—And How To Stop It

By Steven Smith, Head of Protocol at Tools for Humanity

Games are, at their core, about people. From ancient rounds of mancala thousands of years ago to the thriving world of online video gaming, gameplay has always been defined by the tension of unpredictable decisions, the thrill of outsmarting an opponent and the satisfaction of shared experiences. It's that human element that makes friendly competition so captivating, a compelling drama where skill, strategy and luck collide, and it's nothing without the players who bring that drama to life.

Today, the same technologies that have delivered increasingly complex (and wonderful) games to the world also hold the potential to undermine the very humanity that makes them worth playing. Although the rise of artificial intelligence (AI) offers infinite possibilities for enhancing online video games, it has also given rise to a major challenge for players and developers alike: third-party bots.

In recent years, gamers who cheat have used third-party bots to gain an unfair advantage in online games, whether it's automating complex building patterns in Fortnite or improving shooting accuracy in Call of Duty: Warzone. As AI tools have proliferated, this usage has only increased: According to a February 2025 survey of gamers conducted by Echelon Insights, 59% of respondents reported regularly encountering such bots in the course of gameplay.

Although AI has its place in video game development, from building more adaptive non-player characters to procedural content generation, third-party bots fundamentally erode a game's competitive integrity and, in turn, suck the joy out of gameplay entirely. According to the Echelon survey, 71% of gamers say these bots are "ruining multiplayer competition," and nearly 1 in 5 gamers (18%) say they've "stopped playing a game entirely" in response to the sudden swell of AI opponents. If a game is rigged, why even play at all?
The rise of third-party bots has underscored a growing demand for human verification in online games. Indeed, 59% of Echelon respondents said that it's "important to know whether they are competing against a bot or a real human." If gamers are going to face off against bots (like, say, chess maestro Garry Kasparov vs. IBM supercomputer Deep Blue), they at least want to know ahead of time.

That's where "proof-of-human" systems come into play. Designed to authenticate digital accounts as belonging to real, unique humans rather than automated systems, proof-of-human mechanisms can take different forms in the context of video games. These include tests or behavioral checks that AI struggles to replicate, CAPTCHAs embedded in gameplay, and monitoring of player behavior for the patterns and imperfections characteristic of human playstyles, which bots fail to consistently mimic. If gameplay is too unnatural, there's a high likelihood that the player is, in fact, a bot.

Single sign-on (SSO) mechanisms have attempted to address the problem by linking player accounts to established platforms that require multifactor authentication or even biometric data (such as face ID or fingerprint recognition). The goal is to create a higher barrier for bots and AI-assisted accounts to bypass and to ensure that a player accessing the game has already been authenticated as a real person.

Taken together, gameplay analysis and SSO verification can help developers reduce the creation of bot accounts. For example, some gaming platforms currently monitor suspicious activity and employ their own anti-bot and fraud detection systems; if a player account is flagged for unusual behavior, game developers can coordinate with a provider to restrict or suspend access, effectively cutting off bots at the source. This creates a deterrent for bot operators and makes it easier to identify and remove AI-assisted accounts from competitive play.
(The same mechanism can potentially apply to players who engage in abusive behavior like toxic chat or harassment.)

Unfortunately, this combination falls short of a true proof-of-human signal. After all, sophisticated AI systems that can easily pass Turing tests can also easily defeat these defenses by simulating natural aiming variability, imitating organic player decision-making and even engaging in context-appropriate communication. This makes it hard for even the most sophisticated analysis to distinguish between human and simulated behavior, suggesting that AI-powered cheating may remain one step ahead, a prospect that lends even more urgency to the development of a more robust proof-of-human system for the online gaming world.

This need extends to user-generated digital ecosystems beyond gaming. Take online recommendations for products and services: AI-generated reviews have become a growing problem in recent years, flooding once-trusted platforms with fake or misleading content. A proof-of-human system tied to verified accounts or behavioral checks could help ensure that only genuine human feedback is prioritized, whether that involves requiring users to authenticate through an SSO platform or to complete a human verification task before posting a review. The same logic applies to the proliferation of bots in other online spaces, from the so-called "reservation bots" that have grown notorious for locking down restaurant tables faster than hungry human diners to the ticket bots that are vacuuming up seats at your favorite live concerts.

Although AI offers the potential to augment everything from video games to online shopping, cheaters and fraudsters threaten to spoil the fun for everyone. For games, at least, proof-of-human systems can introduce an added layer of verification that not only makes it harder for these actors to negatively impact a gaming community but also fosters a more secure and fair environment for all players.
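The behavioral-check idea described above, flagging gameplay that is "too unnatural," can be sketched as a simple statistical test: scripted input tends to be faster and far more uniform than human reactions. The function and thresholds below are hypothetical illustrations, not any platform's actual anti-bot system, and real detectors combine many such signals.

```python
import statistics

def looks_automated(reaction_times_ms: list[float],
                    min_stdev: float = 15.0,
                    min_mean: float = 120.0) -> bool:
    """Flag a sample of per-action reaction times as likely bot input.

    Hypothetical heuristic: human reactions are both slower on average
    and noisier than scripted input, so near-zero variance or
    superhuman mean speed is treated as suspicious.
    """
    if len(reaction_times_ms) < 5:
        return False  # too little data to judge either way
    mean = statistics.mean(reaction_times_ms)
    stdev = statistics.stdev(reaction_times_ms)
    return stdev < min_stdev or mean < min_mean

human_sample = [242, 198, 310, 275, 221, 289]  # varied, ~250 ms average
bot_sample = [50, 51, 50, 52, 50, 51]          # uniform and superhuman

print(looks_automated(human_sample))  # False: natural variance
print(looks_automated(bot_sample))    # True: flagged for review
```

As the article notes, a capable AI can inject artificial jitter to pass exactly this kind of test, which is why behavioral analysis alone falls short of a true proof-of-human signal.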
Games are, after all, about bringing joy and entertainment to people. Let's keep them that way.
