
The Accessibility 100 -- The Top Innovators and Impact-Makers
Accessibility is about far more than wheelchair ramps or live captioning. The field has emerged as a bustling innovation hub, an educational imperative and—unapologetically—a business sector waiting to explode.
"This isn't charity. This isn't just 'doing the right thing,'" says Mike Buckley, CEO of Be My Eyes, whose software in Ray-Ban Meta smart glasses connects blind and low-vision people to live, sighted volunteers who help them navigate the world. "It's about seeing a market, innovating, scaling and ultimately ROI. Venture capital is getting more and more involved every day."
The Accessibility 100—a new Forbes list, launching June 17—will unveil the 100 biggest innovators and impact-makers in the field of accessibility for people with disabilities. The list will include the top global forces in accessibility-related fields ranging from consumer products and software to education, AI-driven robotics, sports and recreation, travel, the workplace and the arts, among others. Some will be juggernaut tech companies leveraging their reach and resources to embed their products with crucial accessibility features; others will be smaller entrepreneurs whose innovations are poised to change the world.
The Accessibility 100 will be launched on Forbes.com alongside a live panel discussion at the Cannes Lions Festival in France.
For the purposes of this list, the disability categories that listees are impacting include the following:
For the purposes of this list, "Accessibility" is defined as software, devices and services that allow people with disabilities to have access to information, content, public spaces and experiences. Examples include:
The Accessibility 100 was compiled through interviews with more than 400 experts, along with an expert advisory panel. Emphasis was placed on breadth of true impact across the widest landscape. The final selections will feature companies and individuals from more than 15 countries.
—
As with all Forbes lists, there was no fee for any company or person to be considered for the Accessibility 100. For questions about the list, please email Alan Schwarz at aschwarz [at] forbes.com.

Related Articles


Forbes
AI, Context, And Code: The Quiet Revolution Reshaping Technology
An invisible protocol for AI is quietly replacing apps, search, and even speech.

A digital human composed of contextual data trails a stream of information, symbolizing how AI systems will increasingly rely on persistent, personalized context to act with memory, intent, and alignment.

AI is everywhere, but it rarely understands us. Context is what turns noise into meaning. It's the connective tissue between moments, memories, and decisions—between what you meant to say and how it's understood. In human communication, context is often taken for granted. When we speak, we pull from shared experiences, references, tone, timing, and body language. Machines don't have that. They see a pattern, not a presence. They respond, but they don't relate.

That's why context matters in AI. Without it, machines offer rhetorical fluency without real comprehension. They generate sentences that sound right, but they don't understand what matters most.

This is where the Model Context Protocol (MCP) comes in. MCP is the scaffolding that helps these pattern-recognition systems approximate something deeper: intersubjectivity—the ability to carry forward shared meaning across interactions. With MCP, machines don't just complete your sentence—they remember what came before, what constraints apply, and what goal you're trying to reach. It's not just helpful. It's foundational.

We used to write code to command machines. Now, machines interpret context to act on our behalf. That shift is subtle, but it's rewriting the logic of computing. And that change isn't cosmetic. It's foundational.

The Model Context Protocol isn't a wrapper. It's not a prompt template. It's not a UX tool. At the core of this shift is a new architectural layer—one that, to date, has received little attention. If large language models gave us a new kind of intelligence, MCP provides that intelligence with continuity. Boundaries. Memory. Identity. MCP doesn't make models smarter. It makes them situated—capable of acting in our world, on our behalf, without spinning into chaos or contradiction. We don't need faster chips. We need clearer context. If we don't get this layer right, everything built on top of AI, including commerce, creativity, and communication, will falter.

Search used to be a map. Now it's a destination. Apps used to be icons. Now they're invisible APIs. Conversation used to be the frontier. Now it's just a stepping stone to thought-based interaction via brain-computer interfaces. We're not asking machines to do things anymore. They are understanding us, and that changes everything.

This isn't about the next big app or a killer chatbot. It's about the end of interfaces as we've known them. The UI is disappearing, and what replaces it isn't screens—it's contextual computation.

Most people think this is about chat replacing search. That's only part of the picture. Yes, we've moved from lists of links to direct answers—but we've also moved from tapping apps to making requests. You won't open Lyft anymore. You'll say, "Get me a ride." And the system—your AI, your phone, your OS—will find the best option based on cost, loyalty, time of day, your calendar, your preferences, and your past behavior.

Search and apps aren't disappearing entirely, but they are being reframed. What's rising is execution based on context. The app store isn't being replaced by another app store; it's being replaced by a new logic of fulfillment.
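To make that new logic of fulfillment concrete, here is a minimal sketch of how a context layer might resolve the ride request above. Everything in it (the types, the scoring weights, the sample providers and prices) is a hypothetical illustration, not any vendor's actual API.

```typescript
// Illustrative sketch only: a toy "fulfillment layer" that resolves the
// intent "get me a ride" from context instead of a tapped app icon.
// All types, providers, and weights here are invented for illustration.

interface UserContext {
  nextCalendarEventMinutes: number; // minutes until the user must arrive
  loyaltyAccounts: string[];        // providers where the user holds status
  priceSensitivity: number;         // 0 = price-blind, 1 = maximally frugal
}

interface RideOption {
  provider: string;
  price: number;      // USD
  etaMinutes: number; // pickup ETA
}

// Score each candidate against the user's context rather than showing a menu.
function resolveRideIntent(ctx: UserContext, options: RideOption[]): RideOption {
  const scored = options.map((o) => {
    let score = 0;
    score -= o.price * ctx.priceSensitivity;   // cheaper is better, weighted
    if (o.etaMinutes > ctx.nextCalendarEventMinutes) {
      score -= 100;                            // would make the user late
    }
    if (ctx.loyaltyAccounts.includes(o.provider)) {
      score += 5;                              // mild loyalty preference
    }
    return { option: o, score };
  });
  scored.sort((a, b) => b.score - a.score);
  return scored[0].option;
}

// Usage: the system, not the user, picks the brand.
const choice = resolveRideIntent(
  { nextCalendarEventMinutes: 25, loyaltyAccounts: ["Lyft"], priceSensitivity: 0.5 },
  [
    { provider: "Lyft", price: 18, etaMinutes: 6 },
    { provider: "Uber", price: 15, etaMinutes: 30 }, // cheaper, but arrives too late
  ],
);
console.log(choice.provider); // "Lyft": the late option loses despite its price
```

The point of the sketch is not the arithmetic; it's that the decision inputs (calendar, loyalty, price sensitivity) never pass through a screen the user navigates.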
And increasingly, the system may choose the brand on your behalf—unless your preferences indicate otherwise. Intent has become the platform. As I've said for decades, the platform keeps shifting: in the 2000s, it was your browser of choice. In the 2010s, your smartphone. Today, it's the system interpreting your intent. Tomorrow? It will be the invisible, yet essential, contextual architecture that surrounds every intelligent machine you interact with.

And this is where the Model Context Protocol comes in. MCP is an emerging open standard that facilitates structured communication between AI models and external tools. It is gaining adoption among leading platforms such as OpenAI and Google DeepMind. It enables continuity, constraint, and contextual intelligence by supplying models with a live, structured snapshot of the world they're entering—including the user's goals, past behavior, permissions, and environment.

Imagine telling an AI, "Get me to Austin by tomorrow afternoon for under $500." Instead of asking follow-up questions, the system already knows your preferences, past decisions, calendar, and approval rules. It checks the right APIs, evaluates your loyalty points, and books the flight—no app-hopping, no extra clicks. That's not just a more intelligent assistant. That's intelligence equipped with context: structured, current, and fully aligned with your goals.

Without MCP, models act statelessly—reacting only to the surface of user input, often forgetting what came before or guessing at constraints. With MCP, the model enters the moment in context, with clarity and relevance baked in.

Most AI systems today operate in fragments. They respond to inputs but lose track of continuity, constraints, and identity between sessions. The result? Responses that feel generic, misaligned, or too confident about the wrong thing. MCP flips that. It carries forward structured knowledge—information about who the user is, what they're trying to achieve, what tools are available, and what boundaries exist. It doesn't just process language. It acts with memory, accountability, and purpose. With MCP, you get continuity, transparency, and trust. That said, implementing MCP securely requires attention to risks such as prompt injection and tool permission leakage—challenges that developers and platform providers are actively exploring.

To understand the foundational nature of MCP, look back at the origin story of the web. When you "surf the web," you're not just clicking links. Behind every click, HTTP tells your browser how to make sense of what it's pulling. Without HTTP, your browser wouldn't know how to interpret a page. The internet would be a mess of unstructured files. You'd be flying blind.

The Model Context Protocol operates in a similar manner, but for intelligence. Instead of structuring how we load pages, MCP structures how machines interpret people, tasks, constraints, and history. It travels with you—across sessions, devices, and domains—ensuring continuity, alignment, and understanding. But where HTTP resides in the browser, MCP is present everywhere—from your phone to your wearables, from your operating system to the immersive worlds you step into. It doesn't just structure virtual experiences. It orchestrates your entire computational footprint.
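To ground the earlier Austin example, here is a minimal sketch of what that request could look like as an MCP tool call. MCP exchanges JSON-RPC 2.0 messages, and "tools/call" is its standard method for invoking a server-side tool; the "book_flight" tool and its arguments, however, are hypothetical, invented for illustration, since real MCP servers define their own tool schemas.

```typescript
// A sketch of the "Get me to Austin" request as an MCP tool invocation.
// The JSON-RPC envelope follows MCP's documented message shape; the tool
// name and argument fields below are assumptions for illustration only.

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
}

const bookFlight: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call", // MCP's standard method for calling a server-exposed tool
  params: {
    name: "book_flight", // hypothetical tool on a hypothetical travel MCP server
    arguments: {
      destination: "AUS",
      arriveBy: "2025-06-18T16:00:00-05:00",
      maxPriceUsd: 500,
      // Context the model carries in, rather than asking follow-up questions:
      preferences: { seat: "aisle", loyaltyPrograms: ["AAdvantage"] },
      approvalRule: "auto-approve under budget",
    },
  },
};

console.log(JSON.stringify(bookFlight, null, 2));
```

Note what the structure buys you: the constraints ($500, tomorrow afternoon) and the standing preferences travel inside one auditable message, instead of being scattered across a chat transcript.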
Imagine you get the scary news that you have to be treated for non-Hodgkin's lymphoma. Today, your health records are scattered: electronic medical records (EMRs) in one system, genomics in another, and imaging data floating in the cloud. Your oncologist has to interpret a mosaic of fragmented data, often manually. But with MCP in place, a model assisting your care team has access to a structured, secure, real-time contextual picture spanning all of those records. It doesn't guess. It consults. And every recommendation is tethered to what matters most—you. It's not just faster—it's more personal, more explainable, and more aligned with both clinical guidance and human nuance.

You're on vacation. You buy a $600 watch in Lisbon. Normally, that would trigger a fraud alert or card freeze. But a context-aware system governed by MCP doesn't just see a transaction. It sees your real-time behavior, your location, and your intent. Rather than block the charge, the system authorizes it and logs it as expected behavior. No alert. No friction. Total alignment. Because the system isn't just reacting to a data point—it's drawing from that full context to make a contextually intelligent decision.

You enter a VR concert—an avatar-based show from your favorite artist. With no MCP, every experience has to be rebuilt from scratch. With MCP embedded at the system level, however, the environment doesn't need to ask. It already knows your preferences, so the system adapts instantly. Your experience feels fluid, personalized, and embodied—not because the model itself is smarter, but because MCP made the environment aware.

These are three radically different domains, but they all share one common need: systems that understand us, not abstractly, but in a contextually relevant way. Different industries, different stakes—but the same invisible requirement: intelligence that doesn't just compute, but understands.

We used to build software with code-first logic—"if this, then that." Intelligent systems don't work like that. They operate probabilistically. They interpret nuance. They guess what you meant. They decide how to respond based on what they know about you, about the world, and about the constraints you've given them. In other words, they operate in context, and the quality of that context determines the quality of every outcome. That's the revolution. Not faster chips. Not smarter models. Context as compute.

Of course, context isn't a panacea. Bad context leads to brittle systems that overfit or misfire. And without transparency, it's nearly impossible to audit why a model made the decision it did. Precision must be earned—and constantly recalibrated.

Brain-computer interfaces are no longer science fiction. The distance between intent and action is shrinking fast, and we're nearing a moment when you won't need to type, tap, or even speak. You'll think. The machine will act. In that world, there is no interface. No menus. No "are you sure?" confirmation screen. Your brain becomes the input layer. And the system, if not fully aligned, becomes dangerous in its fluency.

What disappears with conversation is not just UX—it's friction, correction, negotiation. When your mind sends a signal, there's no time to clarify. No chance to restate. No contextual cues, such as facial expressions or tone. The system must already know your preferences, values, limitations, and goals before executing anything on your behalf. This isn't just a shift in interaction; it's a fundamental change. It presents a profound challenge to accountability, regulation, and trust. If something goes wrong—if the system misunderstands your intent or violates your consent—what will we audit? There is no transcript. No written instructions. Only context.
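To see what a contextual decision of that kind might look like (and what an audit would need to replay), here is a toy version of the earlier Lisbon payment example. Every field, threshold, and rule is invented for illustration; a real issuer's risk engine is far more elaborate.

```typescript
// Illustrative only: a toy context-aware authorization check. The same
// transaction flips from "flag" to "approve" once context is attached,
// and the inputs to the decision are exactly what an auditor would need.

interface Transaction {
  amountUsd: number;
  merchantCity: string;
}

interface TravelContext {
  itineraryCities: string[];    // cities on the user's booked trip
  phoneLastSeenCity: string;    // device location signal
  typicalMaxPurchaseUsd: number;
}

function authorize(tx: Transaction, ctx?: TravelContext): "approve" | "flag" {
  // Without context, an out-of-pattern foreign charge gets flagged.
  if (!ctx) {
    return tx.amountUsd > 300 ? "flag" : "approve";
  }
  const locationMatches =
    ctx.itineraryCities.includes(tx.merchantCity) &&
    ctx.phoneLastSeenCity === tx.merchantCity;
  const amountPlausible = tx.amountUsd <= ctx.typicalMaxPurchaseUsd * 3;
  return locationMatches && amountPlausible ? "approve" : "flag";
}

const watch: Transaction = { amountUsd: 600, merchantCity: "Lisbon" };
console.log(authorize(watch)); // "flag": a stateless view of a lone data point
console.log(
  authorize(watch, {
    itineraryCities: ["Lisbon", "Porto"],
    phoneLastSeenCity: "Lisbon",
    typicalMaxPurchaseUsd: 400,
  }),
); // "approve": behavior, location, and intent line up
```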
In healthcare, the stakes couldn't be higher. Imagine a BCI-enabled system monitoring your neurological signals to adjust a medication or initiate treatment. There's no margin for guesswork. The model must operate within a context grounded in clinical rules, patient history, and real-time consent. That's not just context—it's compliance by design.

Commercially, this shifts how choices are made. You won't comparison-shop. You won't click. You'll express a need, and the system will fulfill it. If your brand isn't context-aware, it won't even be part of the decision. Marketing becomes metadata. Preference becomes architecture.

This is why the Model Context Protocol isn't just a technical spec; it's a governance framework. A way to encode not just what a machine can do, but what it should do, under the terms set by the human it serves. When conversation disappears, context becomes everything. And MCP is what keeps that context aligned, auditable, and human-centered.

Today, OpenAI owns your context inside ChatGPT. Apple is building a closed-loop context layer around Siri. Google is doing the same with Gemini. Meta? They're still trying to get back in the room. These aren't just product strategies—they're positioning moves for contextual dominance. The same companies that monetized our clicks, scrolls, and attention spans now want to capture something more profound: our intent, our memory, our identity across time.

In Web 2.0, the data economy was built on surveillance and micro-targeting. You didn't own your behavior—platforms did. Now, in the age of AI, they're updating that playbook. Instead of optimizing what you see, they're optimizing what gets done on your behalf. And if they own the context, they own the decision. The question is no longer "Who's watching?" It's "Whose values shape the system that acts in your name?"

This is why platform companies are racing to build closed-loop context layers—ecosystems where your preferences are remembered, but not necessarily portable. Your digital identity may be persistent, but it's not sovereign. The future will depend on whether MCP becomes open, auditable, and user-governed, or whether context becomes the new extraction layer, just hidden behind predictive convenience. Because whoever controls that layer will influence everything the systems built on it decide, recommend, and execute in your name.

Context, not code. That's the new dividing line. Code tells machines what to do. Context tells them who they are. And when the machine acts on your behalf, only one of those matters. This is the new terrain for design, ethics, infrastructure, and sovereignty. Not smarter prompts. Not flashier apps. Contextual scaffolding for autonomous execution.

In a world where consumers no longer tap, scroll, or search, brand visibility doesn't disappear—but it evolves. When decisions are made by AI systems interpreting context rather than by users navigating menus, brands must shift their focus from front-end design to contextual presence. That means designing for discovery within the system. If the AI is selecting the best option based on your price sensitivity, behavior, or preferences, then the question becomes: Are you structured to be chosen? The brand battle won't happen on screens. It will occur in context layers that determine what is relevant, helpful, and aligned. To win, brands need to think like structured data and act like trusted proxies.
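What might "thinking like structured data" look like in practice? One plausible sketch borrows schema.org's real Product and Offer vocabulary; the product, the prices, and the premise that an AI shopping agent consumes exactly this record are illustrative assumptions, not an established pipeline.

```typescript
// A sketch of a machine-readable offer a context layer could rank at
// decision time. The @type/@context fields follow schema.org's published
// vocabulary; the product and values are hypothetical.

const offerRecord = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Running Shoe", // hypothetical product
  brand: { "@type": "Brand", name: "ExampleBrand" },
  offers: {
    "@type": "Offer",
    price: "89.99",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
};

// A context layer ranking options never renders this on a screen: it parses
// fields like price and availability directly. A brand with no such record
// simply is not "structured to be chosen."
console.log(JSON.stringify(offerRecord, null, 2));
```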
HTTP created the web. MCP will create the next layer: a world where intent flows through invisible systems, where cognition, not clicks, defines our digital lives, and where proximity to context, not placement on a screen, determines which ideas, brands, and actions win.

If you're still designing for the app economy, you're already behind the curve. Design for context, or disappear into someone else's.

The future of AI won't be written in screens, apps, or even prompts. It will be written in the invisible thread of context—what systems remember, how they align, and who they serve. If you're not designing for context, you're not designing for the future of AI; you're defaulting to someone else's.
Yahoo
Elon Musk's Net Worth Takes $27 Billion Hit Amid Feud With Pres. Donald Trump
Elon Musk's exit from President Donald Trump's White House has resulted in the two towering figures feuding online, with the world's richest man's net worth taking a significant hit amid the back-and-forth.

Forbes reports that Musk's net worth fell below $400 billion on Thursday, dropping from $414.7 billion to $388 billion, a difference of around $26.7 billion. More specifically, Musk's Tesla stock declined 14%, or $47 per share, to $285 on what Forbes calls "an otherwise flat day for the market."

The drop in value came almost immediately after Musk and Trump began exchanging blows on social media Thursday (June 5), with Musk claiming that Trump would never have been elected to a second term if not for him (Musk spent nearly $300 million backing Trump and other Republicans in last year's election), while Trump accused Musk of having "Trump Derangement Syndrome." Musk also accused Trump of being named in the Jeffrey Epstein files, suggesting the current president has a direct connection to the late sex offender and financier. "Time to drop the really big bomb," Musk wrote on X, which he owns. "[Trump] is in the Epstein files. That is the real reason they have not been made public." He later followed up: "Mark this post for the future. The truth will come out."

The rift seemingly began after Musk exited his role as one of Trump's advisors and head of the Department of Government Efficiency (DOGE). Soon after, Musk called out Trump and Republicans for passing the One Big Beautiful Bill, which he deemed a "massive, outrageous, pork-filled Congressional spending bill" and a "disgusting abomination." Trump fired back by suggesting he would terminate government contracts with Musk's businesses, which include rocket company SpaceX and its satellite unit Starlink. This threat is possibly what led to Musk's businesses dropping in value overnight.

The Hill reports that White House Press Secretary Karoline Leavitt called Thursday's spat "an unfortunate episode from Elon, who is unhappy with the One Big Beautiful Bill because it does not include the policies he wanted. The President is focused on passing this historic piece of legislation and making our country great again."


The Verge
The Access-Ability Summer Showcase returns with the latest in accessible games
Now in its third year, the Access-Ability Summer Showcase is back to redress the lack of meaningful accessibility information across the ongoing video game showcase season. As progress broadly slows, it's also a timely reminder of the good work still happening in pursuit of greater accessibility in gaming.

"At a time where we are seeing a slowdown in accessibility adoption in the AAA games space," organizer Laura Kate Dale says, "we're showing that there are interesting accessible games being made, games with unique and interesting features, and that being accessible is something that can bring an additional audience to purchase and play your games."

The showcase is growing, too. In 2025, it's longer, more packed with games, and streamed concurrently on Twitch, YouTube (where it's also available on-demand), and Steam's front page. That growth comes with its own challenges — mitigated this year by Many Cats Studio stepping in as sponsor — but the AA Summer Showcase provides an accessible platform in response to the eye-watering costs of showcasing elsewhere (it has previously been reported that presenting trailers across Summer Game Fest starts at $250,000), while providing disabled viewers with the information they need to know whether they can actually get excited about new and upcoming releases.

It's a lesson Dale hopes other platforms might take on board. "I grow the show in the hopes that other showcases copy what we're doing and make this the norm," she says. "If I could quit hosting the AA Summer Showcase next year because every other show in June committed to talking about accessibility as part of their announcements, that would be wonderful news."

To help that along (sorry, Laura, don't quit just yet), The Verge has collated the games featured in this year's Access-Ability Summer Showcase below.

Visual accessibility in focus

A major theme to emerge from this year's showcase is color blind considerations. The showcase kicked off with ChromaGun2: Dye Hard by Pixel Maniacs, a first-person color-based puzzler. In its color blind mode, colors are paired with symbols for easier parsing, and those symbols combine when colors are mixed. A similar spirit is echoed in Sword and Quill's Soulblaze, a creature-collecting roguelike that's a bit of Pokémon mixed with tabletop RPGs (dice included). It also pairs colors with icons, adding a high level of customization for color indicators and difficulty, plus an extensive text-to-speech function that supports native text-to-speech systems and NVDA. Later, Gales of Nayeli from Blindcoco Studios, a grid-based strategy RPG, showcased its own color blind considerations and an impressive array of visual customization options.

Room to breathe

In a welcome trend carried over from last year, games continue to eschew time pressure and fail states. Dire Kittens Games' Heartspell: Horizon Academy is a puzzle dating simulator that feels like Bejeweled meets Hatoful Boyfriend. Perhaps its most welcome feature is the ability to skip puzzles altogether, though it also features customization for puzzle difficulty. Sunlight from Krillbite Studio is a chill hiking adventure that tasks the player with picking flowers while walking through a serene forest. It does away with navigation, as you'll always be heading the right way, while sound cues direct you to nearby flowers. This year's showcase featured two titles from DarZal Games.
Quest Giver is a low-stakes management visual novel that casts the player as an NPC handing out quests to RPG heroes, while 6-Sided Stories is a puzzle game about flipping tiles to reveal an image. The games were presented by Darzington, a developer with chronic hand pain who develops with those needs in mind and, interestingly, with their voice (thanks to Talon Voice). Both games feature no time pressure, no input holds or combos, and allow for one-handed play. Single-handed controls are also a highlight of Crayonix Games' Rollick N' Roll, a puzzle game in which you control the level itself to get toy cars to their goal without the burden of a ticking clock.

Highlighting highlights

Speaking of highlights, this was another interesting trend to emerge from this year's showcase. Spray Paint Simulator by Whitethorn Games is, in essence, PowerWash Simulator in reverse. Among a suite of accessibility features that help players chill out and paint everything from walls and bridges to what looks like Iron Man's foot, the game allows you to highlight painting tasks and grants a significant level of control over how those highlights appear and how long they last. Whitethorn Games provides accessibility information for all of its games on its website. Cairn, by contrast, is a challenging climbing game from The Game Bakers that looks like Octodad transplanted onto El Capitan. As it encourages players to find new routes up its mountains, the game allows players to highlight their character's limbs, as well as skip quick-reaction minigames and rewind falls entirely. Highlights are also important to Half Sunk Games' Blow-up: Avenge Humanity, in which players can desaturate the background and customize the size and tone of enemy outlines to make its chaotic gunplay more visible. Qudical's Coming Home, which debuted during the showcase, offers something similar in its tense horror gameplay, where you evade a group of murderers: you can switch on a high-contrast mode that highlights objects to distinguish them from the environment (including said killers).

Unsighted

If this year's been challenging for accessibility, it's been even more disappointing for blind players when it comes to games that are playable independently. The AA Summer Showcase, however, included an interlude showing off the best titles from the recent Games for Blind Gamers 4, a game jam in which all games are designed with unsighted play in mind and judged by blind players. Four games were featured: Lacus Opportunitas by shiftBacktick, one of last year's standouts, plus The Unseen Awakening, Barista, and Necromancer Nonsense. This was chased by a look at Tempo Labs Games' Bits & Bops, a collection of rhythm games with simple controls, designed to be playable in its entirety without sighted assistance.

A difficult subject

Accessible indie games often favor the cozy, but this year's AA Summer Showcase brought a standout game that bucked that trend. Wednesdays by ARTE France is a game that deals with the aftermath of childhood abuse. That's certainly in keeping with the host of trauma-driven indie games out there. Wednesdays, however, positions itself as a more hopeful examination of that trauma, both through its visual-novel-style memories and its theme park management gameplay.
Like so many of the showcase's games this year, Wednesdays includes mitigations for color blindness — though no essential information is tied to color in-game — as well as a comprehensive text log for cognitive support, manual and automated text scrolling, and customization options for cursor speed, animations, fonts, inputs, and more. Better yet, all those options are displayed at launch and the game always opens in a windowed mode to allow for easier setup of external accessibility tools.