
Latest news with #userresearch

AI has a resilience problem. Designers and researchers can help fix it

Fast Company · Health · 2 days ago

2012. I walk out of a gastroenterologist's office with a brochure titled 'Your Life With Ulcerative Colitis.' What the brochure doesn't say: A month later, I will wake up on the day of a critical midyear design presentation feeling too nauseous to leave my apartment, and will have to spend several weeks at my parents' house, where I will miss several more midterms. A year later, I'll stand at a boarding gate and feel too sick to take a five-hour flight to meet with potential graduate school advisers. I'll soon learn that, for me, these won't be one-offs. Instead, I'll live a life of constant flux, impossible to plan for.

Desperate for some control as I push through academia, I turn to tech products. But technology can't help me. Digital tools excel at routines but falter at exceptions. I can schedule weeks of meetings in a few clicks, but when I'm unwell, I'm copy-pasting the same cancellation message a dozen times. My personal-finance app keeps me on track, but only until an urgent-care bill throws things off. When my fitness tracker chastises me for not closing my rings during a particularly brutal flare-up, I shove it into my junk drawer. Technology is failing me when I need it the most.

Happy paths

2016. I join Big Tech, working as a user researcher in early-stage and AI technology. Two things become immediately clear. First, my story is far from unique. Anecdotes from many hundreds of user interviews reflect lives riddled with chaos and disruption. Change, unplanned and planned, is the norm. Second, consumer products are largely designed for 'happy paths': a clear-cut problem is solved by a superhero technology, resulting in a favorable outcome tied off with a neat bow. For the sake of clarity, efficiency, and technical ease, the zigzag realities of lives are often sanitized into an idealized arc. We trot out these squeaky-clean stories as 'hero use cases' for a product idea, first to convince ourselves, then our executives, and finally our users.

Today's explosion of consumer-facing GenAI products is built with the same recipe. We get heartstring-tugging stories with just enough complexity to feel real, without any of the mess. A dad uses AI to prepare for a job interview while reminiscing on parenthood. A parent brings a child's imaginary creature to life in a custom picture book. Some brands try to incorporate more chaotic realities (a storm hits restaurant patio seating) only to portray absurd overdependence on AI (waiters leave their customers drenched because an AI agent doesn't reseat them indoors). If you're like me, these ads make you want to scream: 'You're standing in the middle of the kitchen. How are your kids not interrupting your conversation with AI 27 times?'

But in contrast to the 'hero use case,' taking kid snack breaks and asking AI to repeat itself over the noise of toddler screams are often cordoned off as 'edge cases' in product development. The implication: These occurrences are rare. But they aren't. Human journeys are not straight lines. They are dynamic, defined by change, interruptions, and curveballs. Some 60% of Americans reported experiencing an unexpected expense in the past year, yet 42% don't have an emergency fund greater than $1,000. Households with two or more children have a viral infection present more than 50% of the time. And an estimated 28% of work time each year is lost to distractions. When technology isn't resilient to this reality, it breaks, sometimes catastrophically.

Like when a Florida teen dies by suicide after his lengthy conversations with a chatbot turn darkly romantic. When AI-powered cameras mounted on public buses mistakenly ticket thousands of legally parked vehicles in New York because they fail to recognize alternate-side parking zones. Or when AI weather models fail to predict the worst storms because extreme weather events are missing from the training data.

These outcomes are extreme, but the pathways leading there are deeply ordinary, broken by nascent technology that isn't resilient to the gritty reality of human behavior. Sometimes the catalyst stems from the tech itself, like security vulnerabilities. Other times it's agnostic of the technology, like mental health. But in all cases, the technology was not resilient to changes in context.

AI's broken promise

Years ago, you could blame technology as the limiting factor. But AI should, ideally, thrive on this sort of complexity, using its superpowers of pattern recognition, synthesis, and triangulation of thousands of data points about users and their environment. GenAI has introduced a new frontier of deep reasoning and humanlike interaction that should make the technology more tractable and transparent. AI is uniquely positioned to help people anticipate and recover from change, the kind they may not have seen coming.

Yet the chatbot didn't raise the alarm when a conversation overtly turned dangerous, much less recognize the patterns suggesting it was headed that way. And on issuing its 7,000th ticket in one day, the MTA's bus-camera system didn't flag that this was an unusually large number of violations for a single route.

It's never easy to deal with the complex behavior of humans and societies. But when we keep designing to make already great lives 1% better, we perpetuate a specific type of harm, the kind that happens when the people designing the technology aren't considering the real ways it might be used. As UX practitioners, we are uniquely positioned to start the conversation about how to change this. To move toward an AI UX rooted in resilience, we'll need to shepherd at least three main shifts in the way our products are designed.

1. Shift the user stories we tell, which directly map to the problems we choose to solve.

UX must choose to foreground the hard, complex story. We all have one: a multigenerational household with life-stage changes, moves across the country, divorce, job loss, a chronic illness. Right now, a key barrier to centering these stories is that they extend ideation cycles, which is uncomfortable in an increasingly launch-first-or-perish climate. As a result, cleaner stories, like the product narratives described earlier, win out.

To break this cycle, UX can introduce complex user stories to product teams starting with ideation and continuing through prototype and concept testing, especially stories that cut horizontally across product ecosystems. This requires creating a new canon: an accessible taxonomy of the types of complexity, curveballs, and changes we can easily pull from. Such a taxonomy might take the form of brainstorming prompts, user journey templates, or a card deck or visualization used in sprints. This cracking open will take time, but the more we tell these stories, the easier they will roll off the tongue, and the more normalized they will become.
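To make the card-deck idea concrete, here is a minimal sketch in Python of what one entry in such a taxonomy could look like. The fields, category names, and sample entries are illustrative assumptions, not an established framework.

```python
from dataclasses import dataclass, field

@dataclass
class Curveball:
    """One card in a hypothetical deck of disruptions to design against."""
    name: str          # e.g., "chronic illness flare-up"
    category: str      # e.g., "health", "finance", "household"
    duration: str      # "acute" (hours to days) or "chronic" (months to years)
    prompts: list[str] = field(default_factory=list)  # questions to ask in a sprint

# Sample entries a team might pull from during ideation or concept testing.
DECK = [
    Curveball("chronic illness flare-up", "health", "chronic",
              ["What does the product do when the user cancels everything for a week?"]),
    Curveball("unexpected expense", "finance", "acute",
              ["How does the budgeting flow recover after a large one-off bill?"]),
    Curveball("cross-country move", "household", "acute",
              ["Which saved context breaks when the user's location changes?"]),
]

# During a sprint, filter the deck to the dimension under discussion.
health_cards = [c for c in DECK if c.category == "health"]
```

A deck like this stays useful precisely because it is small and queryable: a facilitator can pull the two or three curveballs most relevant to the product area under discussion.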
2. Shift how we leverage user data in AI-powered products.

Today, user data collected by companies, while wide-ranging, isn't always curated or connected well. Most users, particularly in younger generations, have resigned themselves to data collection and don't mind it, but they also don't understand how the data is used or whether it benefits them. This is not an argument to collect more data. Rather, it's a call to connect existing data for more meaningful, tangible user benefits, like helping navigate blind spots and complexity.

Consider a simple example: Ann's AI agent has access to a calendar app where she has blocked off time for a post-work run, a weather app that shows unexpected evening rain showers, and a maps app that she frequently uses to navigate to a yoga studio. The agent can now surface a timely suggestion: help Ann move meetings so the run shifts earlier in the day, or help her find a class at the yoga studio during that time. By reflecting how people really use their technology, this sort of cross-product dialogue and synthesis can leverage AI and user data to unlock resilience in the face of change.
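As a rough sketch of the cross-app synthesis in the Ann example (the signal formats, function names, and matching heuristics here are hypothetical stand-ins, not any real agent API), the core logic might look like this:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Event:
    title: str
    start: datetime

def suggest_plan_b(calendar: list[Event],
                   hourly_forecast: dict[int, str],
                   frequent_places: list[str]) -> Optional[str]:
    """Cross-reference calendar, weather, and maps history to offer a plan B."""
    run = next((e for e in calendar if "run" in e.title.lower()), None)
    if run is None or hourly_forecast.get(run.start.hour) != "rain":
        return None  # no conflict detected; stay quiet
    options = ["move meetings so the run can happen earlier in the day"]
    if any("yoga" in place.lower() for place in frequent_places):
        options.append("find a class at the yoga studio instead")
    return ("Rain is forecast during your evening run. I could "
            + ", or ".join(options) + ".")

# A blocked-off 6 p.m. run, rain forecast at 6 p.m., and a frequently visited studio.
calendar = [Event("Post-work run", datetime(2025, 6, 3, 18, 0))]
print(suggest_plan_b(calendar, {18: "rain"}, ["Yoga studio on 5th Ave"]))
```

The heuristics themselves matter less than the premise: each signal already lives in a separate product, and the agent's value comes from connecting them before the user hits the curveball.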
3. Shift away from traditional definitions of 'seamlessness' and 'magic moments' toward ones that gracefully embrace failure, meaningful friction, and deep, explicit user feedback.

AI advancements tend to tempt product teams to remove all friction and present users with auto-magical solutions to needs they weren't even aware of, from hyper-personalized AI-driven ads to 'smart' nudges in food and shopping apps. Common success metrics reflect the value we place on frictionless experiences: fewer clicks, greater session length, engagement with automation features, fewer user-submitted comments. This can cause a misleading overreliance on implicit behavioral signals that don't always reflect real intent. Take the example of an in-app pop-up: A user might spend a long time viewing it, even clicking a link, not because they find it useful but because they can't find the exit.

Even when users do provide explicit feedback, it's often not in a form that can be interpreted meaningfully, leading to undesired outcomes. Think, for example, of how OpenAI's models grew sycophantic after thumbs-up ratings on responses were used as a signal to steer the chatbot further in that direction. Instead, how might we offer users more ways to provide granular feedback that sheds light not only on the 'what' but also the 'why'? This is meaningful friction that can empower users to have their unique human context better understood while harnessing the beyond-human capabilities of AI. One could argue that this, in fact, is the more magical experience.

Finally, the pursuit of seamless perfection risks underplaying the shortcomings of AI itself: misunderstood accents, factual inaccuracies, biased imagery. These are a function of the technology and are bound to happen. UX needs to treat them as predictable breaking points, build frameworks to classify them, and design intentionally with them as part of the user narrative.

Of course, it's far simpler to sketch these solutions than to implement them. But if AI is to work well for real-world problems, we need to tackle real-world complexity head-on. UX is in a powerful position to shift these mindsets. As it has done for domains like accessibility and product inclusion, UX can redefine the problems and narratives that emerging technology is built for, and reshape the user experience to accommodate product and user realities in support of resilience.

How Confirmation Bias Is Destroying Your Product

Entrepreneur · Business · May 22, 2025

It's time to unlearn what you "know" about your users.

Opinions expressed by Entrepreneur contributors are their own.

The most dangerous words in product development are: "Our users will love this." I've heard this declaration in countless product meetings, usually followed by months of engineering work and ending with the quiet disappointment of underwhelming user adoption. The culprit? Confirmation bias, our brain's maddening tendency to seek out information that supports what we already believe.

As product managers, we're hired to make decisions. We analyze markets, gather requirements and prioritize features. The problem is, once we've developed a hypothesis about what users want, we start filtering all incoming information through that lens. Ambiguous feedback gets interpreted as supportive. Negative feedback gets labeled as "edge cases." And gradually, we construct an alternate reality where our product decisions are always brilliant.

The user research theater

"User research theater" refers to going through the motions of talking to users without actually being open to having your assumptions challenged. You might recognize these symptoms in your organization:

  • Cherry-picking positive quotes from user sessions while ignoring negative patterns
  • Asking leading questions designed to elicit specific answers
  • Limiting your research to users who already love your product
  • Interpreting silence or confusion as agreement
  • Dismissing negative feedback as "they just don't get it yet"

Look, I get it. You've already told your leaders and investors about the amazing feature roadmap. You've hired engineers based on certain technical assumptions. Your entire company narrative might be built around a particular vision of what users want. Changing course feels impossible. But staying on a doomed course is worse.

Breaking the bias cycle

So, how do we actually fix this? How do we create processes that challenge our cherished assumptions instead of reinforcing them? Here are some practical approaches I've seen work:

1. Separate data collection from interpretation

One team I worked with adopted a practice where the people conducting user interviews weren't allowed to interpret the results. They could only document exactly what was said. A separate team, one without emotional investment in specific outcomes, would then analyze the transcripts. This reduced the tendency to hear what they wanted to hear during interviews.

This separation creates a healthy tension. The interview team focuses on asking good questions rather than leading users toward predetermined conclusions. The analysis team spots patterns without being influenced by users' tone or the interpersonal dynamics of the interview.

2. Actively seek disconfirming evidence

Make it someone's specific job to play devil's advocate during research planning. This person should be asking, "How might we disprove our hypothesis?" rather than "How can we validate our idea?" For example, instead of asking "Would you use this feature?" try "What would prevent you from using this feature?" The first question almost always gets a polite "yes." The second gives you the actual obstacles you'll need to overcome.

3. Pay attention to behavior, not just opinions

Users are notoriously bad at predicting their own future behavior. They'll enthusiastically tell you they'd definitely use your new feature, but when it launches, they stick with their old habits. I've found it much more valuable to observe what users actually do rather than what they say they'll do. This means analyzing usage data from existing features, creating prototype experiences where users can demonstrate preferences through actions, and conducting field studies where you watch users in their natural environment.
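As a toy illustration of this say-do gap, the comparison can be as simple as joining interview answers against a post-launch event log. All the data below is invented for the example:

```python
# Interview answers to "Would you use this feature?" (invented data).
survey = {"alice": "yes", "bob": "yes", "carol": "yes", "dev": "no"}

# Users who actually triggered the feature after launch (invented event log).
usage_log = ["alice", "alice", "bob"]

said_yes = {user for user, answer in survey.items() if answer == "yes"}
converted = said_yes & set(usage_log)

print(f"{len(said_yes)}/{len(survey)} said they'd use it; "
      f"{len(converted)}/{len(said_yes)} actually did "
      f"({len(converted) / len(said_yes):.0%}).")
# -> 3/4 said they'd use it; 2/3 actually did (67%).
```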
4. Create a culture that rewards changing course

If your team gets punished for admitting they were wrong, guess what? They're going to double down on bad ideas rather than acknowledge the need to pivot. Smart companies build ceremonies that celebrate learning and adjustment. Some startups have held "Pivot Parties": actual celebrations when the team made a major course correction based on user insights. They literally popped champagne when they killed features that research showed wouldn't succeed. This sent a powerful message: Learning is valued over stubborn persistence.

5. Diversify your research participants

If you only talk to your most enthusiastic users, you're creating an echo chamber. Make sure your research includes:

  • Prospective users who chose competitor products
  • Former users who abandoned your product
  • Current users who rarely engage with your product
  • Users from different demographics and use cases

This diversity helps expose blind spots in your understanding.

The paradox of expertise

Here's the painful truth: The more experienced you are in your domain, the more susceptible you become to confirmation bias. You've seen the patterns before. You've developed intuition. Sometimes this is incredibly valuable. Other times, it makes you dangerously overconfident. The solution isn't to ignore your experience. It's to pair your hard-earned intuition with rigorous processes that test your assumptions.

The best product leaders I know have strong convictions loosely held. They make bold bets based on their expertise, but they're quick to adjust when evidence contradicts their initial hypotheses. In the end, the market doesn't care about your brilliant vision or your elegant solution. It only cares whether you've solved a real problem in a way that fits into users' lives. And the only way to know that for sure is to constantly challenge what you think you know about your users.
