Latest news with #BYOAI


Forbes
21 hours ago
- Business
The Hype Gap: Why Most Organizations Aren't Ready For AI At Scale
Most companies I speak to about AI leadership are stuck in what I call pilot purgatory: an endless loop of small AI experiments that never seem to scale. This is striking given how loud the AI hype is. While the headlines promise transformation, inside many organisations there is little more than isolated sparks of innovation.

Recently, I spoke to a senior leader at a global manufacturer with cutting-edge capabilities and a generous R&D budget. On paper, this was precisely the kind of organisation you'd expect to be leading the AI charge. In reality: no strategy, no governance, no roadmap. AI activity was fragmented, driven by curious employees running unsanctioned experiments in corners of the business. Top leadership was interested in the benefits, but ROI was unclear, and it wasn't a priority.

Even industries in AI's direct line of fire are holding back. Take the law. Legal work runs on words - drafting, reviewing, interpreting - exactly what generative AI does best. Yet uptake is slow. Recent research shows 47% of private practice lawyers say their firm is slow or very slow to adopt new tech, with only 18% calling themselves fast movers. A quarter of law firm employees believe failing to embrace AI could harm their career; 1 in 10 would consider leaving over it.

Restraint is understandable. Powerful technologies have always taken time to transform the economy. James Watt patented his steam engine in 1769, but steam didn't overtake water power until the 1830s. Electricity took decades to replace steam in factories. I suspect leaders are waiting for others to blaze a trail they might follow more easily. But AI is moving far faster. It's advancing at speed, and organisational readiness - strategy, governance, skills, culture - is lagging badly. Market leadership can be built in this gap, while laggards are left behind.

Hype vs. reality
Like a badly dubbed movie, Silicon Valley's instant-value promises don't match the real story on the ground: AI will take time, effort and skilled change leadership.

The hidden revolution: shadow adoption
A Gallup poll shows 33% of senior leaders use AI frequently, compared to 16% of individual contributors. But McKinsey's data tells the other side of the story: employees are using AI far more than their bosses know. Microsoft and LinkedIn report that 75% of knowledge workers now use AI at work - half starting in the past six months - many without telling their managers. Some even pay for premium tools out of their own pockets: BYOAI (Bring Your Own AI). Without leadership recognition and guidance, much of that learning stays locked in individuals. Worse, it creates compliance, security, and reputational risks. That's why I urge the leaders I work with, especially mid-career ones who may feel a confidence gap, to experiment personally. Using AI yourself builds credibility, gives you a tangible example of change, and offers first-hand insight into how teams can use the technology.

The real bottleneck: trust, not tech
As with much digital transformation, the most significant friction isn't the technology. It's trust. Even the most accurate AI will be ignored if its outputs aren't understood or believed. Employees hesitate because they distrust the black box, lack transparency on data, or fear for their jobs. Without trust, adoption stalls.

Why boldness wins
History favours the early movers:
• Retail: Tesco's Clubcard launched in 1995, locking in loyalty before Sainsbury's and others caught up.
• E-commerce: Amazon went online in 1995; established retailers hesitated and fell away.
• Cloud computing: Netflix moved to the cloud in 2008; Blockbuster stayed tied to physical infrastructure and collapsed.

The LEAD framework: from hype to value
Here is a simple checklist for moving from pilots to scaled impact:
L — Locate the Power Users. Find the people already experimenting with AI. Learn what's working.
E — Experiment Yourself. Use the tools personally. Run small pilots with clear measures. Learn alongside your team.
A — Acknowledge and Reward AI Talent. Recognise those driving results. Promote them, learn from them, and make them role models.
D — Define New Performance Measures. Shift metrics from volume and speed to quality, originality, and strategic value.

Right now, most organisations are still at the AI starting line. Those who adopt early - with strategy, governance, skills, and trust - will build compounding advantages in productivity, innovation, and market insight. The leaders who close the hype gap will be the ones who treat AI not as a plug-in technology, but as a culture shift they lead from the front.


Forbes
23-07-2025
- Business
You're Using AI — But Are You Using It Well? Three Steps To Consider
Robot and human fingers reach out to each other, symbolizing the promise of our future. Artificial intelligence has the power to transform our lives. It needs humans to step up to harness this power.

The other day I stumbled on a term I'd never heard before: SolidGoldMagikarp. Originating from the world of Pokemon, it is the idea of something so rare as to seem almost unreal. The phrase was making the rounds because of its ability to cause glitchy behavior in AI models when used in a prompt. I wondered if I had missed yet another concept in the ever-expanding universe of artificial intelligence.

That feeling - of brushing up against something new, strange and must-know - is felt by many, and more often now than ever before. The rapid pace of AI innovation is rivaled only by its pervasiveness in our lives. According to McKinsey's AI in the workplace 2025 survey, nearly all employees (94 percent) and C-suite leaders (99 percent) report familiarity with gen AI tools. And yet, just last year, a Gallup poll asking employees about their frequency of AI use at work found nearly seven in 10 saying they never use AI, and only one in 10 reporting weekly use.

This dramatic change in workplace AI usage comes at a cost. Workers are more worried than hopeful about the adoption of AI; those who feel hopeful are mostly young professionals, and even they are scrambling to keep up. Almost 80% of users are bringing their own AI tools to work (BYOAI). Lacking clear guidance on what constitutes acceptable use of AI, more than half are unwilling to admit using AI at work. If you feel like you are chasing a mythical carp, you are not alone: 77% of employees report being lost on how to use AI in their jobs.

AI Use ≠ AI Mastery
Symptom 1: Surface-level productivity
The advent of AI was meant to usher in a transformation of how we work. And yet, a Digital Work Trends report found two-thirds of employees use AI primarily for cross-checking their work. The reason? It is a precarious time at work. Despite hopes of post-pandemic balance, meetings and after-hours work dominate a typical workday. Almost 70% report struggling with the pace and volume of their responsibilities, while nearly half say they feel burned out. A Microsoft 365 study found 60% of users spend their time on applications such as Outlook and Teams - responding, coordinating, and keeping up. In this environment, it's no surprise that employees reach for generative AI tools - not to innovate, but simply to keep up. If AI is to fulfill its promise, human bandwidth will determine the extent of AI mastery.

Symptom 2: AI dependency without discernment
There is a tendency to trust AI-generated work without verifying facts, citations, or data sources. Seldom do users critically evaluate whether the content is accurate, complete, or unbiased. Educators have been sounding warning bells about plausible-sounding answers that are incorrect - hallucinations, in AI-speak. But this issue is not limited to the academic world. Under work pressures and looming deadlines, AI provides a welcome shortcut for keeping up with deliverables. Pick any AI tool. Trained to project know-it-all authority while producing an abundance of information, it lulls us into forgetting that it does not 'know' things and is simply predicting plausible output based on training data. Is this behavior a reflection of our own tendency to 'satisfice' - to make decisions that are good enough rather than optimal, bound by the limits of our finite cognitive capacity and real time constraints?

AI's limitations are architectural, not cognitive - but like humans operating under bounded rationality, it produces output that will suffice. The danger? Unlike humans, it doesn't admit when it is wrong - and rarely signals uncertainty. Can AI companies take steps to have their tools acknowledge when a query pushes at their predictive boundaries? They can, and they should, if hallucinations are to be managed. Without such guardrails, our AI dependency sans due diligence will remain a liability.

Symptom 3: Ethical blind spots
It seems we are living through the Wild West era of AI - where tools are advancing faster than the legal and ethical frameworks needed to govern them. Just as in the 1800s American frontier, this new territory shows promise - but also peril. Businesses are rushing to stake their claims, often with no rules of governance in sight. Consider the case of an HR manager who gets the go-ahead to integrate AI into the firm's recruitment platform to streamline resume filtering. In short order it goes live and is deemed a huge success as it slashes time-to-hire by half. Months later, it's discovered that the AI systematically downgraded women and minority applicants based on biased training data - leading to legal exposure and public backlash. Such blind spots are rampant because of a lack of clarity on basic questions: who owns the training data, who is liable for IP violations if copyrighted material is embedded in it, how its algorithms should be audited, and many more.

Why This Happens: AI's Flooded Learning Curve
Generative AI tools are constantly 'learning', and the pace of advancement is such that even tech-savvy professionals struggle to keep up. Jargon like multimodal, tokenization, neural networks, and retrieval-augmented generation abounds, and even playful terms like SolidGoldMagikarp seem like secret passwords to a club we didn't know existed. Many feel a growing pressure to sound competent with AI, even if they're privately unsure of the whats and the hows. Others, excited to explore new features and capabilities, end up wasting hours going down rabbit holes. Watching colleagues use custom bots or show off complex prompts may make one question one's own tech savvy, falling prey to classic imposter syndrome. Could it be that rather than drowning in data, today's professionals are drowning in expectations? When the learning curve becomes a tidal wave, even the best talent is likely to tread water.

From Confusion to Control: Stepping Up Your AI Journey
Step 1. Identify your AI persona
Begin by assessing your relationship with AI. Reid Hoffman, in his recent work on AI and its future, introduced four personas to categorize how people think and feel about artificial intelligence: Doomers, Gloomers, Bloomers, and Zoomers. He describes Doomers as those who view AI as an existential threat to humanity; Gloomers as holding less extreme views but still fearful of AI's role in deepening inequality, misinformation, and job disruption; Bloomers as cautiously optimistic about AI; and Zoomers as the early adopters embracing all things AI and viewing it as a force for growth and innovation. Reflecting on one's relationship with AI is increasingly important, especially for professionals and leaders navigating rapid technological change. Whether you lean overly skeptical or overly optimistic will shape how you engage with AI. For instance, a Doomer may shy away from valuable learning opportunities, while a Zoomer may be susceptible to overlooking ethical red flags.

Leaders need clarity and insight into their own stance before fostering discussions around AI strategy, policy, and implementation for their organization. There is no 'one size fits all' AI strategy, but auditing your AI persona can help guide what works best for you and your organization.

Step 2. Audit your current AI use and go deeper
AI can accelerate your workflow, but if you're only using it to get things done faster, you may be missing an opportunity to learn and grow. Ask yourself: Am I using AI to save time - or to think better? Is there a way to pose this query from a different angle? Even if your current AI use is predominantly drafting emails, why not learn to automate the task by creating templates with AI's help? Next, reconsider how you typically treat AI output: do you review it, or copy and paste it? Doing the latter risks errors and misjudgments - or, worse, your job and reputation. Workplaces are increasingly adopting internal AI governance tools such as Microsoft's Purview and plagiarism software such as Copyscape and Grammarly Business. Beyond those, consider the concept of a Collab score, which in essence captures how well a human and AI collaborate on a task, whatever the task may be. Did you just accept the AI's first output, or did you revise it, critique it, build on it - or, even better, ask follow-up questions? A Collab score is a measure of interaction quality: a higher score reflects engaged, iterative use that leads to higher-quality outputs and lower risk, getting at the heart of meaningful human-AI teaming.

Step 3. Learn the tool, but also understand the system
You don't need to become a machine learning expert - but you do need to understand that effective AI use is an iterative process. The real value comes not from a single prompt, but from refining, questioning, and building on what AI gives you. Where to begin? Try LinkedIn Learning, which offers concise videos on AI targeted at different professions. Subscribe to credible sources such as MIT Technology Review's AI section. Brave New Words by Salman Khan of Khan Academy is another great resource: though written for educators, it offers great insights on customizing one's learning journey, on using AI as an ethical guide that helps one navigate the web with one's own filters of do's and don'ts, and on using it as an assistant willing to handle the mundane stuff, freeing time to innovate. And if all this seems effortful and time-consuming, how about using AI as an entry point to AI? Reid Hoffman suggests navigating to your favorite gen AI model and starting a new chat with prompts that begin with 'Explain agentic AI to me like I'm five', move to 'Explain agentic AI to me like I'm in high school', and then graduate to 'Explain agentic AI to me like I have a PhD'. Invest in learning about prompt engineering, which in simple terms is designing effective prompts to get accurate, useful, and consistent outputs from AI systems. The best news about learning AI from AI is that it does not judge - there are no dumb questions.
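To make Hoffman's prompt ladder concrete, here is a minimal sketch of how one might script those three escalating prompts against a chat model. It assumes the OpenAI Python SDK purely as an example client; the model name, the explain_at_levels helper, and the level phrasing are illustrative assumptions, and the same pattern works in any chat interface with no code at all.

```python
# A minimal sketch of the "explain it at three levels" prompt ladder.
# Assumptions: the OpenAI Python SDK (openai>=1.0) as an example client,
# OPENAI_API_KEY set in the environment, and an illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LEVELS = ["I'm five", "I'm in high school", "I have a PhD"]

def explain_at_levels(topic: str) -> dict[str, str]:
    """Ask for the same concept at increasing levels of sophistication."""
    answers = {}
    for level in LEVELS:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any capable chat model works
            messages=[{
                "role": "user",
                "content": f"Explain {topic} to me like {level}.",
            }],
        )
        answers[level] = response.choices[0].message.content
    return answers

if __name__ == "__main__":
    for level, answer in explain_at_levels("agentic AI").items():
        print(f"--- Explained like {level} ---\n{answer}\n")
```

Reading the three answers side by side shows how much the framing of a prompt shapes the output - which is the whole point of prompt engineering, and a natural first exercise in iterating on what AI gives you rather than accepting its first draft.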


Axios
29-05-2025
- Business
Secret chatbot use causes workplace rifts
More employees are using generative AI at work - and many are keeping it a secret.

Why it matters: Absent clear policies, workers are taking an "ask forgiveness, not permission" approach to chatbots, risking workplace friction and costly mistakes.

The big picture: Secret genAI use proliferates when companies lack clear guidelines, because favorite tools are banned, or because employees want a competitive edge over coworkers. Fear plays a big part too - fear of being judged, and fear that using the tool will make it look like they can be replaced by it.

By the numbers: 42% of office workers use genAI tools like ChatGPT at work, and one in three of those workers say they keep the use secret, according to research out this month from security software company Ivanti. A McKinsey report from January showed that employees are using genAI for significantly more of their work than their leaders think they are. 20% of employees report secretly using AI during job interviews, according to a Blind survey of 3,617 U.S. professionals.

Catch up quick: When ChatGPT first wowed workers over two years ago, companies were unprepared and worried about confidential business information leaking into the tool, so they preached genAI abstinence. Now the big AI firms offer enterprise products that can protect IP, and leaders are paying for those bespoke tools and pushing hard for their employees to use them. The blanket bans are gone, but the stigma remains.

Zoom in: New research backs up workers' fear of the optics around using AI for work. A recent study from Duke University found that those who use genAI "face negative judgments about their competence and motivation from others."

Yes, but: The Duke study also found that workers who use AI more frequently are less likely to perceive potential job candidates as lazy if they use AI.

Zoom out: The stigma around genAI can lead to a raft of problems, including the use of unauthorized tools, known as "shadow AI" or BYOAI (bring your own AI). Research from cyber firm Prompt Security found that 65% of employees using ChatGPT rely on its free tier, where data can be used to train models. Shadow AI can also hinder collaboration. Wharton professor and AI expert Ethan Mollick calls workers using genAI for individual productivity "secret cyborgs" who keep all their tricks to themselves. "The real risk isn't that people are using AI - it's pretending they're not," Amit Bendov, co-founder and CEO of Gong, an AI platform that analyzes customer interactions, told Axios in an email.

Between the lines: Employees will use AI regardless of whether there's a policy, says Coursera's chief learning officer, Trena Minudri. Leaders should focus on training, she argues. (Coursera sells training courses to businesses.) Workers also need a "space to experiment safely," Minudri told Axios in an email. The tech is changing so fast that leaders need to acknowledge that workplace guidelines are fluid. Vague platitudes like "always keep a human in the loop" aren't useful if workers don't understand what the loop is or where they fit into it. GenAI continues to struggle with accuracy, and companies risk embarrassing gaffes, or worse, when unchecked AI-generated content goes public. Clearly communicating these issues can go a long way toward helping employees feel more comfortable opening up about their AI use, Atlassian CTO Rajeev Rajan told Axios. "Our research tells us that leadership plays a big role in setting the tone for creating a culture that fosters AI experimentation," Rajan said in an email. "Be honest about the gaps that still exist."

The bottom line: Encouraging workers to use AI collaboratively could go a long way to ending the secrecy.