The new must-have for CEOs: An AI whisperer


Business Insider · 6 hours ago

A year ago, Glenn Hopper was advising just a handful of company leaders on how to embed AI agents and tools like ChatGPT throughout their organizations. Today, the Memphis-based AI strategist has an extensive waitlist of C-suite executives seeking his help with those tasks and more.
"This technology is moving so fast, the gap between what CEOs need to know and what they actually understand is massive," said Hopper, a former finance chief and author of the 2024 book "AI Mastery for Finance Professionals: Foundations, Techniques, and Applications." That's why so many bosses are knocking on his door, he told Business Insider.
Leadership coaches and consultants have long helped CEOs navigate the pressures of the corner office. Now, executive sherpas who were early to embrace AI say they're seeing a spike in CEOs seeking guidance on everything from vetting vendors and establishing safety protocols to preparing for the next big wave of AI breakthroughs.
"Everything is inbound," said Conor Grennan, chief of AI Mindset, an AI consulting and training firm he founded in 2023. "We have a list a mile long."
Grennan, also chief AI architect at New York University's Stern School of Business, said company leaders often reach out after struggling to get employees to adopt AI. They see other CEOs touting AI's benefits and feel they're falling behind, he said.
'It was garbage in, garbage out'
Some of the AI gurus that company leaders are tapping helm (or work at) startups that were launched in recent years to take advantage of the AI boom. Others head up new or expanded AI teams within established advisory firms.
Lan Guan was named Accenture's first chief AI officer in 2023, two decades after joining the professional-services firm. She's since been counseling a growing number of company leaders on how to bring AI into their organizations.
"CEOs need an AI translator to basically sift through all this noise," she said. "The amount of signals you're getting, the amount of noise, it's so distracting."
In some cases, company leaders are also seeking out AI gurus to fix mistakes they made while going it alone. Guan recalled one CEO who came to her after the executive's company had to pause a multimillion-dollar investment in a custom AI model because employees had trained it on dozens of different versions of the same operating procedure.
"When they tried to scale, their data was not clean enough," she said. "It was garbage in, garbage out."
Amos Susskind, CEO and founder of the London beauty-tech startup Noli, has been tapping Guan and members of her team for AI-related guidance for the past year. Noli's roughly 20 employees have been using AI tools to do their jobs, and the company's beauty-product recommendation platform is powered by AI.
"I'm in touch with AI leaders in Accenture probably five times a day," said Susskind, who previously led L'Oreal's consumer-products division for the U.K. and Ireland.
Shaping the AI narrative leaders use
Last year, 78% of workers said their organizations had used AI in at least one function, up from 55% in 2023, according to a March survey by global management consulting firm McKinsey.
Companies are planning to dig into AI even more this year. A May survey from professional-services firm PricewaterhouseCoopers found that 88% of senior executives planned to increase their AI-related budgets in the next 12 months.
"AI is in line with, if not bigger than, the internet," said Dan Priest, chief AI officer of PwC, whose position was created last year.
Public company CEOs mentioned "agentic AI" — AI systems capable of acting autonomously — and similar terms on 269 conference calls in the second quarter, up from 12 during the same period last year, according to AI research firm AlphaSense.
Getting those AI mentions right is another reason why company leaders are leaning on AI sages. CEOs need to consider more than just how investors and analysts interpret their remarks, said Priest. Employees are listening, too, and workplace experts say heightened anxiety among personnel can dent productivity and drive up turnover.
"You want to be careful," warned Priest, who helps CEOs communicate their AI strategy externally. "The second you start talking about AI efficiencies, it makes your teams very nervous."
CEOs also need to make sure employees using AI are doing so safely, said Hopper, the AI strategist in Memphis.
"If you try to have too prohibitive a policy or don't have a policy at all, that's when people are going to do stupid stuff with data," he said.
While CEOs may not want to be involved in every AI process or initiative happening at their companies, Hopper said the more hands-on experience they get with the technology, the better equipped they'll be to make smart decisions about how their organizations can benefit from it.
Michael White, chief of MashTank, a boutique management consulting firm near Philadelphia, became one of Hopper's clients last year. Though he considers himself tech-savvy, White said Hopper got him up to speed on AI faster than he could've on his own.
"We now have a bot that knows a lot of what I know, but has a better memory than I do," said White. Without an AI whisperer like Hopper, he added, "I'd still be at the starting gate."


Related Articles

Why This Wildcard Stock Could Be a Future Star

Yahoo · 20 minutes ago

The biotech industry offers numerous opportunities for investors who are willing to wait for the underlying business to reach its full potential. One such rising star is Recursion Pharmaceuticals (RXRX), which uses artificial intelligence (AI), robotic biology, and massive phenotypic screening to automate and scale drug discovery. Instead of relying on traditional hypothesis-driven R&D, Recursion created a scalable, AI-driven discovery platform with pipeline-agnostic capabilities.

This philosophy sets Recursion apart in a crowded biotech market. Furthermore, Nvidia's (NVDA) support is catalyzing the company's expansion: in its most recent 13F filing, Nvidia disclosed a $40.7 million investment in the company. While Recursion stock is down 23.5% year-to-date, Wall Street expects it to climb higher.

Recursion has built a data- and compute-intensive pipeline with near-term clinical events, resulting in a one-of-a-kind platform that combines AI, biology, and chemistry. Its diversified business model can produce both pipeline-driven value and platform revenue. In May 2024, Recursion completed BioHive 2, a custom Nvidia DGX supercomputer that quadrupled the performance of its predecessor (BioHive 1), allowing deep learning models to run on biological data at scale. This computing power, combined with years of phenotypic data, creates a strong foundation. It powers two major value engines: internal drug-pipeline advancement and platform commercial tools such as licensing models, data partnerships, and pharmaceutical collaborations.

Furthermore, Recursion's all-stock acquisition of UK-based Exscientia in late 2024 was a strategic win. Following the integration, more than five internally developed programs are now underway, each powered by Recursion OS 2.0.

In the first quarter, Recursion reported revenue of $15 million, a 7% increase from the prior-year quarter, owing primarily to collaboration agreements. However, as a clinical-stage biotech, the company remains unprofitable: the net loss for the quarter totaled $203 million. The company ended the quarter with $509 million in cash, cash equivalents, and restricted cash. To improve financial sustainability, Recursion has reduced its projected fiscal 2025 cash burn to less than $450 million, from $600 million in 2024. It accomplished this through contract optimization, integration of high- and low-cost operations, and automation-driven efficiency, all without jeopardizing R&D progress. The company remains well capitalized, with a runway that extends through mid-2027. Additionally, strategic alliances with Sanofi (SNY), Roche (RHHBY), and Genentech have yielded over $450 million in non-dilutive capital, and the company has no immediate plans to raise more.

Recently, Morgan Stanley kept its 'Hold' rating on the stock while lowering its price target to $5 from $8, citing the company's 'recent pipeline prioritizations, cost reductions, 20% workforce reduction, and updated cash burn guidance.' On Wall Street, analysts rate the stock an overall 'Hold.'
Of the eight analysts covering the stock, one rates it a 'Strong Buy,' one a 'Moderate Buy,' and six a 'Hold.' The average target price of $6.67 indicates roughly 30% upside potential, and the high target price of $10 implies a potential 94% gain over the next 12 months.

While Recursion may have slipped under the radar compared to biotech giants, its technology-centric model, progressive pipeline diversification, strong internal assets, significant partnership backing, and strategic capital management position it as a future star. However, it remains a high-risk, high-reward opportunity for aggressive investors willing to bet on the intersection of biotech and AI.

On the date of publication, Sushree Mohanty did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes.

BBC Rolls Out AI Summaries And Style Tool In Newsroom Test

Forbes · 30 minutes ago

The BBC is making a major move to bring generative AI into its newsroom. This week, the broadcaster announced that its AI-generated 'At a Glance' summary box and a Style Assist editor will move into live production. The gradual rollout is meant to help journalists, not replace them, according to the BBC. Rhodri Talfan Davies, the BBC's executive sponsor for generative AI, described the pilots as 'about making our journalism more accessible.' He added that 'any wider rollout will depend entirely on the results of these tests and ongoing engagement with editorial teams.'

Why AI Matters Now

After testing a dozen internal AI prototypes, the BBC focused on two tasks: producing quick bullet-point explainers and refitting Local Democracy Reporting Service (LDRS) stories to the BBC's tone. These aim to match what readers want and ease workload backlogs. 'Short, scannable bullet-point summaries have proven popular with readers – particularly younger audiences – as a quick way to grasp the main points of a story,' said Davies. 'We're going to look at whether adding an AI-assisted bullet point summary box on selected articles helps us engage readers and make complex stories more accessible.'

Over the past 18 months, BBC teams tested around a dozen AI tools behind closed doors. Data from those tests showed two clear pain points: producing bullet-point explainers as quick summaries for TikTok-era readers, and reformatting hundreds of LDRS stories into BBC style each day. The new pilots tackle both issues.

With the At a Glance Summaries tool, journalists select a prompt template, and the in-house language model writes three to five bullet points. The reporter then edits and approves them. Each summary includes a clear on-page disclosure noting that it was 'AI-assisted and journalist-edited.'

Style Assist works on incoming LDRS stories. The AI rewrites them in BBC tone, flags legal issues, and spots missing attributions. Editors always have the final say. 'In line with our AI principles, nothing is published without being checked first by a BBC journalist,' Davies emphasized. 'The AI tool has no role in creating the original story – which has been researched and written by our LDRS partners.'

Over the six-week pilot, the BBC will measure engagement with the summary boxes, time saved styling LDRS stories, and the error rate after editor review. If results match expectations, the BBC could roll these tools out to more desks and story types.

Building on Past Experience

The BBC isn't new to AI. Since early 2023, it has used the technology to transcribe live football action, create subtitles on BBC Sounds, and translate for global audiences. It has also published its approach to AI, with three rules guiding its use: reporters call the shots, label any AI help, and make sure it adds value, not just speed. These match the European Broadcasting Union's draft ethics code for public media.

The BBC's steps mirror a broader industry trend. This past Friday, Reuters named veteran editor Rob Lang as its first dedicated Newsroom AI Editor, charged with turning 'innovative AI ideas into valuable editorial solutions.' Lang will oversee prompt-engineering standards, integrating tools such as Lynx Insight for data-driven story leads, and the use of its Fact Genie tool for summarization. The Associated Press has been using automated earnings reports since 2014 and is currently piloting AI-based headline writers trained on its style book. The New York Times recently disclosed that its new personalized audio app relies on an in-house LLM to stitch host reads with journalist-narrated articles. The common thread? AI can assist with tasks like formatting or summaries, but it does not make the final editorial call.

Potential Pitfalls

These tools aren't perfect. In BBC trials, Style Assist sometimes muddled attributions, so editors slowed down to fact-check more. Misinformation risk is real: recent coverage in Forbes warns that overuse of AI in newsrooms could lead to bland, unoriginal content that erodes trust. There are also knotty legal issues if AI trains on private data without clear rights.

Why the Risk Is Worth It

Three reasons drive the BBC's decision to experiment more broadly with generative AI content. First, reader habits are shifting: young people scroll quickly, and summaries might keep them around. Second is the issue of budget cuts: the BBC's funding, adjusted for inflation, is down 21% since 2010, so efficiency matters, and AI offers an attractive solution. Finally, there is global momentum: newsrooms worldwide are racing on AI, and standing still carries risk.

Looking Ahead

Success here could mean national news, sports, and explainer teams get the tools next. The BBC might refine Style Assist for in-depth features or impartiality checks. It could even share its prompt recipes so smaller outlets benefit. AI won't write headlines for the BBC yet. But it's already rewriting stories behind the scenes, tweaking tone, checking for legal clarity, and speeding up production. In a world where audiences are wary and attention spans are short, the BBC is choosing a model where AI runs quietly in the background and journalists stay in charge.

Don't Ask AI ChatBots for Medical Advice, Study Warns

Newsweek · 34 minutes ago

Trust your doctor, not a chatbot. That's the sobering conclusion of a new study published in the journal Annals of Internal Medicine, which reveals how artificial intelligence (AI) is vulnerable to being misused to spread dangerous misinformation on health.

Researchers experimented with five leading AI models developed by Anthropic, Google, Meta, OpenAI and X Corp. All five systems are widely used, forming the backbone of the AI-powered chatbots embedded in websites and apps around the world. Using developer tools not typically accessible to the public, the researchers found that they could easily program instances of the AI systems to respond to health-related questions with incorrect—and potentially harmful—information. Worse, the chatbots were found to wrap their false answers in convincing trappings.

"In total, 88 percent of all responses were false," explained paper author Natansh Modi of the University of South Africa in a statement. "And yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate."

Among the false claims made were debunked myths such as that vaccines cause autism, that HIV is an airborne disease and that 5G causes infertility. Of the five chatbots evaluated, four presented responses that were 100 percent incorrect. Only one model showed some resistance, generating disinformation in 40 percent of cases.

Disinformation Bots Already Exist

The research didn't stop at theoretical vulnerabilities; Modi and his team went a step further, using OpenAI's GPT Store—a platform that allows users to build and share customized ChatGPT apps—to test how easily members of the public could create disinformation tools themselves. "We successfully created a disinformation chatbot prototype using the platform and we also identified existing public tools on the store that were actively producing health disinformation," said Modi. He emphasized: "Our study is the first to systematically demonstrate that leading AI systems can be converted into disinformation chatbots using developers' tools, but also tools available to the public."

A Growing Threat to Public Health

According to the researchers, the threat posed by manipulated AI chatbots is not hypothetical—it is real and happening now. "Artificial intelligence is now deeply embedded in the way health information is accessed and delivered," said Modi. "Millions of people are turning to AI tools for guidance on health-related questions. If these systems can be manipulated to covertly produce false or misleading advice then they can create a powerful new avenue for disinformation that is harder to detect, harder to regulate and more persuasive than anything seen before."

Previous studies have already shown that generative AI can be misused to mass-produce health misinformation—such as misleading blogs or social media posts—on topics ranging from antibiotics and fad diets to homeopathy and vaccines. What sets this new research apart is that it is the first to show how foundational AI systems can be deliberately reprogrammed to act as disinformation engines in real time, responding to everyday users with false claims under the guise of credible advice. The researchers found that even when the prompts were not explicitly harmful, the chatbots could "self-generate harmful falsehoods."

A Call for Urgent Safeguards

While one model—Anthropic's Claude 3.5 Sonnet—showed some resilience by refusing to answer 60 percent of the misleading queries, researchers say this is not enough. The protections across systems were inconsistent and, in most cases, easy to bypass. "Some models showed partial resistance, which proves the point that effective safeguards are technically achievable," Modi noted. "However, the current protections are inconsistent and insufficient. Developers, regulators and public health stakeholders must act decisively, and they must act now."

If left unchecked, the misuse of AI in health contexts could have devastating consequences: misleading patients, undermining doctors, fueling vaccine hesitancy and worsening public health outcomes. The study's authors call for sweeping reforms, including stronger technical filters, better transparency about how AI models are trained, fact-checking mechanisms and policy frameworks to hold developers accountable. They draw comparisons with how false information spreads on social media, warning that disinformation spreads up to six times faster than the truth and that AI systems could supercharge that trend.

A Final Warning

"Without immediate action," Modi said, "these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns."

Newsweek has contacted Anthropic, Google, Meta, OpenAI and X Corp for comment. Do you have a tip on a science story that Newsweek should be covering? Do you have a question about chatbots? Let us know via science@

References

Modi, N. D., Menz, B. D., Awaty, A. A., Alex, C. A., Logan, J. M., McKinnon, R. A., Rowland, A., Bacchi, S., Gradon, K., Sorich, M. J., & Hopkins, A. M. (2024). Assessing the system-instruction vulnerabilities of large language models to malicious conversion into health disinformation chatbots. Annals of Internal Medicine.
