AI is posing immediate threats to your business. Here's how to protect yourself

Fast Company · 5 hours ago

Last month, an AI startup went viral for sending emails to customers explaining away a malfunction of its AI-powered customer service bot, claiming it was the result of a new policy rather than a mistake. The only problem was that the emails—which appeared to be from a human sales rep—were actually sent by the AI bot itself. And the 'new policy' was what we call a hallucination: a fabricated detail the AI invented to defend its position. Less than a month later, another company came under fire after using an unexpectedly obvious (and glitchy) AI tool to interview a job candidate.
AI headaches
It's not shocking that companies are facing AI-induced headaches. McKinsey recently found that while nearly all companies report investing in AI, fewer than 1% consider themselves mature in deployment. This gap between early adoption and sound deployment can lead to a PR nightmare for executives, along with product delays, hits to your company's brand identity, and a drop in consumer trust. And with 50% of employers expected to utilize some form of agentic AI—far more advanced systems capable of autonomous decision-making—the business risks of clumsy AI deployment are not just real. They are rising.
As AI technology continues to rapidly evolve, executives need a trusted, independent way of comparing system reliability. As someone who develops AI assessments, my advice is simple: Don't wait for regulation to tell you what AI tools work best. Industry-led AI reliability standards offer a practical solution for limiting risk—and smart leaders will start using them now.
Industry standards
Technology industry standards are agreed-upon measurements of important product qualities that developers can volunteer to follow. Complex technologies—from aviation to the internet to financial systems—rely on these industry-developed guidelines to measure performance, manage risk, and support responsible growth. These standards are developed by the industry itself or in collaboration with researchers, experts, and civil society—not policymakers. As a result, they don't rely on regulation or bill text, but reflect the need of industry developers to measure and align on key metrics. For instance, ISO 26262, developed by the International Organization for Standardization, sets requirements to ensure that the electrical and electronic systems in road vehicles function safely. Standards like these are one reason we can trust that the complex technologies we use every day, like the cars we buy or the planes we fly on, are not defective.
AI is no exception. Like in other industries, those at the forefront of AI development are already using open measures of quality, performance, and safety to guide their products, and CEOs can leverage them in their own decision-making. Of course, there is a learning curve. For developers and technical teams, words like reliability and safety have very different meanings than they do in boardrooms. But becoming fluent in the language of AI standards will give you a major advantage.
I've seen this firsthand. Since 2018, my organization has worked with developers and academics to build independent AI benchmarks, and I know that industry buy-in is crucial to success. As those closest to creating new products and monitoring trends, developers and researchers have an intimate knowledge of what's at stake and what's possible for the tools they work on. And all of that knowledge and experience is baked into the standards they develop—not just at MLCommons but across the industry.
Own it now
If you're a CEO looking to leverage that kind of collaborative insight, you can begin by incorporating trusted industry benchmarks into the procurement process from the outset. That could look like bringing an independent assessment of AI risk into your boardroom conversations, or asking vendors to demonstrate compliance with performance and reliability standards that you trust. You can also make AI reliability a part of your formal governance reporting, to ensure regular risk assessments are baked into your company's process for procuring and deploying new systems. In short: engage with existing industry standards, use them to pressure-test vendor claims about safety and effectiveness, and set clear, data-informed thresholds for what acceptable performance looks like at your company.
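To make that last step concrete, here is a minimal sketch of what a data-informed procurement gate could look like in practice. It is an illustration only: the metric names, vendor scores, and thresholds are hypothetical placeholders, not figures from MLCommons or any published benchmark, and a real review would define them with your technical, legal, and risk teams.

```python
# Minimal sketch of a vendor acceptance gate built on company-defined thresholds.
# All metric names, limits, and scores are hypothetical placeholders for illustration.

# Each metric maps to (threshold, direction): "lower" means lower scores are better.
ACCEPTANCE_CRITERIA = {
    "hallucination_rate":    (0.02, "lower"),   # share of fabricated answers on an internal test set
    "harmful_response_rate": (0.01, "lower"),
    "task_success_rate":     (0.95, "higher"),  # share of representative queries handled correctly
}

def evaluate_vendor(scores: dict) -> list:
    """Return human-readable failures; an empty list means the vendor clears the bar."""
    failures = []
    for metric, (limit, direction) in ACCEPTANCE_CRITERIA.items():
        score = scores.get(metric)
        if score is None:
            failures.append(f"{metric}: no score reported")
        elif direction == "lower" and score > limit:
            failures.append(f"{metric}: {score:.3f} exceeds limit {limit:.3f}")
        elif direction == "higher" and score < limit:
            failures.append(f"{metric}: {score:.3f} below minimum {limit:.3f}")
    return failures

# Hypothetical scores a vendor might report from an independent benchmark run.
vendor_scores = {
    "hallucination_rate": 0.035,
    "harmful_response_rate": 0.004,
    "task_success_rate": 0.97,
}

problems = evaluate_vendor(vendor_scores)
print("PASS" if not problems else "FAIL:\n" + "\n".join(problems))
```

The point is less the code than the discipline it represents: acceptance criteria are written down in advance and checked the same way for every vendor, rather than negotiated case by case.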
Whatever you do, don't wait for regulation to force a conversation about what acceptable performance standards should look like—own it now as a part of your leadership mandate.
Real damage
Not only do industry standards provide a clear, empirical way of measuring risk, they can also help navigate the high-stakes drama of the current AI debate. These days, discussions of AI in the workforce tend to focus on abstract risks, like the potential for mass job displacement or the elimination of entire industries. And conversations about the risks of AI can quickly turn political—particularly as the current administration makes it clear it sees 'AI safety' as another word for censorship. As a result, many CEOs have understandably steered clear of the firestorm, treating AI risk and safety like a political hot potato instead of a common-sense business priority deeply tied to financial and reputational success. But avoiding the topic entirely is a risk in itself. Reliability issues—from biased outputs to poor or misaligned performance—can create very real financial, legal, and reputational damage. Those are real, operational risks, not philosophical ones.
Now is the time to understand and use AI reliability standards—and shield your company from becoming the next case study in premature deployment.

Related Articles

The Godfather of AI lays out a key difference between OpenAI and Google when it comes to safety

Business Insider · 29 minutes ago

When it comes to winning the AI race, the "Godfather of AI" thinks there's an advantage in having nothing to lose. On an episode of the "Diary of a CEO" podcast that aired June 16, Geoffrey Hinton laid out what he sees as a key difference between how OpenAI and Google, his former employer, dealt with AI safety.

"When they had these big chatbots, they didn't release them, possibly because they were worried about their reputation," Hinton said of Google. "They had a very good reputation, and they didn't want to damage it."

Google released Bard, its AI chatbot, in March of 2023, before later incorporating it into its larger suite of large language models called Gemini. The company was playing catch-up, though, since OpenAI released ChatGPT at the end of 2022. Hinton, who earned his nickname for his pioneering work on neural networks, laid out on the podcast a key reason that OpenAI could move faster: "OpenAI didn't have a reputation, and so they could afford to take the gamble."

Speaking at an all-hands meeting shortly after ChatGPT came out, Google's then-head of AI said the company didn't plan to immediately release a chatbot because of "reputational risk," adding that it needed to make choices "more conservatively than a small startup," CNBC reported at the time. The company's AI boss, Google DeepMind CEO Demis Hassabis, said in February of this year that AI poses potential long-term risks, and that agentic systems could get "out of control." He advocated having a governing body that regulates AI projects.

Gemini has made some high-profile mistakes since its launch, and has shown bias in its written responses and image-generating feature. Google CEO Sundar Pichai addressed the controversy in a memo to staff last year, saying the company "got it wrong" and pledging to make changes.

The "Godfather" saw Google's early chatbot decision-making from the inside; he spent more than a decade at the company before quitting to talk more freely about what he describes as the dangers of AI. On Monday's podcast episode, though, Hinton said he didn't face internal pressure to stay silent. "Google encouraged me to stay and work on AI safety, and said I could do whatever I liked on AI safety," he said. "You kind of censor yourself. If you work for a big company, you don't feel right saying things that will damage the big company." Overall, Hinton said he thinks Google "actually behaved very responsibly."

Hinton couldn't be as sure about OpenAI, though he has never worked at the company. When asked earlier in the episode whether the company's CEO, Sam Altman, has a "good moral compass," he said, "We'll see." He added that he doesn't know Altman personally, so he didn't want to comment further.

OpenAI has faced criticism in recent months for approaching safety differently than in the past. In a recent blog post, the company said it would only change its safety requirements after making sure it wouldn't "meaningfully increase the overall risk of severe harm." Its focus areas for safety now include cybersecurity, chemical threats, and AI's power to improve independently. Altman defended OpenAI's approach to safety in an interview at TED2025 in April, saying that the company's preparedness framework outlines "where we think the most important danger moments are." Altman also acknowledged in the interview that OpenAI has loosened some restrictions on its models' behavior based on user feedback about censorship.
The earlier competition between OpenAI and Google to release their initial chatbots was fierce, and the AI talent race is only heating up. Documents reviewed by Business Insider reveal that Google relied on ChatGPT in 2023, during its attempts to catch up to OpenAI.

$200,000 home equity loan vs. $200,000 HELOC: Which is less expensive now?

CBS News · 38 minutes ago

The average home equity level has been rising consistently in recent years and, according to recent reports, remains steadily high. Cumulative home equity in the United States hit a record high of $17.6 trillion in the first quarter of 2025, based on a report released earlier in June. The average homeowner, meanwhile, has over $300,000 worth of equity that they can borrow from with a home equity loan or home equity line of credit (HELOC). Accounting for the 20% equity threshold many lenders prefer borrowers maintain in their home at all times, that still leaves more than $200,000 worth of equity to utilize right now. And with inflation stubborn (if significantly cooled), interest rates still high, and economic concerns broad, this could be one of the better ways to borrow a large, six-figure sum of money.

To ensure borrowing success, however, which is critical when your home is the funding source, you should first calculate your potential repayment costs; failure to pay could result in your home being foreclosed on, so you'll want to know exactly what you'll pay long term. And with rates on home equity loans and HELOCs differing both in how high they are and in how they're structured, it's particularly important to compare the potential costs of both before getting started. But which is less expensive now: a $200,000 home equity loan or a $200,000 HELOC? That's what we'll examine below.

$200,000 home equity loan vs. $200,000 HELOC: Which is less expensive now?

In June 2025, the repayment costs of a home equity loan and a HELOC, no matter the amount borrowed, are essentially the same. With the median home equity loan rate at 8.25% and the average HELOC rate at 8.27%, you won't see a material difference in repayments right now. But that's this month, not long term. Since home equity loans have fixed rates that won't change until refinanced and HELOCs have variable rates that change over time, this similarity is not likely to stay consistent.

Here's what the payments would look like calculated against 10- and 15-year repayment periods, assuming the HELOC rate remains unchanged:

• 10-year home equity loan at 8.25%: $2,453.05 per month
• 15-year home equity loan at 8.25%: $1,940.28 per month
• 10-year HELOC at 8.27%: $2,455.18 per month
• 15-year HELOC at 8.27%: $1,942.61 per month

And here's how they would compare if HELOC rates decline by 25 basis points during this time:

• 10-year home equity loan at 8.25%: $2,453.05 per month
• 15-year home equity loan at 8.25%: $1,940.28 per month
• 10-year HELOC at 8.02%: $2,428.67 per month
• 15-year HELOC at 8.02%: $1,913.61 per month

And here's what they would look like if HELOC rates rise by 25 basis points from today's averages:

• 10-year home equity loan at 8.25%: $2,453.05 per month
• 15-year home equity loan at 8.25%: $1,940.28 per month
• 10-year HELOC at 8.52%: $2,481.85 per month
• 15-year HELOC at 8.52%: $1,971.82 per month

In short, a $200,000 home equity loan is marginally less expensive than a $200,000 HELOC right now.
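The figures above follow the standard fixed-payment amortization formula, payment = P * r / (1 - (1 + r)^-n), where P is the balance, r the monthly rate, and n the number of monthly payments. As a rough check, the short Python sketch below (our illustration, not from CBS News or any lender) reproduces the quoted scenarios; like the comparison above, it treats the HELOC as if it amortized at a fixed rate for the full term, and it ignores fees and lenders' exact compounding conventions.

```python
# Standard fixed-payment amortization: payment = P * r / (1 - (1 + r) ** -n),
# where P is the principal, r the monthly rate, and n the number of monthly payments.
# Rough check on the article's figures only; real lender quotes will differ.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

principal = 200_000
scenarios = [
    ("10-year home equity loan", 0.0825, 10),
    ("15-year home equity loan", 0.0825, 15),
    ("10-year HELOC (unchanged)", 0.0827, 10),
    ("15-year HELOC (unchanged)", 0.0827, 15),
    ("10-year HELOC (-25 bps)", 0.0802, 10),
    ("15-year HELOC (-25 bps)", 0.0802, 15),
    ("10-year HELOC (+25 bps)", 0.0852, 10),
    ("15-year HELOC (+25 bps)", 0.0852, 15),
]

for label, rate, years in scenarios:
    print(f"{label} at {rate:.2%}: ${monthly_payment(principal, rate, years):,.2f} per month")
```

Running it prints, for example, about $2,453 per month for the 10-year loan at 8.25% and about $2,455 for the 10-year HELOC at 8.27%, matching the per-month figures listed above.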
But that dynamic can and almost assuredly will change over a multiple-year repayment period. Borrowers will need to weigh those potential changes against what they can lock in with a fixed home equity loan rate instead. And remember that home equity loans and HELOCs can always be refinanced in the future, should the rate climate or your borrowing needs change, so don't get too focused on long-term rate-change scenarios, either.

The bottom line

$200,000 home equity loans and HELOCs come with similar payments now, but they may not stay that way for very long, thanks to the latter's variable rate. That noted, HELOCs allow interest-only payments during the draw period for borrowers who want to utilize their equity that way, so rate changes may be less of a concern than they would be with a home equity loan, which requires full monthly repayments immediately since the funds are disbursed in a single lump sum. Compare both options carefully before getting started to better ensure borrowing success both now and in the years to come.
