Latest news with #AIstartup


Fast Company
8 hours ago
- Business
- Fast Company
AI is posing immediate threats to your business. Here's how to protect yourself
Last month, an AI startup went viral for sending emails to customers explaining away a malfunction of its AI-powered customer service bot, claiming it was the result of a new policy rather than a mistake. The only problem was that the emails—which appeared to be from a human sales rep—were actually sent by the AI bot itself. And the 'new policy' was what we call a hallucination: a fabricated detail the AI invented to defend its position. Less than a month later, another company came under fire after using an unexpectedly obvious (and glitchy) AI tool to interview a job candidate.

AI headaches

It's not shocking that companies are facing AI-induced headaches. McKinsey recently found that while nearly all companies report investing in AI, fewer than 1% consider themselves mature in deployment. This gap between early adoption and sound deployment can lead to a PR nightmare for executives, along with product delays, hits to your company's brand identity, and a drop in consumer trust. And with 50% of employers expected to utilize some form of agentic AI—far more advanced systems capable of autonomous decision-making—the business risks of clumsy AI deployment are not just real. They are rising.

As AI technology continues to evolve rapidly, executives need a trusted, independent way of comparing system reliability. As someone who develops AI assessments, my advice is simple: don't wait for regulation to tell you which AI tools work best. Industry-led AI reliability standards offer a practical solution for limiting risk—and smart leaders will start using them now.

Industry Standards

Technology industry standards are agreed-upon measurements of important product qualities that developers can volunteer to follow. Complex technologies—from aviation to the internet to financial systems—rely on these industry-developed guidelines to measure performance, manage risk, and support responsible growth.
Technology industry standards are developed by the industry itself or in collaboration with researchers, experts, and civil society—not policymakers. As a result, they don't rely on regulation or bill text, but reflect the need of industry developers to measure and align on key metrics. For instance, ISO 26262, developed by the International Organization for Standardization, sets requirements to ensure that the electrical and electronic systems in road vehicles function safely. Standards like these are one reason we can trust that the complex technology we use every day, like the cars we buy or the planes we fly on, is not defective.

AI is no exception. As in other industries, those at the forefront of AI development are already using open measures of quality, performance, and safety to guide their products, and CEOs can leverage them in their own decision-making. Of course, there is a learning curve. For developers and technical teams, words like reliability and safety have very different meanings than they do in boardrooms. But becoming fluent in the language of AI standards will give you a major advantage.

I've seen this firsthand. Since 2018, my organization has worked with developers and academics to build independent AI benchmarks, and I know that industry buy-in is crucial to success. As those closest to creating new products and monitoring trends, developers and researchers have an intimate knowledge of what's at stake and what's possible for the tools they work on. And all of that knowledge and experience is baked into the standards they develop—not just at MLCommons but across the industry.

Own it now

If you're a CEO looking to leverage that kind of collaborative insight, you can begin by incorporating trusted industry benchmarks into the procurement process from the outset.
That could look like bringing an independent assessment of AI risk into your boardroom conversations, or asking vendors to demonstrate compliance with performance and reliability standards that you trust. You can also make AI reliability part of your formal governance reporting, to ensure regular risk assessments are baked into your company's process for procuring and deploying new systems. In short: engage with existing industry standards, use them to pressure-test vendor claims about safety and effectiveness, and set clear, data-informed thresholds for what acceptable performance looks like at your company. Whatever you do, don't wait for regulation to force a conversation about what acceptable performance standards should look like—own it now as part of your leadership mandate.

Real damage

Not only do industry standards provide a clear, empirical way of measuring risk, they can also help navigate the high-stakes drama of the current AI debate. These days, discussions of AI in the workforce tend to focus on abstract risks, like the potential for mass job displacement or the elimination of entire industries. And conversations about the risks of AI can quickly turn political—particularly as the current administration makes clear that it sees 'AI safety' as another word for censorship. As a result, many CEOs have understandably steered clear of the firestorm, treating AI risk and safety like a political hot potato instead of a common-sense business priority deeply tied to financial and reputational success.

But avoiding the topic entirely is a risk in itself. Reliability issues—from biased outputs to poor or misaligned performance—can create very real financial, legal, and reputational damage. Those are real, operational risks, not philosophical ones. Now is the time to understand and use AI reliability standards—and shield your company from becoming the next case study in premature deployment.


Bloomberg
7 days ago
- Business
- Bloomberg
Scale AI Backer Accel Set for $2.5 Billion Windfall on Meta Deal
A handful of venture capital firms are set to see huge returns from their early bets on Scale AI, following Meta Platforms Inc.'s $14.3 billion mega investment in the artificial intelligence startup. The biggest winner is Accel, one of Silicon Valley's oldest venture firms, which first backed Scale almost a decade ago, when the startup's chief executive officer was still a teenager. Accel expects a windfall of more than $2.5 billion, according to a person familiar with the matter who asked not to be identified discussing private information. Accel declined to comment on the investment.


TechCrunch
11-05-2025
- Business
- TechCrunch
Microsoft and OpenAI may be renegotiating their partnership
In Brief

OpenAI is currently in 'a tough negotiation' with its biggest investor and partner, Microsoft, according to the Financial Times. The AI startup recently announced a major change to its corporate restructuring plans — while it still aims to convert its business arm into a for-profit public benefit corporation, its nonprofit board will still be in control. The FT says it spoke to multiple sources who describe Microsoft, which has invested $13 billion in OpenAI to date, as a key holdout needed to approve the restructuring. While the crux of the negotiation is how much equity Microsoft will receive in the new for-profit entity, the companies are also reportedly renegotiating their broader contract, with Microsoft offering to give up some of its equity in exchange for access to OpenAI technology developed after the current 2030 cutoff. Sources also told the FT that the relationship between the two companies has become more competitive as OpenAI's enterprise business has grown and as it pursues its wildly ambitious Stargate infrastructure project.