
Accelerant Risk Exchange Adds QBE and Tokio Marine America to Expanding Network
Accelerant operates a data-driven risk exchange that connects selected specialty insurance underwriters with risk capital partners. The Accelerant Risk Exchange reduces information asymmetries and operational barriers present in the traditional insurance value chain by leveraging proprietary technology to share actionable high-fidelity data with underwriters and risk capital partners.
Accelerant platform highlights (year-end 2024):
217 specialty insurance underwriters (Members)
$3.1B in premiums
500+ specialty products in 22 countries
96 risk capital partners
74% all-organic growth in Exchange Written Premium
Key AI-driven products that allow specialty underwriters to receive feedback on the relative quality of an underlying risk:
Portfolio-level risk monitoring across $3 billion in premiums
Large Language Model (LLM) claims assessment, reducing claims expenses and boosting recoveries
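The release does not describe how this risk-quality feedback is computed. As a purely illustrative sketch, portfolio-relative feedback of this kind could be as simple as comparing a submission's loss experience against a benchmark for its line of business; all names, lines of business, benchmark figures, and thresholds below are hypothetical, not Accelerant's actual methodology:

```python
from dataclasses import dataclass


@dataclass
class RiskSubmission:
    line_of_business: str
    annual_premium: float
    historical_loss_ratio: float  # losses / premium over prior policy periods


# Hypothetical portfolio-level benchmarks by line of business
# (illustrative figures only).
PORTFOLIO_LOSS_RATIO = {
    "marine_cargo": 0.58,
    "professional_liability": 0.62,
}


def risk_quality_feedback(risk: RiskSubmission) -> str:
    """Compare a submission's loss experience to the portfolio benchmark
    for its line of business and return a coarse quality signal."""
    benchmark = PORTFOLIO_LOSS_RATIO.get(risk.line_of_business)
    if benchmark is None:
        return "no_benchmark"
    delta = risk.historical_loss_ratio - benchmark
    if delta <= -0.05:
        return "better_than_portfolio"
    if delta >= 0.05:
        return "worse_than_portfolio"
    return "in_line_with_portfolio"


example = RiskSubmission("marine_cargo", 120_000.0, 0.45)
print(risk_quality_feedback(example))  # -> better_than_portfolio
```

A production system would of course draw benchmarks from live portfolio data and use far richer features, but the shape of the feedback loop — submission in, portfolio-relative signal out — is the same.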
'We're proud to welcome QBE and Tokio Marine America to the Accelerant Risk Exchange,' said Jeff Radke, CEO and co-founder of Accelerant. 'Their addition strengthens our ability to align leading underwriting expertise with trusted capital — a core part of our vision to reimagine specialty insurance. Together, we hope to build a smarter, more connected ecosystem that enhances collaboration, improves risk management, and unlocks long-term value across the insurance value chain.'
Accelerant's goal is to simplify the specialty insurance value chain, which has historically been complex, lengthy, and fraught with inefficiencies, leading to higher costs and subpar experiences. Accelerant's Risk Exchange provides a streamlined, data-driven approach that connects underwriters and risk capital partners and fosters data transparency throughout the value chain.
ABOUT ACCELERANT
Accelerant is a services and data platform for the specialty insurance market. Accelerant harnesses advanced data analytics and AI to reduce information asymmetries and operational barriers present in the traditional insurance value chain and provide transparent and efficient solutions for underwriters and risk capital partners globally.
