View Photos of the 1995 Compact Luxury Convertible Comparison

Car and Driver · May 17, 2025

Up until this here comparison, the BMW 3-series was undefeated in every comparo we threw at the dang thing. But this time around, the mountain of competition might be too tall for the BMW to climb.

Related Articles

AI Safety: Beyond AI Hype To Hybrid Intelligence

Forbes · 20 minutes ago

The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: GPT-4o hallucinates on 61% of questions in SimpleQA, a factuality benchmark developed by OpenAI itself, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Dario Amodei, Anthropic's CEO, called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna, once poster children for AI-first customer service, are quietly reversing course on their AI agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.

Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability.

The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox of simultaneous dependence and distrust creates a dangerous cognitive dissonance in corporate decision-making.

AI hallucinations remain a persistent and worsening challenge in 2025: artificial intelligence systems confidently generate false or misleading information that appears credible but lacks factual basis.
Recent data reveals the scale of this problem. In just the first quarter of 2025, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers submitting evidence that featured AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes from popular journalism sites, to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.

The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence: deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses.

Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems where AI handles routine processing while humans provide verification at critical decision points. This approach acknowledges a fundamental truth that the autonomous-AI evangelists ignore: AI without natural intelligence is like building a Porsche and handing it to someone without a driver's license. The autonomous vehicle industry learned this lesson the hard way.
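The colleague-in-the-loop pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Mixus's actual API: the `Draft` type, `model_answer`, `human_review`, and the 0.9 confidence cutoff are all invented for the example.

```python
from dataclasses import dataclass

# Assumed threshold for this sketch; a real system would calibrate it per task.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Draft:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def model_answer(question: str) -> Draft:
    """Stand-in for a real model call (hypothetical)."""
    return Draft(text=f"Draft answer to: {question}", confidence=0.55)

def human_review(draft: Draft) -> str:
    """Stand-in for routing the draft to a human reviewer's queue."""
    return f"[human-verified] {draft.text}"

def answer(question: str) -> str:
    draft = model_answer(question)
    if draft.confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: escalate to a human instead of shipping the draft.
        return human_review(draft)
    return draft.text
```

In practice the routing signal would come from calibrated uncertainty estimates or task-risk rules rather than a single self-reported score, but the shape stays the same: the model drafts, and a human signs off wherever the stakes or the uncertainty are high.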
After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios.

LawZero is another initiative in this direction; it promotes "scientist AI" as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic: it doesn't have agency or work autonomously, but instead responds to human input and goals. The underpinning belief is that AI should be cultivated as a global public good, developed and used safely toward human flourishing. It should be prosocial.

While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay, the gradual erosion of human decision-making capabilities, poses a systemic risk as employees become overly dependent on AI recommendations. Mass-persuasion capabilities enable sophisticated social engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries.

Forty-seven percent of business leaders name people using AI without proper oversight as one of the biggest fears in deploying AI in their organizations. This fear is well-founded: organizations implementing AI without proper governance frameworks risk not just operational failures but legal liability, regulatory scrutiny, and reputational damage.

Double literacy, investment in both human literacy (a holistic understanding of self and society) and algorithmic literacy, emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment.
Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning, uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and appropriate use cases. Together, these competencies create the foundation for responsible AI adoption.

In healthcare, hybrid systems have begun to revolutionize patient care by letting practitioners spend more time in direct patient care while AI handles routine tasks, improving care outcomes and reducing burnout. Some business leaders are also embracing the hybrid paradigm, with companies that incorporate AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.

Practical Implementation: The A-Frame Approach

If you are a business leader, you can start building AI safety capabilities in-house today using the A-Frame methodology: four interconnected practices that create accountability without stifling innovation.

Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.

Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.

Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience: systems that degrade gracefully under stress and recover quickly from failures.
This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime.

Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.

The AI safety challenge isn't primarily technical; it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double-literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice.

The alternative, rushing headlong into AI deployment without adequate safeguards, risks not just individual corporate failure but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.

Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter approach, investing in hybrid intelligence and double literacy today, will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions.

The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment, enhanced rather than replaced by artificial intelligence.
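The Accountability practice above, ownership defined before deployment rather than after failure, can be expressed as a simple guard in a deployment pipeline. This is a minimal sketch under stated assumptions; the registry, names, and error handling are invented for illustration.

```python
# Hypothetical sketch: a deployment gate that refuses to ship an AI system
# unless a named, accountable owner was registered first.

OWNERS: dict[str, str] = {}

def register_owner(system: str, owner: str) -> None:
    """Assign a specific individual responsibility for a system's outcomes."""
    OWNERS[system] = owner

def deploy(system: str) -> str:
    """Deploy only systems with a registered owner; fail loudly otherwise."""
    if system not in OWNERS:
        raise RuntimeError(f"refusing to deploy {system!r}: no accountable owner")
    return f"{system} deployed (owner: {OWNERS[system]})"
```

The point of the guard is organizational, not technical: making the pipeline fail when ownership is missing forces the accountability conversation to happen before launch, not during the post-incident review.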

Sovereignty vs. Journalism in the Belmont gives horse racing a Kentucky Derby rematch

Associated Press · 25 minutes ago

Horse racing is getting a Kentucky Derby rematch in the Belmont Stakes at Saratoga Race Course on Saturday to close out the Triple Crown. Derby winner Sovereignty and runner-up Journalism, who won the Preakness two weeks later, headline the field of eight in the Belmont. Add in Baeza, and the top three finishers from the first Saturday in May are involved.

'We're delighted to have the first three horses out of the Derby challenging each other again,' said Michael Banahan of Godolphin, which owns Sovereignty. 'It's a quality race. ... It should set up well, and may the best horse win.'

Journalism opened as the 8-5 morning-line favorite, with Sovereignty the second choice at 4-1. Journalism won the Preakness, run without Sovereignty after the owners and trainer Bill Mott opted to give their horse extra rest. The intent was to focus on the Belmont rather than chase the chance for Sovereignty to become the sport's 14th Triple Crown champion and first since Justify in 2018.

'We felt that the best thing for him, and to have a career through the whole season and maybe into next year as well, was spacing his races a little bit,' Banahan said. 'Bill Mott, who's trained horses for us for a long time, is very judicious about where he wants to place his horses. And we put a lot of faith in the recommendations that he would give us.'

The Michael McCarthy-trained Journalism is the only horse running in all three legs of the Triple Crown this year. And he is the favorite for a reason.

'Journalism is a very tough horse,' said John Shirreffs, who trains Baeza. 'One thing about Journalism, (if) he runs his race (like in) Kentucky, Pimlico, he's very tough. He's solid. So, it's going to be a very difficult horse to beat.'

Shirreffs said Baeza is emerging and developing, and he hopes the half-brother of last year's Belmont winner, Dornoch, can stride along and get past Sovereignty and Journalism this time.

'Hopefully we get out of the gate well and get a nice pace,' Shirreffs said.
'It's just how the race unfolds and him not getting into any trouble.'

Long shot Heart of Honor is running again after finishing fifth in the Preakness three weeks ago. New to the Triple Crown trail are Hill Road, Uncaged, Crudo and Rodriguez, who was scratched from the Derby with a minor foot bruise that also caused him to miss the Preakness. Banahan expects Rodriguez to go to the lead, as so many of Hall of Fame and two-time Triple Crown-winning trainer Bob Baffert's top horses do, and provide the main speed.

'That horse is going to be ready,' Chad Brown, trainer of Hill Road, said of Rodriguez. 'You can be assured of that. And it sure looks like he's by far the fastest horse in the race.'

Brown has won the Preakness twice but never the Belmont. After going to Saratoga with his parents while growing up and getting into horse racing as a result, he's hoping to end his drought at his home track.

'We have a very unique time in history where there'll be three Belmont Stakes run total at Saratoga before you'll never see another one again,' Brown said. 'So, to be part of history with that, that would be extra special.'

Driver crashes into T-Mobile store after medical emergency, MDSO says

CBS News · 25 minutes ago

A driver in his 70s was taken to the hospital Friday morning after crashing his car into a T-Mobile store, according to the Miami-Dade Sheriff's Office. Deputies said the man had gone to the store to pay his bill when he experienced a medical emergency and accidentally hit the gas pedal instead of the brake, sending his vehicle into the middle of the store.

"It was just chaos, everybody was scared, everybody was wondering what was going on," said Felix Morales, who told CBS News Miami he works at a warehouse behind the store and heard the crash. "Poof, like glass, and I said whoa, that's kind of weird, and that's what made me come over here," Morales said.

No one else injured

Xavier Thompson, who works nearby, said no one was injured inside the store because it had not yet opened. "As far as the T-Mobile people said, they were in the back. Nobody was inside. It happened prior to the five minutes when they open," Thompson told CBS News Miami.

Monica Rodriguez, another nearby worker, said she was worried for the man. "I was concerned for him because I was like, oh my God, what happened, I hope he's okay," Rodriguez said.

Man expected to recover

Authorities said the man is expected to recover. The store was deemed unsafe by the county, and there is no word on when it will reopen. CBS News Miami reported that cleanup crews spent the day working to remove the vehicle and assess the damage, which included shattered glass and a damaged entrance. The door frame remained intact. T-Mobile did not immediately respond to requests for comment.
