Watch live: Trump gives commencement address at West Point

Yahoo | 24-05-2025

President Trump is set to deliver the commencement address at the U.S. Military Academy graduation ceremony on Saturday.
His speech to West Point graduates comes after tensions between Trump and higher education institutions heated up this week. It also follows the administration's crackdown on diversity, equity and inclusion (DEI) programs and hiring practices at schools and federal agencies, including military academies.
In his first 100 days, the president was also quick to issue executive actions to transform the Defense Department.
The event is scheduled to begin at 10:15 a.m. EDT.
Watch the live video above.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.


Related Articles

AI Safety: Beyond AI Hype To Hybrid Intelligence

Forbes | 17 minutes ago

The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: ChatGPT-4o shows a 61% hallucination rate on SimpleQA, a benchmark developed by OpenAI, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Anthropic CEO Dario Amodei called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna — once poster children for AI-first customer service — are quietly reversing course on their AI-agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.

Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability. The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox — simultaneous dependence and distrust — creates a dangerous cognitive dissonance in corporate decision-making.

AI hallucinations remain a persistent and worsening challenge in 2025: artificial intelligence systems confidently generate false or misleading information that appears credible but lacks factual basis. Recent data reveals the scale of the problem. In just the first quarter of 2025, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers submitting evidence that featured AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes from popular journalism sites, to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.

The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence — deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses. Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems where AI handles routine processing while humans provide verification at critical decision points. This approach acknowledges a fundamental truth that the autonomous AI evangelists ignore: AI without natural intelligence is like building a Porsche and giving it to people without a driver's license.

The autonomous vehicle industry learned this lesson the hard way. After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios.

LawZero is another initiative in this direction; it seeks to promote "scientist AI" as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't have agency or work autonomously, but instead behaves in response to human input and goals. The underpinning belief is that AI should be cultivated as a global public good — developed and used safely towards human flourishing. It should be prosocial.

While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay — the gradual erosion of human decision-making capabilities — poses a systemic risk as employees become overly dependent on AI recommendations. Mass persuasion capabilities enable sophisticated social engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries. And 47% of business leaders name people using AI without proper oversight as one of their biggest fears in deploying AI in their organizations. This fear is well-founded: organizations implementing AI without proper governance frameworks risk not just operational failures, but legal liability, regulatory scrutiny, and reputational damage.

Double literacy — investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy — emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment. Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning — uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and appropriate use cases. Together, these competencies create the foundation for responsible AI adoption.

In healthcare, hybrid systems have begun to revolutionize patient care by enabling practitioners to spend more time in direct patient care while AI handles routine tasks, improving care outcomes and reducing burnout. Some leaders in the business world are also embracing the hybrid paradigm, with companies that incorporate AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.

Practical Implementation: The A-Frame Approach

If you are a business reader and leader, you can start building AI safety capabilities in-house today, using the A-Frame methodology: four interconnected practices that create accountability without stifling innovation.

Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.

Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.

Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience — systems that degrade gracefully under stress and recover quickly from failures. This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime.

Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.

The AI safety challenge isn't primarily technical — it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice. The alternative — rushing headlong into AI deployment without adequate safeguards — risks not just individual corporate failure, but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.

Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter approach — investing in hybrid intelligence and double literacy today — will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions. The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment — enhanced, not replaced, by artificial intelligence.
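The "colleague-in-the-loop" pattern described above reduces to a simple control-flow decision: let the AI answer routine requests, but route anything it is not confident about to a person before it reaches a customer. The following Python snippet is a minimal sketch of that gate, assuming a self-reported confidence score and hypothetical generate and human_review callables; it illustrates the general pattern only, not Mixus's platform or any vendor's API.

    from dataclasses import dataclass
    from typing import Callable

    # Hypothetical threshold below which an AI draft is escalated to a human reviewer.
    CONFIDENCE_THRESHOLD = 0.85


    @dataclass
    class AIDraft:
        """An AI-generated answer plus the model's self-reported confidence (0.0 to 1.0)."""
        answer: str
        confidence: float


    def respond(
        query: str,
        generate: Callable[[str], AIDraft],
        human_review: Callable[[str, AIDraft], str],
    ) -> str:
        """Route a query through the AI, escalating low-confidence drafts to a person.

        `generate` stands in for any model call; `human_review` stands in for a
        review queue or ticketing step. Both are placeholders for illustration.
        """
        draft = generate(query)
        if draft.confidence >= CONFIDENCE_THRESHOLD:
            # High confidence: the AI's answer is returned directly.
            return draft.answer
        # Low confidence: a human verifies or rewrites the answer before it ships.
        return human_review(query, draft)


    if __name__ == "__main__":
        # Toy stand-ins that demonstrate the control flow.
        def fake_model(query: str) -> AIDraft:
            return AIDraft(answer=f"Automated reply to: {query}", confidence=0.60)

        def fake_reviewer(query: str, draft: AIDraft) -> str:
            return f"[human-verified] {draft.answer}"

        print(respond("When does my refund arrive?", fake_model, fake_reviewer))

In practice, the confidence signal might come from model log-probabilities, a separate verifier model, or business rules that mark certain request types as always requiring human review; the point is that the escalation path is designed in from the start rather than bolted on after a failure.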

GOP lawmakers stick with Trump in messy Musk breakup

Politico | 21 minutes ago

Amid the messy ongoing divorce between the president and the world's richest man, this much is already clear: Donald Trump has sole custody of the House GOP. Republican lawmakers are making clear that, if forced to choose, it's Trump — not Elon Musk — they're sticking by as leaders race to contain the fallout for their 'one big, beautiful bill.'

Even Rep. Marjorie Taylor Greene of Georgia, who helms a House panel inspired by Musk's Department of Government Efficiency initiative, blasted Musk's public attacks on Trump as 'unwarranted' and criticized his 'lashing out on the internet.' 'America voted for Donald Trump on Nov. 4, 2024 — every single vote mattered just as much as the other,' Greene said in a brief interview. 'And whether it was $1 that was donated or hundreds of millions of dollars, the way I see it, everybody's the same.'

Like many Americans, GOP members watched Thursday's online exchange with a sense of car-crash-like fascination. Many shared that they hoped Musk and Trump could somehow patch things up. But many — including some of the former DOGE chief's biggest backers on Capitol Hill — were wholly unsurprised to see the billionaire suddenly cut down to size after months of chatter about who was really calling the shots at the White House. 'It's President Trump, not President Musk,' said one lawmaker granted anonymity to speak frankly about prevailing opinions inside the House GOP.

Speaker Mike Johnson made no secret of where he stands on the public breakup. He told reporters Friday that he hoped the two men 'reconcile' and that it would be 'good for the party and the country if all this worked out.' But in nearly the same breath, Johnson quickly reaffirmed his allegiance to the president and issued a warning to Musk. 'Do not doubt, do not second-guess and don't ever challenge the president of the United States, Donald Trump,' Johnson said. 'He is the leader of the party. He is the most consequential political figure of this generation and probably the modern era. And he's doing an excellent job for the people.'

Other House Republicans concurred with the speaker's assessment Friday, even as they faced the looming threat of Musk targeting them in the upcoming midterms or at least pulling back on his political giving after pouring more than $250 million into the 2024 election on behalf of Trump and the GOP ticket. 'I think it's unfortunate,' said Rep. Tim Moore (R-N.C.) of the breakup. 'But Donald Trump was elected by a majority of the American people.'

Rep. Warren Davidson of Ohio, who was one of only two Republicans to oppose Trump's megabill in the House last month, also made clear he stood with the president over Musk. 'He does not have a flight mode — he's fight, fight, fight … and he's been pretty measured,' Davidson said of Trump. 'I think Elon Musk looked a little out of control. And hopefully he gets back and grounded.'

GOP leaders who have spent weeks cajoling their members to vote for the sprawling domestic-policy bill hardly hid their feelings as Musk continued to bash the legislation online, even calling on Americans to call their representatives in an effort to tank it. 'Frankly, it's united Republicans even more to go and defend the great things that are in this bill — and once it's passed and signed into law by August, September, you're going to see this economy turning around like nothing we've ever seen,' Majority Leader Steve Scalise said in a brief interview Friday. 'I'll be waiting for all those people who said the opposite to admit that they were wrong,' Scalise added. 'But I'm not expecting that to happen.'

A few Republicans are still trying to walk a fine line by embracing both Trump and Musk — especially some fiscal hawks who believe Musk is right about the megabill adding trillions to the national debt. 'I think Elon has some valid points about the bill, concerns that myself and a handful of others were working to address up until the passage of it,' Rep. Michael Cloud (R-Texas) said in an interview. 'I think that'll make the bill stronger. I think it'll help our standing with the American people.' Both Trump and Musk 'have paid a tremendous price personally for this country,' Cloud added. 'And them working together is certainly far better for the country.'

Notably, House Judiciary Chair Jim Jordan, a key Musk ally on the Hill, declined to engage Thursday when asked about the burgeoning feud. Instead, the Ohio Republican responded by praising the megabill Musk had moved to tank.

Democrats, for their part, watched the unfolding and public breakup with surprise and a heavy dose of schadenfreude. 'There are no good guys in a fight like this,' said Rep. Jared Huffman (D-Calif.). 'You just eat some popcorn and watch the show.'

Freedom Caucus warns it will ‘not accept' Senate changes on green energy tax credits

The Hill | 21 minutes ago

The conservative House Freedom Caucus said on Friday that it would 'not accept' changes that 'water down' its cuts to green energy tax credits as the Senate weighs whether to alter the legislation.

The House version of the 'big, beautiful bill' would make drastic changes to tax credits for low-carbon energy sources passed in the Democrats' 2022 Inflation Reduction Act (IRA). Climate-friendly energy projects, including wind and solar, would only be able to qualify for the credits under the House bill if they begin construction within 60 days of the bill's enactment. This brief window would likely make many projects ineligible for the credits and is expected to significantly hamstring the development of new renewable power.

In a post on social media on Friday, the Freedom Caucus warned the Senate against loosening that restriction or others included in the bill. 'We want to be crystal clear: if the Senate attempts to water down, strip out, or walk back the hard-fought spending reductions and IRA Green New Scam rollbacks achieved in this legislation, we will not accept it,' said the post, which was attributed to the Freedom Caucus's board. 'The House Freedom Caucus Board will stand united holding the line. The American people didn't send us here to cave to the swamp — they sent us here to change it,' they added.

The Senate has been widely expected to consider changes that could slow the rapid elimination of the tax credits passed under the House version of Trump's 'big beautiful bill.' Republican Sens. Lisa Murkowski (Alaska), Thom Tillis (N.C.), Jerry Moran (Kan.) and John Curtis (Utah) released a letter warning against a 'full scale' repeal of the tax credits. Senate Republicans can only afford three defections and still pass their bill.

On Friday, a group of 13 House GOP moderates released a letter calling on Senate leadership 'to substantively and strategically improve clean energy tax credit provisions' in the legislation. 'We believe the Senate now has a critical opportunity to restore common sense and deliver a truly pro-energy growth final bill that protects taxpayers while also unleashing the potential of U.S. energy producers, manufacturers, and workers,' said the letter, which was led by Reps. Jen Kiggans (R-Va.) and Brian Fitzpatrick (R-Pa.).

Altogether, the letters illustrate what could be a tough task ahead for Republican leadership as they look to find a measure that will keep at least 50 senators on board and appease the House.

Emily Brooks contributed.
