Why Musk Hit a Nerve by Saying Trump Is in the Epstein Files
Why is Elon Musk so sure President Donald Trump is in the so-called 'Epstein files'?
In a bombshell X post Thursday, Musk leveled accusations at the president, claiming that his name appears in files pertaining to disgraced financier Jeffrey Epstein's sex trafficking case.
'Time to drop the really big bomb: @realDonaldTrump is in the Epstein files. That is the real reason they have not been made public. Have a nice day, DJT!' Musk wrote Thursday. 'Mark this post for the future. The truth will come out.'
Epstein was arrested in 2019 and charged with sex trafficking of minors and conspiracy to engage in the sex trafficking of minors; he pleaded not guilty. A month after his arrest, he died by suicide in his jail cell, according to authorities.
For years, right-wing circles have called on authorities to release all documents related to the Epstein investigation, including flight logs from his private plane, client names, and inquiries into his suicide.
Though the Trump administration has consistently promised to declassify the Epstein files, it has yet to divulge any new details of Epstein's case, as Musk pointed out.
In a statement to CNN, White House press secretary Karoline Leavitt weighed in on Musk's accusations and said: 'This is an unfortunate episode from Elon, who is unhappy with the 'One, Big, Beautiful Bill' because it does not include the policies he wanted. The president is focused on passing this historic piece of legislation and making our country great again.'
The White House did not immediately respond to the Daily Beast's request for comment on Musk's accusations.
Though Trump has spent the past decade fiercely distancing himself from Epstein, the two had a documented relationship from the late 1980s to the early 2000s. In 1992, for example, they were filmed talking and laughing together at a party at Mar-a-Lago. Years later, in 1997, they were photographed together again at a Victoria's Secret Angels party.
In a 2002 interview with New York magazine, Trump even praised Epstein as a 'terrific guy.'
'I've known Jeff for 15 years. Terrific guy,' Trump said at the time. 'He's a lot of fun to be with. It is even said that he likes beautiful women as much as I do, and many of them are on the younger side. No doubt about it—Jeffrey enjoys his social life.'
Flight logs released in 2021 as part of the trial of Ghislaine Maxwell, Epstein's accomplice, also indicated that Trump flew on Epstein's plane seven times over the course of their friendship. Trump has denied this: 'I was never on Epstein's Plane, or at his 'stupid' Island,' he said in a 2024 Truth Social post.
Whatever relationship the two had, however, came to a halt in 2004 following a real estate feud, according to The Washington Post. At the time, both men were eyeing the same Palm Beach property, Maison de L'Amitié, and went head-to-head in a bidding war over it.
In the years that followed, Trump became an outspoken critic of Epstein, both in public and behind closed doors.
Bradley Edwards, an attorney who represented a number of Epstein's alleged victims, told The Conscious Resistance Network in 2018 that Trump was the 'only person' who agreed to talk to him when he was serving subpoenas during a 2009 Epstein investigation, and that Trump was 'very helpful in the information he gave.'
'He is the only person who picked up the phone and said 'Let's just talk, I'll give you as much time as you want, I'll tell you what you need to know,'' Edwards said, adding that Trump 'gave no indication whatsoever that he was involved in anything untoward.'
One of the president's former 2016 campaign aides, Sam Nunberg, also told The Washington Post that Trump had once called Epstein a 'real creep' and that he had barred him from Mar-a-Lago as a result.
When asked about Epstein following the financier's arrest in July 2019, Trump told a reporter: 'Well I knew him like everybody in Palm Beach knew him… I had a falling out with him a long time ago. I don't think I've spoken to him for 15 years. I wasn't a fan… That I can tell you. I was not a fan of his.'
A few days later, he reiterated that he 'had no idea' about Epstein's alleged crimes and added, 'Jeffrey Epstein was not somebody that I respected. I threw him out.'
Trump more recently addressed Epstein during a September 2024 interview with Lex Fridman, saying: '[Epstein] had some nice assets that he'd throw around like islands, but a lot of big people went to that island. But fortunately, I was not one of them.' The president has notably not been accused of any wrongdoing in connection with Epstein.
Epstein, however, had a different version of events.
In 2017, the financier spoke at length with author Michael Wolff as part of Wolff's research for his 2018 bestselling book Fire and Fury.
In tape recordings of their conversations, which Wolff shared with the Daily Beast in November 2024, Epstein described himself as Trump's 'closest friend for 10 years' and alleged that the two enjoyed going to 'Atlantic City to try to find girls in the casino.'
'He's a horrible human being,' Epstein, a convicted sex offender, said of Trump in the tapes. 'He does nasty things to his best friends, best friends' wives, anyone who he first tries to gain their trust and uses it to do bad things to them.'
At the time, Trump's representatives referred to the tapes as 'false smears' and 'election interference' as they were published days before the November 2024 election.
Trump's team further told the Daily Beast that Wolff is 'a disgraced writer who routinely fabricates lies in order to sell fiction books because he clearly has no morals or ethics.'
'He waited until days before the election to make outlandish false smears all in an effort to engage in blatant election interference on behalf of Kamala Harris. He's a failed journalist that is resorting to lying for attention,' they continued.
Related Articles


Forbes
AI Safety: Beyond AI Hype To Hybrid Intelligence
The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: GPT-4o hallucinates on 61% of questions in SimpleQA, a factuality benchmark developed by OpenAI, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Anthropic CEO Dario Amodei called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna, once poster children for AI-first customer service, are quietly reversing course on their AI agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.

Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability.

The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox of simultaneous dependence and distrust creates a dangerous cognitive dissonance in corporate decision-making.

AI hallucinations remain a persistent and worsening challenge in 2025: artificial intelligence systems confidently generate false or misleading information that appears credible but lacks factual basis. Recent data reveals the scale of the problem. In the first quarter of 2025 alone, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers submitting evidence that featured AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes sampled from popular journalism sites, to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.

The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence: deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses.

Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems where AI handles routine processing while humans provide verification at critical decision points.
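The mechanics of such a workflow are easy to sketch. The Python below is a minimal, hypothetical illustration of a colleague-in-the-loop gate (the names and thresholds are invented for this example, not Mixus's actual API): the AI proposes an action, and anything low-confidence or high-stakes is routed to a human for sign-off.

```python
# Minimal sketch of a "colleague-in-the-loop" workflow: the AI proposes,
# and a human verifies whenever the model's confidence is low or the
# action is high-stakes. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str        # what the AI wants to do, e.g. "refund order #123"
    confidence: float  # model-reported confidence in [0, 1]
    high_stakes: bool  # e.g. money movement, legal language, health advice

def ai_propose(task: str) -> Proposal:
    # Stand-in for a real model call; returns a canned answer for the demo.
    return Proposal(action=f"auto-reply to: {task}", confidence=0.62, high_stakes=False)

def human_review(p: Proposal) -> bool:
    # Stand-in for a real review queue; here we just prompt on the console.
    answer = input(f"Approve '{p.action}'? [y/n] ")
    return answer.strip().lower() == "y"

def handle(task: str, confidence_floor: float = 0.8) -> str:
    p = ai_propose(task)
    # Routine, high-confidence work flows straight through; anything
    # uncertain or high-stakes is escalated to a human colleague.
    if p.confidence >= confidence_floor and not p.high_stakes:
        return f"executed: {p.action}"
    if human_review(p):
        return f"executed after review: {p.action}"
    return "declined by human reviewer"

if __name__ == "__main__":
    print(handle("customer asks for a refund"))
```

The design choice that matters is the routing rule: automation handles the routine bulk, while ambiguity and risk trigger escalation rather than a confident guess.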
This approach acknowledges a fundamental truth that the autonomous AI evangelists ignore: AI without natural intelligence is like building a Porsche and giving it to people without a driver's license.

The autonomous vehicle industry learned this lesson the hard way. After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios.

LawZero is another initiative in this direction; it seeks to promote "scientist AI" as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't have agency or work autonomously, but instead behaves in response to human input and goals. The underpinning belief is that AI should be cultivated as a global public good, developed and used safely toward human flourishing. It should be prosocial.

While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay, the gradual erosion of human decision-making capabilities, poses a systemic risk as employees become overly dependent on AI recommendations. Mass persuasion capabilities enable sophisticated social engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries.

Nearly half of business leaders (47%) rank people using AI without proper oversight among their biggest fears in deploying AI in their organizations. This fear is well-founded: organizations implementing AI without proper governance frameworks risk not just operational failures but legal liability, regulatory scrutiny, and reputational damage.

Double literacy, investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy, emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment. Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning: uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and their appropriate use cases. Together, these competencies create the foundation for responsible AI adoption.

In healthcare, hybrid systems have begun to revolutionize patient care by letting practitioners spend more time in direct patient care while AI handles routine tasks, improving care outcomes and reducing burnout. Some leaders in the business world are also embracing the hybrid paradigm, with companies that incorporate AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.

Practical Implementation: The A-Frame Approach

If you are a business reader and leader, you can start building AI safety capabilities in-house today using the A-Frame methodology: four interconnected practices that create accountability without stifling innovation.

Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.

Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.

Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience: systems that degrade gracefully under stress and recover quickly from failures. This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime (a minimal retry-and-escalate sketch follows the four practices below).

Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.
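As flagged under Acceptance, here is a minimal sketch of the retry-and-escalate pattern, assuming a flaky model endpoint and an internal escalation queue (both function names are hypothetical): the system retries with backoff, then degrades gracefully to a human hand-off instead of failing outright.

```python
# Minimal sketch of "designing for resilience": the AI path is tried first,
# failures are retried within a budget, and the system degrades gracefully
# to a human escalation queue instead of failing outright. Names are
# invented for illustration, not from any particular product.
import time

def ai_answer(query: str) -> str:
    # Stand-in for a real model call that can fail or time out.
    raise TimeoutError("model endpoint unavailable")

def enqueue_for_human(query: str) -> None:
    # Stand-in for a real ticketing or review queue.
    print(f"[escalation queue] {query}")

def answer_with_fallback(query: str, retries: int = 2, backoff_s: float = 1.0) -> str:
    for attempt in range(retries + 1):
        try:
            return ai_answer(query)
        except TimeoutError:
            if attempt < retries:
                time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    # Clear escalation procedure: the request is queued for a person,
    # and the caller gets an honest interim response.
    enqueue_for_human(query)
    return "We're looking into this; a human agent will follow up."

if __name__ == "__main__":
    print(answer_with_fallback("summarize this contract clause"))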
The AI safety challenge isn't primarily technical; it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice.

The alternative, rushing headlong into AI deployment without adequate safeguards, risks not just individual corporate failure but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.

Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter approach, investing in hybrid intelligence and double literacy today, will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions.

The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment: enhanced, not replaced, by artificial intelligence.


Fox News
WATCH LIVE: AG Pam Bondi announces major criminal arrest


Fox News
AG Bondi details 'very serious charges' facing Kilmar Abrego Garcia