MAGA Melts Down as Trump Withdraws Anti-Vax Nominee at Last Minute

Yahoo · March 13, 2025

The MAGA right is seething after anti-vax conspiracy theorist Dave Weldon—Trump's pick to head the Centers for Disease Control and Prevention—had his nomination pulled at the last minute.
Weldon, a physician and former representative, was preparing for his Senate confirmation hearing on Thursday when the call to withdraw his nomination came down, according to Axios. Even Robert F. Kennedy Jr. expressed doubts about Weldon's confirmation, a source told Axios. But the far right is taking Weldon's axing personally.
'Weldon—who is 'Make America Healthy Again'—his nomination was pulled. The rumor is about they didn't want questioning on measles,' Steve Bannon said on his War Room podcast. 'That would be unsatisfactory. You can't pull him over measles. No way. Impossible. So we gotta get to the bottom of that.'
'This just puts the last nail in the coffin. CDC is no longer a legitimate agency. No one is going to believe anything they say anymore,' one far-right account posted on X. 'Just SHUT IT DOWN and let the states make their own health recommendations.'
'This is absolutely devastating for MAHA. The antisemites are out for blood, and Trump is showing weakness in the last area he should: commitment to @SecKennedy agenda. It's beginning to look very bleak,' said another.
'Dave Weldon, a good man, no longer in the running for CDC. Weldon recognizes the problem with mercury in vaccines, supports parents who do not want to have their newborns vaxxed vs STDs, and drafted the 'Weldon Amendment' protecting physicians of conscience. If not Dave-who?' one supporter opined.
'Personally, I'm devastated to hear this news. Dave Weldon has been a figure in this fight before RFKjr. He's remained out of the spotlight since he left congress, but the groundwork he laid made those of us with vax injured children hopeful when Trump nominated him,' yet another MAHA supporter wrote on X. 'This is such a profound loss. Whoever they get won't be nearly as aware & committed to doing the research that should have been done decades ago. The autism epidemic marches on. Sad day.'


Related Articles

Behind the Curtain: The scariest AI reality

Axios · 21 minutes ago

The wildest, scariest, indisputable truth about AI's large language models is that the companies building them don't know exactly why or how they work. Sit with that for a moment. The most powerful companies, racing to build the most powerful superhuman intelligence capabilities — ones they readily admit occasionally go rogue to make things up, or even threaten their users — don't know why their machines do what they do.

Why it matters: With the companies pouring hundreds of billions of dollars into willing superhuman intelligence quickly into existence, and Washington doing nothing to slow or police them, it seems worth dissecting this Great Unknown.

None of the AI companies dispute this. They marvel at the mystery — and muse about it publicly. They're working feverishly to better understand it. They argue you don't need to fully understand a technology to tame or trust it.

Two years ago, Axios managing editor for tech Scott Rosenberg wrote a story, "AI's scariest mystery," saying it's common knowledge among AI developers that they can't always explain or predict their systems' behavior. And that's more true than ever.

Yet there's no sign that the government, the companies or the general public will demand any deeper understanding — or scrutiny — of building a technology with capabilities beyond human understanding. They're convinced the race to beat China to the most advanced LLMs warrants the risk of the Great Unknown.

The House, despite knowing so little about AI, tucked language into President Trump's "Big, Beautiful Bill" that would prohibit states and localities from enforcing any AI regulations for 10 years. The Senate is considering limits on the provision.

Neither the AI companies nor Congress knows how powerful AI will be a year from now, much less a decade from now.

The big picture: Our purpose with this column isn't to be alarmist, or "doomers." It's to clinically explain why the inner workings of superhuman intelligence models are a black box, even to the technology's creators. We'll also show, in their own words, how the CEOs and founders of the largest AI companies all agree: it's a black box.

Let's start with a basic overview of how LLMs work, to better explain the Great Unknown:

LLMs — including OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini — aren't traditional software systems following clear, human-written instructions, like Microsoft Word. In the case of Word, it does precisely what it's engineered to do. Instead, LLMs are massive neural networks — like a brain — that ingest massive amounts of information (much of the internet) to learn to generate answers. The engineers know what they're setting in motion, and what data sources they draw on. But the LLM's size — the sheer inhuman number of variables in each choice of "best next word" it makes — means even the experts can't explain exactly why it chooses to say anything in particular.

We asked ChatGPT to explain this (and a human at OpenAI confirmed its accuracy): "We can observe what an LLM outputs, but the process by which it decides on a response is largely opaque. As OpenAI's researchers bluntly put it, 'we have not yet developed human-understandable explanations for why the model generates particular outputs.'"

"In fact," ChatGPT continued, "OpenAI admitted that when they tweaked their model architecture in GPT-4, 'more research is needed' to understand why certain versions started hallucinating more than earlier versions — a surprising, unintended behavior even its creators couldn't fully diagnose."
Anthropic — which just released Claude 4, the latest model of its LLM, with great fanfare — admitted it was unsure why Claude, when given access to fictional emails during safety testing, threatened to blackmail an engineer over a supposed extramarital affair. This was part of responsible safety testing — but Anthropic can't fully explain the irresponsible action.

Again, sit with that: The company doesn't know why its machine went rogue and malicious. And, in truth, the creators don't really know how smart or independent the LLMs could grow. Anthropic even said Claude 4 is powerful enough to pose a greater risk of being used to develop nuclear or chemical weapons.

OpenAI's Sam Altman and others toss around the tame word "interpretability" to describe the challenge. "We certainly have not solved interpretability," Altman told a summit in Geneva last year. What Altman and others mean is that they can't interpret the why: Why are LLMs doing what they're doing?

Anthropic CEO Dario Amodei, in an essay in April called "The Urgency of Interpretability," warned: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology." Amodei called this a serious risk to humanity — yet his company keeps boasting of more powerful models nearing superhuman capabilities.

Anthropic has been studying the interpretability issue for years, and Amodei has been vocal about warning that it's important to solve. In a statement for this story, Anthropic said: "Understanding how AI works is an urgent issue to solve. It's core to deploying safe AI models and unlocking [AI's] full potential in accelerating scientific discovery and technological development. We have a dedicated research team focused on solving this issue, and they've made significant strides in moving the industry's understanding of the inner workings of AI forward. It's crucial we understand how AI works before it radically transforms our global economy and everyday lives." (Read a paper Anthropic published last year, "Mapping the Mind of a Large Language Model.")

Elon Musk has warned for years that AI presents a civilizational risk. In other words, he literally thinks it could destroy humanity, and has said as much. Yet Musk is pouring billions into his own LLM, called Grok. "I think AI is a significant existential threat," Musk said in Riyadh, Saudi Arabia, last fall. There's a 10%-20% chance "that it goes bad."

Reality check: Apple published a paper last week, "The Illusion of Thinking," concluding that even the most advanced AI reasoning models don't really "think," and can fail when stress-tested. The study found that state-of-the-art models (OpenAI's o3-mini, DeepSeek R1 and Anthropic's Claude 3.7 Sonnet) still fail to develop generalizable problem-solving capabilities, with accuracy ultimately collapsing to zero "beyond certain complexities."

But a new report by AI researchers, including former OpenAI employees, called "AI 2027," explains how the Great Unknown could, in theory, turn catastrophic in less than two years. The report is long, and often too technical for casual readers to fully grasp. It's wholly speculative, though built on current data about how fast the models are improving. It's being widely read inside the AI companies. It captures the belief — or fear — that LLMs could one day think for themselves and start to act on their own.
Our purpose isn't to alarm or sound doomy. Rather, you should know what the people building these models talk about incessantly. You can dismiss it as hype or hysteria. But researchers at all these companies worry that LLMs, because we don't fully understand them, could outsmart their human creators and go rogue.

In the AI 2027 report, the authors warn that competition with China will push LLMs potentially beyond human control, because no one will want to slow progress even if they see signs of acute danger.

The safe-landing theory: Google's Sundar Pichai — and really all of the big AI company CEOs — argue that humans will learn to better understand how these machines work and find clever, if yet unknown, ways to control them and "improve lives." The companies all have big research and safety teams, and a huge incentive to tame the technologies if they ever want to realize their full value.

Popular Stephen Starr restaurants boycotted by Democrats

Axios · 22 minutes ago

Top Democrats in the House and Senate are boycotting hot Washington, D.C., restaurants, including several owned by famed Philadelphia restaurateur Stephen Starr, over labor disputes.

Why it matters: The targeted restaurants in Starr's empire include some of the buzziest spots for Democratic fundraisers.

Driving the news: More than 50 House and Senate Democrats have signed onto Unite Here Local 25's pledge to avoid six D.C. venues.

Zoom in: Starr, who is a Democratic donor, is facing boycotts of his Le Diplomate, Osteria Mozza and The Occidental. The other three boycotted restaurants were founded by chef Ashok Bajaj of Knightsbridge Restaurant Group.

The list: Among the signers are some of Democrats' top fundraisers and biggest names, including Rep. Alexandria Ocasio-Cortez (D-N.Y.) and former House Speaker Nancy Pelosi (D-Calif.). Sens. Elizabeth Warren (D-Mass.) and Bernie Sanders (I-Vt.) are also on the list. Meanwhile, Philly Reps. Brendan Boyle, Dwight Evans and Mary Gay Scanlon signed the boycott list, per Unite Here's website. U.S. Sen. John Fetterman and Philly-area Rep. Madeleine Dean were not on the pledge list as of Friday.

Between the lines: Political groups and candidates have spent thousands of dollars at those spots over the past year, federal campaign records show. Former President Obama and Amazon founder Jeff Bezos made headlines when they dined at Osteria Mozza in January. Then-President Biden was a repeat customer at Le Diplomate during his presidency.

What they're saying: Rep. Greg Casar (D-Texas), the chair of the Congressional Progressive Caucus, told Axios: "We can have big policy debates, but we also have to show the American people some concrete examples." He added: "This is our opportunity when we're here in Washington, D.C., to not just go vote in the Capitol but actually go out in the community and make a difference." "We can say that all members on the list are personally boycotting," Benjy Cannon, a spokesperson for the union, told Axios in a message. "Many of them have been meeting personally with STARR and Knightsbridge workers all year."

The other side: "Local 25's call for a boycott is baseless," Starr restaurants said in a statement. "A boycott of any kind can result in lost hours, wages, and tips that hardworking employees rely upon. It is unfortunate that an organization that claims to want to represent employees would call for an action that would harm them." "We respect our employees' wishes," Bajaj said. "How many of these congress members even know themselves that they're signing?"

Zoom out: Starr's restaurant group has accused Unite Here Local 25 of overly aggressive tactics. That includes union reps showing up with petitions outside employees' homes, leading one bartender to sign it even though she planned to vote against a union, as Eater reported in February. Francisco López, a Le Diplomate server of five years, told Axios that some employees are holding counterprotests against the union.

Scoop: DNC launches live, daily show on YouTube

Axios · an hour ago

The Democratic National Committee is launching its first-ever live, daily show on YouTube, Axios has learned.

Why it matters: The party has been reeling since its losses in the 2024 elections, but it's counting on growing discontent with President Trump to help fuel interest in the show. Party officials say the show, which will be called the "Daily Blueprint" and kicks off Monday at 10 a.m. ET, is aimed at focusing Democrats' messaging and highlighting how Democrats are countering Trump's moves.

Zoom in: The show is launching as Trump and his allies have made new forms of media a focus — on the campaign trail and in the White House — through interviews with popular podcasters and briefings for social media influencers. "The launch of the Daily Blueprint is an exciting new step for the Democratic Party — it cements our commitment to meet this moment and innovate the ways we get our message across in a new media landscape," Democratic National Committee chair Ken Martin said in a statement first provided to Axios. The show debuts as party leaders are dealing with discontent within the ranks. Over the weekend, Politico obtained a recording of Martin expressing uncertainty about wanting to lead the party because of the infighting sparked by vice chair David Hogg's push to help elect younger Democrats — including primary challengers to incumbents in solidly blue districts.

The details: The "Daily Blueprint" will go live for about 15 minutes each weekday. It will be part of the DNC's War Room efforts to combat Trump.
