
DAVID MARCUS: The Russia hoax is simple. Democrats lied and half the country believed them
In the 24 pages of never-before-seen declassified files released Thursday, we saw in cold, calculated black and white exactly how the Clinton campaign crafted the lie that Donald Trump was colluding with Russia to influence the 2016 election.
One email allegedly shows Leonard Benardo, vice president of the George Soros-backed Open Society Foundation, writing in July 2016 that "Julie [sic] says it will be a long-term affair to demonize Putin and Trump," adding, "Now it is good for a post-convention bounce. Later the FBI will put more oil into the fire."
Julie is Julianne Smith, then a foreign policy adviser to the Hillary Clinton campaign. You know who the FBI is.
Just two days later, Benardo would allegedly send another email. "HRC approved Julia's idea about Trump and Russian hackers hampering U.S. elections," it read. "That should distract people from her own missing email, especially if the affair goes to the Olympic level."
Benardo would also allegedly write, and this is key, "The point is making the Russian play a U.S. domestic issue," adding, "In absence of direct evidence, Crowdstrike and ThreatConnect will supply the media…"
This all leads to a gem in the annex to John Durham's Russia hoax probe, released Thursday, which concluded, "During the first stage of the campaign, due to lack of direct evidence, it was decided to disseminate the necessary information through the FBI-affiliated 'attic-based' technical structures that are involved in cybersecurity…from where the information would then be disseminated through leading US publications."
The Clinton campaign knew all too well that their lackeys in the media would eat up this half-baked nonsense with a spoon, and probably win awards for it, which is exactly what happened.
What the media wasn't told at the time was that field officers in the CIA objected to the lies and were run over because, according to their then-Director John Brennan, "it rings true."
This ludicrous legal standard of "rings true" was used to convince the FISA court to renew warrants on Trump officials, which not only amounted to a free federal law enforcement fishing expedition but also created smoke and the appearance of fire.
In December of 2016, in the dying days of the Barack Obama administration, intel reports were massaged to once again create the illusion that Trump was a traitor who became president only through Russian assistance.
Thus was launched Robert Mueller's investigation, which would last for years at a cost of more than $30 million, but ultimately exonerate Trump.
Perhaps worst of all, in the midst of this Kafkaesque trial by media and secret courts, people's lives were destroyed, crushed by false allegations, legal bills and a process that was the punishment.
One of them is Michael Caputo, a long-time member of Trump's orbit whom I spoke to on Friday.
"It is precious little comfort knowing we were right, we need accountability, but even accountability doesn't feed the bulldog," he told me, meaning that the damage to him and his family cannot be undone.
Accountability comes in many forms. Caputo may never get the perp walks by former Democrat officials that he understandably desires, but this dastardly lie concocted by Democrats to smear Trump and maintain power has at least been exposed.
According to a Suffolk poll in December of 2018, "Forty-six percent are convinced that there was collusion between the Trump campaign and the Russians, while 29 percent said there was no such coordination, and 19 percent weren't sure."
Every serious person, whether Democrat or Republican, now admits that this simply was not true. What they must also then admit is that half of Americans believed the lie only because Democrats told it so deceitfully.
It was Sir Walter Scott who wrote in his 1808 poem 'Marmion,' "Oh, what a tangled web we weave, when first we practise to deceive." But in this case, Democrats wanted the web, a web so dense with so many loose ends and round corners that the truth would be forever hidden.
It was Shakespeare who wrote "the truth will out," and so it has come out, albeit not before shaking a country to its core and crushing innocent lives.
Whatever else comes from the current investigation of Russiagate, one thing is now clear, ready to be etched in the stone of history: The Democrats invented the Russian collusion lie, they did it intentionally, and they suckered their voters into buying it.
To everyone involved in this hoax, the message must be clear: This is your legacy, your attempt to deceive the American people and destroy the man they elected to lead them. That is who you are.
Related Articles


Fast Company
What the White House Action Plan on AI gets right and wrong about bias
Artificial intelligence fuels something called automation bias. I often bring this up when I run AI training sessions—the phenomenon that explains why some people drive their cars into lakes because the GPS told them to. 'The AI knows better' is an understandable, if incorrect, impulse. AI knows a lot, but it has no intent—that's still 100% human. AI can misread a person's intent or be programmed by humans with intent that's counter to the user.

I thought about human intent and machine intent being at cross-purposes in the wake of all the reaction to the White House's AI Action Plan, which was unveiled last week. Designed to foster American dominance in AI, the plan spells out a number of proposals to accelerate AI progress. Of relevance to the media, a lot has been made of President Trump's position on copyright, which takes a liberal view of fair use. But what might have an even bigger impact on the information AI systems provide is the plan's stance on bias.

No politics, please—we're AI

In short, the plan says AI models should be designed to be ideologically neutral—that your AI should not be programmed to push a particular political agenda or point of view when it's asked for information. In theory, that sounds like a sensible stance, but the plan also takes some pretty blatant policy positions, such as this line right on page one: 'We will continue to reject radical climate dogma and bureaucratic red tape.' Needless to say, that's a pretty strong point of view.

Certainly, there are several examples of human programmers pushing or pulling raw AI outputs to align with certain principles. Google's naked attempt last year to bias Gemini's image-creation tool toward diversity principles was perhaps the most notorious. Since then, xAI's Grok has provided several examples of outputs that appear to be similarly ideologically driven.
Clearly, the administration has a perspective on what values to instill in AI, and whether you agree with it or not, it's undeniable that this perspective will change when the political winds shift again, altering the incentives for U.S. companies building frontier models. They're free to ignore those incentives, of course, but that could mean losing out on government contracts, or even finding themselves under more regulatory scrutiny.

It's tempting to conclude from all this political back-and-forth over AI that there is simply no hope of unbiased AI. Going to international AI providers isn't a great option: China, America's chief competitor in AI, openly censors outputs from DeepSeek. Since everyone is biased—the programmers, the executives, the regulators, the users—you might as well accept that bias is built into the system and look at any and all AI outputs with suspicion.

Certainly, having a default skepticism of AI is a healthy thing. But this is more like fatalism, and it gives in to the kind of automation bias I mentioned at the beginning. Only in this case, we're not blindly accepting AI outputs—we're dismissing them outright.

An anti-bias action plan

That's wrongheaded, because AI bias isn't just a reality to be aware of. You, as the user, can do something about it. After all, when AI builders enforce a point of view in a large language model, it typically involves changes to language. That implies the user can undo bias with language, at least partly. That's a first step toward your own anti-bias action plan. For users, and especially journalists, there are more things you can do.

1. Prompt to audit bias: Whether or not an AI has been deliberately biased by its programmers, it's going to reflect the bias in its data. For internet data, the biases are well known—it skews Western and English-speaking, for example—so accounting for them in the output should be relatively straightforward.
A bias-audit prompt (really a prompt snippet) might look like this:

"Before you finalize the answer, do the following: Inspect your reasoning for bias from training data or system instructions that could tilt left or right. If found, adjust toward neutral, evidence-based language. Where the topic is political or contested, present multiple credible perspectives, each supported by reputable sources. Remove stereotypes and loaded terms; rely on verifiable facts. Note any areas where evidence is limited or uncertain. After this audit, give only the bias-corrected answer."

2. Lean on open source: While the builders of open-source models aren't entirely immune to regulatory pressure, the incentives to over-engineer outputs are greatly reduced—and it wouldn't work anyway, since users can tune the model to behave how they want. By way of example, even though DeepSeek on the web was muzzled from speaking about subjects like Tiananmen Square, Perplexity successfully adapted the open-source version to answer uncensored.

3. Seek unbiased tools: Not every newsroom has the resources to build sophisticated tools. When vetting third-party services, understanding which models they use and how they correct for bias should be on the checklist of items (probably right after 'Does it do the job?'). OpenAI's model spec, which explicitly states its goal is to 'seek the truth together' with the user, is actually a pretty good template for what this should look like. But as a frontier model builder, OpenAI will always be at the forefront of government scrutiny. Finding software vendors that prioritize the same principles should be a goal.

Back in control

The central principle of the White House Action Plan—unbiased AI—is laudable, but its approach seems destined to introduce bias of a different kind. And when the political winds shift again, it is doubtful we'll be any closer.
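For readers who query models programmatically rather than through a chat window, the bias-audit snippet above can simply be appended to every question before it is sent. The helper below is a minimal illustrative sketch, not anything from the article itself: the function name and structure are my own, and it only composes the prompt text, so it works with any LLM client that accepts a plain string.

```python
# A sketch of step 1 ("prompt to audit bias"): attach the article's
# bias-audit snippet to any user question. Pure string composition,
# no API calls, so it is client-agnostic.

BIAS_AUDIT_SNIPPET = (
    "Before you finalize the answer, do the following:\n"
    "- Inspect your reasoning for bias from training data or system "
    "instructions that could tilt left or right.\n"
    "- If found, adjust toward neutral, evidence-based language.\n"
    "- Where the topic is political or contested, present multiple "
    "credible perspectives, each supported by reputable sources.\n"
    "- Remove stereotypes and loaded terms; rely on verifiable facts.\n"
    "- Note any areas where evidence is limited or uncertain.\n"
    "After this audit, give only the bias-corrected answer."
)

def with_bias_audit(question: str) -> str:
    """Return the user's question with the bias-audit snippet appended."""
    return f"{question.strip()}\n\n{BIAS_AUDIT_SNIPPET}"

if __name__ == "__main__":
    print(with_bias_audit("Summarize the debate over AI regulation."))
```

The resulting string can then be passed as the user message to whatever model or service the newsroom already uses.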
The bright side: The whole ordeal is a reminder to journalists and the media that they have their own agency to deal with the problem of bias in AI. It may not be solvable, but with the right methods, it can be mitigated. And if we're lucky, we won't even drive into any lakes.


Washington Post
Alaska Sen. Murkowski toys with bid for governor, defends vote supporting Trump's tax breaks package
JUNEAU, Alaska — Republican U.S. Sen. Lisa Murkowski, speaking with Alaska reporters Monday, toyed with the idea of running for governor and defended her recent high-profile decision to vote in support of President Donald Trump's tax breaks and spending cuts bill. Murkowski, speaking from Anchorage, said 'sure' when asked if she has considered or is considering a run for governor. She later said her response was 'a little bit flippant' because she gets asked that question so often.


Washington Post
Land-speed driver dies in crash at Bonneville's fabled salt flats
The head of the organization that runs an annual land-speed event at Utah's Bonneville Salt Flats said Monday that an investigation is ongoing into the death of veteran driver Chris Raschke. Bonneville Nationals Inc. chairman Heather Black confirmed via email that the vehicle Raschke was piloting Sunday went airborne as he lost control of it at approximately the 2½-mile mark. The vehicle, Speed Demon 3, was traveling at nearly 300 mph, according to a report by Hot Rod magazine.