
German court acquits satirist over social media post following Trump assassination attempt
In a quickly deleted post under his alias 'El Hotzo' on X in July last year, Sebastian Hotz drew a parallel between Trump and 'the last bus' and wrote 'unfortunately just missed.' In a follow-up post, he wrote: 'I find it absolutely fantastic when fascists die.'
A gunman opened fire at a rally in Butler, Pennsylvania, while Trump was campaigning for president last July, grazing Trump's ear and killing one of his supporters in the crowd. Trump went on to win the White House in November.
Prosecutors charged Hotz with approving of criminal offenses. At a one-day trial at the Tiergarten district court in Berlin, prosecutors called for the 29-year-old to be handed a 6,000-euro ($7,030) fine. They argued that the posts fell into the category of hate crimes and, because Hotz has nearly 740,000 followers on X, could disturb the public peace, German news agency dpa reported.
Hotz argued that what a satirist says should be understood as a joke, and that 'playing with provocation' is his job.
Judge Andrea Wilms said in her ruling that Hotz's post was satire that should go unpunished, even if the comments may have been tasteless. She argued that no one would feel called upon to commit acts of violence by 'such clearly satirical utterances,' according to a court statement.
The German Journalists' Association earlier this week criticized the trial as excessive and said that the case should be closed, arguing that satirical freedom should be interpreted generously. It noted that public broadcaster RBB had already ended its collaboration with Hotz as a result of the post.

Related Articles


San Francisco Chronicle
Creating realistic deepfakes is getting easier than ever. Fighting back may take even more AI
WASHINGTON (AP) — The phone rings. It's the secretary of state calling. Or is it?
For Washington insiders, seeing and hearing is no longer believing, thanks to a spate of recent incidents involving deepfakes impersonating top officials in President Donald Trump's administration.
Digital fakes are coming for corporate America, too, as criminal gangs and hackers associated with adversaries including North Korea use synthetic video and audio to impersonate CEOs and low-level job candidates to gain access to critical systems or business secrets.
Thanks to advances in artificial intelligence, creating realistic deepfakes is easier than ever, causing security problems for governments, businesses and private individuals and making trust the most valuable currency of the digital age. Responding to the challenge will require laws, better digital literacy and technical solutions that fight AI with more AI.
'As humans, we are remarkably susceptible to deception,' said Vijay Balasubramaniyan, CEO and founder of the tech firm Pindrop Security. But he believes solutions to the challenge of deepfakes may be within reach: 'We are going to fight back.'
This summer, someone used AI to create a deepfake of Secretary of State Marco Rubio in an attempt to reach out to foreign ministers, a U.S. senator and a governor over text, voice mail and the Signal messaging app. In May someone impersonated Trump's chief of staff, Susie Wiles.
Another phony Rubio had popped up in a deepfake earlier this year, saying he wanted to cut off Ukraine's access to Elon Musk's Starlink internet service. Ukraine's government later rebutted the false claim.
The national security implications are huge: People who think they're chatting with Rubio or Wiles, for instance, might discuss sensitive information about diplomatic negotiations or military strategy.
'You're either trying to extract sensitive secrets or competitive information, or you're going after access to an email server or other sensitive network,' Kinny Chan, CEO of the cybersecurity firm QiD, said of the possible motivations.
Synthetic media can also aim to alter behavior. Last year, Democratic voters in New Hampshire received a robocall urging them not to vote in the state's upcoming primary. The voice on the call sounded suspiciously like then-President Joe Biden but was actually created using AI.
Their ability to deceive makes AI deepfakes a potent weapon for foreign actors. Both Russia and China have used disinformation and propaganda directed at Americans as a way of undermining trust in democratic alliances and institutions.
Steven Kramer, the political consultant who admitted sending the fake Biden robocalls, said he wanted to send a message about the dangers deepfakes pose to the American political system. Kramer was acquitted last month of charges of voter suppression and impersonating a candidate.
'I did what I did for $500,' Kramer said. 'Can you imagine what would happen if the Chinese government decided to do this?'
Scammers target the financial industry with deepfakes
The greater availability and sophistication of the programs mean deepfakes are increasingly used for corporate espionage and garden-variety fraud.
'The financial industry is right in the crosshairs,' said Jennifer Ewbank, a former deputy director of the CIA who worked on cybersecurity and digital threats. 'Even individuals who know each other have been convinced to transfer vast sums of money.'
In the context of corporate espionage, deepfakes can be used to impersonate CEOs asking employees to hand over passwords or routing numbers.
Deepfakes can also allow scammers to apply for jobs — and even do them — under an assumed or fake identity. For some this is a way to access sensitive networks, to steal secrets or to install ransomware. Others just want the work and may be working a few similar jobs at different companies at the same time.
Authorities in the U.S. have said that thousands of North Koreans with information technology skills have been dispatched to live abroad, using stolen identities to obtain jobs at tech firms in the U.S. and elsewhere. The workers get access to company networks as well as a paycheck. In some cases, the workers install ransomware that can later be used to extort even more money. The schemes have generated billions of dollars for the North Korean government.
Within three years, as many as 1 in 4 job applications is expected to be fake, according to research from Adaptive Security, a cybersecurity company.
'We've entered an era where anyone with a laptop and access to an open-source model can convincingly impersonate a real person,' said Brian Long, Adaptive's CEO. 'It's no longer about hacking systems — it's about hacking trust.'
Experts deploy AI to fight back against AI
Researchers, public policy experts and technology companies are now investigating the best ways of addressing the economic, political and social challenges posed by deepfakes.
New regulations could require tech companies to do more to identify, label and potentially remove deepfakes on their platforms. Lawmakers could also impose greater penalties on those who use digital technology to deceive others — if they can be caught. Greater investments in digital literacy could also boost people's immunity to online deception by teaching them ways to spot fake media and avoid falling prey to scammers.
The best tool for catching AI may be another AI program, one trained to sniff out the tiny flaws in deepfakes that would go unnoticed by a person. Systems like Pindrop's analyze millions of datapoints in any person's speech to quickly identify irregularities. The system can be used during job interviews or other video conferences to detect if the person is using voice cloning software, for instance.
Similar programs may one day be commonplace, running in the background as people chat with colleagues and loved ones online. Someday, deepfakes may go the way of email spam, a technological challenge that once threatened to upend the usefulness of email, said Balasubramaniyan, Pindrop's CEO.


The Hill
Watch live: Trump meets UK prime minister
President Trump will meet with British Prime Minister Keir Starmer for 'wide-ranging talks' on Monday. Trump and Starmer are expected to discuss the implementation of the U.S.-U.K. trade deal, agreed to in May, the prime minister's office said Sunday. The struggle to reach a ceasefire between Israel and Hamas, the hunger crisis in Gaza and the war in Ukraine are also on the table as the two leaders meet in Scotland. The event is scheduled to begin at 7 a.m. EDT.


The Hill
Watch out, Google: Trump should move online in his fight against left-wing bias
Opinions on President Trump's budget cuts to institutions like the Department of Education, NPR and PBS tend to differ along partisan lines. But the intent behind these actions is clear: to dismantle longstanding ideological bias within publicly funded educational and media institutions.
However, if the administration's objective is genuinely to root out institutional bias, its scope must extend beyond traditional media. Trump must also confront the digital giants — most notably, Google, whose influence over public discourse dwarfs that of any single news outlet or federal program.
Google receives substantial financial benefit from federal contracts and partnerships. It has also repeatedly faced scrutiny for partisan behavior and the suppression of conservative voices. While some argue that it is the only realistic option for search, there are alternatives that offer a more balanced experience and directly address concerns about partisan filtering.
In 2019, former Google engineer Zach Vorhies leaked more than 950 internal documents exposing ideological manipulation within the company. These documents, shared with the U.S. Department of Justice, revealed a troubling ecosystem of 'blacklists,' manual overrides and algorithmic demotions specifically targeting such right-leaning news sources as Newsmax and The Western Journal.
Public perception echoes these concerns. A Pew Research Center survey revealed that 73 percent of Americans believe social media platforms and browsers censor political views. Among Republicans, the figure rose to a staggering 90 percent.
While partisan suspicion alone is insufficient to prove systemic bias, academic research adds weight to the claim. Robert Epstein, a behavioral psychologist with credentials from Harvard and former editor-in-chief of Psychology Today, provided peer-reviewed findings to the U.S. Senate indicating that Google's search manipulation may have influenced up to 2.6 million votes in favor of Hillary Clinton during the 2016 election.
Epstein's research reveals how subtle algorithmic biases in search results can profoundly shift voter preferences; in certain demographic groups, he found, that shift can climb to an astonishing 80 percent. This influence is particularly dangerous in close elections, where even a small nudge — between 4 percent and 8 percent — could determine the winner. He emphasized that such influence operates below the threshold of awareness, undetectable by users and immune to oversight, making it one of the most powerful and least accountable forms of political persuasion in the digital age.
To be clear, Epstein had no political motive behind his findings. He was a supporter of the Clinton campaign in the 2016 election, said he has never supported a conservative candidate, and has remained center-left throughout his life.
More recent audits show the trend continues. In 2024, AllSides conducted a systematic review of Google's election-related search results and found that 65 percent were geographically mismatched. AllSides concluded that this misdirection not only limited access to localized, relevant information but also diminished the digital presence of conservative voices.
Yet despite these troubling findings, Google's footprint within the federal government continues to grow. In July 2025, the tech giant's public-sector arm secured a Department of Defense contract worth up to $200 million for artificial intelligence services. Google also participates in the Joint Warfighter Cloud Capability initiative, a $9 billion cloud infrastructure project shared with Amazon and Microsoft. Through the General Services Administration, Google supplies Workspace tools to numerous federal agencies under a contract projected to save $2 billion over three years. Collaborations with DARPA, NASA and the Department of Energy further entrench Google within critical government operations.
Google isn't the only search engine available, but it still dominates the market, accounting for roughly 90 percent of global usage. That said, there are meaningful alternatives worth considering. Luxxle, for example, is a privacy-focused search engine that gives users greater control over their data and the ideological slant of the content they see. Unlike Google, it doesn't track searches, monitor user behavior or build consumer profiles.
If the Trump administration truly aims to uphold ideological neutrality and preserve intellectual freedom, cutting off funding to biased government institutions is just the first step. The greater challenge lies in confronting powerful private entities like Google, which function as modern-day gatekeepers of public discourse.