Recent antisemitic attacks worry, embolden central Ohio Jewish community
COLUMBUS, Ohio (WCMH) — On Sunday, a group in Boulder, Colorado, raising awareness for hostages held in Gaza was attacked, leaving 12 people injured.
Now, local groups are increasing their security measures following the latest in a string of attacks against Jewish people and institutions.
'It's really beautiful when people can come together for a zero tolerance of hate and to combat antisemitism,' Julie Tilson Stanley, JewishColumbus president and CEO, said. 'And we're seeing that in Columbus.'
Many in the Jewish community see these incidents as signs of growing antisemitism in the United States. According to JewishColumbus, this uptick in violence is a dangerous reminder of the consequences of unchecked hate and antisemitism, leaving the community shaken but undeterred.
'While it is a scary time, it's also a time of resilience and hope and action,' Tilson Stanley said.
There have been other high-profile antisemitic attacks in recent weeks, including the targeted killings of Israeli embassy staff in Washington, D.C.
'People are feeling uneasy and they are anxious about going about their lives and even wearing a Jewish star or some sort of semblance of showing that they are Jewish,' Tilson Stanley said.
In an email sent to the local Jewish community, JewishColumbus leaders said they've increased security.
'What that means is having officers present, as well as extra patrols, just making sure they are making the rounds at different institutions to ensure safety and security,' Tilson Stanley said.
She said JewishColumbus has hired a chief security officer who, alongside its security director, communicates with all of the Jewish institutions in town.
'Those two have expertise in counterterrorism and what it means to really secure a community and do so across central Ohio,' Tilson Stanley said.
JewishColumbus is also working closely with local, state and federal law enforcement.
Bexley Police Chief Gary Lewis shared a statement on his department's efforts:
'The Bexley Police Department is committed to public safety, and in response to the recent incidents which have occurred in our nation targeting the Jewish community, we have increased our presence and efforts geared towards keeping everyone safe. We continue to work with our local, state, and federal partners, such as the FBI JTTF and leadership with JewishColumbus.'
'When we are in a moment of fear, we know that how we can get through this is making sure we communicate, because a safe Jewish Columbus is a safe Columbus,' Tilson Stanley said.
The recent attacks have shaken the community, but also brought them together, she said.
'Our Jewish clergy across Columbus work hard to ensure the safety of their populations, while also working with public school superintendents across central Ohio and with interfaith clergy of every religion, really trying to understand and educate,' Tilson Stanley said.
JewishColumbus encourages the community to increase their situational awareness and contact police if something seems suspicious. If what you see threatens physical harm, the organization advises: run until you're in a safe place; hide by denying the attacker access to and awareness of you; or, if needed, fight to save your life or the lives of those near you. Individuals can reach out to the JewishColumbus security team with any concerns.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
Related Articles


Forbes
18 minutes ago
AI Safety: Beyond AI Hype To Hybrid Intelligence
(Image: autonomous electric cars with artificial intelligence self-driving on a metropolis road, 3D rendering)

The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: ChatGPT-4o shows a 61% hallucination rate on SimpleQA, a factuality benchmark developed by OpenAI, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Dario Amodei, Anthropic's CEO, called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna — once poster children for AI-first customer service — are quietly reversing course on their AI-agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.

Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability.

The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox — simultaneous dependence and distrust — creates a dangerous cognitive dissonance in corporate decision-making.

AI hallucinations remain a persistent and worsening challenge in 2025: artificial intelligence systems confidently generate false or misleading information that appears credible but lacks factual basis. Recent data reveals the scale of the problem. In just the first quarter of 2025, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers submitting evidence that featured AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes from popular journalism sites, to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.
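Benchmarks like SimpleQA arrive at such figures by grading a model's answers against known ground truth and counting confident wrong answers separately from abstentions. Here is a minimal sketch of that bookkeeping in Python, using a toy grader and a canned stand-in for the model rather than OpenAI's actual benchmark code:

def grade(answer: str, gold: str) -> str:
    # Toy grading rule: exact match counts as correct, a non-empty
    # wrong answer counts as a hallucination, an empty answer as abstention.
    if not answer.strip():
        return "abstained"
    return "correct" if answer.strip().lower() == gold.strip().lower() else "hallucinated"

def hallucination_rate(qa_pairs, ask_model) -> float:
    # Fraction of attempted (non-abstained) questions answered incorrectly.
    results = [grade(ask_model(q), gold) for q, gold in qa_pairs]
    attempted = [r for r in results if r != "abstained"]
    return sum(r == "hallucinated" for r in attempted) / max(len(attempted), 1)

# Canned "model" that gets one of two questions wrong:
dataset = [("Capital of Australia?", "Canberra"),
           ("Year the web was proposed?", "1989")]
canned = {"Capital of Australia?": "Sydney",
          "Year the web was proposed?": "1989"}
print(hallucination_rate(dataset, lambda q: canned[q]))  # prints 0.5

The point of the toy is the denominator: a model that abstains when unsure can have a low hallucination rate with modest accuracy, which is exactly the trade-off these benchmarks are designed to expose.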
The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence — deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses.

Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems where AI handles routine processing while humans provide verification at critical decision points. This approach acknowledges a fundamental truth that the autonomous-AI evangelists ignore: AI without natural intelligence is like building a Porsche and handing it to someone without a driver's license.
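What such a gate can look like in practice: the following is a minimal sketch of the colleague-in-the-loop pattern, in which routine, high-confidence output ships automatically and anything risky or uncertain waits for a person. The function names, confidence field, and review queue are hypothetical illustrations, not Mixus's actual platform API.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # model's self-reported confidence, 0..1

def ai_draft(request: str) -> Draft:
    # Stand-in for a real model call.
    return Draft(text=f"Suggested reply to: {request}", confidence=0.62)

def escalate_to_human(request: str, draft: Draft) -> str:
    # Stand-in for a review queue where a person edits or approves the draft.
    return f"[pending human review] {draft.text}"

def handle(request: str, high_stakes: bool, threshold: float = 0.8) -> str:
    draft = ai_draft(request)
    if high_stakes or draft.confidence < threshold:
        # Critical decision point: a human verifies before anything ships.
        return escalate_to_human(request, draft)
    return draft.text  # routine case: AI output goes out directly

print(handle("Refund request for order #1234", high_stakes=True))

The design choice worth noting is that escalation is triggered by the stakes of the decision as well as the model's confidence, so a confidently wrong answer on a high-stakes item still passes through a person.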
The autonomous vehicle industry learned this lesson the hard way. After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios.

LawZero is another initiative in this direction, seeking to promote scientist AI as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't have agency or work autonomously, but instead behaves in response to human input and goals. The underpinning belief is that AI should be cultivated as a global public good — developed and used safely towards human flourishing. It should be prosocial.

While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay — the gradual erosion of human decision-making capabilities — poses a systemic risk as employees become overly dependent on AI recommendations. Mass persuasion capabilities enable sophisticated social engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries.

Forty-seven percent of business leaders cite people using AI without proper oversight as one of their biggest fears in deploying AI in their organizations. This fear is well-founded. Organizations implementing AI without proper governance frameworks risk not just operational failures, but legal liability, regulatory scrutiny, and reputational damage.

Double literacy — investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy — emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment. Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning — uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and appropriate use cases. Together, these competencies create the foundation for responsible AI adoption.

In healthcare, hybrid systems have begun to revolutionize patient care by enabling practitioners to spend more time in direct patient care while AI handles routine tasks, improving care outcomes and reducing burnout. Some leaders in the business world are also embracing the hybrid paradigm, with companies incorporating AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.

Practical Implementation: The A-Frame Approach

If you are a business reader and leader, you can start building AI safety capabilities in-house today using the A-Frame methodology: four interconnected practices that create accountability without stifling innovation.

Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.

Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.

Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience — systems that degrade gracefully under stress and recover quickly from failures. This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime (a minimal code sketch of this appears at the end of this article).

Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.

The AI safety challenge isn't primarily technical — it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice.

The alternative — rushing headlong into AI deployment without adequate safeguards — risks not just individual corporate failure, but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.

Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter approach — investing in hybrid intelligence and double literacy today — will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions.

The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment — enhanced, not replaced, by artificial intelligence.
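As a closing illustration of the Acceptance principle above, here is a minimal sketch, with hypothetical names throughout, of an AI call that retries with backoff and then degrades gracefully to a human escalation path instead of failing hard:

import random
import time

class AIUnavailable(Exception):
    pass

def flaky_ai_call(prompt: str) -> str:
    # Stand-in for a real model call that sometimes fails outright.
    if random.random() < 0.5:
        raise AIUnavailable("model endpoint unavailable")
    return f"AI answer for: {prompt}"

def resilient_answer(prompt: str, retries: int = 2, backoff_s: float = 0.1) -> str:
    for attempt in range(retries + 1):
        try:
            return flaky_ai_call(prompt)
        except AIUnavailable:
            if attempt < retries:
                time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    # Graceful degradation: route to a human instead of failing hard.
    return f"[escalated to human on-call] {prompt}"

print(resilient_answer("Summarize today's incident report"))

The system plans for failure rather than assuming it away: the AI path is allowed to break, and the human escalation path is the designed-in floor beneath it.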

