Progressive groups plan ‘People's Session' to set North Dakota priorities for future legislation
Prairie Rose Seminole, board vice chair of Gender Justice, and Dalton Erickson, executive director of the North Dakota Human Rights Coalition, speak at the Capitol May 8, 2025, about policy issues they say went unaddressed during the 2025 legislative session. (Mary Steurer/North Dakota Monitor)
Progressive advocacy groups frustrated with what they saw as a tone-deaf and unproductive legislative session will host an all-day organizing event May 16 at the Heritage Center.
Called 'The People's Session,' the event aims to gather North Dakotans' policy priorities. Representatives from its organizers — Gender Justice, Prairie Action and the North Dakota Human Rights Coalition — met Thursday morning on the steps of the Capitol to share what they have planned for the conference.
'It's time our Legislature listens to people across our state and leads with our values that we share — like fairness, privacy, freedom and respect,' said Prairie Rose Seminole, who is vice chair of Gender Justice's board and also sits on the board for the North Dakota Human Rights Coalition.
North Dakota lawmakers approve needs, some wants with $20.3 billion budget
A survey of North Dakota residents commissioned by Gender Justice last fall indicates that North Dakotans support policies like expanding reproductive rights and investing in child care and affordable housing, Gender Justice Executive Director Megan Peterson said Thursday. The survey, conducted by the firm PerryUndem, gathered responses from 800 adults across the state.
The policy priorities reflected in the survey aren't what lawmakers spent most of their time on this session, Peterson said.
She pointed to legislation that prevents K-12 schools from having multi-stall all-gender restrooms, as well as a bill that would have required school and public libraries to put 'sexually explicit' material in areas children cannot easily access, which Gov. Kelly Armstrong ultimately vetoed.
'Why did lawmakers spend their time on book bans and bathroom bills while families are struggling to find affordable child care, pay the rent and access basic health care?' Peterson asked.
While lawmakers defeated many of the most controversial proposals introduced this session — including a bill to require public schools to display the Ten Commandments in lunchrooms, and a resolution to declare the kingship of Jesus Christ — advocates at the Thursday morning gathering said these bills took valuable time and resources away from more pressing policy issues.
The group acknowledged that the Legislature passed bills this session aimed at improving access to child care, housing and health care, but argued the policies do not go far enough.
'When politicians can't or don't want to solve real problems, they create fake ones to divide and distract us,' Peterson said.
She said 'The People's Session' will put together policy proposals for the next legislative session that better reflect what North Dakota residents actually want.
The event is free, but registration is required. It will take place at the Heritage Center from 9 a.m. to 4 p.m. Topics of discussion will include health care, housing, schools, LGBTQ issues, reproductive rights and more.
'We hope to take the people's agenda forward, crafting and refining it into bills through public input, and then put it forward for sponsors to bring to the 2027 legislative session,' North Dakota Human Rights Coalition Executive Director Dalton Erickson said.
For more information about the event, including registration details, visit Gender Justice's website. Those who aren't able to participate in person can still submit their thoughts through the registration form.