In surprise victory, UK's Labour wins Scottish by-election after bitter contest
By Andrew MacAskill
LONDON (Reuters) - British Prime Minister Keir Starmer's Labour Party won a surprise victory in a fiercely fought by-election for the Scottish parliament on Friday after one of the most bitter election campaigns in the country's recent history.
In an election in which race became a major issue, Labour took the Hamilton, Larkhall and Stonehouse seat from the Scottish National Party in a contest triggered by the death of a former government minister.
The result provides some relief for Starmer, whose party has suffered a steep fall in support since winning a landslide in last year's British general election, after it raised taxes, cut welfare benefits and became embroiled in a row over donations.
Voters rallied around Labour in the closely watched contest after Reform UK, led by Nigel Farage, repeatedly referred to the ethnicity of the Scottish Labour leader Anas Sarwar, who is of Pakistani heritage, drawing criticism from the other parties.
The Labour candidate Davy Russell won 8,559 votes in the election for the Scottish parliament, which has devolved powers over issues like health and education. The SNP won 7,957 votes, and Reform finished third with 7,088 votes.
The betting odds had predicted a comfortable victory for the SNP with Labour pushed into third place behind Reform.
Russell told his supporters that his victory had "sent a message to Farage and his mob, the poison of Reform isn't us, it isn't Scotland and we don't want your division here".
But a recent surge in support for Reform suggests the party is making inroads in Scotland as well as in England, a year before the next Scottish parliament elections.
Support for Reform in Scotland is particularly surprising because the party's brand of low-tax, anti-immigration, anti-EU politics has often been associated more with English nationalism.
Farage visited Scotland on Monday, days after a row over an online video put out by his party that falsely claimed Sarwar had said he would "prioritise" the Pakistani community.
Labour branded the advert "blatantly racist" and the SNP leader John Swinney urged voters to reject Reform's "gutter politics". Farage responded by accusing Sarwar of introducing sectarianism into Scottish politics.

Related Articles


Forbes
12 minutes ago
AI Safety: Beyond AI Hype To Hybrid Intelligence
The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: GPT-4o shows a 61% hallucination rate on SimpleQA, a factuality benchmark developed by OpenAI, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Dario Amodei, Anthropic's CEO, called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna, once poster children for AI-first customer service, are quietly reversing course on their AI agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.

Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability. The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox of simultaneous dependence and distrust creates a dangerous cognitive dissonance in corporate decision-making.

AI hallucinations remain a persistent and worsening challenge in 2025: artificial intelligence systems confidently generate false or misleading information that appears credible but lacks factual basis. Recent data reveals the scale of the problem. In the first quarter of 2025 alone, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers submitting evidence that featured AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes from popular journalism sites, to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.

The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence: deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses. Mixus, which went public in June 2025, exemplifies this shift.
Rather than replacing humans with autonomous agents, their platform creates "colleague-in-the-loop" systems where AI handles routine processing while humans provide verification at critical decision points. This approach acknowledges a fundamental truth that the autonomous AI evangelists ignore: AI without natural intelligence is like building a Porsche and giving it to people without a driver's license.

The autonomous vehicle industry learned this lesson the hard way. After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios.

LawZero is another initiative in this direction; it seeks to promote "scientist AI" as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't act autonomously but instead responds to human input and goals. The underpinning belief is that AI should be cultivated as a global public good, developed and used safely toward human flourishing. It should be prosocial.

While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay, the gradual erosion of human decision-making capabilities, poses a systemic risk as employees become overly dependent on AI recommendations. Mass persuasion capabilities enable sophisticated social engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries. 47% of business leaders rank people using AI without proper oversight among their biggest fears in deploying AI in their organizations. This fear is well-founded: organizations implementing AI without proper governance frameworks risk not just operational failures but legal liability, regulatory scrutiny, and reputational damage.

Double literacy, investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy, emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment. Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning: uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and appropriate use cases. Together, these competencies create the foundation for responsible AI adoption.

In healthcare, hybrid systems have begun to revolutionize patient care by letting practitioners spend more time with patients while AI handles routine tasks, improving care outcomes and reducing burnout. Some leaders in the business world are also embracing the hybrid paradigm, with companies that incorporate AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.
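To make the "colleague-in-the-loop" pattern described above concrete, here is a minimal, illustrative sketch: the AI handles routine items and escalates critical or low-confidence decisions to a human reviewer. Every name, threshold, and the model-reported confidence field here is an assumption for illustration, not Mixus's actual platform or API.

```python
# Minimal sketch of a "colleague-in-the-loop" gate. The AI handles routine
# items; anything critical or low-confidence is escalated to a human.
# All names and the 0.9 threshold are hypothetical, chosen for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelOutput:
    answer: str
    confidence: float  # model-reported confidence in [0, 1] (an assumption)

def colleague_in_the_loop(
    task: str,
    model: Callable[[str], ModelOutput],
    human_review: Callable[[str, ModelOutput], str],
    confidence_floor: float = 0.9,
    critical: bool = False,
) -> str:
    """Route a task through the AI, escalating to a human at decision points."""
    output = model(task)
    # Escalate when the decision is critical or the model is unsure.
    if critical or output.confidence < confidence_floor:
        return human_review(task, output)
    return output.answer

if __name__ == "__main__":
    # Stubbed-in model and reviewer for demonstration only.
    fake_model = lambda t: ModelOutput(answer="Refund approved", confidence=0.72)
    reviewer = lambda t, o: f"HUMAN-VERIFIED: {o.answer}"
    print(colleague_in_the_loop("Process refund #1234", fake_model, reviewer))
    # -> "HUMAN-VERIFIED: Refund approved" (escalated: 0.72 < 0.9)
```

The design choice the sketch tries to capture is that escalation is the default for anything critical; autonomy is earned by confidence, not assumed.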
Practical Implementation: The A-Frame Approach

Business readers and leaders can start building AI safety capabilities in-house today using the A-Frame methodology: four interconnected practices that create accountability without stifling innovation.

Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.

Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.

Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience: systems that degrade gracefully under stress and recover quickly from failures. This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime (a minimal fallback sketch follows at the end of this piece).

Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.

The AI safety challenge isn't primarily technical; it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice. The alternative, rushing headlong into AI deployment without adequate safeguards, risks not just individual corporate failure but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.

Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter approach, investing in hybrid intelligence and double literacy today, will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions. The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment, enhanced, not replaced, by artificial intelligence.
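As promised above, here is a hedged, minimal sketch of what the "Acceptance" practice of designing for resilience might look like in code: a simple circuit breaker that degrades gracefully to a fallback path, such as a human queue, when an AI service fails repeatedly. The class, names, and thresholds are hypothetical illustrations, not any specific vendor's implementation.

```python
# Illustrative circuit-breaker sketch for graceful degradation: fall back to
# a deterministic path (here, a human queue) when the AI service errors
# repeatedly, and recover automatically once it succeeds again.
# All names and the failure/cooldown parameters are assumptions.
import time
from typing import Callable

class AIFallback:
    def __init__(self, ai_call: Callable[[str], str],
                 fallback: Callable[[str], str],
                 max_failures: int = 3, cooldown_s: float = 60.0):
        self.ai_call, self.fallback = ai_call, fallback
        self.max_failures, self.cooldown_s = max_failures, cooldown_s
        self.failures = 0
        self.opened_at = 0.0  # circuit "opens" after repeated failures

    def run(self, request: str) -> str:
        circuit_open = (self.failures >= self.max_failures and
                        time.monotonic() - self.opened_at < self.cooldown_s)
        if circuit_open:
            return self.fallback(request)  # degrade gracefully, skip the AI
        try:
            result = self.ai_call(request)
            self.failures = 0  # recover: close the circuit on success
            return result
        except Exception:
            self.failures += 1
            self.opened_at = time.monotonic()
            return self.fallback(request)  # a single failure also degrades

# Usage: route to a human queue while the (simulated) AI system is down.
def flaky_ai(request: str) -> str:
    raise TimeoutError("AI service down")

handler = AIFallback(ai_call=flaky_ai,
                     fallback=lambda r: f"Queued for human agent: {r}")
print(handler.run("Summarize ticket #42"))
# -> "Queued for human agent: Summarize ticket #42"
```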


Axios
19 minutes ago
D.C. schools are banning cellphones, joining almost half of the nation
D.C.'s public schools will enforce a cellphone ban starting next school year, the district said on Friday.

Why it matters: D.C. joins nearly half the country in the bipartisan push to limit students' cellphone use in the classroom. D.C. middle schools and several of its high schools have already implemented the ban, the district said.

Catch up quick: Phone bans have gained momentum across Democratic and Republican state legislatures in recent years. Arizona, Arkansas and New York's governors signed bills into law this year to implement bans.

By the numbers: As of April, 11 statewide phone bans or restrictions had been implemented and seven states had issued policy recommendations, according to health nonprofit KFF. An additional 17 states introduced legislation to ban or restrict cellphone use in schools.

State of play: The phone bans are aimed at boosting students' attention during class as they struggle to recover from COVID learning loss. Screen time is also partially at fault for a youth mental health crisis, research has found.

What they're saying: "Piloting a phone-free program in our middle schools demonstrated that storing students' personal devices throughout the school day enriches academic, social, and emotional learning," Lewis Ferebee, D.C. schools chancellor, said in a statement. "From increased classroom engagement to reduced anxiety and stronger student relationships — DCPS is ready to scale the program so we can keep driving outcomes that positively impact our students."

What we're watching: Sens. Tom Cotton (R-Ark.) and Tim Kaine (D-Va.) reintroduced a bipartisan bill in February to study the effects of cellphones in schools, but the legislation has not seen movement since. The bill proposes $5 million annually for five years for a pilot program to provide schools with secure containers for the phones. It would allow exceptions for students with health conditions, disabilities and non-English speakers.

Epoch Times
24 minutes ago
Group Targeted in Boulder Attack Undeterred as Support Builds, Leader Says
The group victimized in an anti-Semitic attack plans to return undaunted to the scene of the Boulder, Colorado, firebombing, and is emerging stronger worldwide, an organizer said.

'Leaders are going to go out this weekend and hold their regularly scheduled walk, maybe with more security,' Shira Weiss, global coordinator of the 'Run for Their Lives' group, told The Epoch Times.

The group was formed shortly after the Hamas terrorist ambush of Israel on Oct. 7, 2023, which launched the Israel-Hamas war. As the war persists, more than 50 people remain captive, Weiss said, and the apolitical, nonpartisan group has gathered weekly ever since.

'Unfortunately, this awful event and attack has actually strengthened the initiative ... and we've actually gotten a really big uptick in new people wanting to stand up new chapters,' Weiss said, noting the group includes more than 230 chapters worldwide. The Boulder attack marked 'the first-ever attack on any of our groups,' she said.

Initially, the group 'paused all walks' while organizers evaluated safety considerations, Weiss said. 'Once we figured out that it was an isolated incident, we allowed the group leaders to resume their walks if they wanted to,' she said.

Run for Their Lives has received an outpouring of support, Weiss said, with many people urging the organization to continue its work. A Denver group has planned 'to walk in solidarity' with the Boulder group on Sunday, she said.

Some news coverage of the group's activities has been inaccurate, she said. 'I wouldn't even call it a 'march,'' she said. 'It's literally a walk of about 20 minutes,' sometimes sandwiched between a few short remarks. For example, participants 'might read the names of the hostages,' or profile one of them and 'tell the story of who they are and where they came from,' Weiss said.

The walk in Boulder on Sunday will be followed by the 30th annual Boulder Jewish Festival, according to an announcement. The festival and the Run for Their Lives observance will be held at the city's Pearl Street outdoor mall, where an Egyptian national was arrested for allegedly injuring more than a dozen people with explosives and a makeshift flamethrower. The defendant was identified as Mohammed Sabry Soliman, 45.

Attendees can expect 'a thoughtfully reimagined version' of the festival, with 'enhanced security protocols,' the announcement said, adding, 'these are precautionary measures and not based on any known threat.' 'Our program and planned experiences will bring us together as a community at a time when nothing could be more important,' festival organizers wrote. 'We recognize that each one of us holds a range of emotions—from fear to pride, from anger to the joy of being part of the Boulder Jewish community. With this festival, we hope to provide the space and opportunity for all to show up authentically.'

However, news of the planned Colorado gatherings comes as one of America's largest Jewish gatherings canceled its conference because of escalating anti-Semitic threats in Texas. On June 5, four days after the Boulder attack, organizers of The Israel Summit announced that its Dallas-area gathering 'has become the latest casualty of growing hostility toward public support for Israel in the United States,' according to a news release shared by David Friedman, former U.S. ambassador to Israel. Friedman said he had been scheduled to speak at the summit, and that more than 1,000 attendees were expected.
'Law enforcement was completely cooperative but the threats were of a nature that required cancellation,' Friedman wrote in his post. 'When [President Donald Trump] says we need to take our country back, this is a good example of what he means!' Friedman served as ambassador from 2017 to 2021, during Trump's first presidency.