Irish government to apologise over fatal crash
The Irish government is to apologise to the family of a cyclist killed in a hit-and-run by a driver who should have been in jail at the time of the crash.
Shane O'Farrell was 23 when he was hit by a car while cycling near his home in Carrickmacross, County Monaghan, in 2011.
The driver, Zigimantas Gridziuska, had more than 40 convictions at the time, including some in Northern Ireland.
Mr O'Farrell's mother, Lucia O'Farrell, told Irish broadcaster RTÉ that her family is expecting a public apology to be made in the Dáil (Irish parliament) on Tuesday.
The O'Farrell family have campaigned for years for a public inquiry into their son's death.
"Shane's case raises serious issues about how the criminal justice system works and how it ought to work," Mrs O'Farrell told RTÉ's Colm Ó Mongáin.
"We await the apology on Tuesday."
The body of the young law graduate was found nearly 60 yards (54m) from the point where he was hit by a car on 2 August 2011.
Gridziuska, a Lithuanian national, left the scene of the fatal crash but handed himself in to gardaí (Irish police) the following day.
He had been a regular defendant in courts in Monaghan, Cavan and Louth in the years before the killing, according to RTÉ.
In the January before the fatal crash, a ruling from Monaghan Circuit Court meant Gridziuska should have been in jail at the time of Mr O'Farrell's death.
Instead, he remained on bail.
Another prison sentence, handed down in February 2011, was also never enforced.
The Courts Service of Ireland previously said Gridziuska's prison sentences were never served because of what it described as an administrative error.
As part of their long-running campaign, the O'Farrell family have previously travelled to Belfast and Brussels to raise their case in Stormont and the European Parliament.