Protecting the vulnerable, or automating harm? AI's double-edged role in spotting abuse
Artificial intelligence is rapidly being adopted to help prevent abuse and protect vulnerable people – including children in foster care, adults in nursing homes and students in schools. These tools promise to detect danger in real time and alert authorities before serious harm occurs.
Developers are using natural language processing, for example – a form of AI that interprets written or spoken language – to try to detect patterns of threats, manipulation and control in text messages. This information could help detect domestic abuse and potentially assist courts or law enforcement in early intervention. Some child welfare agencies use predictive modeling, another common AI technique, to calculate which families or individuals are most 'at risk' for abuse.
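To make that concrete, here is a minimal, hypothetical sketch of the kind of text-classification pipeline such tools are built on. The example messages, labels and scikit-learn model choice are illustrative assumptions, not a description of any deployed system, which would rely on far larger, carefully audited datasets and more capable language models.

```python
# A minimal, hypothetical sketch of flagging concerning language in messages:
# TF-IDF features plus a linear classifier. The tiny example messages and
# labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "If you leave, you'll regret it",      # coercive / threatening (label 1)
    "Tell me where you are right now",     # controlling (label 1)
    "Want to grab dinner tonight?",        # benign (label 0)
    "Running late, see you at 7",          # benign (label 0)
]
labels = [1, 1, 0, 0]

# Fit a simple classifier on the toy data.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Probability that a new message resembles the 'concerning' examples.
print(model.predict_proba(["Don't make me come find you"])[0][1])
```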
When thoughtfully implemented, AI tools have the potential to enhance safety and efficiency. For instance, predictive models have helped social workers prioritize high-risk cases and intervene earlier.
But as a social worker with 15 years of experience researching family violence – and five years on the front lines as a foster-care case manager, child abuse investigator and early childhood coordinator – I've seen how well-intentioned systems often fail the very people they are meant to protect.
Now, I am helping to develop iCare, an AI-powered surveillance camera that analyzes limb movements – not faces or voices – to detect physical violence. I'm grappling with a critical question: Can AI truly help safeguard vulnerable people, or is it just automating the same systems that have long caused them harm?
Many AI tools are trained to 'learn' by analyzing historical data. But history is full of inequality, bias and flawed assumptions. So are the people who design, test and fund AI.
That means AI algorithms can wind up replicating systemic forms of discrimination, like racism or classism. A 2022 study in Allegheny County, Pennsylvania, found that a predictive risk model to score families' risk levels – scores given to hotline staff to help them screen calls – would have flagged Black children for investigation 20% more often than white children, if used without human oversight. When social workers were included in decision-making, that disparity dropped to 9%.
Language-based AI can also reinforce bias. For instance, one study showed that natural language processing systems misclassified African American Vernacular English as 'aggressive' at a significantly higher rate than Standard American English — up to 62% more often, in certain contexts.
Meanwhile, a 2023 study found that AI models often struggle with context clues, meaning sarcastic or joking messages can be misclassified as serious threats or signs of distress.
These flaws can replicate larger problems in protective systems. People of color have long been over-surveilled in child welfare systems — sometimes due to cultural misunderstandings, sometimes due to prejudice. Studies have shown that Black and Indigenous families face disproportionately higher rates of reporting, investigation and family separation compared with white families, even after accounting for income and other socioeconomic factors.
Many of these disparities stem from structural racism embedded in decades of discriminatory policy decisions, as well as implicit biases and discretionary decision-making by overburdened caseworkers.
Even when AI systems do reduce harm toward vulnerable groups, they often do so at a disturbing cost.
In hospitals and elder-care facilities, for example, AI-enabled cameras have been used to detect physical aggression between staff, visitors and residents. While commercial vendors promote these tools as safety innovations, their use raises serious ethical concerns about the balance between protection and privacy.
In a 2022 pilot program in Australia, AI camera systems deployed in two care homes generated more than 12,000 false alerts over 12 months – overwhelming staff and missing at least one real incident. The program's accuracy did 'not achieve a level that would be considered acceptable to staff and management,' according to the independent report.
Children are affected, too. In U.S. schools, AI surveillance programs like Gaggle, GoGuardian and Securly are marketed as tools to keep students safe. Such programs can be installed on students' devices to monitor online activity and flag anything concerning.
But they've also been shown to flag harmless behaviors – like writing short stories with mild violence, or researching topics related to mental health. As an Associated Press investigation revealed, these systems have also outed LGBTQ+ students to parents or school administrators by monitoring searches or conversations about gender and sexuality.
Other systems use classroom cameras and microphones to detect 'aggression.' But they frequently misidentify normal behavior like laughing, coughing or roughhousing — sometimes prompting intervention or discipline.
These are not isolated technical glitches; they reflect deep flaws in how AI is trained and deployed. AI systems learn from past data that has been selected and labeled by humans — data that often reflects social inequalities and biases. As sociologist Virginia Eubanks wrote in 'Automating Inequality,' AI systems risk scaling up these long-standing harms.
I believe AI can still be a force for good, but only if its developers prioritize the dignity of the people these tools are meant to protect. I've developed a framework of four key principles for what I call 'trauma-responsive AI.'
Survivor control: People should have a say in how, when and if they're monitored. Providing users with greater control over their data can enhance trust in AI systems and increase their engagement with support services, such as creating personalized plans to stay safe or access help.
Human oversight: Studies show that combining social workers' expertise with AI support improves fairness and reduces child maltreatment – as in Allegheny County, where caseworkers used algorithmic risk scores as one factor, alongside their professional judgment, to decide which child abuse reports to investigate.
Bias auditing: Governments and developers are increasingly encouraged to test AI systems for racial and economic bias. Open-source tools like IBM's AI Fairness 360, Google's What-If Tool, and Fairlearn assist in detecting and reducing such biases in machine learning models.
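As a rough illustration, here is a minimal sketch of what such an audit can look like with Fairlearn, one of the open-source tools mentioned above. The predictions, outcomes and group labels are invented purely to show the mechanics of comparing flag rates and error rates across demographic groups.

```python
# A minimal bias-audit sketch using Fairlearn. All values below are
# invented for illustration; a real audit would use a model's actual
# predictions and recorded outcomes.
from fairlearn.metrics import MetricFrame, false_positive_rate, selection_rate

y_true = [0, 0, 1, 0, 1, 0, 0, 1]                   # hypothetical outcomes (1 = substantiated risk)
y_pred = [1, 0, 1, 1, 1, 0, 0, 1]                   # hypothetical model flags
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]   # hypothetical demographic group labels

audit = MetricFrame(
    metrics={"flag_rate": selection_rate, "false_positive_rate": false_positive_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(audit.by_group)      # per-group flag rates and false-positive rates
print(audit.difference())  # largest gap between groups, a simple disparity measure
```

In this toy example, group A is flagged incorrectly far more often than group B, which is exactly the kind of gap an audit is meant to surface before a system is deployed.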
Privacy by design: Technology should be built to protect people's dignity. Open-source tools like Amnesia, Google's differential privacy library and Microsoft's SmartNoise help anonymize sensitive data by removing or obscuring identifiable information. Additionally, AI-powered techniques, such as facial blurring, can anonymize people's identities in video or photo data.
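As a simple illustration of the facial-blurring idea, here is a minimal sketch using OpenCV. It relies on a classical Haar-cascade face detector bundled with OpenCV rather than the specific tools named above, and the file names are placeholders; production systems would typically use stronger detectors and process video streams frame by frame.

```python
# A minimal face-blurring sketch with OpenCV: detect faces, then replace
# each detected region with a heavily blurred version of itself.
import cv2

def blur_faces(input_path: str, output_path: str) -> None:
    image = cv2.imread(input_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Obscure the detected face region so the person is not identifiable.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(image[y:y + h, x:x + w], (51, 51), 0)
    cv2.imwrite(output_path, image)

# Example usage (file names are placeholders):
# blur_faces("hallway_frame.jpg", "hallway_frame_anonymized.jpg")
```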
Honoring these principles means building systems that respond with care, not punishment.
Some promising models are already emerging. The Coalition Against Stalkerware and its partners advocate for including survivors in all stages of tech development – from needs assessments to user testing and ethical oversight.
Legislation is important, too. On May 5, 2025, for example, Montana's governor signed a law restricting state and local government from using AI to make automated decisions about individuals without meaningful human oversight. It requires transparency about how AI is used in government systems and prohibits discriminatory profiling.
As I tell my students, innovative interventions should disrupt cycles of harm, not perpetuate them. AI will never replace the human capacity for context and compassion. But with the right values at the center, it might help us deliver more of it.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Aislinn Conrad, University of Iowa
Read more:
Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns
Is using AI tools innovation or exploitation? 3 ways to think about the ethics
Healing from child sexual abuse is often difficult but not impossible
Aislinn Conrad is developing iCare, an AI-powered, real-time violence detection system.
