
AI And Domestic Violence: Boon, Bane — Or Both?
Output of an Artificial Intelligence system from Google Vision, performing Facial Recognition on a photograph of a man, with facial features identified and facial bounding boxes present. (Photo by Smith Collection/Gado/Getty Images)
One evening, a woman clicked open the camera app on her phone, hoping to capture what nobody else was around to see — her partner's escalating rage.
Their arguments followed a familiar pattern: she would say something that set him off, and he would close the physical distance between them, screaming in a high-pitched voice, often threatening her with violence. 'I'd never actually do it, of course,' he would often say later on, once the dust had settled between them. 'You're the one mentally torturing me.'
Sometimes, the response would escalate even further. He would throw her phone across the room, upset by the 'invasion of [his] privacy', or snatch an object from her hands, raising it as if to strike her with it. No physical bruises were left, but the writing was on the wall — with no device to capture it, no alert to trigger and no safe place to store the evidence.
For many women, this isn't a plot point from a cringey Netflix drama; it's a near-daily reality, and the kind of behavior that rarely results in a police complaint. Notably, while threats of physical harm are explicitly criminal in many jurisdictions, including India and the U.S., they have long gone undocumented and unprosecuted.
Experts note that this very pattern — escalating verbal threats, threatened or actual destruction of property and intimidation — often marks the early stages of more serious and damaging domestic violence. And in certain contexts, AI-enabled tools are making it easier to discreetly gather evidence, assess personal risk and document abuse — actions that were previously unsafe or more difficult to carry out.
At the same time, these technologies open up unprecedented avenues for new forms of harm. Increasingly, the most common 'eyewitness' in situations like these is a phone, a cloud account or a smart device — secretly recording, storing and offering support or a lifeline. But just as easily, the same tools can be turned into instruments of control, surveillance and even manipulated retaliation.
Tech For Good
Around the world, one in three women has experienced physical or sexual violence by a partner, according to the World Health Organization. As AI becomes more embedded in everyday devices, a growing number of tools have emerged with the stated goal of making homes safer for those at risk, particularly those experiencing intimate partner violence.
During the COVID-19 pandemic, as cases of domestic violence surged, Rhiana Spring, a human rights lawyer and founder of the Swiss-based nonprofit Spring ACT, saw an opportunity to deploy AI for good.
Her organization developed Sophia, a chatbot that offers confidential, 24/7 assistance to domestic violence survivors. Users can talk to Sophia without leaving a digital trace, covertly seek help and even store evidence for use in legal proceedings. Unlike traditional apps, Sophia doesn't require a download, minimizing surveillance risks.
'We've had survivors contact Sophia from Mongolia to the Dominican Republic,' Spring told Zendesk after winning a Tech for Good award in 2022.
Meanwhile, smart home cameras, like those from Arlo or Google Nest, now offer AI-driven motion and sound detection that can distinguish between people, animals and packages. Some can even detect screaming or unusual sounds and send alerts instantly — features that can be valuable for creating a digital record of abuse, especially when survivors are worried about gaslighting or lack physical evidence.
Several CCTV systems also allow cloud-based, encrypted storage, which prevents footage from being deleted or accessed locally by an abuser. Services like Wyze Cam Plus offer affordable cloud subscriptions with AI tagging, and features like 'privacy masking' allow selective blackouts in shared spaces.
For discreet assistance, several smartphone apps also integrate AI with panic alert features. Examples include MyPlan, Aspire News — which poses as a news app but offers emergency contacts and danger assessment tools — and Circle of 6. Smart jewelry like InvisaWear and Flare hide panic buttons in accessories, where, with a double-tap, users can clandestinely notify emergency contacts and share their GPS location.
Beyond home safety and personal apps, AI is also entering hospitals and law enforcement in the context of domestic violence response and prevention.
Dr. Bharti Khurana, a radiologist at Brigham and Women's Hospital, developed an AI-powered tool called the Automated Intimate Partner Violence Risk Support (AIRS) system, which scans medical records and imaging data for subtle injury patterns often missed by doctors and flags patients who may be victims of abuse. According to Khurana's team, AIRS has helped identify domestic violence up to four years earlier than patients typically report it.
Another U.S.-based initiative, Aimee Says, was launched in Colorado to help survivors navigate the complexities of the legal system. The chatbot walks users through the process of filing protection orders, finding support organizations and understanding their rights. The app features guest mode sessions that disappear after use as well as a hidden exit button for quick redirection if an abuser walks into the room.
'We want to be there before the person is ready to reach out to a victim service organization — hopefully, early enough to prevent a future of violence,' said co-founder Anne Wintemute in a December 2024 interview with The Decatur Daily.
Double-Edged Sword
In India and much of the Global South, domestic violence remains widespread and hugely underreported.
According to the National Family Health Survey (NFHS-5), nearly one in three Indian women aged 18 to 49 has experienced spousal violence — but only a fraction seek help, often due to stigma, dependency, fear of escalation or lacunae in response frameworks and accountability.
In these contexts, AI has the potential to be a particularly powerful tool — helping survivors document abuse or seek help — but its reach is limited by access, resources and trust in the technology itself. Surveillance concerns also loom large, especially in environments where privacy is already compromised.
Moreover, the same technologies that support survivors can also open new avenues for harm — particularly when wielded by abusers.
Deepfake technology, which uses generative AI to produce hyper-realistic fake audio, images or video, is already complicating legal proceedings, with fabricated call logs, messages or videos sometimes used to falsely implicate victims. In restraining order hearings or custody disputes, which often happen quickly and with limited fact-finding, courts may have little time or capacity to assess the authenticity of digital evidence.
Products that store data, enable remote surveillance and monitor behavior can just as easily be weaponized by abusers. Few tech companies offer transparency or accountability about how their tools could be misused in these ways, or build in strong enough safety features by design.
'In parallel, the emergence of deepfake technology … also raises alarms regarding privacy invasion, security risks and propagation of misinformation,' warned Supreme Court Justice Hima Kohli of India, explaining how easy it has become to manipulate trust in digital content.
The same code that serves as a lifeline can, in the same breath, become a weapon. As AI evolves, the real test for the tech industry is not only how 'smart' its tools can become, but how safely and justly they can adapt to serve those who need them most.
A former Bloomington High School North school resource officer is accused of sexual misconduct with a female student on a fishing trip in June last year. The former officer, Jason Tatlock, 46 of Owen County, is now charged with child seduction as a child care worker, battery, contributing to the delinquency of a minor and furnishing alcohol to a minor. His initial hearing was Thursday, June 26. An officer with the Indiana Department of Natural Resources began investigating after the student's sister reported to police that, on a fishing trip to Cataract Falls, Tatlock kissed the student, touched her inappropriately and confessed his attraction to her. The student told police in an interview that on the fishing trip, Tatlock 'assaulted' her and she was 'scared to death,' the probable cause affidavit said. The student told police he had inappropriately touched her and kissed her several times on the trip. "I haven't been the same person since what he did to me,' the student told police, the affidavit said. 'I haven't been the same person. I've had trouble sleeping. ... I've been traumatized." Access Bloomington news anywhere, anytime with the Herald-Times app She also told investigators Tatlock mentioned several instances of cheating on his wife. She told police that two days after the incident on the boat, she asked how many women he had cheated on his wife with. He responded, she told investigators, by saying 'she was the youngest.' In an interview with police, Tatlock said he had touched her inappropriately and made sexually explicit comments but did not recall kissing her. He and the female student continually met in his office throughout the school year, according to the affidavit. The student told police he had looked her name up on the first day of school. She told police he would buy her snacks and coffee, the affidavit said, and continued to email after the school year ended. She and Tatlock both told police that on the day of the outing he bought her a bottle of vodka. She described feeling drunk on the trip. Bloomington North families and employees received a message about the incident on June 24, after law enforcement concluded its investigation. The message said the Monroe County Community School Corp. received a report on June 18, 2024, from the guardian of a now-graduated Bloomington North student alleging Tatlock had sexually assaulted the student. The district immediately contacted the Indiana Department of Child Services. Tatlock was placed on administrative leave that day. According to the affidavit, he told police that school officials called him multiple times and took his school vehicle. He resigned June 19, last year, the district said. He joined MCCSC in July 2023 after 20 years working at the Monroe County Sheriff's Office and the Owen County Sheriff's Office, the email said. The affidavit said he told police he also had worked for the Seymour Police Department. 'The safety and security of our students is the highest priority to the MCCSC,' the email said. 'We will continue to work cooperatively with our law enforcement partners and the prosecutor's office in this matter." Contact Andrew Miller at AMiller@ This article originally appeared on The Herald-Times: Former Bloomington North school resource officer accused of child seduction