AI And Domestic Violence: Boon, Bane — Or Both?
Output of an artificial intelligence system from Google Vision performing facial recognition on a photograph of a man, with facial features identified and bounding boxes overlaid. (Photo by Smith Collection/Gado/Getty Images)
One evening, a woman clicked open the camera app on her phone, hoping to capture what nobody else was around to see — her partner's escalating rage.
Their arguments followed a familiar pattern: she would say something that set him off, and he would close the physical distance between them, screaming in a high-pitched voice, often threatening her with violence. 'I'd never actually do it, of course,' he would often say later on, once the dust had settled between them. 'You're the one mentally torturing me.'
Sometimes, the response would escalate even further. He would throw her phone across the room, upset by the 'invasion of [his] privacy', or snatch an object from her hands, raising it as if to strike her. No physical bruises were left, but the warning signs were there, with no device to capture them, no alert to trigger and no safe place to store the evidence.
For many women, this isn't a plot point from a cringey Netflix drama; it's a near-daily reality, and the kind of behavior that rarely results in a police complaint. Notably, while threats of physical harm are explicitly criminal in many jurisdictions, including India and the U.S., they have long gone undocumented and unprosecuted.
Experts note that this very pattern — escalating verbal threats, threatened or actual destruction of property and intimidation — often marks the early stages of more serious and damaging domestic violence. And in certain contexts, AI-enabled tools are making it easier to discreetly gather evidence, assess personal risk and document abuse — actions that were previously unsafe or more difficult to carry out.
At the same time, these technologies open up unprecedented avenues for new forms of harm. Increasingly, the most common 'eyewitness' in situations like these is a phone, a cloud account or a smart device, quietly recording events, storing evidence and offering a lifeline. But just as easily, the same tools can be turned into instruments of control, surveillance and even manipulated retaliation.
Tech For Good
Around the world, one in three women has experienced physical or sexual violence by a partner, according to the World Health Organization. As AI becomes more embedded in everyday devices, a growing number of tools have emerged, often with the stated goal of making homes safer for those at risk, particularly those experiencing intimate partner violence.
During the COVID-19 pandemic, as cases of domestic violence surged, Rhiana Spring, a human rights lawyer and founder of the Swiss-based nonprofit Spring ACT, saw an opportunity to deploy AI for good.
Her organization developed Sophia, a chatbot that offers confidential, 24/7 assistance to domestic violence survivors. Users can talk to Sophia without leaving a digital trace, covertly seek help and even store evidence for use in legal proceedings. Unlike traditional apps, Sophia doesn't require a download, minimizing surveillance risks.
'We've had survivors contact Sophia from Mongolia to the Dominican Republic,' Spring told Zendesk after winning a Tech for Good award in 2022.
Meanwhile, smart home cameras, like those from Arlo or Google Nest, now offer AI-driven motion and sound detection that can distinguish between people, animals and packages. Some can even detect screaming or unusual sounds and send alerts instantly — features that can be valuable for creating a digital record of abuse, especially when survivors are worried about gaslighting or lack physical evidence.
Several CCTV systems also allow cloud-based, encrypted storage, which prevents footage from being deleted or accessed locally by an abuser. Services like Wyze Cam Plus offer affordable cloud subscriptions with AI tagging, and features like 'privacy masking' allow selective blackouts in shared spaces.
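Under the hood, these detection features tend to follow the same basic loop: an on-device model classifies short audio or video frames, and anything that crosses an alert threshold is pushed to the owner's phone or to cloud storage. The Python sketch below illustrates that loop in the simplest possible terms; the classifier, labels, thresholds and alert function are illustrative placeholders, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class SoundEvent:
    label: str          # e.g. "speech", "scream", "glass_breaking"
    confidence: float   # classifier confidence between 0.0 and 1.0


# Sound classes that should trigger an alert, and how confident the classifier
# must be before one is sent. Both values are illustrative, not a real product's.
ALERT_LABELS = {"scream", "glass_breaking"}
CONFIDENCE_THRESHOLD = 0.85


def classify_frame(frame: bytes) -> SoundEvent:
    """Placeholder for an on-device audio classifier (a small neural net in practice)."""
    return SoundEvent(label="speech", confidence=0.10)


def send_alert(event: SoundEvent) -> None:
    """Placeholder for pushing a notification to a phone or trusted contact."""
    print(f"ALERT: {event.label} detected ({event.confidence:.0%} confidence)")


def monitor(frames: Iterable[bytes]) -> None:
    """Run the classifier over a stream of audio frames and alert on matches."""
    for frame in frames:
        event = classify_frame(frame)
        if event.label in ALERT_LABELS and event.confidence >= CONFIDENCE_THRESHOLD:
            send_alert(event)


# Example: feed a few (empty) frames through the loop.
monitor([b"", b"", b""])
```

In real devices, the classifier runs continuously on compressed audio or video frames, and only the alert metadata or a short clip leaves the device, which is what makes cloud tagging and encrypted storage possible without streaming everything off-site.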
For discreet assistance, several smartphone apps also integrate AI with panic alert features. Examples include MyPlan, Aspire News (which poses as a news app but offers emergency contacts and danger assessment tools) and Circle of 6. Smart jewelry brands like InvisaWear and Flare hide panic buttons in everyday accessories; with a double-tap, users can discreetly notify emergency contacts and share their GPS location.
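The double-tap flow is conceptually simple: detect the gesture, grab a location fix from the paired phone, and fan a message out to pre-saved contacts. The sketch below shows that flow with placeholder functions for the location fix and the messaging gateway; none of the names correspond to a real product's API, and the coordinates are invented.

```python
from dataclasses import dataclass


@dataclass
class Contact:
    name: str
    phone: str


def current_location() -> tuple[float, float]:
    """Placeholder for a GPS fix obtained from the paired phone."""
    return (39.7392, -104.9903)  # illustrative coordinates


def send_sms(phone: str, message: str) -> None:
    """Placeholder for an SMS or push-notification gateway."""
    print(f"-> {phone}: {message}")


def on_double_tap(contacts: list[Contact]) -> None:
    """Fan an alert with the wearer's location out to every saved contact."""
    lat, lon = current_location()
    message = f"Emergency alert: I need help. My location: https://maps.google.com/?q={lat},{lon}"
    for contact in contacts:
        send_sms(contact.phone, message)


on_double_tap([Contact("Trusted friend", "+1-555-0100")])
```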
Beyond home safety and personal apps, AI is also entering hospitals and law enforcement in the context of domestic violence response and prevention.
Dr. Bharti Khurana, a radiologist at Brigham and Women's Hospital, developed an AI-powered tool called the Automated Intimate Partner Violence Risk Support (AIRS) system, which scans medical records and imaging data for subtle injury patterns often missed by doctors and flags patients who may be victims of abuse. According to Khurana's team, AIRS has helped identify domestic violence up to four years earlier than patients typically report it.
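The general idea behind a system like AIRS, scanning a patient's visit history for injury patterns associated with abuse and flagging high-scoring records for clinician review, can be sketched in a few lines. The findings, weights and threshold below are invented for illustration and are not drawn from Khurana's actual model.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Visit:
    visit_date: date
    findings: set[str]   # simplified labels for radiology/record findings


# Illustrative weights for findings the clinical literature associates with
# intimate partner violence (defensive forearm fractures, facial fractures,
# fractures at different stages of healing). Not AIRS's real parameters.
PATTERN_WEIGHTS = {
    "ulnar_fracture": 2.0,
    "facial_fracture": 1.5,
    "fractures_of_differing_age": 2.0,
}
FLAG_THRESHOLD = 3.0


def risk_score(history: list[Visit]) -> float:
    """Sum pattern weights across all visits, with a bonus for repeated injury visits."""
    score = sum(
        PATTERN_WEIGHTS.get(finding, 0.0)
        for visit in history
        for finding in visit.findings
    )
    if len(history) >= 3:      # repeated injury visits add to the score
        score += 1.0
    return score


def should_flag(history: list[Visit]) -> bool:
    """Flag the record for clinician review - a prompt for a conversation, not a diagnosis."""
    return risk_score(history) >= FLAG_THRESHOLD


history = [
    Visit(date(2021, 3, 2), {"ulnar_fracture"}),
    Visit(date(2022, 9, 14), {"facial_fracture"}),
]
print(should_flag(history))   # True: 2.0 + 1.5 >= 3.0
```

The value of such a system lies less in the scoring itself than in surfacing a longitudinal pattern that no single emergency-room visit would reveal, which is how earlier identification becomes possible.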
Another U.S.-based initiative, Aimee Says, was launched in Colorado to help survivors navigate the complexities of the legal system. The chatbot walks users through the process of filing protection orders, finding support organizations and understanding their rights. The app features guest mode sessions that disappear after use as well as a hidden exit button for quick redirection if an abuser walks into the room.
'We want to be there before the person is ready to reach out to a victim service organization — hopefully, early enough to prevent a future of violence,' said co-founder Anne Wintemute in a December 2024 interview with The Decatur Daily.
Double-Edged Sword
In India and much of the Global South, domestic violence remains widespread and hugely underreported.
According to the National Family Health Survey (NFHS-5), nearly one in three Indian women aged 18 to 49 has experienced spousal violence, but only a fraction seek help, often due to stigma, dependency, fear of escalation or gaps in response frameworks and accountability.
In these contexts, AI has the potential to be a particularly powerful tool — helping survivors document abuse or seek help — but its reach is limited by access, resources and trust in the technology itself. Surveillance concerns also loom large, especially in environments where privacy is already compromised.
Moreover, the same technologies that support survivors can also open new avenues for harm — particularly when wielded by abusers.
Deepfake technology, which uses generative AI to produce hyper-realistic fake audio, images or video, is already complicating legal proceedings, with fabricated call logs, messages or videos sometimes used to falsely implicate victims. In restraining order hearings or custody disputes, which often happen quickly and with limited fact-finding, courts may have little time or capacity to assess the authenticity of digital evidence.
Products that store data, enable remote surveillance and monitor behavior can just as easily be weaponized by abusers. Few tech companies offer transparency or accountability about how their tools could be misused in these ways, or build in strong enough safety features by design.
'In parallel, the emergence of deepfake technology … also raises alarms regarding privacy invasion, security risks and propagation of misinformation,' warned Supreme Court Justice Hima Kohli of India, explaining how easy it has become to manipulate trust in digital content.
The same code that serves as a lifeline, then, can just as easily become a weapon. As AI evolves, the real test for the tech industry isn't only how 'smart' its tools can become, but how safely and justly they can be adapted to serve those who need them most.