
Schools relying on digital surveillance find security still takes a human touch
Now, student-run Hope Squads uplift peers with homemade cards and assemblies. Volunteer fathers patrol hallways through Dads on Duty. A team of counselors, social workers and probation officers gathers to discuss and support struggling students. Thanks to a new cell phone ban, students are talking to each other more. The positive results of these combined efforts have been measurable.
'We've helped change … lives,' said Brianna Vasquez, a Rigby High senior and Hope Squad member. 'I've had friends who have been pulled out of the hole of depression and suicidal thoughts because of (the Hope Squad).'
___
The Education Reporting Collaborative, a coalition of eight newsrooms, is investigating the unintended consequences of AI-powered surveillance at schools. Members of the Collaborative are AL.com, The Associated Press, The Christian Science Monitor, The Dallas Morning News, The Hechinger Report, Idaho Education News, The Post and Courier in South Carolina, and The Seattle Times.
___
America's educators are working to prevent harm like Rigby's shooting. Many U.S. districts have turned to technology — especially digital surveillance — as the antidote. Not everyone is sold on that approach, which raises concerns about privacy and security. Without broad agreement on which strategies work best, some districts are trying a combination of technology, on-the-ground threat assessment teams and mental health supports.
Jennifer DePaoli, a senior researcher at the Learning Policy Institute who has studied school safety, said a multi-pronged approach is 'very sensible.'
'People are the solution'
In Rigby, educators lean toward human interaction. Artificial intelligence and digital surveillance systems are perhaps less likely to identify who's eating alone at lunch or withdrawing from friends.
'It's all about culture,' said Chad Martin, superintendent of Jefferson County School District in Rigby. 'It starts with that – just having a friend, having a group of friends, having a connection somewhere.'
Rigby school leaders use technology to detect threats, including an app, STOPit, which allows students to anonymously report safety concerns, and surveillance software that monitors students' keystrokes for troubling terms. Martin said those are helpful but must be used in concert with human-led initiatives.
The district's version of a threat assessment team has been one of the most impactful tools, Martin said. In monthly group conversations, school staff may realize that a student who's been missing class has a parent who was recently arrested, for example.
'Everybody has a little piece of information,' Martin said. 'The goal is to put those people in the same room and be able to paint a picture that can help us support kids.'
Although Idaho doesn't mandate the use of in-school threat assessment teams, 11 states do. In 2024, the National Center for Education Statistics reported that 71% of U.S. public schools have a threat assessment team.
A leading model, used by thousands of districts, is the Comprehensive School Threat Assessment Guidelines (CSTAG). These were developed by forensic clinical psychologist Dewey Cornell after he spent years studying homicides committed by children or teens, including school shootings. He said digital surveillance technology can offer schools 'an illusion of safety and security.'
With CSTAG, teams made up of school staff members use a multi-step process when threats emerge. The group might suspend or relocate a student while conducting mental health screenings, facilitating a law enforcement investigation and developing a safety plan.
If implemented correctly, Cornell said, this approach is less punitive and more rooted in intervention. Cornell co-authored a recent study in Florida, where threat assessment teams are mandatory, finding 'low rates of school removal and very low rates of law enforcement actions.'
'If you're a school counselor and you can work with a troubled kid and help get them on the right track, you're not just preventing a school shooting, but you're more likely to be preventing a shooting that would occur somewhere else and maybe years in the future,' he said.
Threat assessment teams aren't immune from scrutiny. Complaints have emerged about them operating without student or parent knowledge, or without staff members to represent children with special needs, and about discrimination against Black and Hispanic students. The Learning Policy Institute's DePaoli said more research is needed on whether they successfully identify threats and provide students with appropriate support.
Utah's Jordan School District uses the CSTAG model. Travis Hamblin, director of student services, credits the 'human connection' with strengthening how the district handles threats, boosting student safety and well-being.
Earlier this school year, the district received an alert through Bark, a digital monitoring tool that scans students' school-issued Google suite accounts. It flagged a middle schooler who'd uploaded a hand-drawn picture of a gun.
Through the CSTAG decision-making process, the threat assessment team avoided unnecessarily escalating the situation by determining the student didn't intend any harm, Hamblin said. They chalked it up to immaturity and asked the student to refrain from such drawings.
The district employs someone — a former administrator and counselor — to field the Bark alerts and communicate with school staff. Administrators from every Jordan school have undergone threat assessment training, along with select staff members.
'A digital tool for us is a tool. It's not the solution,' Hamblin said. 'We believe that people are the solution.'
Student- and parent-led efforts in Idaho
In Rigby, one of those people is Ernie Chavez, whose height makes him stick out in a hallway streaming with middle schoolers. He's with Dads on Duty, which brings in parents to help monitor and interact with students. Throughout the school, students reach out to Chavez for high-fives. On one February afternoon, he was greeted with applause and cheers.
Similarly, the district's Hope Squads, in place since 2021, have become an active presence on campus. The student-led coalitions, implemented in thousands of schools across the U.S. and Canada, aim to foster connection and reduce the risk of suicide.
'We refer … students every year to counselors, and those students go from some of the worst moments in their life (to getting help),' Vasquez said. 'We build the connection between adults and faculty to the student.'
Hope Squad members notice peers who seem down or isolated and reach out with a greeting or handmade card. We 'show them that we care and they're not alone,' said member Dallas Waldron, a Rigby High senior.
The groups also plan special events — like a week of mental-health activities. Sophomore Emilie Raymond said the shooting showed 'people need to feel included and they need to find that hope.'
Another change: a new cell phone ban. Before, students were 'sitting in the corners, isolated, staring at a screen,' said Ryan Erikson, Rigby Middle's principal. Now, 'they're playing games, they're goofing off … they're actually conversing.'
While the district's approach to stemming violence is robust, 'it's not perfect,' said Martin, Jefferson's superintendent. 'We're still going to have things come up that we haven't prepared for or weren't on our radar. But we address them and just try to do whatever we can to support kids.'
___
The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
