How the Internet of Things impacts everyone's privacy


Fast Company · 7 hours ago

Some unusual witnesses helped convict Alex Murdaugh of the murders of his wife, Maggie, and son, Paul.
The first was Bubba, Maggie's yellow Labrador retriever. Prosecutors used a recording of Bubba to place Alex at the site of the murders. With Alex placed at the crime scene, other witnesses revealed his movements, tracked his speed and showed what he had in his hands. Those other witnesses were a 2021 Chevy Suburban and Maggie, Paul and Alex's cellphones, all of which provided data. They're all part of the Internet of Things, also known as IoT.
The privacy implications of devices connected to the internet are not often the most important consideration in solving a murder case. But outside of criminal prosecution, they affect people's privacy in ways that should give everyone pause.
The Internet of Things
The Internet of Things includes any object or device that automatically sends and receives data via the internet. When you use your phone to message someone or social media to post something, the sharing is deliberate. But the automatic nature of connected devices effectively cuts humans out of the loop. The data from these devices can reveal a lot about the people who interact with them—and about other people around the devices.
As an assistant professor of law at the University of South Carolina, I have watched as new kinds of connected devices have entered the market. New devices mean new ways to collect data about people.
Connected devices collect information from different contexts. Take your refrigerator. As a non-IoT device, your fridge generated no data about your kitchen, your food, or how often you peeked inside. Your relationship with the fridge was effectively private. Only you knew about that midnight snack or whether you ogled a coworker's lunch.
Now, smart refrigerators can respond to voice commands, show images of the items in your fridge, track who opens it, suggest recipes, generate grocery lists and even contact your car to let you know the milk has expired. All these functions require continuous streams of data.
Device data and your privacy
Connected devices generate lots of data in contexts that have typically produced little, making those situations 'legible' to whoever can access the data.
In the past, if you wanted to monitor your heart rate, blood oxygenation, sleep patterns, and stress levels, you might have undergone a battery of tests at a hospital. Specialized equipment in a controlled setting would have measured your body and made these parts of you visible to highly trained, licensed professionals. But now, devices such as the Oura Ring track and analyze all that information continuously, in non-health care contexts.
Even if you don't mind sharing data with an Internet of Things company, there are privacy risks to using a device like this. In the health care context, a series of rules enforced by several groups make sure that connected equipment and the data the equipment generates have adequate cybersecurity protections. Away from that context, connected devices that perform similar functions don't have to meet the same cybersecurity standards.
The U.S. Cyber Trust Mark program, administered by the Federal Communications Commission, is developing cybersecurity standards for Internet of Things devices. But the program is voluntary. In some states, such as Washington, state laws set standards for protecting health data from connected devices. But these laws don't cover all data from all devices in all contexts. This leaves the devices, and the data they generate, particularly vulnerable to unwanted access by hackers.
Your inability to control who sees the data that connected devices gather is another privacy risk. It can give advertisers insights about potential customers. Absent a mandated opt-out, each device provider can decide what it does with customer data. Amazon, for example, recently removed the 'Do Not Send Voice Recordings' option from the privacy settings of its popular smart speaker, Alexa.
Some connected-device providers participate in data markets, selling your data to the highest bidder. Sometimes those purchasers include government agencies. So, instead of needing a warrant to track your whereabouts or learn about activity in your home, they can purchase or access Internet of Things records.
A connected device can also compromise the data privacy of someone who just happens to be nearby.
Connected cars
Cars have joined the ranks of the Internet of Things. The 2021 Chevy Suburban that helped convict Alex Murdaugh simply tracked information about the vehicle. This included the vehicle's speed, the turning radius of the steering wheel, and time stamps.
Most modern vehicles also incorporate data from external sources. GPS data and infotainment systems that connect to cellphones also track the vehicle's movements. All of this data can also be used to track the whereabouts and behavior of drivers and other people in the vehicles.
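The kind of timestamped speed and steering records described above can be sketched in a few lines of code. This is a minimal illustration of how such logs let someone reconstruct when a vehicle was parked versus moving; the field names and values are hypothetical, not GM's actual telemetry schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative shape of a vehicle event-data record: timestamp,
# speed, and steering angle (hypothetical field names).
@dataclass
class VehicleRecord:
    timestamp: datetime
    speed_mph: float
    steering_angle_deg: float

def reconstruct_movement(records):
    """Collapse raw records into a stopped/moving timeline --
    the kind of narrative such logs can support."""
    segments = []
    for r in sorted(records, key=lambda r: r.timestamp):
        state = "stopped" if r.speed_mph < 1.0 else "moving"
        # Start a new segment whenever the state changes.
        if not segments or segments[-1][0] != state:
            segments.append((state, r.timestamp))
    return segments

t0 = datetime(2021, 6, 7, 21, 0)
log = [
    VehicleRecord(t0, 0.0, 0.0),
    VehicleRecord(t0 + timedelta(minutes=2), 42.0, -5.0),
    VehicleRecord(t0 + timedelta(minutes=9), 0.5, 0.0),
]
print([state for state, _ in reconstruct_movement(log)])
# → ['stopped', 'moving', 'stopped']
```

Even this toy timeline shows why such records are so revealing: three data points already say when the car arrived, when it left, and when it stopped again.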
And as vehicles become increasingly automated, they need to make driving decisions in increasingly complex situations. To make safe driving decisions, they need data about the world around them. They need to know the size, speed, and behavior of all the nearby vehicles on the roadway, moment to moment. They need to instantly identify the best way to avoid a pedestrian, cyclist or other object entering the roadway.
If you and I are driving in separate cars on the same roadway, my car is collecting information about you. And if my vehicle is connected, then data about you is being shared with other cars and car companies. In other words, if a Tesla had been present at the scene of the Murdaugh murders, its outward-facing cameras could have captured footage. Bubba's testimony might not have been necessary.
Spillover data collection
Internet of Things devices generate data from similar situations in a highly structured way. Therefore, what data collectors learn about me from my connected device may also give them insights about someone else in a similar situation.
Take smart meters that share information with the water utility every 15 minutes. Imagine a subdivision with a narrow range of house and yard sizes. Water usage should be relatively comparable for each household. Data from even just a couple of houses can give a good sense of what water use should be for everyone in the neighborhood. Without actually collecting data from each house, data from connected devices reveals potentially private information about similarly situated people.
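The smart-meter scenario above can be made concrete with a short sketch. Assuming hypothetical 15-minute readings from just two sampled houses, a baseline inferred from them supports guesses about a third house that never shared its data; all numbers and names here are invented for illustration.

```python
import statistics

# Hypothetical 15-minute water readings (gallons) from two
# sampled houses in a subdivision of similar homes.
house_a = [3.1, 0.0, 0.4, 12.5, 8.2, 0.0, 0.3, 5.0]
house_b = [2.8, 0.1, 0.5, 11.9, 7.8, 0.2, 0.4, 4.6]

# Per-interval neighborhood baseline inferred from the sample.
baseline = [statistics.mean(pair) for pair in zip(house_a, house_b)]

def looks_unoccupied(readings, baseline, threshold=0.25):
    """Flag a house as likely empty if its total use falls far
    below the baseline inferred from its neighbors."""
    return sum(readings) < threshold * sum(baseline)

# A third house was never sampled, yet the baseline still
# supports an inference about it.
neighbor = [0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.1, 0.0]
print(looks_unoccupied(neighbor, baseline))  # → True
```

The privacy point is that the flagged household contributed no data at all; its near-zero usage only becomes meaningful, and revealing, against a baseline built from similar homes nearby.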
Data from IoT devices can also fuel insights into people who never use or make contact with these devices. Aggregated data from Oura Rings, for instance, could contribute to decisions a health insurer makes about you.
Connected devices are also changing. In addition to collecting data about the person using the device, a growing number of sensors collect information about the environment around that person.
Some of my research has examined what privacy means for people observed by vehicle sensor systems such as radar, lidar, and sonar. These technologies capture potentially very revealing information about people and their property. Even the most comprehensive privacy laws in the United States offer people little recourse for the impact to their privacy.
Civilian drones are capable of gathering data about other people. But people observed by drones would have a tough time learning that data about them exists and an even harder time controlling how that information might be used.
Meanwhile, artificial intelligence systems are expanding the ways Internet of Things data can affect the privacy of other people by automating the process of training IoT systems. AI chipmaker Nvidia has created a digital environment, or model, where people can upload their connected device data. This environment can help train IoT devices to 'predict the outcomes of the device's interactions with other people,' according to Nvidia.
Models like this make it easy for AI devices that you don't own to collect data or reach conclusions about you. In other words, IoT data processed by AI can make inferences about you, rendering you legible to the AI system even before you interact with an IoT device.
Looking forward
Internet of Things devices and the data they generate are here to stay. As the world becomes increasingly automated, I believe it's important to be more aware of the way connected devices may be affecting people's privacy.
The story of how vehicle data combined with cell data in the Murdaugh trial is a case in point. At the start of the trial, prosecutors came ready to show 'phone call logs and texts, steps recorded, apps asking for information, GPS locations, changes when the phone went from vertical portrait mode to horizontal landscape mode and back, and—key to the prosecution's case—when the camera was activated.'
But that alone was probably not enough to secure a conviction. During the trial, GM called and said something like 'oh wait, we found something,' according to the prosecution. That vehicle data, combined with the cellphone data, told a story that Alex Murdaugh could not deny.
There are at least two lessons from this story. First, not even GM fully realized all the data it had collected in its vehicles. It's important to be aware of just how much information IoT devices are collecting. Second, combining data from different IoT devices revealed incontestable details of Alex Murdaugh's activities. Away from criminal court, combining data from multiple IoT devices can have a profound effect on people's privacy.
If people's data privacy matters, how do we address this reality? One way of potentially protecting people's privacy is to make sure people and communities observed by connected devices have a direct say in what data the devices collect and how the data is used.
