
Is your data exposed?
Let's start with a museum analogy. Artwork is, for all intents and purposes, read-only. As we were told as kids: 'Look with your eyes, not with your hands!' Museums even place protective glass in front of masterpieces. So why do we accept that protective layer for art, but never think to put one in front of our data?
In the world of security, we talk about access control, authorization, and authentication—lots of permissions floating around. Once granted, it's like giving people the green light to handle the artwork—or, in this case, your data.
Why the double standard? It's like asking why police stations have lockers for on-duty officers. The answer is simple: As Depeche Mode so wisely sang, 'People are people.' So why should we expect people to be any more responsible with critical data—data protected by law, whose compromise could cost companies millions, if not billions?
This brings us to the title of this article, 'Is Your Data Exposed?' The short and unfortunate answer is: Yes, it is. Behind all these controls, you have to ask yourself: can someone 'touch' the data? If so, I wouldn't hire you as a museum curator. Would you get the job if you allowed someone to touch the Mona Lisa? Absolutely not!
Now that I've exhausted my analogies, let's talk tech. If you could prevent people from accessing your data, would you? If this solution allowed for business continuity while keeping the data untouched, wouldn't that be appealing? If you could shield your data and hide it from prying eyes, wouldn't you?
But I know what you're thinking: 'This sounds impossible.' Many tech professionals might not even know such a solution exists. Well, don't worry, dear reader; I've got you covered. It does exist, and I'm here to tell you about it!
There are technologies that can shield your data, creating a protective layer between users and the data. Solutions that bridge the gap between security and backup. Wondering if you need this? Ask yourself: Are breaches still happening? If the answer is yes, then the gaps are real, and the struggle continues. It's time to explore alternative solutions. Thinking that cyber insurance is a magic bullet isn't going to address the root problem. Just like having car insurance doesn't mean you can drive recklessly, data owners still have a responsibility to protect their data.
If you've read my other articles, you know I love checklists. So, here's one to ponder: Is your data exposed?
Is your data accessible to human touch?
Can a non-database process access your database files (for example, can your admin log into the server and copy the database files without going through SQL Server Management Studio)? A quick way to check this is sketched below.
Are you 110% confident you can recover from an attack?
Are you 120% confident you can restore your data to its pre-attack state (and in a timely manner)?
Are you 130% confident you've done everything you can to protect it?
I could keep going, but I'll stop here.
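To make the question about non-database access concrete, here's a minimal sketch of the kind of check you could run. It's written in Python, assumes a Linux host, and uses a hypothetical PostgreSQL data directory and service account as placeholders; adapt the path and user to whatever database you actually run. All it does is flag data files that any account other than the database's own service user could read or change.

```python
#!/usr/bin/env python3
"""Flag database files that a non-database account could touch directly.

Minimal sketch, Linux only. DATA_DIR and DB_USER are hypothetical placeholders.
"""
import os
import pwd
import stat

DATA_DIR = "/var/lib/postgresql/16/main"  # hypothetical data directory
DB_USER = "postgres"                      # the only account that should own these files

def audit(data_dir: str, db_user: str) -> None:
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            path = os.path.join(root, name)
            info = os.stat(path)
            owner = pwd.getpwuid(info.st_uid).pw_name
            mode = stat.S_IMODE(info.st_mode)
            # Anything group- or world-accessible is reachable without going
            # through the database engine at all.
            if owner != db_user or mode & (stat.S_IRWXG | stat.S_IRWXO):
                print(f"EXPOSED: {path} (owner={owner}, mode={oct(mode)})")

if __name__ == "__main__":
    audit(DATA_DIR, DB_USER)
```

A clean report only means the most obvious door is closed. It says nothing about what a compromised database login or a stolen service-account credential can still do, which is exactly the gap a shielding layer is meant to cover.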
If any of these questions are keeping you up at night, it's time to stop relying solely on insurance and start acting responsibly. Shield your data! And if you're feeling bold, show up to work dressed as a Roman gladiator, stand on your desk, and ask your coworkers, 'Are you not entertained?' If they give you strange looks, tell them this article gave you permission.
All jokes aside, now is the time to embrace a solution that protects your data from threats, ransomware, and human error. You've got this. But how do you get started?
Start by evaluating your current data security stack. Is your approach reactive (e.g., backup and recovery) or proactive (e.g., preventing unauthorized modification or encryption in the first place)? A true data-shielding solution should function like protective glass: users can interact with applications without altering the underlying data.
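To make the protective-glass idea concrete at the file level, here is a toy Python sketch, assuming a Linux host with root privileges and a hypothetical file path. It sets the filesystem's 'immutable' attribute with the standard chattr utility: the file stays fully readable, but no process can modify, rename, or delete it until the attribute is deliberately cleared, not even one running as root. Purpose-built shielding products work at a different layer and keep applications' legitimate write paths open; this is only an illustration of the read-without-modify principle.

```python
import subprocess

def shield(path: str) -> None:
    """Set the Linux 'immutable' attribute: the file stays readable, but writes,
    renames, and deletes fail until the attribute is removed (requires root)."""
    subprocess.run(["chattr", "+i", path], check=True)

def unshield(path: str) -> None:
    """Clear the attribute for a controlled maintenance window."""
    subprocess.run(["chattr", "-i", path], check=True)

if __name__ == "__main__":
    shield("/srv/finance/q3-results.csv")  # hypothetical file; reads keep working
    # Ransomware (or a careless admin) trying to overwrite, encrypt, or delete it
    # now gets 'Operation not permitted' instead of silently succeeding.
```

The trade-off is obvious: legitimate updates also need the flag lifted, which is why real shielding solutions focus on letting applications write through controlled paths while blocking everything else.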
Here are some considerations to help get started:
Know What You're Shielding: Catalog your critical data assets. This includes databases, file shares, cloud storage, endpoints—anywhere sensitive information lives.
Evaluate Your Environment: Understand where you're vulnerable. Are users accessing data directly? Are backups unprotected or easily corrupted? What's your exposure from insider threats or third-party access? (One safeguard for the backup question is sketched after this list.)
Prioritize Business Continuity: Look for solutions that don't interrupt workflows. The best approaches allow data to be used without the ability to modify it, so operations continue smoothly, even under attack.
Avoid Detection-Only Tools: Signature-based or AI-only defenses may miss zero-day or fileless malware. Choose technologies that don't rely solely on detection but can prevent changes to data, even if malware slips past your defenses.
Plan For Implementation Challenges: Expect resistance. Shielding solutions can be misunderstood as restrictive or complex. Involve stakeholders early. Pilot deployments with limited scope can help prove the concept and win support.
Watch For Red Flags: Beware of solutions that require constant updates to remain effective, only protect files during backup rather than in real time, or impose heavy system performance trade-offs.
Budget For Success: While cost varies, shielding technologies are far more affordable than the cost of a single breach. But budget for more than software: include training, monitoring, and change management in your planning.
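On the unprotected-backup question above, one widely available safeguard is object storage with a write-once retention lock. The sketch below, in Python with boto3, uploads a backup to Amazon S3 with Object Lock in compliance mode, so the copy cannot be overwritten or deleted before its retention date, even by an account administrator. The bucket and key names are hypothetical, and the bucket must have been created with Object Lock enabled. Keep in mind this protects copies, not live data, which is precisely the 'backup-only' limitation called out in the red-flags item.

```python
"""Write an immutable backup copy using S3 Object Lock (compliance mode).

Assumes boto3 is installed, AWS credentials are configured, and the target
bucket was created with Object Lock enabled. Names below are hypothetical.
"""
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

with open("customers.bak", "rb") as backup:
    s3.put_object(
        Bucket="example-immutable-backups",
        Key="db/customers-2025-08-01.bak",
        Body=backup,
        ChecksumAlgorithm="SHA256",   # Object Lock uploads require an integrity checksum
        ObjectLockMode="COMPLIANCE",  # retention cannot be shortened or removed, even by admins
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```

Pair something like this with real-time shielding of the live data, and a ransomware event becomes an inconvenience rather than a headline.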
Finally, don't expect perfection overnight. Shielding data is about adding a resilient layer of protection to an already complex system. It's not a silver bullet—but it's an essential step toward ensuring that the next breach doesn't become your headline.
Just as Beyoncé encouraged us to 'put a ring on it,' I'm here to encourage you to put a shield on it—and protect what matters most.
