Do Data Leaks Have to Be So Common?

Just as you would lock the doors to protect the physical assets in your shop, the data stored and transacted in your cloud must be secured as the valuable asset it is. The problem is that data changes, assets are spun up on the fly, and change is continuous. Data is used so routinely that internal users don’t always apply the level of configuration compliance and threat detection needed to understand where issues exist.

Business operations depend on data being available to new applications and through multiple channels. Oversights, misconfigurations, poorly defined requirements, and other operational issues all contribute to data being exposed, often without any awareness on the part of the security team.

There are countless prescriptive lists defining best practices for protecting data in the cloud, but breach prevention can’t be managed with a checklist alone. Data leaks have been around as long as people have transcribed and stored information, and because data is valuable, bad actors will continue to find new ways to infiltrate.

Data breaches typically come in one of five forms:

Unauthorized access: Some servers and databases grant access based on groups, titles, or some other broad definition. Access is not defined at a granular level, and it is often not checked or validated because the presumption is that everyone in Group A, for example, rightfully has access. This creates the potential for serious problems. First, as data is added to repositories, there is not always a check to recognize when the data becomes more sensitive; as it does, access should probably be narrowed. Second, when controls are managed by group, unauthorized users can gain access by impersonating valid users.

Ransomware: This is malicious software injected into your environment. It usually targets vital data and locks down files, repositories, and other systems where data access is critical to the organization’s business. Imagine your APIs being unable to reach a database of stored customer information: you won’t be transacting much business. Then again, imagine what happens if you don’t pay the demanded ransom: the attacker can leak that data, and you’ve lost the faith of your users (not to mention the legal fallout). Either way, the attacker controls your data.

Phishing: You’ve all seen what this looks like. An email, text, or website appears valid but is far from it. Once you engage with it, the attacker finds a way into your environment and can access sensitive data. Remember the attacker’s path: he or she just needs to get in, and once inside can roam more or less freely until detected.

Malware: Malware is software built to destroy data. There’s generally no ransom demand or warning; it’s a “find and kill” operation, typically distributed through bad links and fake emails.

Internal threats: Consider the unsettling fact that your own employees are the most likely cause of your leaks. Some of this is inadvertent: someone clicks a bad link in an email that says, “You gotta see this!” But you cannot ignore the possibility that leakers are lurking in your halls, and they may be the most dangerous because they know where the data lives.

These attacks are hard to define in a rules-based system because effective breaches often impersonate something legitimate. Take unauthorized access, for example. Employees leave their jobs, but organizations don’t follow a disciplined process to deactivate their accounts, so dead accounts remain active. Since those accounts still have access to cloud resources, someone using their credentials can operate freely within the environment while appearing legitimate.

With automated anomaly detection, however, you are always alerted to events that are not normal for your environment. This ensures you are notified of activity you want to know about without having to sift through the noise. In the case of dead accounts, anomaly detection recognizes and reports on accounts that suddenly become active after periods of inactivity. It can also identify when abnormally large amounts of data are extracted from databases, or when users access resources from unusual IP addresses.
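To make the idea concrete, here is a minimal, hypothetical sketch of the three checks just described: a dormant account becoming active, an abnormally large data extraction, and access from an unseen IP address. The function name, event fields, and thresholds are all invented for illustration; a real detector would learn these baselines automatically rather than hard-code them.

```python
from datetime import datetime, timedelta

# Assumed thresholds for this sketch; a real system learns them per account.
DORMANCY_THRESHOLD = timedelta(days=90)  # how long before an account counts as "dead"
VOLUME_MULTIPLIER = 10                   # flag reads 10x the account's average

def detect_anomalies(event, baseline):
    """Compare one access event against a per-account baseline.

    baseline keys: last_seen (datetime), avg_bytes (float), known_ips (set).
    event keys: timestamp (datetime), bytes_read (int), source_ip (str).
    """
    alerts = []
    if event["timestamp"] - baseline["last_seen"] > DORMANCY_THRESHOLD:
        alerts.append("dormant account suddenly active")
    if event["bytes_read"] > VOLUME_MULTIPLIER * baseline["avg_bytes"]:
        alerts.append("abnormally large data extraction")
    if event["source_ip"] not in baseline["known_ips"]:
        alerts.append("access from unusual IP address")
    return alerts

# Example: an account idle for five months pulls 2 MB from a new IP.
baseline = {
    "last_seen": datetime(2021, 1, 1),
    "avg_bytes": 50_000.0,
    "known_ips": {"10.0.0.5", "10.0.0.6"},
}
event = {
    "timestamp": datetime(2021, 6, 1),
    "bytes_read": 2_000_000,
    "source_ip": "203.0.113.9",
}
print(detect_anomalies(event, baseline))  # all three alerts fire
```

The point of the sketch is the shape of the comparison, not the thresholds: every event is judged against what is normal for that specific account, which is what lets the detector surface a dead account the moment it wakes up.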

Human effort doesn’t scale to meet these demands, nor can it adapt to the complexity of continuously updating rules, and organizations need to know their security posture keeps pace with how fast they move. The Lacework approach removes the rule-writing element through unsupervised machine learning that performs automated anomaly detection. Once the product is deployed, it begins to learn and understand your environment by analyzing data from your cloud accounts and workloads. From there it creates a baseline and automatically alerts you to any anomalous behavior.

Preventing data leaks requires continuous monitoring of inter-process activity, even activity occurring inside the same file. Enterprises need a host-based intrusion detection system designed to monitor process hierarchy, process and machine communications, changes in user privileges, internal and external data transfers, and all other cloud activity. An effective system looks across all layers and analyzes activity against normalized behavior, which gives a continuous, real-time view even across short-lived services that may exist for only a few minutes. That process-to-process visibility is a critical factor in building strong, effective security into any cloud environment.
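As an illustration of the process-hierarchy monitoring described above, here is a hypothetical sketch (the baseline pairs and function name are invented for the example): a detector that knows which parent-to-child process launches are normal and flags any pair outside that baseline, such as a web server spawning a shell.

```python
# Hypothetical sketch of process-hierarchy anomaly detection.
# The baseline of normal parent -> child launches would be learned
# from observed behavior; it is hard-coded here for illustration.
BASELINE_LAUNCHES = {
    ("systemd", "nginx"),  # init starting the web server
    ("nginx", "nginx"),    # worker processes
    ("sshd", "bash"),      # interactive admin sessions
}

def is_anomalous_launch(parent: str, child: str) -> bool:
    """Flag any parent -> child launch not seen in the baseline."""
    return (parent, child) not in BASELINE_LAUNCHES

# A web server spawning a shell was never observed in the baseline,
# which is a classic sign of compromise.
print(is_anomalous_launch("nginx", "bash"))  # True: anomalous
print(is_anomalous_launch("sshd", "bash"))   # False: normal
```

A production system would track far more context (privileges, network connections, file access) per process, but the core comparison is the same: judge each launch against the learned hierarchy rather than against a static rule list.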
