
Anomaly Detection vs. Rules: Better Security Insights, Designed for the Modern Enterprise


Traditionally, monitoring tools – whether for security, applications, or infrastructure – require a considerable investment of time to configure the product and write rules specific to your environment. This is done so your team gets the right alerts on issues that run counter to your requirements and environmental setup. With innovations in machine learning and AI, we can now apply solutions built to understand behaviors, detect anomalies in those behaviors, and report on actual activity that might pose a threat.

In the area of security, this is a huge benefit, as it can be difficult to predict the behaviors of bad actors and write the appropriate rules. Anomaly detection delivers a more accurate assessment of the vulnerabilities in a cloud environment, and ultimately gives security teams fewer, more actionable alerts.

There are still skeptics who are not completely sold on this new approach. However, there are many advantages to using machine learning to detect anomalous activity within your environment, and it’s important that security, operations, and even business teams understand them:

Time to Value

So you’ve spent nine to ten months evaluating a product, selling your peers on it, and enduring a long, grueling procurement process. Great. Now you’ve got to implement it and spend considerable time configuring it to work the way you intend. First, you will need resources to write rules, which usually involves time for training, planning, testing, troubleshooting, and evaluation. That’s only if you can first find qualified people with a background in security and in creating and implementing rules. Time is also of the essence: you’ve already spent a great deal of it procuring the product, and now you’ll need to spend even more before you get value out of it.

The Lacework approach removes the rule-writing element because our unsupervised machine learning performs automated anomaly detection. Once the product is deployed, it begins to learn and understand your environment by analyzing data from your cloud accounts and workloads. From there it creates a baseline and automatically alerts you on any anomalous behavior. You get value almost immediately, and you don’t have to wait to find out whether your rules are working. You simply identify the resources you want to monitor and let the product do the rest.
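To make the baseline-then-alert idea concrete, here is a minimal sketch of statistical anomaly detection. This is an illustration only, not Lacework’s actual algorithm: it assumes activity can be summarized as simple counts (hypothetical hourly login counts here) and flags any observation that strays too far from the learned baseline.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Summarize historical activity counts as (mean, standard deviation)."""
    return mean(history), stdev(history)

def is_anomalous(observation, baseline, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = baseline
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# Hourly login counts observed during a hypothetical learning period.
history = [12, 15, 11, 14, 13, 12, 16, 14]
baseline = build_baseline(history)

print(is_anomalous(14, baseline))   # typical activity: False
print(is_anomalous(90, baseline))   # sudden spike: True
```

Note that nothing environment-specific is hand-written here: the baseline comes entirely from observed data, which is what removes the rule-writing step.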

Simple Operations & Maintenance

Let’s stick with our rule-writing scenario. Say you’ve got your rules written and you’re getting events. All good, right? Not really. What if your environment changes? Maybe you’ve deployed new applications that report data in a format your rules weren’t built for. Time to update the rules. Maybe you hardcoded IP addresses or hostnames into your rules and they’re no longer in use. Time to update the rules. Rules can be very brittle and require constant attention to remain effective. And every time a rule has to be created or rewritten, it potentially leaves vulnerabilities open that put your data and infrastructure at risk.
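The brittleness described above is easy to demonstrate. The sketch below is a hypothetical hand-written rule (the IPs and hostnames are invented, drawn from documentation address ranges): it works against today’s environment, but a fleet migration to new hostnames turns every event into noise, while freshly assigned attacker IPs match nothing.

```python
# A hand-written rule with hardcoded values: it silently degrades
# when hosts are renamed or IPs are reassigned.
KNOWN_BAD_IPS = {"203.0.113.7", "203.0.113.42"}   # hypothetical blocklist
ALLOWED_HOSTS = {"web-01", "web-02"}              # hypothetical inventory

def rule_matches(event):
    """Alert when traffic comes from a listed IP or an unrecognized host."""
    return event["src_ip"] in KNOWN_BAD_IPS or event["host"] not in ALLOWED_HOSTS

# Works as intended today:
print(rule_matches({"src_ip": "203.0.113.7", "host": "web-01"}))      # True

# After a migration renames hosts, every legitimate event alerts:
print(rule_matches({"src_ip": "198.51.100.9", "host": "web-01-new"})) # True (noise)
```

Keeping the hardcoded sets in sync with a changing environment is exactly the maintenance burden an anomaly-driven approach avoids.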

With an anomaly-driven approach, there are no rules to maintain, which means there is more time to focus on tasks that truly bring value to your organization.

Accurate Alerting & Low Alert Noise

Fine-tuning rules to alert accurately on critical events is a challenge. Often you end up writing rules so granular that they miss events that are just as important, and a missed event can have catastrophic consequences. Other times you write rules that are almost catch-alls, resulting in an overload of events. That leads to alert fatigue, where most events are ignored. This, too, can mean a missed event that leads to a critical security incident.

With automated anomaly detection, you are always notified of events that deviate from what is normal in your environment. This ensures you learn about the activity you want to know about without having to sift through all the noise.

The way enterprises approach security is changing to meet the rapid adoption of the public cloud. While agile and driven to meet business needs through innovative technology, the cloud has also introduced many potential risks and threats that are increasingly difficult to keep up with. Human effort doesn’t scale to meet these demands, nor can it adapt to the complexity of continuously updating rules, and organizations need to know their security posture keeps pace with how fast they need to move. Maybe it’s time for the new approach: using automated anomaly detection to identify bad actors within your environment. I encourage you to try Lacework to get a sense of our approach and see how it fits your cloud environment.
