It’s been reported that Citrix’s internal networks were attacked for six months before the breach was discovered. Citrix officials stated that the hackers “removed files from our systems, which may have included files containing information about our current and former employees and, in limited cases, information about beneficiaries and/or dependents.” Apparently, that information may have included Social Security numbers and personal financial information.
The company says the attack was likely carried out by brute-forcing commonly used passwords on accounts that weren’t protected by multi-factor authentication (MFA).
What’s the issue when this happens – is it lack of MFA enforcement? Does it happen because of poor visibility into your environments? Is it a result of weak security tools? It’s actually a combination of all of those things, plus a lot more. To put it simply, most organizations just don’t have the mindset, or the supporting tools, to operate safely in cloud, container, serverless, hybrid, or any other type of modern environment.
There is nothing good that comes from breaches like the Citrix one or the massive exposure of 80 million personal records on a Microsoft database. But if there is any silver lining in any of this, perhaps these examples can provide an anti-roadmap. We can learn from these misfortunes and create a strategy for operating in environments that are inherently insecure. Smart organizations will do this by taking measures to strengthen their enterprise security posture, eliminate vulnerabilities, and develop confidence that they can rapidly fix the issues they do become aware of. The key is visibility. But it’s also the use of security tools and approaches that are built for cloud-native applications and workloads. And organizations must get employees and other users to buy in to a new way of “doing” security.
Cloud environments are all about sharing: data, applications, access, resources – a continuous flow of activity that works best when shared. The traditional approach to monitoring that activity is with flow logs, which enable you to log traffic through all network interfaces in your environment. This is useful for identifying potentially malicious traffic, traffic that fits the signature of known threats, and traffic that violates defined policies.
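To make the flow-log idea concrete, here is a minimal sketch of scanning simplified flow-log records for traffic that matches a known-bad signature or violates a defined policy. The field names, IP addresses, and port list are illustrative assumptions, not any vendor’s actual log format:

```python
# Illustrative policy: ports that should never accept external traffic,
# plus a hypothetical threat-intel list of known-bad sources.
BLOCKED_PORTS = {23, 3389}          # e.g. Telnet and RDP
KNOWN_BAD_IPS = {"203.0.113.7"}     # hypothetical threat-intel entry

def violations(flow_records):
    """Yield (reason, record) for traffic that breaks policy or matches a signature."""
    for rec in flow_records:
        if rec["src_ip"] in KNOWN_BAD_IPS:
            yield ("known-bad source", rec)
        elif rec["dst_port"] in BLOCKED_PORTS and rec["action"] == "ACCEPT":
            yield ("blocked port accepted", rec)

# Sample records in a simplified, made-up shape
sample = [
    {"src_ip": "198.51.100.4", "dst_port": 443,  "action": "ACCEPT"},
    {"src_ip": "203.0.113.7",  "dst_port": 443,  "action": "ACCEPT"},
    {"src_ip": "198.51.100.9", "dst_port": 3389, "action": "ACCEPT"},
]

for reason, rec in violations(sample):
    print(reason, rec["src_ip"], rec["dst_port"])
```

The first sample record passes; the second and third are flagged for the threat-intel match and the exposed port, respectively.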
The problem with Citrix, however, is that they just didn’t know. And that’s simply not a good enough answer. Customers, employees, shareholders, members of the board – they want to be able to trust the organizations to whom they entrust their data.
Security teams are overwhelmed by the magnitude of the environments they manage, so they have to rely on shortcuts like dashboards and logs to make sense of activity and, specifically, whether that activity is threatening. Security dashboards are typically fueled by data that’s generated from a rules-based approach, where activity that runs counter to structured rules is flagged through alerts. This approach limits visibility, and you can’t secure what you can’t see.
Human effort doesn’t scale to meet these demands, nor can it keep up with the complexity of continuously updating rules, and organizations need to know their security posture keeps pace with how fast they need to move. This may be the time to adopt a new approach: automated anomaly detection to identify bad actors within your environment.
The Lacework approach removes the rule-writing element: unsupervised machine learning performs automated anomaly detection. Once the product is deployed, it begins to learn and understand your environment by analyzing data from your cloud accounts and workloads. From there it creates a baseline and automatically alerts you to any anomalous behavior. You’re getting value almost immediately, and you don’t need to wait to determine if your rules are working. You just identify the resources you’d like to monitor and let the product do the rest.
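The baseline-then-detect idea can be illustrated with a toy sketch (this is not Lacework’s actual algorithm): learn the mean and spread of a metric during a baseline period, then flag observations far outside that range. The metric, numbers, and threshold here are all made up for illustration:

```python
import statistics

def build_baseline(observations):
    """Summarize 'normal' as the mean and standard deviation of observed values."""
    return statistics.mean(observations), statistics.stdev(observations)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Baseline period: roughly 100 API calls per hour from this account (assumed data)
normal_hours = [96, 104, 99, 101, 98, 103, 100, 97]
baseline = build_baseline(normal_hours)

print(is_anomalous(102, baseline))    # False -- within normal variation
print(is_anomalous(5000, baseline))   # True  -- e.g. a sudden bulk data pull
```

Notice there are no rules to write: the same two functions work for any metric, and the definition of “abnormal” is derived from the environment’s own history rather than from a human’s guess at a threshold.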