The Biggest Cloud Breaches of 2019 and How to Avoid Them in 2020

2019 has been a year of shocking security breaches in the cloud, a trend that will only continue unless businesses make a significant course correction. Gartner recently updated its evaluation of cloud security and concluded: “Through 2025, 99% of cloud security failures will be the customer’s fault.” This is a sobering thought, but in the same article Gartner also warns of the danger of exaggerated fears. The cloud is undeniably the future of ecommerce, and it can be secured with a bit of wisdom and best practices that are still evolving as the technology matures.

Still, the damage of a data breach should not be underestimated. If 2019 has shown us anything, it is the scale that data exposures can reach. We have more data now than ever before; the cloud has freed us to scale almost endlessly, and our data has scaled along with it. As what was once thousands of security events becomes millions, we realize our old techniques are not up to the task. Gartner further warned: “Through 2025, 90% of the organizations that fail to control public cloud use will inappropriately share sensitive data.” For the companies unfortunate enough to be among them, there are significant repercussions for brand reputation, as well as increased legal exposure and obligations. As the world comes to understand the power of data and the erosion of privacy, new and tougher legislation can be expected as governments try to protect their citizens’ rights.

The challenge of securing the cloud is not something that can be ignored or delayed. Fortunately, we have no shortage of cautionary tales to reveal the dangers. Let’s have a look at the five biggest ones.

 

April 2: Facebook (Cultura Colectiva)

Breach size: 540 million records, 146 GB of data
Need: Cloud Configuration Compliance, Stronger Vetting of Business Partners

 

CAUSE
For many of its users, Facebook’s biggest attraction is the apps offered by third parties. Who can resist a game of Scrabble with an old high school friend across the country? Yet the same applications that attract users can be a conduit for data compromise. These third parties often do not operate to the same security standards and can expose shared data left on unsecured servers. The UpGuard Cyber Risk team revealed in a web posting that Cultura Colectiva, a digital media company operating out of Mexico, had exposed over 540 million records from Facebook users on an improperly secured AWS server. These records contained data that could be used to profile users in great detail, including user IDs, account names, likes, and comments.

A similar debacle with a different third-party app was discovered at about the same time, though its scale is dwarfed by the headline-grabbing Cultura Colectiva incident. A less popular app called “At the Pool” exposed a still-significant 22,000 passwords by leaving an unencrypted backup on an unsecured S3 bucket in AWS. The passwords were stored in plain text rather than hashed, as modern companies typically do. Password reuse remains a reality for many users, even as excellent password managers make progress in changing those habits. When a large cache of passwords is exposed like this, it generally goes on sale on the dark web, where criminals try the credentials at other sites where the user may have an account, a technique called “credential stuffing.” In a further complication, “At the Pool” has been out of business for about five years. Fortunately, that means the data is not fresh, and hopefully most users have changed their passwords in that time. It is a sobering illustration of how a company can lose control of data once it is shared with unreliable partners.

PREVENTION
Cloud Configuration Compliance is one of the newer disciplines that has evolved with the rise of IaaS providers like Amazon. Just as servers, network equipment, laptops, and file servers have long been routinely scanned to ensure critical security settings are correct, it is now essential to scan your IaaS settings to ensure continual compliance. With the network infrastructure, hardware, and much of the identity management now off-loaded to a cloud provider, it is easy to overlook the importance of closely inspecting the security controls set with the provider. A compliance tool with close integration and the ability to check for best practices is essential; leaving such tasks to manual configuration is a certain way to produce errors and inconsistencies that could cost dearly.
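
As an illustration of what such automated checking can look like, here is a minimal sketch in Python using boto3 (it assumes AWS credentials are already configured). It simply flags any S3 bucket whose Public Access Block settings are missing or incomplete; a real compliance tool covers far more settings than this.

```python
# Minimal illustration of an automated configuration check, assuming boto3
# and AWS credentials are available. Flags S3 buckets whose Public Access
# Block settings are missing or incomplete.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_is_locked_down(bucket_name):
    """Return True only if all four Public Access Block settings are enabled."""
    try:
        config = s3.get_public_access_block(Bucket=bucket_name)["PublicAccessBlockConfiguration"]
    except ClientError:
        # No Public Access Block configuration at all -- treat as non-compliant.
        return False
    return all(config.get(flag) for flag in (
        "BlockPublicAcls", "IgnorePublicAcls",
        "BlockPublicPolicy", "RestrictPublicBuckets",
    ))

for bucket in s3.list_buckets()["Buckets"]:
    if not bucket_is_locked_down(bucket["Name"]):
        print(f"Non-compliant bucket: {bucket['Name']}")
```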

AWS does have useful features that can assist with parts of this. AWS Service Control Policies (SCPs) let you set up inheritable policies that limit the maximum permissions of users and resources. Yet these policies by themselves are not sufficient to manage rights; Amazon accurately describes them as “guardrails”.
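
SCPs are written as JSON policy documents. The sketch below is a hypothetical guardrail, not anything from Facebook’s actual environment: it denies member accounts the ability to change bucket-level Public Access Block settings, and attaches the policy using boto3’s Organizations API.

```python
# Illustrative only: a hypothetical SCP "guardrail" that denies changes to
# bucket-level S3 Public Access Block settings in member accounts. The policy
# content and name are assumptions for the sake of the example.
import json
import boto3

guardrail = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["s3:PutBucketPublicAccessBlock"],
        "Resource": "*",
    }],
}

org = boto3.client("organizations")
response = org.create_policy(
    Name="deny-public-access-block-changes",
    Description="Guardrail: keep S3 Public Access Block settings in place",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(guardrail),
)
print(f"Created guardrail policy {response['Policy']['PolicySummary']['Id']}")
```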

There are still other important controls that could have prevented a breach like this. Encryption at rest would have provided defense in depth and likely prevented the exposure even with the weak permissions. Strong business practices of controlling data flows, vetting business partners, and not retaining data longer than it is useful could all have lessened the impact of this event for Facebook and its customers.
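
Encryption at rest is another setting that can be verified automatically. A small follow-on check in the same spirit as the earlier sketch, again assuming boto3, reports buckets with no default encryption configured.

```python
# Sketch of an encryption-at-rest check, assuming boto3 and AWS credentials.
# Reports buckets that have no default server-side encryption configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    try:
        s3.get_bucket_encryption(Bucket=bucket["Name"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"No default encryption: {bucket['Name']}")
        else:
            raise
```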

 

April 25th: Docker Hub

Breach size: 190,000 accounts
Need: Container Visibility (HIDS)

 

CAUSE
Adopters of containerization were dealt a blow this year when the popular Docker Hub repository was compromised, exposing 190,000 accounts. “On Thursday, April 25th, 2019, we discovered unauthorized access to a single Hub database storing a subset of non-financial user data,” Kent Lamb, director of Docker Support, said in a statement posted to the Docker website. “Upon discovery, we acted quickly to intervene and secure the site.”

The breach reached only 5% of Docker Hub customers, but it included the compromise of tokens and access keys for autobuild functions in GitHub and Bitbucket. That raises the possibility of bypassing authentication, injecting malicious code into the production pipelines of many companies, and perhaps also obtaining copies of proprietary code. Given the gravity of those possibilities, Docker took the unusual step of revoking the tokens before notifying customers. While some were upset by the disruption, others acknowledged the wisdom of rapidly removing the threat. Password reset notifications were also sent to those affected.

Companies using Docker had to regenerate keys to bring their autobuild features back up. They also needed to trace back through log files to identify potential malicious activity. Docker did not reveal the cause of the breach, describing it only as “a brief period of unauthorized access.” We can only speculate that the attacker got hold of credentials or exploited the servers involved. Meanwhile, Docker customers are left with the uneasy realization that their containers could have been tampered with.

PREVENTION
With containerization, visibility is key. Containers virtualize programs at the operating system level, taking advantage of the process-separation features of Linux to run multiple containers on the same kernel and thus lower overhead. This means that containers communicate with each other and with the host operating system through well-understood channels. These inter-process communications should be monitored, as should events occurring inside the containers. A powerful host intrusion detection system (HIDS) can baseline normal activity and surface events that indicate workloads have changed or anomalous behavior has appeared.
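
A production HIDS is far more sophisticated than anything that fits in a blog post, but the toy sketch below (which assumes the Python psutil package is installed) illustrates the baselining idea: record which processes are normally present on a host, then flag anything new.

```python
# Toy illustration of baselining, not a production HIDS. A real agent would
# also watch files, sockets, users, and container runtimes, not just names.
import time
import psutil

def running_process_names():
    return {p.info["name"] for p in psutil.process_iter(attrs=["name"])}

# Baseline learned during a known-good period.
baseline = running_process_names()

while True:
    unexpected = running_process_names() - baseline
    for name in unexpected:
        print(f"Anomaly: process '{name}' not seen in baseline")
    time.sleep(30)
```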

Additionally, companies may choose in a situation like this to restore backups known to be trustworthy. Hashing and encryption at rest can add confidence in image integrity in such a situation. It may also be prudent to run an image scanning tool to discover any malicious code that may have been injected.
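
Docker images already carry content-addressable digests, but the general idea of integrity checking is simple enough to sketch. The example below, with a placeholder file name and digest, compares an exported image archive against a hash recorded at build time.

```python
# Minimal sketch of verifying an artifact against a cryptographic hash.
# The file path and the known-good digest are placeholders, not real values.
import hashlib

KNOWN_GOOD_SHA256 = "replace-with-digest-recorded-at-build-time"

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("app-image.tar") != KNOWN_GOOD_SHA256:
    print("Image digest mismatch: do not deploy")
```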

 

May 20th: Instagram (Chtrbox)

Breach size: 49 Million Records
Need: Host-Based Intrusion Detection System (HIDS) & Cloud Configuration Compliance

 

CAUSE
This was an especially bad year for Facebook, as Instagram, the photo and video sharing social network it acquired, had one of its business partners expose an AWS database with almost 50 million records from its users. With no password required to access it, the database, which was growing by the hour, contained data users had shared publicly, such as bios, profile pictures, and follower counts. More concerning, those records were linked to private data as well, such as email addresses and phone numbers. Among the records were many high-profile celebrities and influencers. Unlike the earlier Facebook exposure, this data was fresh, and new records were still coming in.

The database appears to be owned by Mumbai-based Chtrbox, a media firm that bills itself as the provider of an “influencer marketing tool” that pays influencers to post sponsored content from their accounts on services such as Instagram. For many of the accounts, payment methods, amounts, and even a metric that estimated the account’s worth were included in the data.

Once the company was notified of the exposure, it immediately pulled the database offline to contain the breach. This is not the first time Instagram has been at the center of a security issue: two years ago, the service disclosed a bug in its developer API that enabled attackers to exfiltrate the personal contact information of six million accounts. The company determined the data was then sold by the attackers for bitcoin.

While the exact cause of this incident has not been revealed, it is increasingly common for cloud environments to have resources left wide open because of missing passwords or other lax security practices. In the Chtrbox case, as with many of these incidents, the company was not aware of the issue until notified by a third party. These organizations are not applying effective security and compliance monitoring, breach detection, or anomaly awareness.

PREVENTION
Organizations that run workloads in the cloud move fast, from development to runtime. The entire nature of their infrastructure is predicated on being able to rapidly spin up new compute and storage to meet business and technology demands. But moving fast sometimes comes at the cost of neglecting security best practices: demanding strong passwords for resources, requiring multi-factor authentication (MFA), rotating keys regularly, adhering to the principle of least privilege, and a host of other practices that should be gospel.
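
Several of those practices can be checked automatically. The sketch below, assuming boto3 and suitable credentials, uses the IAM API to report users with no MFA device and access keys older than 90 days; the 90-day threshold is an assumption for illustration, not a universal rule.

```python
# Hedged sketch of two IAM hygiene checks: users without an MFA device, and
# access keys older than an assumed 90-day rotation window.
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
max_key_age = timedelta(days=90)  # assumed policy, adjust to your own standard
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"{name}: no MFA device enrolled")
        for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
            if now - key["CreateDate"] > max_key_age:
                print(f"{name}: access key {key['AccessKeyId']} is overdue for rotation")
```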

It’s also critical that organizations have insight into their cloud accounts, workloads, and container infrastructure. Without that insight, an organization has information gaps that prevent it from detecting misconfigurations, unenforced policies, or other issues that could easily lead to a breach.

 

July 29: Capital One

Breach size: 80,000 Bank Account Numbers, Over 1 Million Government ID Numbers
Need: Cloud Workload Visibility (HIDS) & AWS CloudTrail Log Analysis

 

CAUSE
The biggest breach headline of 2019 has to be Capital One’s. Its disclosure had customers asking themselves, “What’s in your wallet?” A former Amazon software engineer from Seattle who had been operating online under the handle “Erratic” was arrested after hacking Capital One using a server-side request forgery (SSRF) attack. She used the technique to obtain credentials for a role that had access to sensitive information stored in S3. She discussed her exploits in some detail on her Slack channel and posted instructions for duplicating the attack on GitHub.

The New York Times reported the damage at over 80,000 bank account numbers, 140,000 Social Security numbers, and 1 million Canadian Social Insurance Numbers. From her online posts, it is clear Capital One was not her only victim.

We’ve seen this before, and we’ll see it again. In the spirit of innovation and transformation, Capital One adopted the strategy of many enterprises that have moved fast to migrate to the cloud. Whether or not speed played a role in this case, it illustrates the fact that cloud (and multicloud and hybrid) environments are complex. Yes, they’ve been sold as a way to reduce overhead and increase all manner of efficiencies, and there’s no doubt they do those things, but they also bring a model of continuous change and the need to manage all of that change. Cloud environments change at a scale beyond what even massive numbers of humans could keep up with; when that change isn’t managed correctly, configurations aren’t watched, access isn’t controlled, and oversights become crushing problems from which disentanglement is nearly impossible.

Capital One appears to be handling this issue in a very un-Equifax-like way; it is trying to get ahead of the story by taking responsibility and cooperating. But brand value and company reputation are always at stake, especially in an age when data is among the most valuable assets a corporation owns.

PREVENTION
The solution to detecting attacks like this lies in anomaly detection, both on the server and in the API audit logs. By monitoring processes on the server and baselining their behaviors, you can detect anomalous actions such as unusual inbound connections or new internal connections. For instance, if a process doesn’t normally communicate with a service such as the EC2 metadata IP in AWS, new connections to it will be flagged. Having that process-to-process visibility is a critical factor in building strong, effective security into any cloud environment.
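
As a narrow, illustrative example of that idea (not a substitute for a real HIDS, which baselines behavior per process rather than hard-coding one address), the snippet below uses the Python psutil package to flag any process with an open connection to the EC2 instance metadata address.

```python
# Toy example: flag processes with an open connection to the EC2 instance
# metadata address. Listing all connections may require elevated privileges.
import psutil

METADATA_IP = "169.254.169.254"

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.raddr.ip == METADATA_IP and conn.pid:
        name = psutil.Process(conn.pid).name()
        print(f"Process '{name}' (pid {conn.pid}) has a connection to the metadata service")
```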

Cloud providers offer powerful audit logging to their customers, such as AWS CloudTrail, which can provide some of that visibility at the network and user layers. But this data can be massive and unwieldy, so a platform that can sort through the noise to find the important signals is essential. In an attack such as this, being able to detect when a role is used from a new location is critical to catching anomalous and malicious behavior, and that type of analysis can be very difficult to perform in-house.
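
To make the idea concrete, here is a simplified sketch that mines CloudTrail for AssumeRole events coming from unfamiliar source addresses using boto3’s lookup_events API. The “known” address list is a placeholder; a real platform models location and behavior continuously rather than keeping a static allow list.

```python
# Simplified sketch of scanning CloudTrail for role use from unfamiliar
# addresses. KNOWN_SOURCE_IPS is a placeholder, not a real allow list.
import json
import boto3

KNOWN_SOURCE_IPS = {"203.0.113.10"}  # example value only

cloudtrail = boto3.client("cloudtrail")
pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "AssumeRole"}]
)

for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        source_ip = detail.get("sourceIPAddress")
        if source_ip and source_ip not in KNOWN_SOURCE_IPS:
            print(f"AssumeRole from unfamiliar address {source_ip} at {detail.get('eventTime')}")
```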

The solution to these potential gaps in cloud security is one that monitors and logs all inter-process activity, even activity occurring on the same host. You need a host-based intrusion detection system designed to monitor process hierarchy, process and machine communications, changes in user privileges, internal and external data transfers, and other cloud activity. An effective system looks across all layers and analyzes activity against normalized behavior, giving a continuous, real-time view even across short-lived services that may exist for only a few minutes.


 

Sept. 13: Autoclerk

Breach size: Hundreds of thousands of booking reservations, over 179 GB of data
Need: Cloud Configuration Compliance

 

CAUSE
Autoclerk, a hotel reservation management system, had an unsecured Elasticsearch database hosted in AWS that exposed hundreds of thousands of booking reservations. The system was heavily used by military personnel, and the exposed data revealed sensitive information about military travel, including that of high-ranking officers and troops being deployed.

vpnMentor, whose researchers discovered the data leak, wrote a detailed blog post describing their findings: “Our team viewed logs for U.S. army generals traveling to Moscow, Tel Aviv, and many more destinations. We also found their email address, phone numbers, and other sensitive personal data.”

Elasticsearch is typically used for big data, and a compromise of the database can yield substantial information. Elasticsearch is also challenging to secure, with many of the most important security features reserved for premium licenses, and it is easily misconfigured. Many companies set up their datasets with no granular access controls; in such a case, a compromised password could give access to all of the data.

PREVENTION
Like the other incidents we have discussed, this one would have benefited from Cloud Configuration Compliance. When setting and maintaining appropriate permissions is complicated, it is best to have a tool that can check your settings and produce an actionable report of issues.
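
One narrow example of such a check, assuming boto3 and workloads hosted on EC2, is to look for security groups that expose Elasticsearch’s default port (9200) to the entire internet.

```python
# Sketch of a single configuration check: find EC2 security groups that
# allow 0.0.0.0/0 to reach the default Elasticsearch port, 9200.
import boto3

ec2 = boto3.client("ec2")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        # An IpProtocol of "-1" means all traffic; otherwise check the port range.
        covers_9200 = rule.get("IpProtocol") == "-1" or (
            rule.get("FromPort") is not None and rule["FromPort"] <= 9200 <= rule["ToPort"]
        )
        if not covers_9200:
            continue
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
            print(f"Security group {group['GroupId']} exposes port 9200 to the internet")
```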

Accurate monitoring of network traffic would also likely have exposed the unusual transfer of a large data set. Such activity would be outside the application’s baseline and could trigger an alert to begin containment procedures.

Conclusion

There is a common thread running through all of the incidents examined: the greatest danger to businesses moving to the cloud is a failure to adapt security culture and tools to the new reality that cloud computing brings. Operations are more volatile and scale larger, and system admins need tools that can scale with the cloud and keep up with a dynamic environment driven by capacity on demand.

The need for a professional platform could not be clearer. Companies that have suffered this year have often done so because they lost control of their configurations and lacked the tools to provide the visibility and monitoring required to keep the cloud safe. Lacework has been an industry leader in this area. We have developed powerful solutions that lower the risk of human error by taking a comprehensive view of cloud configurations, even in multi-cloud environments. Our platform also gives you the insight needed to discover processes and files that are not behaving as designed and to alert you before an issue gets out of control.

 

Come chat with us. We would love to share more about how we can keep you off of next year’s list.

 

 

Photo by Hugo Jehanne on Unsplash.