Originally published in TechSpective on April 24, 2018.

Microservices have been touted as a revolutionary way of building applications in the cloud, which in turn is fueling the demand for containers. The symbiotic pairing of application portability with containers that each deliver a single function makes for an ideal platform. At scale, distributing discrete, compartmentalized jobs across dynamically provisioned containers becomes very attractive compared with heavyweight Virtual Machines (VMs).

Containers live up to the tenet of “Leave no trace.” Since every container does one thing and shares nothing, it cleans up after itself about as reliably as it delivers on its promised functionality. All is good thus far! However, this poses a significant challenge for security: security and DevSecOps teams spend countless sleepless nights working in overdrive to keep company assets protected. We have all heard one too many alarms signaling a breakdown of security protocols that leads to an RCE (Résumé Changing Event).

Let’s dissect the architecture of application portability further. Containers interact with multiple resources. These include:

  • Enlisting the resources of the host VM (process, memory, and network connectivity),
  • Tapping into data sources (local or remote) and performing computation (the assigned function),
  • Transforming intermediate data, and finally,
  • Rendering the output (local or remote) before cleaning up.
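The read, compute, render, clean-up flow above can be sketched as a minimal single-function container entrypoint. Everything here is illustrative and hypothetical (the record fields, the environment variable names); it only shows the pattern, not any particular workload:

```python
import json
import os


def run(in_path: str, out_path: str) -> None:
    """Do one job: read input, transform it, render output, then exit."""
    with open(in_path) as f:                # tap into a (local) data source
        records = json.load(f)
    transformed = [                         # perform the assigned function
        {**r, "total": r["price"] * r["qty"]} for r in records
    ]
    with open(out_path, "w") as f:          # render the output
        json.dump(transformed, f)
    # Process exit is the cleanup: the container stops and is reclaimed.


if __name__ == "__main__" and "INPUT_PATH" in os.environ:
    run(os.environ["INPUT_PATH"], os.environ["OUTPUT_PATH"])
```

The process exiting *is* the teardown, which is exactly why so little forensic evidence survives a container's lifetime.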

Containers deliver on a single function extremely well, but they do so with a lot of opacity. This opens a Pandora’s Box of security incidents that can be hard to track: exploits, resource hijacking, manipulation of resources, or worse yet, data exfiltration.

From a security viewpoint, the problem calls for a multi-pronged approach in which containers become first-class security citizens. Container security needs to be brought on par with that of VMs, with the same level of monitoring and visibility into processes, memory, and network resources that a VM provides.

Monitoring and collecting activity for visibility is a must throughout the lifecycle of a container, from provisioning through runtime to teardown. Provisioning can be done either via a system and service manager such as systemd or via orchestration software such as Kubernetes. At runtime, observations must be collected as processes spawn and die, memory allocations are made, network connections are established and torn down, and containers interact with system resources.
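To make the lifecycle side of this concrete, here is a hedged sketch of an event collector. It assumes lifecycle events arrive as JSON lines roughly in the shape emitted by `docker events --format '{{json .}}'` (a `status` field such as `start` or `die`, and a container `id`); the function name and record shape are assumptions, not any vendor's API:

```python
import json
from collections import defaultdict


def collect(lines):
    """Fold a stream of container lifecycle events into current state.

    Returns the set of currently live container ids and, per container,
    the ordered history of observed statuses.
    """
    live = set()
    history = defaultdict(list)
    for line in lines:
        event = json.loads(line)
        cid, status = event["id"], event["status"]
        history[cid].append(status)
        if status == "start":
            live.add(cid)
        elif status == "die":
            live.discard(cid)
    return live, history
```

In production the lines would stream from the engine's event API rather than an in-memory list; keeping the collector decoupled from the transport makes it easy to test and to replay recorded activity during an investigation.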

All of these observations can be fed continuously into machine-learning models and processed in real time to provide application-level visibility, detect malicious activity, and eliminate the need to sift manually through mountains of logs during detection and investigation.
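The machine-learning pipeline itself is beyond the scope of this article, but the core idea can be reduced to its simplest form: maintain a per-container baseline of an observed metric (say, outbound connections per minute) and flag readings that deviate sharply from it. A minimal statistical stand-in, with hypothetical names and a conventional three-sigma threshold:

```python
import statistics


def is_anomalous(baseline, value, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations
    away from the mean of the baseline window."""
    if len(baseline) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return value != mean  # flat baseline: any change stands out
    return abs(value - mean) / stdev > threshold
```

A real deployment would learn far richer behavioral models than a rolling z-score, but even a baseline this simple turns a raw event stream into a signal a security team can act on.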

The visibility gained will improve a security team’s response time by orders of magnitude, without requiring security staff to delve into the domain of application architecture: application behavior, data consumers and producers, design choices, and constraints.

As an avid user of containers for many of the functions that I build, deploy, and maintain, I am deeply aware of the security risks and gaps introduced by a container-based application architecture. It’s critical for fast-moving application teams deploying in the cloud to take proactive steps to close the gap on container security.