Trisha Paine, Head of Cloud Marketing Programs

Containers have become one of the fastest-growing technologies in the history of IT. Since Docker Hub’s inception in 2013, billions of container images have been downloaded, and hundreds of thousands of images are currently stored there. Countless more are stored in other container image repositories, both public and private, and virtually every company maintains some kind of container strategy.

Explaining Containers

Although Linux containers were popularized by Docker, Inc. in 2013, their history dates back much further; in fact, Docker containers are essentially a particularly user-friendly implementation of existing container technologies. Put simply, a container is an operating system process that is separated from other processes. Thanks to the Linux “namespaces” that isolate it, the running process appears to have its own filesystem, network stack, hostname, process table, and so on, making the container look much like a virtual machine (VM). Unlike a VM, however, a container does not run a separate operating system.
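
One quick way to see this isolation in practice is to compare the process table on the host with the one inside a container. The minimal sketch below does this with a short Python script; it assumes Docker is installed and can run the public alpine image, and it is illustrative rather than a recommended tool.

```python
# A minimal sketch of PID-namespace isolation: `ps` inside a fresh container
# sees only the container's own processes, while the same command on the host
# lists everything. Assumes Docker and the public `alpine` image are available.
import subprocess

# On the host, the process table typically holds hundreds of entries.
host_ps = subprocess.run(["ps", "-e"], capture_output=True, text=True)
print(f"host process count: {len(host_ps.stdout.splitlines()) - 1}")

# Inside the container, the PID namespace starts over: `ps` itself runs as
# PID 1, and only the container's own processes are visible.
container_ps = subprocess.run(
    ["docker", "run", "--rm", "alpine", "ps"],
    capture_output=True, text=True,
)
print("container process table:")
print(container_ps.stdout)
```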

Linux containers grew out of a 2006 Google initiative, later known as control groups (cgroups), intended to limit the resources a process could use. Container technology as we know it today was then implemented in LXC in 2008, and Docker popularized these technologies in a packaged form in 2013. Since then, Kubernetes has become a mainstream platform for orchestrating containerized applications, and alternatives to Docker, such as Red Hat’s Podman, have emerged based on common industry standards set by the Open Container Initiative.

The Popularity of Containers

The quick surge in popularity of containers can be attributed to a number of factors. First, while VMs were a relatively heavyweight solution to the application packaging problem, containers proved a cheaper, lightweight alternative for application development. Second, because a container image is a single object holding everything required to run an application on any Linux operating system, a whole host of software development challenges could be avoided, among them library conflicts, configuration management, and the overheads of disk usage and process management.

Third, Docker containers gained the support of heavyweight industry players from the start. Following Google’s announcement of its Kubernetes project in 2014, Red Hat embraced Docker and Kubernetes early on, investing significant amounts to re-platform its OpenShift platform-as-a-service on both technologies as the base for OpenShift version 3. Similarly, Microsoft invested major sums to make its Windows operating system Docker-capable. Finally, Amazon launched its own container orchestration system, Elastic Container Service (ECS), in 2014, released its Fargate serverless container platform in 2017, and unveiled Elastic Kubernetes Service (EKS) the following year.

Containers and DevOps

In addition to the widespread industry support and the convenience containers offer developers, another factor behind the technology’s rapid growth has been how neatly it dovetails with the DevOps movement. By packaging application components into discrete units, containers greatly facilitate software sharing between development and operations. This, in turn, has allowed development and operations engineers to collaborate on automating application pipelines through integrated, scripted testing and deployment phases.

Before containers existed, finding and managing validation environments for fixes and upgrades could be a painful process. It often involved hijacking previously created environments in varying states of repair or the costly process of building a new environment that didn’t necessarily reflect the existing reality. While containers don’t solve all these problems, when combined with the DevOps principles of automation and collaboration, they can significantly reduce overhead.
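
To make this concrete, the sketch below shows the disposable-environment pattern in a few lines of Python: a throwaway database is started for a test run and destroyed afterwards. It assumes Docker is installed and the public postgres image is available; the password and port are placeholders, not recommendations.

```python
# A minimal sketch of a disposable validation environment: start a fresh
# database per test run, then discard it, instead of reusing a long-lived
# shared environment. Assumes Docker and the public `postgres` image.
import subprocess

# Start an isolated Postgres instance; --rm deletes the container on stop.
container_id = subprocess.run(
    ["docker", "run", "-d", "--rm",
     "-e", "POSTGRES_PASSWORD=test-only",   # placeholder credential
     "-p", "5432:5432", "postgres:15"],
    capture_output=True, text=True, check=True,
).stdout.strip()

try:
    # ... run migrations and integration tests against localhost:5432 ...
    pass
finally:
    # Tear the environment down; nothing is left behind to "hijack" later.
    subprocess.run(["docker", "stop", container_id], check=True)
```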

Adoption Challenges and Remediations

Despite the many advancements this new technology has brought, container adoption is not without its challenges. These can be broken down into three main areas: security, technical limitations, and architecture.

Container Security

The explosion of container images available for download, and therefore of software available for use within any organization, has created a pressing need to address the security implications. An article published in 2019, “Docker containers are filled with vulnerabilities: Here’s how the top 1,000 fared,” claimed that “over 20% of files contained at least one vulnerability that would be considered high risk.” For context, official images are those actively maintained by the community as part of Docker’s “Official Images” program; they are also the most popular images, as measured by downloads.

For the most part, these vulnerabilities result from the difficulty of tracking the ever-growing web of dependencies between software stacks as more and more open source libraries are shared. Managing these security risks can therefore become a major concern for your organization as the threat of infiltration grows.
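
One common mitigation is to scan every image for known vulnerable dependencies before it ships. As an illustration only, the sketch below wires the open source Trivy scanner into a build step with Python; the image name is hypothetical, and any comparable scanner could be substituted.

```python
# A minimal sketch of automated image scanning in a build pipeline, using the
# open source Trivy CLI as an example scanner. The image name is hypothetical.
import json
import subprocess
import sys

IMAGE = "mycompany/webapp:1.4.2"  # hypothetical image under review

scan = subprocess.run(
    ["trivy", "image", "--format", "json",
     "--severity", "HIGH,CRITICAL", IMAGE],
    capture_output=True, text=True, check=True,
)

# Trivy reports a list of results, each with its own vulnerability list.
report = json.loads(scan.stdout)
findings = [
    vuln
    for result in report.get("Results", [])
    for vuln in result.get("Vulnerabilities") or []
]

# Fail the build if any high-risk dependency slipped into the image.
if findings:
    print(f"{len(findings)} high/critical vulnerabilities in {IMAGE}")
    sys.exit(1)
```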

These concerns have created a gap that many tools, both commercial and open source, have tried to fill. For instance, CloudGuard and others provide managed services and software solutions that let you centrally manage container workloads, ensure they meet compliance requirements, and thus reduce risk. CloudGuard goes one step further by also auto-remediating risks and providing active protection to prevent attacks. These tools enable DevOps teams to deploy containers quickly while allowing security teams to monitor and automatically prevent threats to the container environment.

Technical Limitations

While containers have proven ideal for compute-heavy stateless workloads, such as web and application servers, they are less mature when it comes to managing stateful systems. Containers excel where transience and flexibility of location are desired, but these qualities don’t translate as well to stateful systems like databases, which tend to be anchored wherever their data is stored. Databases also tend to be high-value workloads with high uptime demands, which IT teams will take longer to entrust to a relatively new platform without much of a track record.

Moving high-performance workloads to containers has also been slower than for stateless workloads. Since most Kubernetes clusters use some kind of shared tenancy model, software performance can be less predictable, as each workload shares its host’s resources with others. By default, the Kubernetes networking model also adds processing to every network request, which can reduce the efficiency of high-performance workloads. Moreover, running Kubernetes (or the Docker daemon, if Kubernetes is not being used) carries a per-host overhead that reduces the resources available to the application.
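
One way teams rein in this unpredictability is to reserve resources explicitly, so that a sensitive workload is never contending with its neighbors. The sketch below does this with the official kubernetes Python client; the workload name, image, and sizes are illustrative assumptions, not a prescription.

```python
# A minimal sketch of taming shared tenancy: setting requests equal to limits
# places the pod in Kubernetes' "Guaranteed" QoS class, so its CPU and memory
# are reserved rather than contended. Names and sizes are illustrative.
from kubernetes import client

container = client.V1Container(
    name="analytics",
    image="mycompany/analytics:2.1",  # hypothetical high-performance workload
    resources=client.V1ResourceRequirements(
        # requests == limits -> Guaranteed QoS: the scheduler reserves the
        # full amount, and neighboring workloads cannot contend for it.
        requests={"cpu": "2", "memory": "4Gi"},
        limits={"cpu": "2", "memory": "4Gi"},
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="analytics"),
    spec=client.V1PodSpec(containers=[container]),
)
```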

To address these challenges, carefully evaluate the workloads you want to run on containers, and move only those that are a good fit for containerization. Leaving stable, high-performance, and mission-critical database workloads where they are is a sound strategy while the container landscape matures. These workloads can be revisited later, once you (and the industry) have more experience running such applications reliably.

Container Architecture

Getting the most out of your containers may require rearchitecting your application. If you only use containers as isolated processes, the amount of rearchitecting required may be minimal; but if you’ve gone as far as adopting Kubernetes, this may involve a complete rethink of how the application works.

Traditional three-tier applications, for example, tend to assume a stable set of IP addresses for the application servers behind the load balancer. That assumption no longer holds if the application is naively ported to a standard Kubernetes deployment, where pod IP addresses change whenever pods are rescheduled. You may also want to split your application’s internal components into separate containers, or even into separate services if you want to go down the microservices route.
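
The usual fix for the stable-address assumption is a Kubernetes Service, which gives the application tier a single stable virtual IP and DNS name while the pods behind it come and go. The sketch below builds one with the official kubernetes Python client; all names and ports are illustrative.

```python
# A minimal sketch of stable addressing in Kubernetes: a Service routes to
# whichever pods currently match the label selector, regardless of their
# ephemeral IP addresses. Names and ports are illustrative.
from kubernetes import client

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="app-tier"),
    spec=client.V1ServiceSpec(
        # Route to the pods carrying this label, wherever they are running.
        selector={"app": "app-tier"},
        ports=[client.V1ServicePort(port=8080, target_port=8080)],
    ),
)
# The load balancer can now target "app-tier:8080", a stable in-cluster DNS
# name, instead of a fixed list of application-server IPs.
```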

While this may seem like a major obstacle, it can also be considered a welcome opportunity for organizations to do a complete audit of their application architecture and usage patterns to determine what is needed and what is not. It can also drive changes to improve efficiency, security, and cost-effectiveness.

The Future of Containers

For the past five years, container adoption across the software world has grown at an incredible pace. Containers’ lightweight, well-defined, and broadly applicable nature has made them relatively easy to adopt, and the move toward Kubernetes and microservices, as well as the rise of the DevOps movement, has also contributed to their popularity. While container adoption brings challenges around maturity, stateful workloads, security, and architecture, it’s safe to say that, given the broad support the technology enjoys among industry players both big and small, containers are here to stay.

