Containers are a lightweight virtualization technology: they run an image that you or someone else prepared previously, in an isolated environment. Each container runs a single app in the foreground as the main process and preferably logs to stdout/stderr.
- the runtime environment is consistent (immutable infrastructure)
- serves as a comfortable atomic component for more complex systems
- clear separation of components in a complex design
- simple building block for the cloud
- requires more effort to build the initial release pipeline
- for simple use cases it can be overkill in time and resources
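As a quick illustration of the single-foreground-process model, here is a sketch using Docker's CLI (assuming Docker is installed; the container name and port mapping are arbitrary choices):

```shell
# Run the official nginx image in the foreground; the image is built so
# that access and error logs go to stdout/stderr, where the container
# runtime can collect them.
docker run --rm --name web -p 8080:80 nginx
# In another terminal, the same logs are readable via the runtime:
#   docker logs web
```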
There are many container frameworks out there today. Docker was the first to gain mainstream support, in 2013, and several more have appeared since then.
Containers and virtualization have been around for decades, but Docker's simplicity was needed for mass adoption. Docker has been the front runner of container technology, and as time passed and more alternatives appeared on the horizon, it decided to donate most of its core technology to the Linux Foundation in order to help create standards to support the industry.
Sandbox container frameworks
After the rising popularity of Docker, it soon turned out that container technology had major weaknesses in security. As a result, Docker started to focus more on security, while some other companies saw the solution in a new form of containers: hypervisor-based containers. That means you don’t have a guest OS to worry about. Some of them require specific hardware virtualization, like Intel’s Clear Containers, which requires VT-x; others are hypervisor agnostic and can run on most hypervisors, like KVM or Xen.
Hypervisor based container frameworks
Which one should I use?
Unless you have a specific reason not to use Docker, I recommend sticking with Docker. It works, and it’s the most advanced and mature framework of them all.
Each and every alternative has its edge, but in general the motivation behind the alternatives is that Docker is a Swiss army knife, an all-round container solution, and some people see that as a disadvantage. Some people are convinced that the core runtime framework and the process of creating an image should be separate tools. As a result, some of the alternatives outperform Docker, but they also require more work to configure and maintain.
Still, luckily, all of these container frameworks are compatible with each other, since they support the OCI (Open Container Initiative) standards, which define both a runtime specification and an image specification.
Creating an image
Creating an image is the first step, and you can either use Docker itself or one of the alternative tools built specifically for building images. You’ll need the app you want to containerize and a configuration file where you describe details about the app. Once the image is complete, you’ll have it on your local disk, and you’ll preferably want to store (upload/push) it in a remote repository.
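The flow above can be sketched with Docker (a minimal example; the base image, file names, and registry address are all hypothetical):

```shell
# The configuration file: a Dockerfile describing the app.
cat > Dockerfile <<'EOF'
# Hypothetical base image
FROM python:3-slim
WORKDIR /app
# The app you want to containerize
COPY app.py .
# Runs in the foreground as the container's main process
CMD ["python", "app.py"]
EOF

# Build the image onto the local disk, then push it to a remote repository.
docker build -t registry.example.com/myteam/myapp:1.0 .
docker push registry.example.com/myteam/myapp:1.0
```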
| OCI image building tool | Released | Company |
|---|---|---|
All container frameworks can fetch from these repositories, you just have to point them there and name the image you want to use.
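With Docker, for example, the "pointing" is simply part of the image name (a sketch; the non-default registry and the image names are illustrative):

```shell
# No registry prefix: fetched from the default registry (Docker Hub).
docker pull nginx:1.25
# Explicit registry prefix: fetched from quay.io instead.
docker pull quay.io/prometheus/prometheus
# Run a fetched image by naming it.
docker run --rm nginx:1.25
```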
Eventually, when you become familiar with building images, you can move one step further and automate the building process, and even expand it to automate testing and deployment. This is Continuous Integration: an efficient pipeline that allows you to push even small changes to production more often. That way, if something breaks, it’s easier to pinpoint the issue and apply a fix or roll back to a previous version.
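Such a pipeline often boils down to a script like the following, run on every push (a sketch; the registry address, the tag variable, and the test command are placeholders):

```shell
#!/bin/sh
set -e  # stop at the first failing step

IMAGE="registry.example.com/myteam/myapp:${GIT_COMMIT}"

docker build -t "$IMAGE" .                # build the image
docker run --rm "$IMAGE" ./run_tests.sh   # run the test suite inside it
docker push "$IMAGE"                      # publish only if the tests passed
```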
Initially, for simple tasks (for example, a static site served by Nginx), a single Docker image was enough: you created an image, you ran it, and all was well.
Multi-container, within the same box
The next step is when you need multiple containers to make your service available, for example a MySQL container to support your Nginx container. Still, these two containers can fit on a single host, and you don’t have to worry much about scheduling and complex networking.
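On a single host this can be as simple as a shared user-defined network (a Docker sketch; the names and the password are placeholders):

```shell
# One private network for both containers.
docker network create appnet

# The database, reachable by the hostname "db" inside appnet.
docker run -d --name db --network appnet \
  -e MYSQL_ROOT_PASSWORD=changeme mysql:8

# The web server, published on the host's port 8080.
docker run -d --name web --network appnet -p 8080:80 nginx
```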
Once your service requires multiple hosts, either for resilience or simply because the combined resource requirements of your containers can’t fit on a single machine, you need to work with multiple machines. At that point, you need additional tools beyond the container framework to help scale your containers.
We call these tools container orchestration (management/clustering) tools, and the de facto standard today is Kubernetes. There are some alternatives here as well; initially there was a competition between Mesos and Kubernetes, but Kubernetes came out as the clear winner. True, the comparison is a bit unfair, since Mesos was designed well before Docker and the mass adoption of containers. Still, there are some notable alternatives out there, and as Kubernetes gets more and more complex, there is a good chance that people will look for simpler solutions for simpler use cases.
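What an orchestrator adds on top of the container framework can be sketched with Kubernetes’ own CLI (assuming a working cluster and a configured kubectl; the deployment name is arbitrary):

```shell
# Declare the desired state: a deployment running the nginx image.
kubectl create deployment web --image=nginx
# Ask for three replicas; the scheduler spreads them across the nodes.
kubectl scale deployment web --replicas=3
# Expose the replicas behind a single, stable service address.
kubectl expose deployment web --port=80
```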
| Orchestration tool | Released | Origin |
|---|---|---|
| Apache Mesos | 2009 | UC Berkeley |
| Docker Swarm | 2016 | Docker Inc. |
Orchestration tools are useful if you want to keep your infrastructure in-house. If you want to skip the complexities of setting up such a system, though, most major cloud providers offer a managed container/Kubernetes service, where you only worry about managing Kubernetes and not the infrastructure below it.
| Managed service | Released | Company |
|---|---|---|
| Google Kubernetes Engine | 2014 | Google |
| Elastic Container Service | 2015 | Amazon |
| OpenShift Container Platform | 2016 | Red Hat |
| Azure Kubernetes Service | 2018 | Microsoft |