We hear all the time about how much developers love containers – but are containers for IT admins too? In a previous post, we discussed the history and structural differences between containers and VMs, but what do IT admins need to know about containers to support them properly?
For this blog post, I’m going to assume that you already have a solid understanding of what a virtual machine (VM) is, so we can concentrate on defining container components. Also, most container posts I could find are written for developers, so this one is designed for IT admins who are being asked to support container environments. Finally, in the interest of full transparency, I should remind you that I work for VMware.
Container definition review
First, let’s review what a container is. Here is how Red Hat defines a container:
…a set of one or more processes that are isolated from the rest of the system. All the files necessary to run [containers] are provided from a distinct image, meaning that Linux containers are portable and consistent as they move from development, to testing, and finally to production.
That’s a great technical definition, but I also like this one from a Docker Captain:
Containers can be described as a way to package our software so that it can run in isolation, without depending on which operating system, libraries, and frameworks are installed on the host.
Developers like using containers because they are lightweight and portable. Containers can be moved to different hosts, provided the host operating systems are compatible.
So that defines what a container is, but the concepts of runtime and packaging are also important.
A container will typically contain the application plus any required dependencies: libraries, binaries, and configuration files. There can be many containers on one host’s operating system, and all of them share that OS’s kernel, networking, storage, and other resources.
These container components are packaged into an image, an idea pioneered by Docker. This OpenSource.com post has great, detailed info on containers for sysadmins; it explains container packaging based on Docker images:
- Rootfs (container root filesystem): A directory on the system that looks like the standard root (/) of the operating system. For example, a directory with /usr, /var, /home, etc.
- JSON file (container configuration): Specifies how to run the rootfs; for example, what command or entry point to run in the rootfs when the container starts; environment variables to set for the container; the container’s working directory; and a few other settings.
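The two pieces above can be sketched with plain shell commands. To be clear, the directory layout, config keys, and file names below are simplified illustrations of the idea, not Docker’s actual image format:

```shell
# Build a tiny "rootfs" directory that mimics a root filesystem layout.
mkdir -p bundle/rootfs/usr bundle/rootfs/var bundle/rootfs/home

# Write a simplified container configuration (illustrative keys only --
# a real image config has more fields and a different schema).
cat > bundle/config.json <<'EOF'
{
  "entrypoint": ["/usr/bin/myapp"],
  "env": ["LANG=C"],
  "cwd": "/"
}
EOF

# Bundle the root filesystem and the config into one tarball.
tar -C bundle -cf image.tar rootfs config.json

# List the archive members: rootfs/... and config.json.
tar -tf image.tar
```

That tarball is, conceptually, what an image is: a filesystem plus instructions on how to run it.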
Docker takes these elements and TARs them up. If you have Linux or any UNIX experience, this should all look very familiar (especially if you’ve ever built a kickstart or jumpstart server). The container image has its own root filesystem and all the subsystems required to run the container. You can get prepackaged container images on Docker Hub; for example, there is an official image for the Apache web server (httpd).
When this image is called (by the command line or an orchestration system), all of its components are started and the container is in a running state. The instructions for configuring and running the Apache container image are on its Docker Hub page, or you can use a platform like Kubernetes to do that.
Container Runtime is a huge topic, and the term can mean several different things. I’d recommend this blog post by Ian Lewis to understand all the different meanings. I think this VMware definition is probably the most succinct for IT admins (and this post):
A container has a lifecycle which is typically tied to the lifecycle of the process that it is designed to run. If you start a container, it starts its main process and when that process ends, the container stops.
Why are people using containers?
Let’s be honest: you’re probably looking at containers because your developers want to use them. One big similarity between VMs and containers is their purpose: both are architectural elements on which applications are deployed. The difference is how the two are architected.
While VMs virtualize the entire server, containers are lightweight, abstracting applications away from the operating system. Because of the way containers are packaged and run, they are perfect for running applications based on microservices. Here is the Wikipedia definition for microservices:
Microservices are a software development technique … that structures an application as a collection of loosely coupled services.
Applications built with the microservices methodology decompose an app into all the services needed to run it. If they are built on containers, they may be built so that one container hosts a single service. So instead of one large monolithic application, where you need to update the entire product to change one part of the app, you can update each service as needed.
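This one-container-per-service idea is often expressed in a Docker Compose file. The sketch below is purely illustrative – the service and image names are hypothetical:

```yaml
# docker-compose.yml (illustrative): each microservice runs in its own container.
services:
  web:
    image: example/storefront-web:1.4      # hypothetical image names
    ports:
      - "8080:80"
  cart:
    image: example/cart-service:2.1
  inventory:
    image: example/inventory-service:0.9
```

Updating the cart service here means rebuilding and redeploying only that one container; web and inventory keep running untouched.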
Why containers are for IT admins
Containers are an important space for IT admins. If your apps are refactored to a microservices architecture, all sorts of operational complexities are introduced. The different parts of the app – the services – may be simplified, but the system itself may become more complex. And at the end of the day, these are still apps. How are data integrity and compliance managed? What are the expectations around uptime, and what architectural techniques are required to meet those expectations?
And what about security? How will you secure these systems and the data?
Our responsibility as IT admins is to provide the environment our devs need to create the apps that drive our businesses. Container technology is maturing, and it is time for us to get our heads around it.
What is your experience? Are your developers asking to use container architectures in production? What have you done to uplevel your skills?
These are some of the other resources I used for this blog post:
Differences between containers and VMs
VMware Integrated Containers [github]
Linux Journal – everything you need to know about Linux Containers
Sysadmin’s guide to containers