In previous posts, we have discussed what a container is. But how do developers turn containers into a containerized application? Today, let’s dig into containerized application design.
If you’re a developer or an SRE, the contents of this post may seem obvious to you. But to traditional operations folk, there are nuances and differences between hosting and managing traditional apps and the requirements for doing the same with containerized apps. So, we’re playing catch-up. Dev friends, if you see errors, or see ways to explain these concepts better, please let us know in the comments!
Containerized Application Definition
The beauty of containers is that they isolate applications from the operating systems underneath them. As we discussed in this post, this isolation practically eliminates the issues we have traditionally seen when apps are moved from dev into production. Google says that the key to making this work is a “hermetic container image that can encapsulate almost all of an application’s dependencies into a package that can be deployed into a container.” A hermetically sealed container eliminates host OS dependencies, and it is this idea that shifts a data center’s focus from machines to applications.
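To make the hermetic-image idea concrete, here is a hedged sketch of a container build file. The image names, versions, and paths are illustrative, not prescriptive; the point is that the final image carries everything the app needs and takes nothing from the host OS:

```dockerfile
# Illustrative sketch of a hermetic image. Stage 1 builds the app;
# stage 2 copies only the finished binary into an empty base image.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# Statically link so the binary has no host library dependencies.
RUN CGO_ENABLED=0 go build -o /app .

# "scratch" is an empty base image: the final image contains only
# the binary itself, not a host-dependent userland.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Because nothing in the second stage comes from the host, the same image behaves the same way in dev and in production.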
How are Containerized Applications Designed?
We know what a legacy application looks like, and how to design an architecture for it. What does a containerized application design look like? Red Hat has published a great cloud-native container design whitepaper, in which they describe several principles for designing containers:
- Single concern principle: Every container should focus on one concern and do it well
- High observability principle: Every container should provide health-check APIs
- Life cycle conformance principle: The application should have a way to read and react to events from the platform
- Image immutability principle: Containers should be immutable (not change) across environments
- Process disposability principle: Containers should be ephemeral and able to be replaced by another container at any time
- Self-containment principle: Containers should contain everything they need at runtime
- Runtime confinement principle: Containers should declare their resource requirements and pass them to the platform
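Of these, the high observability principle is the easiest to make concrete. Below is a minimal sketch of a health-check API, assuming a Python service; the `/healthz` and `/readyz` paths are common conventions, not requirements of any particular platform:

```python
# Minimal sketch of the "high observability" principle: a container
# exposes endpoints that the platform can poll to decide whether the
# process is alive and whether it is ready to receive traffic.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_status(path):
    """Map a probe path to an HTTP status code and a JSON body."""
    if path == "/healthz":   # liveness: is the process alive?
        return 200, {"status": "ok"}
    if path == "/readyz":    # readiness: can it accept traffic?
        return 200, {"status": "ready"}
    return 404, {"error": "not found"}

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        code, body = health_status(self.path)
        payload = json.dumps(body).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# To serve the probes inside the container, you would run:
#   HTTPServer(("", 8080), HealthHandler).serve_forever()
```

The platform (Kubernetes, for example) polls these endpoints periodically and restarts or stops routing traffic to containers that fail them, which is what lets unhealthy containers be replaced automatically.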
APIs shift the data center’s focus to applications
If these are the design principles for the containers that make up a containerized application, it’s easy to see why APIs are so important. APIs connect the hermetic containers in an application to each other, and the application to the platform on which it is hosted. The APIs are what keep the application alive and healthy.
If these architectural designs are followed, the containers in a containerized application can exchange all sorts of telemetry with each other and with the platform. That makes it possible to finely tune both in ways that are not possible with the disparate management, application, operating system, and virtual machine tools we use today.
This gives ops the ability to focus less on tuning machines or VMs to specific application requirements (and resource babysitting!), and more on rolling out innovative hardware, OS, and security operations to support a true symbiotic environment. All without disrupting application development or production.
More reading material
I relied heavily on the Google research paper Borg, Omega, and Kubernetes to write this post. If you’re just beginning your learning path into containerized application architecture, I highly recommend reading this paper.
This post is the latest in a series of posts on containers and containerized applications. I’d love your thoughts on these.