by Melissa Palmer
In our recent article titled Everything You Need to Know about Getting Started with Docker, we looked at the basics of Docker and how to get a quick environment up and running using Boot2Docker. This was a nice start, but it only touched lightly on how Docker should be evaluated.
Docker is becoming one of the hot topics of the industry. Despite all the positive things that Docker brings, it has been lacking some of the features the community was looking for. Luckily, the community, along with the team at Docker, has responded in the best way: by creating some excellent additional tools to work with your Docker implementation.
The advantage of Docker is its ability to run multi-container deployments, but what about orchestration and management? This is where Docker Swarm and Docker Compose come in.
First we had the introduction of Docker Machine, a single command-line tool that allows you to deploy Docker on multiple destination platforms. By the time of the beta announcement (https://blog.docker.com/2015/02/announcing-docker-machine-beta/), there was already a significant set of supported targets:
- Amazon EC2
- Microsoft Azure
- Microsoft Hyper-V
- Google Compute Engine
- VMware Fusion
- VMware vCloud Air
- VMware vSphere
Using Docker Machine means that you can provision Docker hosts on any of these providers and ensure consistency for your containerized application. A variety of examples have been provided at the GitHub page for the project to help you get started with both local and cloud targets using Docker Machine (https://github.com/docker/machine/blob/master/docs/index.md).
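To give a feel for the workflow, here is a minimal sketch of creating and targeting a machine; the `virtualbox` driver and the machine name `dev` are illustrative choices, and each cloud driver takes its own credential flags:

```shell
# Create a local Docker host (swap the driver for a cloud target,
# e.g. --driver azure or --driver vmwarevsphere, plus its credentials)
docker-machine create --driver virtualbox dev

# Point the local Docker client at the new machine
eval "$(docker-machine env dev)"

# The client now talks to the Docker daemon running on "dev"
docker info
```

The key point is that the commands are identical whether the target is a laptop VM or a public cloud, which is what delivers the consistency described above.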
Swarm is a native clustering platform for Docker that allows for cluster deployment, discovery, and management. It extends the Docker framework to give developers and operations teams a way to begin scaling applications on the Docker container platform. This is much like what Mesos does for Kubernetes, which we will tackle in another post in the near future.
You can see from the Docker Swarm docs page (http://docs.docker.com/swarm/) that installation is as simple as a single command: `docker pull swarm`
Once installed, you are ready to spin up your first Docker Swarm. You deploy Swarm nodes to create targets for management, and as soon as your nodes are up and running, you configure the Swarm master to discover them. Discovery backends include the native hosted discovery service, the popular etcd from CoreOS, ZooKeeper from Apache, Consul from HashiCorp, as well as a static list of IPs.
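As a rough sketch of that flow using the hosted discovery service (the node IP and published port below are placeholders for your own environment):

```shell
# Generate a cluster token via the hosted discovery backend
# (etcd, ZooKeeper, Consul, or a static file work the same way,
# just with a different discovery URL in place of token://)
TOKEN=$(docker run --rm swarm create)

# On each node, join the cluster, advertising the node's address
docker run -d swarm join --addr=192.168.99.101:2375 token://$TOKEN

# On the management host, start the Swarm manager and publish it
docker run -d -p 3376:2375 swarm manage token://$TOKEN
```

From there, pointing a Docker client at the manager's published port lets you schedule containers across the whole cluster as if it were a single daemon.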
This opens the possibilities for you to use a variety of options for managing your Swarm nodes and Docker clustering. Beyond the front end, there are also target providers already moving ahead with great support of Docker and Swarm. For example, Microsoft Azure provides supported docs to integrate on their public cloud platform (https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-docker-swarm/).
As always, the inclusion of a RESTful API (http://docs.docker.com/swarm/API/) makes it possible to orchestrate in many different ways using your tool of choice, whether as part of a smaller-scale orchestration framework or a complete Continuous Integration/Continuous Deployment (CI/CD) platform. The good news is that this API is native to Docker, which helps ensure it will evolve in tandem with the rest of the Docker ecosystem.
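Because the Swarm manager speaks the same REST endpoints as a single Docker daemon, existing tooling works against the cluster with no changes. A couple of illustrative queries (the hostname and port are placeholders for your manager):

```shell
# Cluster-wide info, including the list of joined nodes
curl http://swarm-manager:3376/info

# Running containers across all nodes in the Swarm
curl http://swarm-manager:3376/containers/json
```

Anything that can make HTTP calls, from a CI/CD pipeline to a custom script, can drive the cluster this way.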
Docker Compose was another excellent tool announced at the same time; it allows a complex Docker application topology to be defined in a single file. You can then use Docker Compose to start, stop, and rebuild your services using a simple command set. The simple and popular YAML standard is used to define the application environment, as can be seen in the YouTube video below.
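A minimal Compose file along these lines might look like the following sketch; the service names, image, and port numbers are illustrative, following the classic web-plus-Redis example:

```yaml
web:
  build: .            # build the web application from the local Dockerfile
  ports:
    - "5000:5000"     # expose the app port externally
  links:
    - redis           # connect the web container to the redis service
redis:
  image: redis        # use the stock Redis image
```

With this file in place, `docker-compose up -d` brings the whole topology up, and `docker-compose stop` tears it down again.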
Using the simple small YAML file, we see that a web application is deployed, connected to a Redis environment, and started with the necessary ports exposed externally. The presumed next step would be to attach this external port to a load balancer, providing a scale-out web application front end, and you are now fully orchestrated on Docker.
There is a great video that helps to illustrate the workflow of creating a Swarm, and then using Compose to define and manage the application from there. The video is shown below and is a good introduction to what will inevitably become much more development on this platform. These are simple examples, but you can see how flexible the platform can be.
If Docker isn’t already in your lab environment, it definitely should be. Containers are becoming a boon for developers and operations teams everywhere. Now is the time to evaluate your business use case against these great tools to see how you can best leverage them.