The IT industry isn’t very old, so we don’t have a long history to draw on. Even so, we’ve been able to detect repeatable patterns in technology. The Negroponte Switch (about how signals once carried by wire move to the air, and signals once carried through the air move to wires) and Moore’s Law (the observation that computing power roughly doubles every two years) are two examples. With the advent of cloud computing, we may think that concepts like container technology are brand new and come with a steep learning curve. But when we take the time to look back into history, there is usually a pattern we can observe that gives us an anchor from which to solidify our understanding of the “new” topic.

Let’s start with virtualization. Founded in 1998, VMware made server virtualization a common part of the modern data center. But VMware wasn’t the beginning of server virtualization.

  • 1972: IBM was the first to create a virtual machine for mainframes. According to the Everything VM blog, virtualization arrived on IBM mainframes through CP-67, the control program at the heart of the CP/CMS operating system; its first stable release came in 1972.
  • 1987: SoftPC made it possible to run DOS applications on UNIX workstations.
  • 1997: Connectix released Virtual PC, a PC emulator that let Mac users run Windows.
  • 1999: VMware, founded the year before, released VMware Workstation.

The rest is history. The concept of hardware virtualization is over 45 years old. Think about what you do today to run your preferred flavor of virtualization in your data center: they are really the same data hygiene principles you apply to applications running on physical servers. We just tweak what we’ve always done with physical servers to work for virtual machines.

Let’s apply this idea of revisiting history to understand a hot new data center topic: container technology. Container security company Aqua has a nice blog post about the history of containers.

  • 1979: The UNIX system call chroot was introduced. chroot changes the apparent root directory for the current running process and its children, giving you a way to confine a process to a specific slice of the file system (a minimal sketch of the idea appears after this list).
  • 2000: FreeBSD Jails took the next step in the evolution, going beyond process isolation to file system virtualization, complete with an IP address for each jail.
  • 2001: Linux VServer followed, bringing similar partitioning to Linux.
  • 2004: Solaris Containers arrived with Solaris 10. They combine system resource controls with zones, which act as a way to run many virtualized operating system instances on a single server, all sharing the one operating system’s resources. Other container types quickly followed as part of the official Linux kernel.
  • 2006: Google developed Process Containers, which were renamed cgroups (control groups) when the code was merged into the Linux kernel.
  • 2008: LXC was the first complete Linux container manager. It combined cgroups with namespace isolation (sketched further below) to give an application complete isolation through OS-level virtualization.
  • 2011: Warden, Cloud Foundry’s container manager, started out as a layer over LXC.
  • 2013: LMCTFY (“Let Me Contain That For You”) was released as an open-source version of Google’s internal container stack; its core concepts were later contributed to libcontainer, now governed under the Open Container Initiative.
  • 2013: Docker was released, combining these kernel facilities with image packaging and developer-friendly tooling, and container adoption took off.
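
To ground the oldest entry on that list, here is a minimal C sketch of the chroot idea, not production jail code: it assumes a hypothetical directory /srv/jail that you have pre-populated with a shell at /bin/sh, and it must run as root.

```c
/* Minimal sketch of chroot(2): confine a shell to a directory subtree.
 * /srv/jail is a hypothetical, pre-populated path used only for
 * illustration; the program must run as root. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Change the apparent root directory for this process and its
     * children, exactly as the 1979 system call did. */
    if (chroot("/srv/jail") != 0) {
        perror("chroot");
        return EXIT_FAILURE;
    }
    /* Step into the new root so relative lookups resolve inside it. */
    if (chdir("/") != 0) {
        perror("chdir");
        return EXIT_FAILURE;
    }
    /* From here, "/" means /srv/jail; normal path lookups cannot
     * escape the subtree. */
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl"); /* Reached only if the exec fails. */
    return EXIT_FAILURE;
}
```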

Container technology has only been around for a little over a decade, but the underlying concept is almost 40 years old. If you’ve ever written any UNIX or Linux scripts, you’ve probably used the chroot command, and quite possibly the other container-specific commands. You probably understand server virtualization, so the concept of operating system virtualization isn’t foreign to you. Posts like this InfoWorld post on containers can help you with vocabulary.
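
And to make the namespace half of the LXC recipe concrete, here is a minimal C sketch of namespace isolation, assuming Linux and root privileges (or CAP_SYS_ADMIN); it illustrates the kernel feature itself, not any particular container manager’s code. It unshares the UTS namespace so the process can set a hostname that only it can see:

```c
/* Minimal sketch of Linux namespace isolation: give this process its
 * own UTS namespace, so hostname changes stay invisible to the host.
 * Requires root (or CAP_SYS_ADMIN). */
#define _GNU_SOURCE
#include <sched.h>        /* unshare, CLONE_NEWUTS */
#include <stdio.h>
#include <string.h>
#include <unistd.h>       /* sethostname */
#include <sys/utsname.h>  /* uname */

int main(void)
{
    struct utsname uts;

    /* Detach from the host's UTS namespace. */
    if (unshare(CLONE_NEWUTS) != 0) {
        perror("unshare");
        return 1;
    }
    /* Visible only inside the new namespace; the host keeps its name. */
    if (sethostname("sandbox", strlen("sandbox")) != 0) {
        perror("sethostname");
        return 1;
    }
    if (uname(&uts) == 0)
        printf("hostname inside the namespace: %s\n", uts.nodename);
    return 0;
}
```

Container managers like LXC pair this kind of namespace isolation (PID, mount, network, UTS, and so on) with cgroups to limit CPU and memory, which is exactly the combination described in the 2008 entry above.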

Dig into our shared technology roots to build on what you already know. Don’t let the hype hold you back!