You may have heard about high-performance computing (HPC), but have you ever wondered if you can virtualize HPC workloads?
It is an interesting question. Data scientists build HPC applications for frame rendering that creates realistic animations, numerical simulation that helps design state-of-the-art jets, and even the modeling and simulation behind the financial models our economies have come to rely upon. These applications are architected differently from the Tier 1 apps we have grown used to virtualizing over the past ten years.
Traditionally, these workloads have run on huge bare-metal clusters, with the goal of squeezing every ounce of performance out of the hardware for these performance-intensive programs. Many companies already run some of these workloads in the public cloud, which means they can be virtualized. But can HPC workloads be virtualized on premises?
What is an HPC workload?
This is how the National Institute for Computational Sciences defines HPC:
“High-Performance Computing,” or HPC, is the application of “supercomputers” to computational problems that are either too large for standard computers or would take too long.
The article explains that HPC programs are split into threads (smaller programs), each of which corresponds to a core (an individual processor). The cores communicate with each other to piece the larger program together.
These are some of the ways HPC workloads can be categorized:
- MPI (Message Passing Interface) apps: MPI lets developers make the best use of distributed memory, shared memory, or a combination of both.
- GPU apps: GPU apps take advantage of accelerator cards to offload the most computationally intensive portions of an HPC program.
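To make the "split the program across cores" idea concrete, here is a minimal sketch of the MPI-style scatter/compute/gather pattern. Real HPC codes would use an actual MPI library (such as mpi4py or Open MPI) across many nodes; this sketch uses only Python's standard-library multiprocessing on a single host, purely as an analogy. The function and variable names are illustrative, not from any real HPC codebase.

```python
# Analogy for an MPI scatter/compute/gather cycle, using one process per
# "rank". Each worker computes over its own slice of the data, and the
# partial results are reduced into a single answer at the end.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker ("rank") computes a partial result over its own slice.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # "Scatter": divide the input into one chunk per worker.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        # "Gather" and reduce: collect the partial sums and combine them.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))
```

The key property, shared with real MPI programs, is that the answer is identical to the serial computation; the chunks only change who does the work, not what is computed.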
Let’s map this to a real-world end product. This article explains how HPC apps made the animation for the Pixar film Brave so realistic. This is how HPC works to make wind blowing look realistic in an animated film:
Instead of animating the effect of the wind on a few main objects in the frame, animators for current films can write algorithms specifying how grass, trees, leaves, and almost anything, including background objects, should behave. Once the simulation algorithms are written, it’s just a matter of having enough servers to simulate all of the different effects on the many objects in every frame of the movie.
In the movie Moana, one complex shot took 3 days to render; without HPC, it would have taken roughly 30 years.
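A quick back-of-the-envelope calculation shows the scale of speedup those quoted figures imply (assuming the 3-day and 30-year numbers above, and ignoring leap days):

```python
# Speedup implied by the Moana render-time figures quoted above.
render_with_hpc_days = 3
render_without_hpc_days = 30 * 365  # 30 years expressed in days

speedup = render_without_hpc_days / render_with_hpc_days
print(round(speedup))  # 3650
```

In other words, the cluster delivered on the order of a few-thousand-fold speedup for that one shot.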
What would it take to virtualize HPC workloads?
We may not be the data scientists writing the algorithms that simulate wind blowing through a scene in an animated film. But we do understand how to architect, manage, secure, and maintain infrastructure for all sorts of applications. Maybe it is time to dig into how HPC applications are architected, and how to virtualize HPC workloads.
VMware announced a solution at VMworld that specifically addresses virtualizing HPC infrastructure. Imagine managing these applications with familiar tools such as vSphere and vCenter. To do that, we as traditional infrastructure people must learn more about these applications. So can you virtualize HPC workloads? Clearly they can be virtualized; the next question is what the best way is to get it done.
But that’s for another post.