Server virtualization today is far more advanced than in its early days over a decade ago, and the best part is that so are our monitoring capabilities. When set up correctly, a proactive monitoring design actually gives your virtualization team the ability to sleep at night, no longer woken by untimely and unnecessary alarms.
Server Virtualization Today
The adoption of server virtualization in the enterprise today is very high. Some organizations even have a virtualization-only strategy. I never thought I would say this, but with the proper design almost any application can run on a virtual server. There are still cases where I could argue "unnecessary complexity" depending on the application itself, but as of today those are few.
So am I a fan? Yes! Assuming the proper design has been deployed, and the correct application considerations and research have been done.
The Proactive Reality
Within server virtualization today we still care about Windows server monitoring and host monitoring. The evolution of today's monitoring solutions allows this to work really well when designed properly. Host-level monitoring is also a cinch with the right tools, and many of the bottlenecks we could not trace with the tools of the past can now be identified.
One key challenge still remains, though: organizations do not dedicate the right resources to ensuring that their monitoring solutions are deployed properly. In many cases, this still leaves us without properly monitored environments.
As history repeats itself here are two common monitoring problems we still face today:
- All alerting left enabled, creating an environment where no one pays attention to the monitoring system's alerts. There are simply too many to keep caring what they are about.
- Monitoring disabled to the point that no alert fires when one is actually needed.
So, how do we get past this?
The key is to dedicate a resource to your monitoring project. Monitoring needs proper attention to be configured correctly; it is not a magical tool that just knows what your environment needs. A correctly set up monitoring tool can be "unicorns and rainbows", but not without the right attention.
Also, be sure to work with the rest of the information technology team to determine their needs for the applications they support. One size doesn't fit all in server monitoring, regardless of whether the footprint is virtual or physical. The following detail covers some important points about setting up monitoring correctly.
The Top 5 Performance Metrics to Monitor Now in your Virtualized Environment
We’ve already discussed how important it is to monitor our virtual environment, but now let’s cover some of the key metrics to setup in your monitoring tool for your virtualization solution.
- Storage latency can and will impact the performance of the applications you run in your virtual environment. The downstream impact of storage latency is end-user performance issues within the application, which in today's workplace is typically not tolerated. Understanding storage latency ensures that your virtualization team can proactively review and modify the storage design before the organization's users notice.
- Disk I/O latencies can also impact your application performance. By monitoring these metrics, your technical team gains insight into application impact, allowing any possible issues to be researched and acted on in a timely manner.
- Host-level CPU/memory within virtualization environments should be monitored to determine what the overall virtual server environment is using. This becomes important when planning the overall size of your environment. For example, if an SMB runs two hosts supporting its virtual server environment, then CPU and memory consumption should be kept low enough that if a single host fails, all virtual servers can still function on the one remaining host without any problems.
- Guest-level CPU/memory are also critical metrics for your virtual server environment. Each application run on a virtual server has its own requirements for CPU and memory. Understanding whether enough CPU and/or memory has been allocated to the system is extremely important to application performance and functionality.
- Network card and network traffic should be monitored at both the host and virtual guest level. Monitoring the network card can indicate whether the virtual network card at the guest level is functioning and "connected" as expected. Secondly, it's not just up to your network team to monitor the network, so make sure you are monitoring host-level network traffic to the degree your monitoring tool allows. It might just save you a phone call to the network team, and provide the insight you need to solve your problem.
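To make the storage and disk latency point concrete, here is a minimal sketch of the kind of threshold check a monitoring tool performs under the hood. The 20 ms threshold, the datastore names, and the sample values are illustrative assumptions, not recommendations from any specific vendor.

```python
# Hedged sketch: flag datastores whose latency samples exceed a threshold.
# The threshold and sample data below are illustrative assumptions.
LATENCY_THRESHOLD_MS = 20  # hypothetical alerting threshold

def latency_alerts(samples):
    """samples: list of (datastore_name, latency_ms) pairs.
    Returns the names of datastores over the threshold."""
    return [name for name, ms in samples if ms > LATENCY_THRESHOLD_MS]

# Hypothetical sample set: one datastore is over the threshold.
samples = [("ds01", 4.2), ("ds02", 35.0), ("ds03", 12.7)]
print(latency_alerts(samples))  # ['ds02']
```

The real value comes from tuning that threshold per workload with your application teams, rather than accepting a single default for every datastore.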
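The host-level CPU/memory bullet describes an N+1 capacity rule: a cluster should survive the failure of one host. A rough sketch of that check, with a hypothetical `Host` class and made-up capacity numbers, might look like this:

```python
# Hedged sketch: N+1 capacity check for a small cluster.
# The Host class and the capacity figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Host:
    cpu_mhz: int   # total CPU capacity of the host
    mem_gb: int    # total memory capacity of the host

def survives_single_host_failure(hosts, used_cpu_mhz, used_mem_gb):
    """Return True if current cluster usage fits on the hosts that
    remain after the largest host fails (worst case)."""
    if len(hosts) < 2:
        return False
    biggest = max(hosts, key=lambda h: (h.cpu_mhz, h.mem_gb))
    remaining_cpu = sum(h.cpu_mhz for h in hosts) - biggest.cpu_mhz
    remaining_mem = sum(h.mem_gb for h in hosts) - biggest.mem_gb
    return used_cpu_mhz <= remaining_cpu and used_mem_gb <= remaining_mem

# Two identical hosts: usage must stay under one host's capacity.
hosts = [Host(cpu_mhz=20000, mem_gb=128), Host(cpu_mhz=20000, mem_gb=128)]
print(survives_single_host_failure(hosts, used_cpu_mhz=18000, used_mem_gb=100))  # True
print(survives_single_host_failure(hosts, used_cpu_mhz=25000, used_mem_gb=100))  # False
```

In the two-host SMB example from the list above, this amounts to keeping aggregate consumption below roughly 50% so the surviving host can carry everything.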
Organizational leaders expect us to respond to active issues within our virtualized environments, acting fast on alerts and fixing active performance issues quickly. Our technical teams NEED to understand why an application or server is running slow, have the ability to diagnose why there was a spike in CPU or memory, and ensure that our datastore capacity is kept at optimal levels.
It is necessary to move to a proactive state and stop being reactive, firefighting the issues in your virtual environments. When you move to a proactive state, that is where success begins.
Next, there will be one more post in this series, covering the future state of monitoring. What can we imagine server virtualization monitoring doing for us in the future?