There are two schools of thought on virtualization management: one sees it as an art, the other as a science. How much of each it actually is depends on your vantage point.
You’d probably say it’s a science if you could see the entire environment and understand every variable involved in optimizing performance in your virtual data center. Allocating this much memory, that much CPU, these gigabytes of storage and those Mbps of network throughput adds up scientifically to a balanced virtual environment.
However, it’s not always possible to see all the variables right from the start. For example, it takes a while to gauge the cumulative effects of multiple virtual machines (VMs) on a host, and the work being done in each VM (database, virtual desktop infrastructure, security) affects the resources it needs. So until you have all those variables lined up as a science, you have to rely on art.
In the previous post, I described the variables of VM density and VM sprawl. This time I’ll cover more of the variables in the art of optimizing your virtualization management.
You rarely think about bandwidth on a physical server: it has a 100 Mbps or GigE adapter that it never comes close to maxing out in everyday use. But put 10 or 20 VMs on that server and you get a lot of competition for that adapter, especially if it's pushing results from SQL queries and hosting remote desktop sessions.
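To see how quickly that competition adds up, here's a back-of-the-envelope sketch. The adapter speeds and VM counts are illustrative assumptions, and an even split is a simplification – real traffic is bursty and uneven:

```python
# Rough average bandwidth per VM when many VMs share one physical
# adapter (numbers are hypothetical, even split is a simplification).
def avg_bandwidth_per_vm(adapter_mbps, vm_count):
    """Naive even division of one adapter's throughput across VMs."""
    return adapter_mbps / vm_count

# A GigE adapter shared by 20 VMs leaves ~50 Mbps per VM on average,
# before protocol overhead or bursty workloads like SQL result sets.
print(avg_bandwidth_per_vm(1000, 20))  # 50.0
print(avg_bandwidth_per_vm(1000, 10))  # 100.0
```

Even this crude average shows a single VM getting a small fraction of the link it would own on dedicated hardware.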
Your storage configuration can make or break virtualization performance and availability. So the art that gets you to optimization science is to relieve squeezed storage connections – adding network paths, physical disks and processing power – until you've reduced contention on your adapters.
Another variable to consider is vSphere itself. It may seem like a good idea to apply resource reservations, limits and shares to individual VMs and resource pools, but when you apply them in multiple locations at once, you're likely to complicate your resource calculations and undermine the effectiveness of vSphere Distributed Resource Scheduler (DRS) load balancing.
Resources in constrained pools are divided at the resource pool level first, and constraints kick in only once there's contention. Suppose you have 3,000 shares across two pools: a test resource pool with four VMs and 1,000 shares, and a production resource pool with 50 VMs and 2,000 shares. Your test pool won't be a problem until there's contention in the cluster. When there is, one-third of your vSphere cluster's resources will be distributed among the four test VMs, while the 50 production VMs must share the other two-thirds. That's lopsided resource management.
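The lopsidedness is easy to quantify. Here's a minimal sketch of the share math under contention, using the pool sizes and share counts from the example above. Dividing a pool's slice evenly among its VMs is a simplification – real DRS also weighs per-VM shares, reservations and actual demand:

```python
# Per-VM slice of cluster resources under contention: shares are
# divided at the pool level first, then (simplifying) evenly among
# the VMs in each pool. Numbers come from the example in the text.
pools = {
    "test":       {"shares": 1000, "vms": 4},
    "production": {"shares": 2000, "vms": 50},
}
total_shares = sum(p["shares"] for p in pools.values())

for name, p in pools.items():
    pool_fraction = p["shares"] / total_shares  # pool's slice of the cluster
    per_vm = pool_fraction / p["vms"]           # each VM's slice of that slice
    print(f"{name}: {pool_fraction:.0%} of cluster, {per_vm:.2%} per VM")

# test:       33% of cluster, 8.33% per VM
# production: 67% of cluster, 1.33% per VM
```

Each test VM ends up entitled to more than six times the resources of a production VM – exactly backwards from what most shops intend.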
The art of optimizing virtualization management lies in exercising a far lighter touch than vSphere's hard constraints. You're better off avoiding reservations, limits and shares until you completely understand – and can continuously monitor – their effects on the entire environment.
For more concepts and strategies, download our new guidebook, An Expert's Guide to Optimizing Virtualization Management. It will also help you identify areas in your virtualization environment where you can find and recover lost ROI.