Author: Thomas Bryant III, Vizioncore Senior Product Architect
Over the course of the next few weeks, I'm going to be building out a few vFoglight servers running the latest version of our product, 5.2.6, and the next release, 6.0. We will be looking at performance as we load up VMs from 10 to 100 to 1,000 and beyond … plus it's all going to be running in a VM.
Before we begin to install the product, the typical thing to do is to size out your environment and architect a solution based on what you know, and some of what you don't know. There are a series of questions you have to ask:
The first three are all up to your environment, but the third question also plays into the last one: how much data do you want to keep? It's important because you want to size your database appropriately for growth. By default, vFoglight doesn't expire data. We do, however, age data while maintaining its historical importance. I'll get into that in a minute, but first let's look at data growth and why we want to plan appropriately.
Consider that for a single VM, there are over 200 statistics to collect. Most, if not all, have some importance, so as your environment grows there are going to be a lot of data points. For example, a Resource Pool has its own CPU/Memory limits, reservations and values. Those are on top of the rollups you need for things like CPU %Used for the Resource Pool, which would be the sum of all the VMs inside it.
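The pool-level rollup described above can be sketched as a simple aggregation. The VM names and metric values here are invented for illustration, and this is not vFoglight's actual data model:

```python
# Hypothetical per-VM CPU %Used samples collected for one interval.
# (Names and values are made up for this sketch.)
vm_cpu_used = {
    "vm-web-01": 12.5,
    "vm-web-02": 8.0,
    "vm-db-01": 40.0,
}

# The Resource Pool's CPU %Used rollup is the sum of the VMs inside it.
pool_cpu_used = sum(vm_cpu_used.values())
print(pool_cpu_used)  # 60.5
```

Every pool-level rollup like this is an additional data point on top of the 200+ raw statistics per VM, which is why the data volume grows quickly.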
So back to the numbers: we want to be accurate and plan for growth so we can deal with the large volume of data coming in. To help control that inflow, we age data gracefully, and the aging is configurable as well.
The system defaults to aging data as follows:
But wait! If you roll up data, it loses its historical importance and becomes worthless over time, right? After all, if you average data, it will lose its peaks and valleys and turn into a gentle curve! Normally that would be true. For vFoglight, however, we do something quite different for metrics, and it's one of the really powerful parts of the tool.
For virtually every metric, we store:
This means that while we roll up data, you may not be able to determine whether the peak in CPU utilization occurred at 2:24 or at 2:30, but you can see that the maximum value from 2:15 - 2:30 was 95% versus an average that may be just 30%. So even after the data has been rolled up time and time again, it retains its significance without taking up all of that space and affecting performance.
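A minimal sketch of why this kind of rollup preserves significance, assuming (as the text implies) that each rolled-up period stores at least the average and the maximum. The sample values are invented for illustration:

```python
# Hypothetical 1-minute CPU % samples for a 15-minute window (2:15 - 2:30).
samples = [30, 28, 25, 30, 32, 29, 27, 95, 31, 30, 28, 26, 29, 30, 31]

# Roll the window up into a single record, keeping more than just the mean.
rollup = {
    "avg": sum(samples) / len(samples),
    "min": min(samples),
    "max": max(samples),
}

# The average smooths the spike away, but the stored maximum preserves it.
print(round(rollup["avg"], 1), rollup["max"])  # 33.4 95
```

Storing min/max alongside the average is what lets repeated rollups shrink the data without erasing the peaks and valleys.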
So how big is a VM, ESX host, etc.? Averaged over the course of a year, each object takes up about 1MB of data a day. Growth is rapid at first, due to the raw data highlighted above, reaching around 30MB after 3 days, and then climbs gradually to roughly 356MB over 365 days. This means that if you add up all the objects, you can easily find the size of a database and plan for growth.
For example, with 100 ESX Hosts running 2000 VMs, you can expect around 15G of data in the first 3 days, growing to about 146G over the course of a year. To this I would then add a growth rate and a buffer to ensure plenty of room. 10% is a typical number that customers use, and with that you have your database size.
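The buffer math for the worked example above is straightforward. The function name here is my own for illustration and is not part of vFoglight:

```python
def sized_with_buffer(estimated_gb: float, buffer: float = 0.10) -> float:
    """Add a growth buffer (10% by default, per the article's typical
    customer figure) to a raw database size estimate in GB."""
    return estimated_gb * (1 + buffer)

# Year-one estimate for 100 ESX Hosts / 2000 VMs from the text: ~146 GB.
print(round(sized_with_buffer(146), 1))  # 160.6 GB to provision
```

Environment-specific growth rates would go on top of this, but a flat percentage buffer is a reasonable starting point for provisioning.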
The next part of the puzzle is how BIG the vFoglight Management Server has to be. I'll cover that next as we start testing out different size environments.
To be continued...