More from InterOp: George Pradel's Second Session

I captured the following notes during George's presentation on "Backup and Data Protection in a Virtual Environment". The session ran on Wednesday at InterOp. Check out the pics of the session at the end of the post!

A lot of folks move to virtualization to take advantage of the encapsulated file – it contains the whole virtual machine. But then they keep using the same management policies and tools they’ve always used and never take advantage of that encapsulated file, which undermines many of the reasons they went to virtualization in the first place.

That’s the basis for today’s presentation and much of what we’re going to talk about.

The drivers of virtualization adoption have changed. According to FOCUS, DR now actually exceeds server consolidation as the top driver. This doesn’t really surprise me. Thank goodness, P2V software has come a long way, and getting servers into consolidated virtual environments is a lot easier than it used to be. But DR and backup have climbed to the top as the biggest issues to solve in these environments.

According to Gartner, about 16% of workloads run in virtual machines today, and it took approximately 6 years to get to this point. Gartner predicts this will increase to 50% of server workloads by the end of 2012 – an exponential trend, and a real acceleration in the adoption of virtual servers and virtual technologies.

Why this rapid growth? There is a shift happening from simple server consolidation to better data management and systems management as the big goal. There is a lot more capability built into the hypervisor layer than ever before. Network capabilities are maturing. Managing at the image level protects more types of data. The room for creativity in how you manage and organize this environment is huge. That’s also where some of the biggest challenges are, because the flexibility of the environment forces us to reinvent some of what used to be best practices – including best practices for data protection and management.

Ironically, one of the big drivers of virtual adoption – improving data protection – is now also cited as the biggest obstacle to adoption, again according to the FOCUS team’s data. Why? Backup processes don’t work that well to begin with in the physical world. When you take those processes into the virtual world, it’s a real mess: they slow virtual servers to a crawl, strain the server’s I/O and CPU capacity, tend to collide across VMs on the server, and saturate the network with backup traffic.

As one noted analyst put it, “The one time that you needed all of that extra server capacity was to support your backup process.”

So, how can you solve this? What’s needed is a new method for protecting data that takes advantage of the encapsulated image – the VMDK file on VMware systems – and uses it to perform data protection and management, rather than continuing with a traditional file-based backup approach.
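To make that concrete, here’s a minimal sketch of what an image-based capture could look like in code, using VMware’s pyVmomi Python SDK (my choice for illustration, not something George showed): quiesce the guest, snapshot the VM, and the frozen VMDK becomes the backup source. The vCenter address, credentials, and VM name are all placeholders.

```python
# Sketch: quiesce and snapshot a VM so its VMDK can be read for backup.
# Assumes VMware's pyVmomi SDK; host, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-server-01")

    # Quiesce the guest file system and snapshot the VM; the frozen VMDK
    # can then be read for a full, incremental, or differential backup
    # while the guest keeps running against the live disk.
    WaitForTask(vm.CreateSnapshot_Task(
        name="backup-snap",
        description="image-based backup point",
        memory=False,   # no memory state needed for this sketch
        quiesce=True))  # ask VMware Tools to flush guest I/O first
finally:
    Disconnect(si)
```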

Details on the Problems with High-Density VM Backup

· What happens if the backup jobs try to run concurrently?

· When all of the backup agents for a group of systems are suddenly grouped together on a single virtual machine, what’s to prevent them from trying to run all at the same time?

· How do you schedule this? Manually? Or, would it be better to have the backup system know how to balance the backup workload across VMs on the system? (See the sketch after this list.)

· What happens to VM performance levels during the entire time that backup jobs are trying to run – on any of the guests on the server?

· Are all of your VMs in your dev/test environment being protected?

· Are these challenges limiting your VM deployment options? Your VM density?
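To make the scheduling question concrete, here’s a minimal sketch in plain Python of the kind of throttling a backup system could do instead of letting every agent fire at once. The VM names, concurrency limit, and simulated job are all made up for illustration.

```python
# Sketch: throttle concurrent backup jobs so agents on one host don't all
# fire at once. VM names, job duration, and the limit are illustrative.
import threading
import time

MAX_CONCURRENT = 2  # assumed host I/O budget, not a real product setting
slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def backup_vm(vm_name):
    with slots:  # wait for a free slot instead of colliding with other jobs
        print(f"backing up {vm_name}")
        time.sleep(1)  # stand-in for reading the VM image
        print(f"finished {vm_name}")

threads = [threading.Thread(target=backup_vm, args=(name,))
           for name in ("web-01", "web-02", "db-01", "mail-01")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```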

Recall that VMs tend to propagate in these environments. It’s a whole lot easier to create a new VM than it used to be to cost-justify, purchase, configure and deploy a new physical system. So, even if your environment doesn’t start with a high VM density, chances are that it will soon.

Using Image-Based Backup to Solve the Problems with VM Backup

An image encapsulates every type of data that you need in a virtual machine. This means System State, OS and application configurations, Registry keys, file systems and application data.

Backing up the image itself captures every type of data you need, all at one time, which eliminates extra steps. It’s agentless – so it’s less expensive, less burdensome to manage, and less disruptive. End-users and applications can continue to operate against the live disk while a point-in-time snapshot of the image is read for full, incremental and differential backups.
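Here’s a rough sketch of why incrementals and differentials fall out naturally at the image level: treat the image as fixed-size blocks, hash each block, and copy out only the blocks whose hashes changed since the last run. The block size, paths, and state format below are assumptions for illustration, not anything from the session.

```python
# Sketch: block-level incremental backup of a disk image. Each fixed-size
# block of the current image is hashed and compared against hashes saved
# at the last backup; only changed blocks are copied out.
import hashlib
import json
import os

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks – an assumption, not a VMware spec

def block_hashes(image_path):
    """Hash every fixed-size block of the image."""
    hashes = []
    with open(image_path, "rb") as img:
        while block := img.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def incremental_backup(image_path, state_path, delta_dir):
    old = []
    if os.path.exists(state_path):
        with open(state_path) as f:
            old = json.load(f)
    new = block_hashes(image_path)
    os.makedirs(delta_dir, exist_ok=True)
    with open(image_path, "rb") as img:
        for i, digest in enumerate(new):
            if i >= len(old) or old[i] != digest:  # new or changed block
                img.seek(i * BLOCK_SIZE)
                with open(os.path.join(delta_dir, f"block-{i:08d}"), "wb") as out:
                    out.write(img.read(BLOCK_SIZE))
    with open(state_path, "w") as f:
        json.dump(new, f)  # becomes the baseline for the next run

# Hypothetical usage against a snapshot copy of the image:
incremental_backup("app-server-01.vmdk", "app-server-01.state.json", "delta-001")
```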

Using an image-based approach is faster and less risky – there are fewer moving parts. Not only is the image frozen at a point in time so that the application data is consistent, but all of the data from the entire system environment is captured from that same point in time. This dramatically improves the recoverability of the data in the image.

Moreover, individual objects including files and email messages can also be recovered from the image. Images can also be moved transparently between different types of hardware.
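As an illustration of that object-level recovery – using libguestfs, an open-source tool I’m picking for the example, not necessarily what George demonstrated – you can open the image read-only and pull a single file out without ever booting the VM:

```python
# Sketch: pull a single file out of a VMDK image without booting the VM.
# Uses the libguestfs Python bindings; image path and file are placeholders.
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts("app-server-01.vmdk", readonly=1)  # never modify the backup
g.launch()

root = g.inspect_os()[0]  # find the guest OS root filesystem
g.mount_ro(root, "/")     # mount it read-only inside the appliance
g.download("/etc/hosts", "recovered-hosts")  # copy one object out
g.shutdown()
g.close()
```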

These concepts are really about moving from a traditional Backup 1.0 approach to an image-based, or Backup 2.0, approach for protecting data in a virtual environment. Backup 2.0 approaches are designed around the original reason you chose virtual server technology in the first place: the encapsulated virtual machine image.

About the Author