Data Protection Instructions We Love: “Don’t Lose Any Data and Don’t Run Out of Room” [New E-book]

When your data protection instructions are “Don’t lose any data and don’t run out of room,” you can find yourself in what we call the secondary storage squeeze.

On one hand, IT execs tell their backup administrators not to lose any data, which means that they want all of it backed up all the time. On the other hand, they tell them not to run out of room, which means that they want to be sure there’s always enough storage to hold it all.


The first part of the squeeze is the tightrope walk over whose data gets to stay on high-availability primary storage. Michael Grant covered that in his blog series on business expectations and backup data retention policies (lyrics by Adele, for added color). In this post I’ll cover the second part of the squeeze, in which admins have to back up a fast-growing balloon of primary data within available time limits.

Secondary storage = Business continuity insurance against data loss

When your boss says “nightly backups,” she means she wants them finished by the next morning. But as your users generate more and more data, the backup job outgrows that fixed window and you run the risk of creeping into nightly-and-part-of-the-day backups. You can try adding hardware and capacity to the primary storage environment, but eventually you run up against the limits of network throughput and need to look more seriously at secondary storage devices.
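To put rough numbers on the creep, here’s a back-of-the-envelope sketch. The window, throughput and growth figures are illustrative assumptions, not benchmarks; plug in your own.

```python
# Back-of-the-envelope backup-window math.
# All figures below are illustrative assumptions, not measurements.

NIGHTLY_WINDOW_HOURS = 10        # e.g., 8 p.m. to 6 a.m. (assumed)
THROUGHPUT_TB_PER_HOUR = 3.6     # ~1 GB/s effective over the network (assumed)
GROWTH_PER_YEAR = 0.40           # 40% annual data growth (assumed)

data_tb = 25.0                   # primary data today (assumed)
for year in range(5):
    hours_needed = data_tb / THROUGHPUT_TB_PER_HOUR
    status = "OK" if hours_needed <= NIGHTLY_WINDOW_HOURS else "OVERRUNS WINDOW"
    print(f"Year {year}: {data_tb:6.1f} TB -> {hours_needed:5.1f} h  {status}")
    data_tb *= 1 + GROWTH_PER_YEAR
```

With these made-up numbers, the full backup fits the window today and next year, then overruns it in year two. The data grows geometrically; the network doesn’t.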

“Secondary storage” refers to external devices not connected directly to production servers. It provides near-real-time access to data backed up to slower disks, or to a backup replica. Access is faster than to cloud backup or off-site tape, and it’s an excellent way to ensure that your business can restore without data loss in case of a disaster.

But you’ll still need a trick or two if you want to protect that growing balloon of data in a shrinking backup window.

Backup appliances and deduplication: Almost a miracle

The combination of deduplication and backup appliances reduces the space required to store the data and the network bandwidth needed to send it. It’s almost a miracle, except for some technical and business limitations.

On the technical side, deduplication algorithms are resource-intensive. They parse the data, determine which blocks are duplicates, and replace those duplicates with pointers. Even when this work takes place at the source, before the data goes across the network, there are trade-offs between beefing up the resources on the sending machine and adding bandwidth on the network.
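To make the pointer idea concrete, here is a minimal sketch of fixed-size, hash-based block deduplication. It’s illustrative only: real appliances typically use variable-size chunking and far more engineering, and the block size and function names here are assumptions, not any vendor’s implementation.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks for simplicity (assumed)

def deduplicate(data: bytes):
    """Split data into blocks; store each unique block once and
    replace repeats with pointers (here, indexes into the store)."""
    seen = {}       # fingerprint -> index into the unique-block list
    blocks = []     # the unique blocks actually kept
    pointers = []   # one pointer per original block
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).digest()   # fingerprint the block
        if fp not in seen:                    # first time we've seen it
            seen[fp] = len(blocks)
            blocks.append(block)
        pointers.append(seen[fp])             # duplicate -> pointer only
    return blocks, pointers

def restore(blocks, pointers) -> bytes:
    """Rebuild the original stream by following the pointers."""
    return b"".join(blocks[p] for p in pointers)

# A stream with lots of repeated content dedupes well:
data = (b"A" * BLOCK_SIZE * 3) + (b"B" * BLOCK_SIZE * 2) + (b"A" * BLOCK_SIZE)
blocks, pointers = deduplicate(data)
assert restore(blocks, pointers) == data
print(f"{len(pointers)} blocks in, {len(blocks)} unique blocks stored")
# -> 6 blocks in, 2 unique blocks stored
```

Even this toy version shows where the cycles go: every block gets hashed and looked up before a single byte crosses the wire, which is exactly the sending-machine cost the trade-off is about.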

On the business side, one size of backup appliance does not necessarily fit all. Companies connect their enterprise applications to secondary storage devices running a variety of protocols, and they want backup appliances and deduplication that fit what they have in place.

So matching the appliance to the environment becomes another symptom of the squeeze.

The Secondary Storage Squeeze: What’s the Best Way to Handle It?

There is a way out of the squeeze. I’ll describe it in my next post.

Meanwhile, have a look at our new e-book titled The Secondary Storage Squeeze: What’s the Best Way to Handle It? The e-book goes into more detail on primary/secondary storage, backup appliances, dedupe and the technical and business problems on the road to handling the squeeze.

If “Don’t lose any data and don’t run out of room” is part of your job description as a backup administrator, you’ll find more useful insights in the e-book.

