Working out the potential return on investment on a new server or application is relatively straightforward – if it enables you to do more in less time, profits are likely to go up, making it worth buying. If there are no obvious benefits, however, and no immediate gains, then the computation is far more challenging. This has always been the case in the realm of backup and recovery, where the chief benefit is the ability to maintain operations (measured in uptime), rather than enhance the bottom line. The trouble is, although essential to business continuity, backup and recovery systems are often neglected because they do not directly generate revenue or reduce costs. However, we can calculate what downtime means to your organization:
The total cost of eight hours of downtime in this particular example is $62,568. And if the downtime affects a customer-facing website or application, these numbers don’t even begin to capture the cost of customer frustration, a flood of calls to your customer support teams, or the opportunity it gives your customers to consider alternatives. The outcome of your calculation will be different, but the key principles of how to calculate the value of DR are the same.
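A calculation along these lines can be sketched in a few lines of code. The figures and cost components below are hypothetical placeholders, not the numbers from the example above; substitute your own hourly revenue, staffing, and recovery figures.

```python
def downtime_cost(hours, revenue_per_hour, employees, hourly_wage,
                  utilization, recovery_cost):
    """Estimate the total cost of an outage.

    A simple model: lost revenue plus lost employee productivity
    plus one-time recovery expenses. All inputs are illustrative.
    """
    lost_revenue = hours * revenue_per_hour
    # Fraction of the workforce idled, valued at their loaded hourly wage
    lost_productivity = hours * employees * hourly_wage * utilization
    return lost_revenue + lost_productivity + recovery_cost


# Hypothetical example: an 8-hour outage at a mid-sized company
cost = downtime_cost(
    hours=8,
    revenue_per_hour=5_000,   # revenue normally generated per hour
    employees=50,             # staff affected by the outage
    hourly_wage=30,           # average loaded hourly cost per employee
    utilization=0.8,          # fraction of their work blocked
    recovery_cost=5_000,      # overtime, consultants, hardware, etc.
)
print(f"${cost:,.0f}")  # → $54,600
```

Even this simple model makes the point: the bulk of the cost accrues per hour of outage, so anything that shortens recovery time pays for itself quickly.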
While defining downtime in general is relatively simple – the time during which one or more resources (in this case related to your IT environment) are unavailable – the root cause of downtime can take many forms. Some scenarios, such as equipment changes or scheduled maintenance, can be planned for appropriately, while others, such as natural disasters and power outages, cannot. This lends an element of uncertainty to planning, yet identifying the causes of downtime is critical to establishing an intelligent and detail-oriented plan to reduce it. The majority of organizations have established plans or procedures to recover from events like natural disasters, power outages and even malware and malicious attacks.
Yet many sources report the number-one cause of downtime is human error. (The Uptime Institute, for example, reports that human error is the cause of more than 70 percent of data center downtime.) The takeaway here is: “Sweat the small stuff.” You are probably prepared for a natural disaster, but are you prepared for the contractor who inadvertently rubs against a poorly protected “kill switch,” shutting down the data center? Or for animals chewing through cords? Or for police shutting down the block and denying access to your racks? These are all scenarios we have seen, and they are just the tip of the iceberg of possibilities that can leave you stranded.
To balance an ROI equation – even the hypothetical one posed here – we also need a solution side of the equation. There are many different kinds of technological solutions to consider for reducing downtime and strengthening your data protection, backup and recovery environment – server backup software, backup appliances, physical machines, virtual machines, the cloud. Every organization has different needs and requires different capabilities. One size rarely, if ever, fits all in this world, but there are things to consider when looking at solutions:
So given everything you’ve read up to now, what can you, an IT leader, do today to reduce downtime and ensure your team is in the best position to succeed? Approach your data protection environment as you would any major system and lay out a clear path for improvements. Objectively and meticulously assess your environment; create a definitive plan with specific and reachable goals; and execute that plan.