True story: I once knew of a bank whose disaster recovery plan involved finding the physical backup tapes that had been carted to a sister branch two weeks prior and then trying to restore them. Problem was: no one checked the tapes to ensure the backups were working. Once the tapes were located and loaded, the team discovered the backups restored successfully only about 15 percent of the time. That debacle resulted in the hiring of a new IT director, who quickly went to work developing a comprehensive DR plan for the bank, one that included automation, replication, and aggressive testing.
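The lesson from that tape fiasco is simple: a backup you haven't verified is a backup you don't have. As a minimal illustrative sketch (the file names and helper functions here are hypothetical, not the bank's actual tooling), an automated check might hash each source file and its backup copy and flag any mismatch long before a disaster forces the question:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_backup(source: Path, backup: Path) -> bool:
    """Return True only if the backup exists and matches the source byte-for-byte."""
    return backup.exists() and sha256_of(source) == sha256_of(backup)
```

Run on a schedule against every file a restore would depend on, a check like this turns "we think the tapes work" into a daily yes-or-no answer, which is the difference between a 15 percent restore rate discovered during a crisis and one discovered during a Tuesday.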
Just in time, it turns out, because right after the DR plan was up and running, the hotel next to the bank caught fire and the city shut down power to the entire block. For two weeks. Resulting in the first (and successful, I might add) test of the new IT director’s disaster recovery plan.
We’ve been talking a lot about disaster recovery here in our blog Nine Steps to Building a Business-Oriented Disaster Recovery Plan because we’ve heard countless stories of unforeseen disasters, DR not taken seriously, and too many IT teams woefully unprepared for an unplanned event. In truth, disaster recovery planning isn’t just about human error, big storms, and mighty disasters. Disaster recovery planning is about the planning part of the equation.
Let’s take a look at the flip side and see how one organization did it right.
Tasmania Fire Service (TFS) is responsible for fire suppression and control throughout the state of Tasmania, Australia – a geography where bushfires spike during summer months and nearly 11 percent of the population lives close to bushland. TFS lives with risk every day. But the IT team knew that its critical data had to be anything but at risk, so it embarked on a well-thought-out DR plan that articulated needs, set out key goals, and then put a workable plan in place that was right for its situation.
Here’s what they did:
Because TFS knew from first-hand experience that education and preparation were key to preventing and fighting fires in Tasmania, it took the same approach to its data protection plan - mapping out the education and preparation its IT support group would need.
And that bank I mentioned earlier? Because it too had put its DR plan into effect, not one penny of productivity or revenue was lost as a result of the hotel fire. When asked what he’d say to a colleague about DR planning, the bank’s IT manager told me, “I’d ask my colleague - ‘What would you do if your data center was without power for two weeks without notice?’ or, ‘What would you do if you couldn’t physically be on premises for two weeks?’”
Good question to ponder. What WOULD you do?