
Cloud Archive

I am trying to back up my local Rapid Recovery instance to Azure Blob Storage. I am running Core 6.1.3.100. I have connected the cloud account but am confused by the different options and what they will do. My objective is to have continuous protection on the machines locally, with snapshots hourly for 1 day, daily for 1 week, weekly for 1 month, and monthly for 6 months. I then want to keep 1 backup on Azure that is the latest weekly. My concern is what it will do on a weekly basis: update the image that is there, or completely resync it? My repository is close to 4 TB right now, and I'm trying to minimize both bandwidth and storage cost.

  • Hi cbruscato:

    You have two options to consider for keeping data in the cloud:

    1. Archiving, which can be classic (one-time) or incremental; its main characteristic is that it always grows. Cloud archives can be attached to the source core as read-only repositories for convenient data retrieval.

    2. Replication, for which we recommend our own Rapid Recovery virtual appliance in the cloud; its main characteristic is that data is rolled up according to your desired retention policy. You can get more information here: https://support.quest.com/technical-documents/rapid-recovery/6.1.3/replication-target-for-microsoft-azure-setup-guide/

    Archiving is suitable for long-term data retention (such as end-of-fiscal-year financial records) or, in the case of incremental archives, for scientific research data that needs to be documented at every step.

    However, since you are interested in archiving, you may consider the following example as a way of limiting costs. Assuming a one-year retention policy, you need to create 3 buckets. In the first year, archive incrementally to bucket #1; in the second year, archive to bucket #2; in the third year, archive to bucket #3 and remove the contents of bucket #1 (a sketch of this rotation follows below). This way you will always have 1+ year of data available for recovery. (This example was geared toward simplicity; you can get the same results with only 2 buckets, or by deleting data that falls outside the retention policy once you have 1 full year of backups, etc.)
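    To make the rotation concrete, here is a minimal sketch in Python against the azure-storage-blob v12 SDK. The container names, the connection-string environment variable, and the yearly trigger are assumptions for illustration; it only shows the "empty the stale bucket" step, since the archive jobs themselves are run by the Core.

        import os
        from azure.storage.blob import BlobServiceClient

        # Hypothetical container names for the three-bucket rotation.
        CONTAINERS = ["rr-archive-1", "rr-archive-2", "rr-archive-3"]

        def container_for_year(year: int) -> str:
            # Year N archives into bucket N mod 3, so each bucket
            # is reused every 3 years.
            return CONTAINERS[year % 3]

        def purge_stale_bucket(service: BlobServiceClient, year: int) -> None:
            # At the start of year N, empty the bucket used in year N-2.
            # What remains is last year's complete archive plus this year's
            # growing one, so 1+ year of data is always recoverable.
            stale = service.get_container_client(container_for_year(year - 2))
            for blob in stale.list_blobs():
                stale.delete_blob(blob.name)

        if __name__ == "__main__":
            service = BlobServiceClient.from_connection_string(
                os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # assumed to be set
            )
            purge_stale_bucket(service, year=2018)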

    If you want to go this way, there are 2 things to consider:

    1. Initial data (the current contents of your repository). If you have a really good WAN provider, you could do it over the wire; at theoretical speeds, figure roughly 24 hours/TB on a 100 Mb/s connection (a quick estimate for your repository follows after this list). If you cannot achieve a satisfactory transfer rate, you should consider using the Microsoft Azure Import/Export service. More information here: https://support.quest.com/rapid-recovery/kb/214778

    Please note that the drives you send to Microsoft need to be BitLocker-encrypted. BitLocker is not available on all Windows operating systems (for instance, it is not available on Windows 7 Professional but is included in Windows 7 Enterprise and Ultimate). However, it can be enabled on servers that do not have a TPM chip, such as virtual servers. If in doubt about how to do it, please open a ticket with us. Additionally, we have scripts that simplify the process of enabling BitLocker and of creating and managing Azure buckets.

    2. The retention policy on your source core. If you do incremental archiving and the recovery points on the source core have already been consolidated so that they can no longer be chained to the recovery points already archived, you may be surprised by a larger job than expected, because the recovery points needed to fill the gaps must also be sent to the archive. To avoid surprises, I would suggest experimenting locally before sending your data to Microsoft.
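    Regarding the over-the-wire estimate in point 1, the arithmetic is easy to reproduce. Here is a short Python check for a roughly 4 TB repository on a 100 Mb/s line, assuming the link runs at its full rated speed (which real transfers rarely do):

        # Back-of-the-envelope seeding time for a ~4 TB repository.
        repo_tb = 4
        link_mbps = 100                        # line rate in megabits/second
        bits = repo_tb * 8 * 10**12            # 1 TB = 10^12 bytes
        seconds = bits / (link_mbps * 10**6)
        print(f"{seconds / 3600:.0f} hours")   # ~89 hours total, i.e. ~22 hours/TB

    That is in the same ballpark as the 24 hours/TB rule of thumb above; protocol overhead and link contention usually push the real number higher.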

    Hope that this helps.

  • Please explain the recycling options on the cloud archiving option, if you would. I just set up my new DL4300 appliance. I set up an Azure Cool tier Blob account with a container and set up a weekly archive job. I selected Recycle Action: Erase Completely, and I selected Build recovery point chains. I assume that, rather than incremental, this will keep my Azure storage from growing forever and also give me weekly offsite disaster recovery?