How to Seed Replication to a Remote DR Series Appliance

Quest’s DR Series Appliance is a remarkable device. It offers excellent deduplication and compression, supports source-side and inline deduplication, and is certified to work with most major backup software vendors. It also provides WAN-optimized replication that lets you send deduplicated, compressed data from one DR container to another, and it even lets you throttle replication bandwidth and schedule when data is sent over the WAN. This optimized replication allows you to keep redundant copies of data on separate DR systems while putting minimal stress on your network. However, even with deduplication and compression, replicating an initial set of backup data from one unit to another can take a long time and consume your available bandwidth for the duration. So what should you do if you can’t afford to lose that bandwidth but still need to replicate? There are two possible solutions.

The first solution is to bring the remote DR onsite so that the two units can replicate over a much faster local network. This is a good, reliable approach, especially if you have not yet shipped the remote DR to its final destination. It reduces the stress on your WAN, speeds up the transfer, and lets you verify that both units are set up correctly while they are in close physical proximity to each other. If that is not practical, for whatever reason, there is still another solution.

The second solution involves seeding the data from your local DR to a removable USB hard drive (mounted as a CIFS share on the DR), then physically moving that hard drive to the location of the second DR and ingesting the data into the remote device. The process is simple, but please read through all of the steps before attempting it:

  1. First you must prepare the device for seeding. Gather the following information:
    1. The names of the containers you want to replicate from the local DR system
    2. The name or IP address of the client machine to which the USB device is attached
    3. The name of the CIFS share that the USB drive is exported as
    4. Any relevant credentials for both DR systems and clients
    5. A password to use for encrypting the data on the device
  2. Next, create a seed export job by entering this command in the DR CLI (accessed by using PuTTY or another third party CLI tool):
    1. seed --create --op export [--enc_type <aes128 | aes256>]
    2. The data is encrypted by default; specifying the encryption type is optional.
    3. You will then be prompted for a password to continue the export job.
  3. After the seed job is created, you must add the container being replicated to the job. If you are seeding multiple containers, execute the command below once for each container.
    1. seed --add_container --name <container name>
  4. You must then reference the USB device to which you are copying the data with the following command:
    1. seed --add_device --server <server name> --volume <volume name> --username <user name> [--domain <domain name>]
    2. The parameters of this command are:
      1. Server Name – the name of the client which exports the device as a CIFS share
      2. Volume Name – the name of the export share on the target server. Case sensitive.
      3. User Name – credentials to access the share. The password is required separately.
  5. Verify that the job was configured correctly with the following command:
    1. seed --show
  6. When you are ready to begin, start the seed process with the following command:
    1. seed --start
  7. You can monitor the progress of the seed with the following command:
    1. seed --stats
  8. If the USB device fills up before the export job is finished, the status will show as ‘Paused (Device full)’.
    1. Remove the full device with the following commands:
      1. seed --stop, then seed --remove_device
    2. Then add a new device with free space using the following command:
      1. seed --add_device
    3. When you are ready to continue, enter the following command:
      1. seed --start
    4. You can also stop the export process and remove the device for any other reason using the same seed --stop and seed --remove_device commands.
  9. Once all of the data you are seeding has been gathered and the seed job is finished, you can delete the seed job with the following command:
    1. seed --delete
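
For reference, a complete export session might look like the following sketch. The container name “Weekly_Backups”, client server “winclient01”, share name “SeedUSB”, and user “administrator” are hypothetical values; substitute your own:

  seed --create --op export --enc_type aes256
  seed --add_container --name Weekly_Backups
  seed --add_device --server winclient01 --volume SeedUSB --username administrator
  seed --show
  seed --start
  seed --stats

You will be prompted for an encryption password after the first command and for the share credentials when adding the device.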

There are some important things to know about the export seeding process:

  • If you run ‘seed --delete’, all of the information about the seeding job is removed from the DR system. If a new job is then created for the same container, it will have to copy all of the data over again.
  • You can still back up to the DR during the data export process, even into the container being exported. The DR can function normally while you are seeding a container for replication.
    • Replication to the same or other DR systems can also be enabled as normal.
  • Seeding does NOT delete the existing data on the device; seed data can co-exist with other data already on it. If desired, you can optionally delete the existing data on the device to free up space for the seed data.
  • You are not required to seed the entire data set. You can choose how much data to seed based on factors like time and available space. The remaining data will reach the target container during the replication re-sync.

Once you have seeded the data to an external USB drive and physically moved the drive to the location of the remote DR, you can import the data using the process described below:

  1. You can create a seed import job using the following command:
    1. seed --create --op import
    2. You will need to enter the same password you specified during the export.
  2. Reference the device from which the seed data is to be read by using the following command (only CIFS shares are recognized):
    1. seed --add_device --server <server name> --volume <volume> --username <user name> [--domain <domain name>]
    2. The parameters of this command are:
      1. Server Name – the name of the client which exports the device as a CIFS share
      2. Volume Name – the name of the export share on the target server. Case sensitive.
      3. User Name – credentials to access the share. The password is required separately.
  3. When the device is added, verify that the configuration details are correct with the following command:
    1. seed --show
  4. When you are ready to begin the import process, enter the following command:
    1. seed --start
  5. You can monitor the import process with the following command:
    1. seed --stats
      1. If the seed status shows as ‘FINISHED’, the data has been completely read from the currently attached device.
      2. You can remove the device after entering the seed --stop command, and then attach the next device.
    2. You can also stop the import for any other reason with the seed --stop command.
  6. To remove the currently attached device, enter the following commands:
    1. seed --stop and then seed --remove_device
  7. Once the data has been completely read from all devices and the import process is complete, you can enter the following command to remove the seed job completely:
    1. seed --delete
  8. Once the import process is finished, you must perform a replication re-sync between the two DR containers. Once replication reaches the ‘INSYNC’ state, run the following command to remove any leftover seed data that was not accounted for:
    1. seed --cleanup
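
Putting the import steps together, a complete session might look like the following sketch. As before, the server “winclient01”, share “SeedUSB”, and user “administrator” are hypothetical; use the values for the client attached at the remote site:

  seed --create --op import
  seed --add_device --server winclient01 --volume SeedUSB --username administrator
  seed --show
  seed --start
  seed --stats
  seed --stop
  seed --remove_device
  seed --delete

If the export spanned multiple devices, repeat the add_device/start/stop/remove_device sequence for each drive, and run seed --delete only after every device has been read.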

There are some important things to know about the import seeding process:

  • The order in which you attach the seed devices makes no difference; the DR system reassembles the data correctly regardless of the order.
  • If the data changes on the source container, it won’t cause an issue. The new data will be replicated during re-sync.
  • Only run the ‘seed --cleanup’ command AFTER replication is established between the source and target containers. Otherwise, the entire imported data set could be deleted.
  • If any of the devices were corrupted or lost during import, that data will be sent from the source to the target container during the replication re-sync.
  • The data is not automatically deleted from the seed device once it is imported into the DR, so you can use the same drives to seed data onto multiple DR systems. However, it is recommended to delete the data from the seed device once it is no longer needed.

The final step in the process is to re-sync replication between the source and target DR devices. To do that, complete the following steps:

  1. Once the seeding import is complete, two possible scenarios exist:
    1. The target container already exists
    2. The target container doesn’t exist and a new replication pair must be created.
  2. If the target container exists, you must initiate a replication re-sync between the source and target DR containers. Since the data is already present on the target, the re-sync will complete quickly, transferring only the namespace along with any new or changed data. Complete the following steps:
    1. Enter the command:
      1. replication --resync --name <name> --role <source | target>
    2. Wait for the replication state to reach ‘INSYNC’
  3. If the target container does not exist, complete the following steps:
    1. Create a container on the target DR Series system
    2. Enable replication between the source and target containers with the following command:
      1. replication --add < … >
    3. Once the replication status changes to ‘INSYNC’, both the source and target have the same data.
    4. Enter the following command on the target to remove any unwanted data left over from seeding:
      1. seed --cleanup
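
For example, if a target container named “Weekly_Backups” (a hypothetical name) already exists on the remote DR, the sequence run on the target system might look like this sketch:

  replication --resync --name Weekly_Backups --role target
  seed --cleanup

Remember to wait until the replication status shows ‘INSYNC’ before running seed --cleanup; running it earlier could delete the imported data set.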

 

All information in this blog post is based on the August 2014 white paper from Dell Engineering, “Seeding from a Dell™ DR Series System to an External Device.”

About the Author
Sam Peck is a Sales Engineer for Quest's Data Protection products covering Upstate New York, Vermont, New Hampshire, and Maine. He has been with the organization for almost five years and has held...