
Creating a lot of jobs for archive to tape

Hi!

We recently installed a second SmartDisk in another server location, and thus we now have the following setup:

  • All jobs run backups to a deduplicated SmartDisk in server location A
  • All jobs have a Phase 2 where the backups are data copied to a second deduplicated SmartDisk in server location B

This all works like a charm - in fact, backups have never worked this well! We have the SmartDisk in server location B loaded with quite a lot of disk, which means we can keep some 3 to 4 weeks of backups available at any time.

Earlier we had Phase 2 making backups to tape, but that is no longer possible since Phase 2 is now the data copy to SmartDisk no. 2, and as far as I know you cannot have a Phase 3 in a job.

Now to the question: how would I create archive backup jobs to tape for all my backup jobs without having to create the archive jobs one by one?

Any suggestions?

  • Currently, the only way to do this is the method described, where you basically have to create a third backup (a Standalone Data Copy) to tape. This involves waiting until Phase 2 is complete and then creating a data copy from disk to tape - assuming you want to run Phase 2 right away, before the tape copy. Technically it does not matter when you run the copy to tape, but you cannot run both at the same time, which you probably already know.

    However, if you are pretty handy with scripting, you could actually leave your backup running as it is, with one of the following variations:

    Option 1:
    Phase 1 -> then use a post script to trigger a data copy which itself includes a Phase 2.
    So it would run like this: Backup -> post script kicks off the data copy/duplicate -> once the data is copied, the duplicate kicks off a Phase 2 of its own that goes to tape. (A sketch of such a post script follows after this list.)

    Option 2:
    Phase 1 to Phase 2 (disk to disk).
    Schedule a data copy to run every night, backing up the latest backup to tape.

    Option 3:
    Similar to Option 1, but do the disk-to-disk copy first as in Option 2, and have a post script in the job kick off a data copy to tape. That copy would fail at first, since it would try to run right away, but you can schedule a retry so that it runs, say, 2 hours later.
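
    For Options 1 and 3, the post script itself can be very small. Here is a minimal sketch in Python - purely illustrative, assuming the data copy job has already been created in NetVault, that the placeholder job ID is replaced with your real one, and that nvjobstart (from the NetVault CLI) is on the PATH:

        # Hypothetical post script: start a pre-created data copy job
        # once the Phase 1 backup completes.
        import subprocess
        import sys

        DATA_COPY_JOB_ID = "1234"  # placeholder - use your real job ID

        # nvjobstart starts an existing job definition; -jobid selects it by ID.
        result = subprocess.run(["nvjobstart", "-jobid", DATA_COPY_JOB_ID])
        sys.exit(result.returncode)  # propagate success/failure back to the job
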
    ---------------------------------------------------------------------------------------------------------------------------------------

    Out of all these options, though, I would really only advise using Option 2. Partly because it is the easiest of the three, but the main reason is simply the nature of SmartDisk and dedupe.
    SmartDisk doesn't dedupe on the fly, so you do not want to data copy anything before it has had the chance to dedupe. During dedupe, it calculates and saves the hash values that Optimized Replication will use for comparison.
    This way, the backup runs from NVSD 1 to NVSD 2 and the data copy is scheduled; by later in the day the dedupe will already have finished, so NVSD already knows what it REALLY needs to copy to tape - this way it doesn't send any excess data over the WAN.

    If the data copy were run immediately after Phase 2 finished, it would copy ALL the data that is already on the 2nd NVSD server over the WAN. It is much better to wait for dedupe to fill in the hash values; then Optimized Replication can inform NVSD that it doesn't need to send huge chunks of data over again.

    So, I would go with leaving Phase 1 and Phase 2 as they are, and schedule a data copy job to run every night, copying the backups you want over to tape.

    Thanks,
    Andre
  • Thanks!

    Well, actually we only need to go to tape about twice per month, so scheduling it works quite well for us, I believe.

    Is there any way I can schedule 100 jobs through a script or via some other automated process, or do I need to use the GUI? It's gonna be a lot of clicking ... :)
  • Do you mean to create 100 different jobs, or 100 instances of one job?
  • That is quite a lot. Are these all data copy type jobs, though?
    Either way, you can use the CLI; the command is 'nvjobcreate'.
    But since this uses existing backup sets, you would also need to have the backup selection sets created.

    Once those are created, you can use 'nvjobcreate' with a parameter file and have all those sets built into the param file. You then run 'nvjobcreate', and it reads the param file and creates the jobs (a sketch of how this could be scripted follows below).

    Here is a link to the CLI Guide, starting with 'nvjobcreate'.
    There is also a section on 'nvsetcreate' in there, in case you want to create the sets using the CLI as well.
    support.quest.com/.../16
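
    If your set names follow a naming convention, you could also generate all the jobs with a small wrapper script instead of writing everything out by hand. A rough sketch in Python - the job names, set names, and exact nvjobcreate parameters below are assumptions to verify against the CLI Guide:

        # Hypothetical bulk creation of data copy jobs via the NetVault CLI.
        import subprocess

        # Placeholder: your ~100 backup job names, e.g. read from a text file.
        job_names = ["host01-daily", "host02-daily"]

        for name in job_names:
            # Assumed convention: a selection set "DC-<name>" was created
            # beforehand (e.g. with nvsetcreate), plus one shared schedule
            # set and one shared target set for all the tape copies.
            cmd = [
                "nvjobcreate",
                "-jobtitle", "DataCopy-" + name,
                "-selectionsetname", "DC-" + name,
                "-schedulesetname", "Monthly-Tape",
                "-targetsetname", "Tape-Library",
            ]
            result = subprocess.run(cmd)
            print(name, "created" if result.returncode == 0 else "FAILED")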
  • Yes, they will be. We are on the capacity edition, so we only back up the data we need. That makes it quite hard for us to use policy jobs, since the backup selections differ a lot between hosts. So we have about 100 different jobs, and thus we need 100 different data copy jobs.

    I will check it out, thanks!