Rapid Recovery Repository Volumes

I am using Rapid Recovery. 

My repository is currently made up of four 6 TB volumes, which are full. 

I am planning to triple the size of my storage and want to expand my repository. 

I was told that I could not simply add new volumes to the repository, as that would not balance the data from the currently full volumes; instead, I would need to archive the current repository, delete it, and recreate a larger repository. 

I have read that best practice is to use 6 TB volumes. 

Is there a limit to how many 6 TB volumes I can use? 

What is the downside of using 10 TB volumes? 

Thank you, 


  • There's a difference between 'you can't' and 'perhaps it isn't the best idea.'

    You can keep extending your repo; however, if you are moving to different storage media, or different speeds of disks, it's best not to do that. Even if they aren't 'bad' disks or bad storage, if they are noticeably different speeds/caliber, it's not the best idea. For example, it's probably not best to have a repo that spans a QNAP with 5.4k disks, then crosses to an EQL with 7.2k disks, and then extends to an all-flash Nimble. Can it work? Yes. Is it the best idea? No. 

    I have a couple of questions, or one really. Are these volumes tied together in a RAID, with a repo that sits on this one volume you've exposed to the core? Or do you have one repo that sits on a number of extents (different volumes)? Either way, the 'best' thing you could do is have one great big volume and put your repo on it, rather than have 2, 4, or 10 different volumes all attached to make up a repo. 

    RR does not 'balance' the repo across disks, no. You tell RR 'here is a 4 TB disk' and it consumes the whole thing from the get-go. Regardless of the reason, that is how it works. 

    You asked about repo limits as far as disks are concerned. I think the limit of extents is something like 256, assuming they are mounted disks (not UNC/CIFS paths). Again, no one would recommend doing that, but the core would let you (going back to the first line of this reply: you can do it, but should you?). As long as the 'new' volumes are mounted volumes and not CIFS shares, you can extend the repo, yes (unless RR now allows extending to CIFS shares, which, if it does, I would still strongly recommend against): support.quest.com/.../how-to-extend-a-dvm-repository

    As for using 10 TB disks over 6 TB, I'd always go bigger, as having too much storage is better than not having enough. I have literally dozens of cores using Seagate EXOS 16 TB disks, and they are perfectly fine. 

    If you have the luxury of a temporary VM, or another workstation/server to act as a core, your best bet in moving from one repo to another (if it's a full move and you can have both online at the same time) is to temporarily set up a second core, put the 'new' repo on the second core, and replicate the data from the original core to the new one (make sure you specify the same retention policy). Then, once the replication is in sync, shut down the core services, remove the old repo, attach the new one to the old core, and march on. This, again, is if you are moving from old to new storage and they are entirely different. 

  • Phuff,

    Thanks for your response. We have a Dell storage array that is broken into 6 TB volumes and mapped to the RR server. 

    I am preparing to triple the size of the data store (RAID 10) with 7.2k disks and allocate 50 TB to RR. So would you recommend using one 50 TB volume or 5 × 10 TB volumes? 
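    The capacity math behind a plan like this is simple enough to sanity-check in a few lines. The sketch below assumes RAID 10 (usable capacity is half of raw) and uses an illustrative disk count and size, not the actual array layout from this thread:

```python
# Hedged sketch: RAID 10 capacity check. The 14 x 8 TB layout below is an
# illustrative assumption, not the poster's actual array.

def raid10_usable_tb(disk_count: int, disk_tb: float) -> float:
    """RAID 10 mirrors every disk, so usable capacity is half the raw total."""
    if disk_count % 2:
        raise ValueError("RAID 10 needs an even number of disks")
    return disk_count * disk_tb / 2

current_repo_tb = 4 * 6   # four full 6 TB volumes today = 24 TB
target_repo_tb = 50       # planned allocation to RR

# e.g. fourteen 8 TB 7.2k disks would yield 56 TB usable in RAID 10,
# which covers the 50 TB target with some headroom
print(raid10_usable_tb(14, 8))  # 56.0
```

The same function makes it easy to compare candidate disk counts and sizes against the 50 TB target before committing to a layout.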

    Thank you, 


  • Sure thing. As long as the OS that RR is installed on supports a volume that size, then one lump sum would be ideal. That size (50 TB) is still well within most OS-supported volume sizes. If we're talking 50 TB, and it's all going back to the same Dell backend storage, you betcha, one big volume is the way to go. 
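    One OS-level detail worth checking before formatting a single 50 TB volume on a Windows core: NTFS addresses at most 2^32 clusters per volume, so the maximum volume size depends on the allocation unit (cluster) size chosen at format time. With the default 4 KiB clusters the ceiling is about 16 TiB, so a 50 TB volume needs a larger cluster size (and a GPT partition table). A quick sketch of that math:

```python
# NTFS can address at most 2**32 - 1 clusters per volume, so the maximum
# volume size scales with the cluster (allocation unit) size.

def ntfs_max_volume_tib(cluster_kib: int) -> float:
    max_clusters = 2**32 - 1
    return max_clusters * cluster_kib * 1024 / 1024**4

for kib in (4, 16, 64):
    print(f"{kib} KiB clusters -> about {ntfs_max_volume_tib(kib):.0f} TiB max")
# 4 KiB clusters  -> about 16 TiB  (too small for a 50 TB volume)
# 16 KiB clusters -> about 64 TiB
# 64 KiB clusters -> about 256 TiB
```

In practice that means choosing at least a 16 KiB allocation unit when formatting the 50 TB volume; larger clusters also tend to suit the big sequential files a repository holds.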

  • Phuff,

    Side question:

    We have a Dell PowerStore data center for our vSphere cluster and a Dell Compellent Data Center for our backups. 

    The documentation recommends that the RR repository be on directly connected volumes, not volumes accessed through the VM. Is this important?

    It also seemed to recommend that we not put the RR repository on the same Data Center as the vSphere cluster. Is this still an issue, or could we expand the PowerStore and use a volume from the PowerStore? 

    Just curious for future planning. 

    Thank you, 


  • Mostly it is personal preference. I know that's an opinion, but if you've got Compellents and PowerStores with at least 1 Gb, if not 10 Gb, uplinks, and your VMware is solid, it really is preference. I have seen (and done) repos connected via iSCSI through the VM itself; I myself prefer to make the iSCSI connection to the host and then present the repo as a .vmdk to the OS. Both work, and there's no real downfall to doing one over the other.

    You can expose the repo in the same Data Center; really it's about security and design, trying to limit your exposure if you were to have a failure anywhere. If it's all tied to VMware and you lose your VMware, how do you get it all back? If you get crypto-locked, how do you get it all back? Granted, these days your VMware backend probably isn't going to be ransomed; usually it's the guest OS that might be.

    Typically I do have my RR cores running as VMware VMs, to take advantage of HotAdd during backups. I have the prod VMs on one datastore, and then I have the repo on another datastore. The reason I personally like doing this is that I trust the ESXi hosts, through iSCSI, to effectively handle the I/O traffic between storage > host > VMs. If I were running on less-than-adequate gear I might feel differently, but with decent gear there is no real reason NOT to do this. I also prefer this method because you can 'see' it all from vSphere/vCenter, rather than having to 'go into' the VM running the core to see/manage the iSCSI initiator. However, it is largely personal preference today; unless you are dealing with less-than-desirable gear, where you might see a performance hit somewhere, it's Coke vs. Pepsi.

    For the repo, keep it iSCSI or directly attached, though. Mapped drives, UNC paths, CIFS shares: those are the repos with problems more often than not. 

    If you have vSphere and you attach a 50 TB LUN and put a repo on it, and then in two years you get a new SAN, put another 50 TB LUN within vSphere, add another 50 TB .vmdk to the RR VM, and then extend the repo to that LUN via drive 'F' (say the original is E), that would work. If I pull up one of my client vCenters and look at the datastores, I have one datastore (LUN) for every client repo. Now, I would recommend separate LUNs for separate repos, if only for the VMware pitfall of at times having a locked .vmdk or something. Better to have only one .vmdk/LUN/VM affected if a .vmdk gets locked. Rare as it may be, it really sucks having a 60 TB LUN with 10, 20, or 30 .vmdks on it and a locked/phantom 10 TB .vmdk that will never delete, so you have to recreate the LUN to get that space back (not an RR problem, but a life/VMware one that, though infrequent, can happen).

    Again, if we were talking about workstations, laptops, or desktops running vSphere (not server gear), and Buffalo, Western Digital, or other home-grade storage, then yeah, all bets are off and it's roulette. 

  • Phuff,

    Thanks for the input. 

    I hope you have a great 2022. 


  • Anytime, likewise. Cheers to you Larry.