"DvmFSIncomplete" error while attaching to an existing repository


We are receiving a "DvmFSIncomplete" error while attempting to attach a copy of an existing repository to a new core installation (full DR test).

Here is the stack trace of the error:

Server side:

Replay.Core.Contracts.DedupeVolumeManager.DedupeVolumeManagerException: File system is incomplete

Error checking the file system: '50e0190e-ddd3-45b3-8cf1-efae9e549cf1', data path: '\\nas01.mydomain.local\RapidRecoveryLive', metadata path: '\\nas01.mydomain.local\RapidRecoveryLive' - Error(s) 'DvmFSIncomplete'
   at Replay.Core.Implementation.DedupeVolumeManager.Dvm.<>c__DisplayClass46_0.b__0()
   at System.Threading.Tasks.Task.Execute()

Not sure if this could have an impact, but the new core install is at 6.5 while the repository was connected to a core running version 6.4. 
I've made sure the filesystem is not read-only.
I've looked through the KB but could not find any reference to the error in question.
Any ideas?
  • The questions I have are: 

    1) Does it still mount? 

    2) You mention this is a copy of your repo. Did you just copy/paste your repo from one storage unit to another or use xcopy or something of that nature? If so, that raises a number of other questions.

    If you want to move your data from one repo to another, or keep copies of the recovery points, either replication or archive would give you a much higher probability of being free from error. A copy/paste of the data should work, provided all of it (data and metadata) is copied over and all parts of RR are stopped while the transfer is being done, but this would be the first time I've read about copying and pasting a repo not providing the intended result.

    Also, if at all possible, if your NAS supports iSCSI, use iSCSI to mount the storage to your Rapid Recovery server. Due to the dedupe engine of RR, volumes mounted directly or via iSCSI are far more forgiving than ones attached via UNC path. Just thinking out loud, sorry; seeing the UNC path in the error made my mind go back to that. Cheers.
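    On the cold-copy point: one cheap sanity check before attaching a copied repo is to confirm every file made it across with the same size. A minimal sketch in Python (the share paths in the trailing comment are hypothetical, and this checks presence and size only, not content):

```python
import os

def compare_trees(src, dst):
    """Walk src and report files missing from dst or differing in size."""
    problems = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        for name in files:
            rel_path = os.path.normpath(os.path.join(rel, name))
            s = os.path.join(root, name)
            d = os.path.join(dst, rel_path)
            if not os.path.exists(d):
                problems.append((rel_path, "missing"))
            elif os.path.getsize(s) != os.path.getsize(d):
                problems.append((rel_path, "size mismatch"))
    return problems

# Hypothetical example -- the real data/metadata locations depend on how
# the repository is laid out on the NAS share:
# compare_trees(r"\\nas01\RapidRecoveryLive", r"\\nas01\RapidRecoveryCopy")
```

    If this reports anything, the copy was incomplete and the DVM file-system check would likely fail on attach.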

  • Thanks for the quick response!

    No, the repo does not mount.

    The files were created by executing a backup/restore using the Synology "Hyper Backup" utility. The logs indicate the jobs were successful and did not contain any errors or warnings. I suspect the backup files are in an inconsistent state because the backup was taken on a repo in active use, but the error message is not clear, so I was wondering whether there was more to back up than just the repository files.

    The live repo has been attached via a UNC path for some time now and has not caused any errors so far.

  • Gotcha. I am familiar with that Synology backup. Especially with the RR core service running and active, I would not expect it to work for duplicating the repo. There are ways around this, both local and cloud. The local way, since you're already pointing to the NAS, would be a scheduled archive. However, since you already have the storage, replication is without a doubt your way to go: you could use an old PC or a server VM, create a new repo on that Synology, and replicate the data over to it. Offsite, you can build/maintain your own target, or honestly that is where folks like me come in (just being honest with you). Then you have the data duplicated and off the network. There are ways, both local and onsite, and though I can't speak for Quest, I'm sure they would agree that the Synology backup utility is not the way to go, my friend.


    If you want/need a dialog shoot me a message or we can continue this thread if you like. Cheers. 

  • Decided to give replication a try as you suggested (but using the Synology replication service, not RR's). This seems to work correctly: I was able to mount the replicated repo on the new RR core and restore a couple of VMs successfully. Thanks again for your help!

  • Glad to hear it. I personally never tried the Synology replication service, only because there's one built into RR (which I'm sure they'd recommend using), but now I want to try that as well. Glad it worked, cheers.