
Virtual standby to ESXi 6 host failed

I recently moved from using the agent to an agentless deployment (yep, archived all my old recovery points first!), and have run into a few hiccups along the way.  The initial backups all work well, replication works well, but the virtual standbys that I'm creating have been running into odd errors.

 

Anyone else run into this?  A Google search of the error doesn't return much of anything.  The Core's subsequent attempt to export was very odd.  Originally it only attempted to export the 9 GB of changed data, which surprised me given the failure on the initial export, but when the status got to 8.2/9 GB it changed to 8.8/9.6 GB and kept rising until it reached 18.83/19.82 GB, at which point I got the "post-processing" message, which ultimately resulted in the same error messages.

Here's the stack trace from the original error:

Server side:

Replay.Core.Contracts.Export.ExportException: One or more dependent export tasks have failed. ---> System.AggregateException: One or more errors occurred. ---> System.AggregateException: One or more errors occurred. ---> Replay.Common.Contracts.TevoLib.TevoLibraryErrorException: TevoDismountVolume failed with error -2147024893 (0x80070003 - The system cannot find the path specified)
   at Replay.Common.NativeWrapper.TevoLib.TevoLibraryErrorException.Throw(String functionName, Int32 errorCode)
   at Replay.Common.NativeWrapper.TevoLib.TevoLibWrapper.DismountVirtualDiskVolume(VolumeName volumeName)
   at Replay.Core.Implementation.Mounts.LocalMount.DismountInternal()
   at Replay.Core.Implementation.Mounts.LocalMount.Dismount()
   at Replay.Core.Implementation.Export.VirtualMachineExporterWindows.<>cDisplayClassa.b7(Exception e)
   at System.AggregateException.Handle(Func`2 predicate)
   at Replay.Core.Implementation.Export.VirtualMachineExporterWindows.PostProcessExportedVolume(ExporterSourceVolume exporterSourceVolume, IPAddress replayServerAddress, IDiagnosticContext context)

--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
   at Replay.Core.Implementation.Export.ExportJob.ExportTask()
   at System.Threading.Tasks.Task.Execute()

 

Replay.Common.Contracts.TevoLib.TevoLibraryErrorException: TevoDismountVolume failed with error -2147024893 (0x80070003 - The system cannot find the path specified)
   at Replay.Common.NativeWrapper.TevoLib.TevoLibraryErrorException.Throw(String functionName, Int32 errorCode)
   at Replay.Common.NativeWrapper.TevoLib.TevoLibWrapper.DismountVirtualDiskVolume(VolumeName volumeName)
   at Replay.Core.Implementation.Mounts.LocalMount.DismountInternal()
   at Replay.Core.Implementation.Mounts.LocalMount.Dismount()
   at Replay.Core.Implementation.Export.VirtualMachineExporterWindows.<>cDisplayClassa.b7(Exception e)
   at System.AggregateException.Handle(Func`2 predicate)
   at Replay.Core.Implementation.Export.VirtualMachineExporterWindows.PostProcessExportedVolume(ExporterSourceVolume exporterSourceVolume, IPAddress replayServerAddress, IDiagnosticContext context)
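
For what it's worth, that error code decodes to a plain Windows path error; here's a quick Python sketch using only the values copied from the trace above:

```python
# Decode the error code shown in the stack trace (values copied from the trace).
code = -2147024893
hresult = code & 0xFFFFFFFF    # same bits viewed as an unsigned 32-bit HRESULT
print(hex(hresult))            # 0x80070003
print(hresult & 0xFFFF)        # low 16 bits -> Win32 error 3 (ERROR_PATH_NOT_FOUND,
                               # "The system cannot find the path specified")
```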

  • I should add that my setup is that we have a "Core1" running Windows 2012 that receives the initial backups and then replicates to a secondary core running on Windows 10 Pro, which does the virtual standbys.
  • Maybe some help.

    The 0x800xxx error is a disk error in Windows, but taken together with the "cannot find the path" message and ESXi, it leads directly to this Dell/AA KB:

    support.software.dell.com/.../154612

    I am an AA customer; however, I apparently do not have the privileges to read it. Maybe someone here can help with that.

    I will say that exporting a standby to a VBox machine fails with similar-looking errors when the protection interval (time) is changed. Starting over with a new one-time export fixes this.
  • They're suggesting the following, which may be the cause, but I doubt it. When I was backing these systems up using the agent, the virtual standby worked fine; I began having the issues when I converted to agentless backup.

    That said, I found a new symptom: when setting up my initial virtual standby, it gave me an error that the boot disk was on Disk2000, or some such thing. I'll have to see what I can find out about that one.


    "Cause
    Filesystem corruption more than likely exists on the production protected server. Investigation of the System Event logs on the Core may reveal disk and ntfs errors referencing the AppAssure mount point.

    Source: Ntfs
    ID: 55
    Date: 06/23/2015 17:09:04
    Description: A corruption was discovered in the file system structure on volume C:\ProgramData\AppRecovery\ExportMounts\AgentName_247168f0-3e65-4dc4-a378-c49f1ae26ea7.

    The Master File Table (MFT) contains a corrupted file record. The file reference number is 0x9000000000009. The name of the file is "".
    Resolution
    On the Agent machine, schedule a chkdsk /f /r on the volume showing the failure in the logs. Once complete, continue to monitor the virtual standbys on the Core."
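
    For reference, a rough sketch of what that event data and resolution amount to (the drive letter C: is an assumption; use whatever volume is flagged in your own logs, and run it from an elevated prompt):

    ```python
    # Sketch only: decode the quoted NTFS file reference and schedule the
    # chkdsk /f /r that the KB resolution calls for.
    import subprocess

    # An NTFS file reference packs a 48-bit MFT record index with a 16-bit
    # sequence number, so 0x9000000000009 points at MFT record 9 (one of the
    # reserved metadata records), which is why the event shows an empty name.
    ref = 0x9000000000009
    print("MFT record:", ref & 0xFFFFFFFFFFFF, "sequence:", ref >> 48)

    # chkdsk asks to schedule the scan at the next restart when the volume is
    # in use; feeding "Y" accepts that prompt. C: is an assumed drive letter.
    result = subprocess.run(
        ["chkdsk", "C:", "/f", "/r"],
        input="Y\n",
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    ```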
  • The initial error I get on export is:

    "The system volume is located on Disk2000. After export please change the boot device order to be able to boot from the exported machine. Wish to continue?"
  • Have you tried completely starting over? That is to say, remove the VM from the host and delete all the files pertaining to it, remove the machine from the VM standby schedule on the core, etc.?

    It sounds similar to the problem I was having. It seems like once you change anything (in your case, removing the agent), the exports fail because certain parameters change.

    In my case even going back to the original schedule still failed. I had to remove everything and start over and then it worked.
  • When you say remove the VM from the host, do you mean completely delete the production VM, or just basically start again with my Cores?
  • Yes, remove everything from everywhere and start completely over.

    I only use VBox on Linux and Hyper-V on 2012 R2, so I'm not sure how ESXi works. In my instance the VM would still spin up, but apparently the drives became corrupt once the exports started failing. I first tried just deleting the VHDs, but the exports continued to fail.

    Once I deleted the machine from the host, manually removed all the config files, and removed the export process from the core (essentially starting completely over), it worked again.

    I understand this could be a huge hassle for a large VM, so it's probably a last resort, but I thought it might help.
  • And just to clarify, my setup is a core running on 2012 R2 (in house) that is backing up a physical server running 2008 R2 (in house) and exporting it as a VM to a remote DR location running VBox 5.
  • Hi kenton.doerksen:
    Disk2000 makes me think of the hardware version for the VM. I would suggest the following steps:
    1. Upgrade the core to RR 6.1 (the replication target first) if you have not already done so, as some VM export issues were addressed (including compatibility with ESXi/VC 6.0+).
    2. Check the hardware version of your VMs and upgrade as needed. This is an inconvenient operation, as the virtual machines need to be turned off, so I would try with a test VM first (see the sketch below for a quick way to list hardware versions).
    If this seems like overkill, it may be a good idea to open a support ticket and have one of our engineers take a look at your environment.
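
    A rough pyVmomi sketch for listing each VM's hardware version (the host name and credentials below are placeholders, and this assumes the pyvmomi package is installed on whatever machine you run it from):

    ```python
    # Sketch only: connect to the ESXi host (or vCenter) and print each VM's
    # virtual hardware version, e.g. "vmx-11" for hardware version 11.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # skip cert checks for a lab host
    si = SmartConnect(host="esxi-host.example.com", user="root",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            print(vm.name, vm.config.version)
    finally:
        Disconnect(si)
    ```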
  • Sounds good. I'll do the 6.1 upgrade first (I was running 6.0.2.144). All my VMs are running virtual hardware version 11 (for ESXi 6.0+).