Standby VM Fails to Export After 6.3 Upgrade.

My first backup and export cycles after upgrading to 6.3 failed with the following messages:

Error
One or more dependent export tasks have failed.
TevoGetFlagsInOfflineVolume failed with error -2147024893 (0x80070003 - The system cannot find the path specified)

Server side:

Replay.Core.Contracts.Export.ExportException: One or more dependent export tasks have failed. ---> System.AggregateException: One or more errors occurred. ---> System.AggregateException: One or more errors occurred. ---> Replay.Common.Contracts.TevoLib.TevoLibraryErrorException: TevoGetFlagsInOfflineVolume failed with error -2147024893 (0x80070003 - The system cannot find the path specified)
   at Replay.Common.NativeWrapper.TevoLib.TevoLibraryErrorException.Throw(String functionName, Int32 errorCode)
   at Replay.Common.NativeWrapper.TevoLib.TevoLibWrapper.GetFlagsInOfflineVolume(String mountPoint)
   at Replay.Common.Implementation.Virtualization.P2VPostprocessor.UpdateMetadataFile(String mountPoint, Guid driverId, FileSystemType fileSystemType, UInt32 restoredEpochNumber, UInt32 highestEpochNumber)
   at Replay.Core.Implementation.Export.VirtualMachineExporterWindows.UpdateMetadataFile(IVolumeImage volumeImage, String mountPoint)
   at Replay.Core.Implementation.Export.VirtualMachineExporterWindows.TryPostProcessVolume(IPAddress replayServerAddress, IVolumeImage volumeImage, ExporterSourceVolume exporterSourceVolume, MountFlags mountFlags)
   at Replay.Core.Implementation.Export.VirtualMachineExporterWindows.PostProcessExportedVolume(ExporterSourceVolume exporterSourceVolume, IPAddress replayServerAddress)

   --- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
   at Replay.Core.Implementation.Export.ExportJob.ExportTask()
   at System.Threading.Tasks.Task.Execute()

Replay.Common.Contracts.TevoLib.TevoLibraryErrorException: TevoGetFlagsInOfflineVolume failed with error -2147024893 (0x80070003 - The system cannot find the path specified)
   at Replay.Common.NativeWrapper.TevoLib.TevoLibraryErrorException.Throw(String functionName, Int32 errorCode)
   at Replay.Common.NativeWrapper.TevoLib.TevoLibWrapper.GetFlagsInOfflineVolume(String mountPoint)
   at Replay.Common.Implementation.Virtualization.P2VPostprocessor.UpdateMetadataFile(String mountPoint, Guid driverId, FileSystemType fileSystemType, UInt32 restoredEpochNumber, UInt32 highestEpochNumber)
   at Replay.Core.Implementation.Export.VirtualMachineExporterWindows.UpdateMetadataFile(IVolumeImage volumeImage, String mountPoint)
   at Replay.Core.Implementation.Export.VirtualMachineExporterWindows.TryPostProcessVolume(IPAddress replayServerAddress, IVolumeImage volumeImage, ExporterSourceVolume exporterSourceVolume, MountFlags mountFlags)
   at Replay.Core.Implementation.Export.VirtualMachineExporterWindows.PostProcessExportedVolume(ExporterSourceVolume exporterSourceVolume, IPAddress replayServerAddress)
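
For what it's worth, the decimal code in these messages is just the signed 32-bit form of HRESULT 0x80070003, which wraps Win32 error 3 (ERROR_PATH_NOT_FOUND, "The system cannot find the path specified"), so the export post-processing step appears to be looking for a mount path that is no longer there. A quick way to confirm the mapping, in plain Python with nothing product-specific:

    hresult = -2147024893 & 0xFFFFFFFF    # two's complement -> unsigned: 0x80070003
    print(hex(hresult))                   # 0x80070003
    facility = (hresult >> 16) & 0x1FFF   # 7 = FACILITY_WIN32
    win32_error = hresult & 0xFFFF        # 3 = ERROR_PATH_NOT_FOUND
    print(facility, win32_error)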

  • This seems to have worked. The VM exported and booted. Tonight it will replicate out to my other core, and I imagine that will fix the other copy offsite as well.

    Thanks again for the help. Maybe it will help someone else down the road. I'll mark it solved.

  • Hello,

    I looked into your account issues and believe we've resolved the problem with your support entitlement. You should have no more trouble submitting support tickets, and your support is current. Please let me know if you continue to encounter any issues and I'll look into the situation right away. My apologies for any inconvenience.

  • I typically check this manually (or so I thought) by booting the standby VM. I noticed none of these will run, but older ones will, so something definitely went pear-shaped post-upgrade. I'll run that new base image and see what happens. If it still acts up, I'll take it offline and run check disk on the OS volume. I'm exporting a good VM from a pre-upgrade date now, just so I have something should check disk finish it off for whatever reason.

    Thanks again, much appreciated.

  • I've forwarded your issues to the global support manager for Rapid Recovery. We'll get you straightened out. Sorry it's been such a rough time.

  • That totally depends on what you installed and when. We released 6.2.0 on 4/10/2018 and found a critical defect, so we pulled that agent build and notified everyone. We then released 6.2.1.99 on 8/7/2018, found an issue with it as well, and released 6.2.1.100 on 8/31/2018 with another notification. If you installed 6.2.0 or 6.2.1.99 and either had been running, it's possible that the defect we found could have caused this sort of issue.

    It could also be total coincidence, and there is actually some sort of issue with your production OS volume. If the NTFS file system has an issue, it's possible the system could self-heal, but the backup could pick up that bad NTFS data and include it in the recovery point (RP) chain, causing it not to work properly. This is why we recommend having the "Check Integrity of Recovery Points" setting enabled. That way, each night your recovery points are checked to ensure they are mountable and that the root of each volume can be browsed.
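
    Conceptually, that nightly check boils down to mounting each recovery point and confirming that the root of every volume in it can be enumerated. Below is a minimal sketch of the idea in plain Python, using hypothetical mount paths; it is not the product's actual implementation.

        import os

        # Hypothetical locations where recovery point volumes have already been mounted.
        MOUNTED_VOLUMES = [
            r"C:\MountedRecoveryPoints\Server01\C__",
            r"C:\MountedRecoveryPoints\Server01\D__",
        ]

        def root_is_browsable(mount_point):
            """Return True if the mounted volume's root directory can be listed."""
            try:
                os.listdir(mount_point)
                return True
            except OSError as exc:  # missing path, broken file system metadata, etc.
                print("FAILED:", mount_point, exc)
                return False

        if not all([root_is_browsable(mp) for mp in MOUNTED_VOLUMES]):
            print("At least one recovery point failed the mount/browse check.")

    If a mounted recovery point fails a check like this, the bad NTFS data is presumably already in the RP chain, which lines up with the advice further down about forcing a new base image.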

  • I'll try that. Thanks a bunch for checking this out.

    If you have any contacts in support, I'm really having a time with them. I called over the winter to check on the status of my support (because I could not find it listed anywhere in my panel or elsewhere) and they told me we were good until November. When I put in a ticket the other day, they said I don't have support. I then asked them to check on that, and Evan M e-mailed me to say I'm all set with support. I logged in to find my account connected to another customer in another state, which is where things still stand now. I replied to his e-mail but still have no update.

  • What was the old version? 6.2.1.100. It's 6.3.0.5309 now.

  • What version of the 6.2 agent? 6.2.0, 6.2.1.99, or 6.2.1.100?

  • Yep. Change the schedule for just the OS volume to something different from the data volume. Then force a base image with just the OS volume selected. You have to make sure the volumes show up as grouped separately, so there should be a ">" symbol next to the OS volume. Like this: