
Virtual Standby Exports Suddenly Fail

Hi All,

I have two machines that I export offsite as standby VMs, and both have suddenly stopped exporting. None of the permissions have changed and the firewalls are down. I did do a round of Windows updates the day before, but I was still exporting right up until this afternoon. I realize this may not be an RR issue per se, but any ideas would be appreciated. I am unable to even connect far enough to add a new machine to export, yet I can remote desktop to the machine and ping it.

Exception chain:

 

  • Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
  • Cannot obtain access to a remote machine 'offsite server'. This error could be caused by one of the following:
    • The user name or password is incorrect.
    • The specified user does not have administrator privileges on the remote machine.
    • The specified user does not have access to the remote machine via Windows Management Instrumentation.
    • Cannot connect to the Hyper-V server 'offsite server' using the specified credentials. Please make sure the user name and password are correct.
  • Hi Corrigun:

    The "access is denied" message raises a connectivity question that needs to be addressed first before going into more depth. Since replication runs on port 8006, the simplest test is the sequence below (a small script that automates the same check follows the steps):

    1. Stop the Core service on the source core.

    2. From the source, run a telnet client pointing to the target core on port 8006.

    If you get a blank CMD console that cannot be closed with CTRL-C, it means that you have connectivity.

    3. Stop the Core service on the target and run the telnet test again from the target.

    If you still have connectivity, it means that something has hijacked port 8006 on the target (possibly a recently deployed Apache Tomcat server).

    4. Telnet again from the target to the source (both Core services stopped).

    Same as above: if you still have connectivity, something has hijacked port 8006, this time on the source.

    5. Start the source Core service (so you have a listener on port 8006) and repeat the test the other way around.

    If you have connectivity now (and you did not at #4), it means that connectivity is not your culprit and more troubleshooting is needed.
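
    If you would rather script the check than juggle telnet windows, here is a minimal sketch of the same test (Python 3, standard library only; the host name below is a placeholder, substitute your own). Run it from whichever side you are testing, with the relevant Core service stopped, just as in the steps above:

        # Minimal TCP check: is anything accepting connections on port 8006?
        # TARGET_HOST is a placeholder; point it at the core you are testing.
        import socket

        TARGET_HOST = "offsite-core.example.local"  # hypothetical name
        PORT = 8006  # default replication port mentioned above

        def port_is_open(host, port, timeout=5):
            """Return True if a TCP connection to host:port succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        if port_is_open(TARGET_HOST, PORT):
            # With the Core service stopped on that machine, a successful
            # connection means some other process owns port 8006 there.
            print(f"Something is listening on {TARGET_HOST}:{PORT}")
        else:
            print(f"No listener on {TARGET_HOST}:{PORT} (or the port is blocked)")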

    If you determine that there is something listening on port 8006 on either side and you cannot shut that application down, you can change the replication port as shown below. Please note that the replication port is recorded on the source core only, so there is nothing to change on the target.

    [screenshot: changing the replication port on the source core]

    Hope that this helps.

  • Hi Tudor,

    If the ports on the core were being used by another process, wouldn't the protected machines lose connectivity? The backups are working; only the offsite VM exports are failing.

    I will try this regardless, but it seems to me that the offsite machine is the one with the issue.
  • Hi Corrigun,

    Have you double-checked the user account that was used when setting up virtual standby, to make sure it is not locked out and its password has not been changed? Either of those would cause an access-denied error.

    The other thing to check is the WMI troubleshooting in this article - msdn.microsoft.com/.../aa394603(v=vs.85).aspx. It lists that error code specifically, with steps for how to resolve it. It's very possible that Windows Update changed some WMI permissions and now your user doesn't have the remote access it needs to make the connections/changes required for export.
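
    As a quick sanity check, something along these lines can confirm whether that account can reach WMI on the Hyper-V host at all. This is only a sketch: it assumes the third-party Python "wmi" package (pip install wmi) on a Windows machine, and the host name and account below are placeholders, not values from this thread:

        # Rough remote-WMI connectivity test using the third-party "wmi" package.
        # HOST, USER and PASSWORD are placeholders; use the same account that
        # the virtual standby export wizard is configured with.
        import wmi

        HOST = "offsite-server"        # hypothetical Hyper-V host name
        USER = r"DOMAIN\exportuser"    # hypothetical account
        PASSWORD = "..."               # fill in when testing

        try:
            conn = wmi.WMI(computer=HOST, user=USER, password=PASSWORD)
            # Any simple query proves DCOM/WMI access works for this account.
            for os_info in conn.Win32_OperatingSystem():
                print("WMI connection OK:", os_info.Caption)
        except wmi.x_wmi as exc:
            # An access-denied failure here points at DCOM/WMI permissions
            # rather than at the export job itself.
            print("WMI connection failed:", exc)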

    -Tim
  • Just to follow up with everyone. Thanks for the suggestions.

    It does not seem to be a permissions or WMI thing, although I did add the admin accounts to try to remedy the situation. It's not a blocked port or firewall issue either, as far as I can tell.

    This definitely happened after a large batch of updates to two new 2012 R2 servers that had working RR connections beforehand.

    I ended up deleting both standby VMs and recreating them. To get the first one reconnected I had to use a DOMAIN\USER format in the export wizard. For the second VM (same core exporting to the exact same hypervisor) I had to use the server's IP address as opposed to the machine name.

    I have no explanation for why or how this happened, but as a heads-up to future users: you may have to try combinations of machine name or IP address, and USER or DOMAIN\USER, to make a connection, depending on the machines you are using. A quick name-resolution check (sketched below) can help narrow down which form is likely to work.
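
    Since the IP address worked where the machine name did not, it is worth checking whether the name actually resolves to the host you expect. A minimal sketch (Python 3, standard library only; the host name is a placeholder):

        # Forward/reverse name-resolution check for the export target.
        # HOSTNAME is a placeholder; use the name you typed into the wizard.
        import socket

        HOSTNAME = "offsite-server"   # hypothetical machine name

        try:
            ip = socket.gethostbyname(HOSTNAME)
            print(f"{HOSTNAME} resolves to {ip}")
            try:
                reverse_name, _, _ = socket.gethostbyaddr(ip)
                print(f"{ip} resolves back to {reverse_name}")
            except OSError:
                print(f"No reverse (PTR) record for {ip}")
        except OSError:
            print(f"{HOSTNAME} does not resolve; try the IP address instead")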
  • After a few successful exports I'm back to square one with a new error:

    "an existing connection was forcibly closed by the remote host..."

    Starting over again.

    EDIT: We have a bad fiber switch. Not sure how this would present as a permissions issue, but fingers crossed.

  • It appears that a bad switch was the cause of my timeout and permission issues. I am still seeing odd behavior in transfers, but I have started a new thread for that. I will flag this one as solved.