Transfer Failed: Call to service method timed out

Hello, 

For the past few days, backups have been failing with this error, and I can't get around it. Both the Core and Agent servers are running 6.7. I have tried raising the time-out setting on the Agent server to 20 minutes, but the jobs still fail. I have also repaired the Agent and installed a patch per Quest support, but I'm still running into the same problem. I appreciate any help on this.

Below is the stack trace:

Server side:

Replay.Agent.Contracts.Transfer.TransferFailedException: The transfer failed: 'Call to service method *link* GET failed: Service method call timed out after 00:09:59.9924681' ---> System.AggregateException: One or more errors occurred. ---> WCFClientBase.ServiceMethodCallTimedOutException: Call to service method **link** GET failed: Service method call timed out after 00:09:59.9924681 ---> Microsoft.Http.HttpStageProcessingException: GetResponse timed out ---> System.TimeoutException: GetResponse timed out ---> System.Net.WebException ---> System.Net.WebException: The request was aborted: The request was canceled.
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at Microsoft.Http.HttpWebRequestTransportStage.HttpTransportAsyncResult.PopulateWebResponse(HttpTransportAsyncResult self, IAsyncResult result, Func`3 getResponse)

--- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
   at Microsoft.Http.AsyncResult.End[TAsyncResult](IAsyncResult result, Boolean throwException)
   at Microsoft.Http.HttpWebRequestTransportStage.EndProcessRequestAndTryGetResponse(IAsyncResult result, HttpResponseMessage& response, Object& state)
   at Microsoft.Http.HttpStageProcessingAsyncResult.FinishRequest(IAsyncResult result)

--- End of inner exception stack trace ---
   at Microsoft.Http.AsyncResult.End[TAsyncResult](IAsyncResult result, Boolean throwException)
   at Microsoft.Http.HttpClient.EndSend(IAsyncResult result)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)

--- End of inner exception stack trace ---
   at WCFClientBase.ClientBase.HandleServiceMethodCallException(Uri uri, String method, TimeSpan elapsedTime, Exception e)
   at WCFClientBase.ClientBase.<>cDisplayClass81_1.b1(Task`1 t)

--- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
   at Replay.Core.Implementation.Transfer.TransferJobHandler.TransferHandler.ExecuteParallel(Action`1 action)
   at Replay.Core.Implementation.Transfer.TransferJobHandler.TransferHandler.TransferTaskInternal()
   at Replay.Core.Implementation.Transfer.TransferJobHandler.TransferHandler.TransferAgentTask()
   at Replay.Core.Implementation.Transfer.TransferJobHandler.TransferHandler.TransferTask()
   at System.Threading.Tasks.Task.Execute()

  • A couple of things to look at. On the client/agent machine, open an elevated command prompt and run:

    vssadmin list writers 

    If any writers report Failed or Waiting for completion, those writers are currently hung, more than likely waiting for a reboot.

    Look at when the job failed within RR, note the date and time, and then open Windows Event Viewer on the agent machine. In the System log, look around the time the backup started and failed. Are there errors? Are there volsnap errors?

    As a test, you can also narrow down the amount of VSS overhead by disabling/excluding all the extra VSS writers. From the RR GUI, choose the agent > Settings > and then exclude the VSS writers.

    There are many things that can go astray with VSS; this is a solid way to start looking into it.
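    A minimal sketch of the writer check above. The `vssadmin` output here is a fabricated sample for illustration (the writer names and states are made up); on the agent itself you would pipe the real `vssadmin list writers` output instead of the here-string:

```shell
# Illustrative sample of what `vssadmin list writers` prints (not real output).
sample="Writer name: 'System Writer'
   State: [1] Stable
   Last error: No error
Writer name: 'SqlServerWriter'
   State: [8] Failed
   Last error: Timed out"

# Flag any writer whose state is Failed or Waiting for completion --
# per the advice above, those are hung and usually need a reboot.
echo "$sample" \
  | grep -B1 -E 'State: \[[0-9]+\] (Failed|Waiting for completion)' \
  | grep 'Writer name'
# prints: Writer name: 'SqlServerWriter'
```

    On the agent itself, a quick equivalent in cmd is `vssadmin list writers | findstr /C:"Failed"`, and for the event-log step, `wevtutil qe System /c:20 /f:text /rd:true` dumps the most recent System events so you can review them around the failure time.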

