Live restore large volume to new OS on different server?

Here's the scenario, I want to see if I'm thinking straight here:

I have a file server with a very large data partition of almost 5 TB. It's running Windows Storage Server 2008 and getting low on physical space, so I want to move the data to something more modern, specifically Server 2012 R2. The server name needs to stay the same as the old one because our document management software depends on hard-coded UNC paths that aren't easy to change.

My plan is to create a new 2012 VM on different hardware, turn off the existing 2008 server, give the 2012 server the same name and IP as the old one, and install the Rapid Recovery (RR) agent. Then I'd kick off a Live Recovery for the data partition, keeping downtime to a minimum so users can access the data even while the restore takes days to complete. Is there any reason that would not work?
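For the rename-and-readdress step, a rough sketch of the commands on the new 2012 R2 VM might look like the following. The server name, interface name, and addresses here are placeholders, not values from this environment; run them only after the old 2008 box is powered off so there's no name or IP conflict on the network.

```shell
REM Assign the old server's static IP to the new VM
REM (replace "Ethernet" and the addresses with your own):
netsh interface ipv4 set address name="Ethernet" static 192.168.1.50 255.255.255.0 192.168.1.1

REM Give the new server the old server's name, then reboot.
REM FILESRV01 is a hypothetical name; for a domain-joined rename,
REM add /userd and /passwordd with domain admin credentials:
netdom renamecomputer %COMPUTERNAME% /newname:FILESRV01 /force /reboot
```

After the reboot, clients resolving the old UNC paths (\\FILESRV01\share) should land on the new machine, assuming DNS and AD have caught up with the change.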

Assuming that plan works, what happens after the restore? Does RR take a full new base image of this huge server, or is it smart enough to know that only the C: drive changed and that the D: drive only needs an incremental backup?

Thank you in advance for any thoughts and advice.

  • Another option would be to use robocopy (or something similar) with a low retry count (/R:0, I think). Robocopy skips files it cannot get at and, on later runs, only transfers what has not already been copied. For example:

    robocopy run #1 goes for 24 hours and gets 80% (it skips files that are in use)

    robocopy run #2 tries to copy any new files plus the 20% it missed in the first run (let's say it gets another 10%)

    Now you stop access to the main server (so no files are in use) and run robocopy pass #3 to pick up the remainder.

    You don't have issues with deduplication or middlemen (RR), and you don't have to worry about disk size mismatches or other issues. Run robocopy as many times as you want until the amount of data left to transfer is as small as possible.
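    The multi-pass approach above might look roughly like this. The \\OLDFS source path, the D:\Data destination, and the log file names are placeholders for this environment; adjust the switches to taste.

    ```shell
    REM Pass 1: /E copies subdirectories (including empty ones),
    REM /R:0 /W:0 makes robocopy skip in-use files immediately instead
    REM of retrying, /COPY:DATSO preserves data, attributes, timestamps,
    REM security, and owner. The log records what was skipped.
    robocopy \\OLDFS\D$ D:\Data /E /R:0 /W:0 /COPY:DATSO /LOG:C:\robocopy-pass1.log

    REM Pass 2 (and later): robocopy skips files that are already up to
    REM date, so reruns only transfer new, changed, or previously locked files.
    robocopy \\OLDFS\D$ D:\Data /E /R:0 /W:0 /COPY:DATSO /LOG:C:\robocopy-pass2.log

    REM Final pass after stopping user access: /MIR mirrors the tree,
    REM also deleting files removed from the source since earlier passes,
    REM so the destination ends up identical to the source.
    robocopy \\OLDFS\D$ D:\Data /MIR /R:0 /W:0 /COPY:DATSO /LOG:C:\robocopy-final.log
    ```

    One caveat worth checking in the logs: /MIR deletes anything at the destination that no longer exists at the source, so make sure the destination path is dedicated to this copy before the final pass.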

