Ransomware protections

Like most companies, we are trying our best to NOT get infected by ransomware, but.... let's assume we do get infected.

I understand that the RR Core service locks the backup files from being encrypted; that's a great start. But if it's human-operated and they get into the Core server... game over. Are there any built-in replication-type protections that people are using? What are people doing so they can sleep at night? My first step was to remove both source and target RR servers from the domain and give them strong local admin passwords. I see that these servers have the administrative shares C$ and D$ turned on; do I need those turned on for RR to function? Thanks! Quest could put together a simple document like "10 things to do to help prevent backup encryption." That's what I need!

  • There was a document out here from Quest on exactly this. I'll see if I can find it; there used to be one.

    Either way, you're on the right path. Keep the cores off the domain, replicate your RPs, and don't replicate within the domain. Keep a replication target off your network if at all possible, so you have an onsite copy and an offsite copy. You can also set up your archives to keep a copy offline, too.

  • And here is another:

    support.quest.com/.../common-recommendations-regarding-rapid-recovery-and-ransomware-infections

    Both links are good, and the recommendations above are correct. We went the extra step of disabling RDP and only allowing access to our cores through IPMI on a separate network.

     One thing we did not do is set up archives to keep an offline copy as you suggest. Can you detail how you did this, or point me in the direction of documentation on it? This is something I will configure as soon as I know how. Thanks!

  • Hello, just to share our experiences with Rapid Recovery and ransomware, seeing that we were hit a couple of years back.

    1. We were told when the Rapid Recovery system was installed by Quest that the Repositories couldn't be infected with ransomware; this is categorically false. Our Repositories did get encrypted by the ransomware, and Rapid Recovery became immediately useless, as the ransomware had wiped out our backups.

    2. We were told during setup of the Rapid Recovery boxes to connect the Cores to the domain because it'd make restores easier when copying files from mounts back over to our domain servers.

    3. We believed we were fine because we had 2 Rapid Recovery cores at 2 different sites replicating to each other.

    4. We did have a basic offline backup, essentially robocopy scripts making copies of our file servers onto USB hard drives. This is what saved us.

    We spoke to Quest, who sent an engineer onsite who set up the Rapid Recovery boxes off the domain, and we locked the firewalls right down. We also purchased a tape drive connected to a virtual NetVault server; Rapid Recovery runs an archive at night, then NetVault copies it to tape, and we take the tape offsite.
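    For anyone curious what the robocopy-to-USB approach above amounts to, here is a minimal Python sketch of that kind of offline mirror. The paths are placeholders, and this is a simplified stand-in, not the actual scripts (which used robocopy); it copies files and preserves timestamps, roughly like robocopy without the mirror-delete behavior.

```python
import shutil
from pathlib import Path

def mirror(source: str, dest: str) -> int:
    """Copy every file under source into dest, preserving the
    directory layout and modification times (roughly what a
    robocopy pass does, minus deleting extra files on the
    destination). Returns the number of files copied."""
    src, dst = Path(source), Path(dest)
    copied = 0
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 keeps timestamps
            copied += 1
    return copied

# Example (placeholder paths): mirror("D:/FileShares", "E:/OfflineBackup")
```

    The key point is the same as with the USB drives: once the copy finishes, the destination is physically disconnected, so a compromised admin console can't reach it.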

  • Simon, thanks for sharing your experience. It does seem the most important things are to 1) keep RR off the domain, 2) use strong local passwords, and 3) keep the RR OS patched. I recall when RR being on the domain was considered totally fine! Those days are over. Oh well. Do you think your RR would still have been compromised if it had not been on the domain? I like your new plan (archive). I plan on steps 1, 2, and 3 above, plus locking down the server to only necessary ports. We also have a target core offsite (it's also off the domain, with its own strong password).

  • I think it definitely would have helped. We also had up-to-date antivirus on our servers, including the backup servers, and it simply ignored the ransomware. Strangely, the client antivirus from the same company did protect all of our workstations.

    Your planning should cover two areas: 1) how to make it difficult for the servers to be infected, and 2) how to recover once servers are infected. The ransomware wiped out probably about 50 of our servers, plus NAS boxes connected to the network. The NAS RAID was corrupted, but we managed to rebuild the array somehow (can't remember how) and got some data back that way.

  • You can either do this manually or set up a scheduled archive: https://support.quest.com/rapid-recovery/kb/186241/how-to-schedule-an-archive

    You can set it up to target a resource of your own, or a cloud provider: https://support.quest.com/rapid-recovery/kb/185733/how-to-add-a-cloud-account

    You can also now restore directly from the offline archive; you no longer have to re-import it into the repository first (as you used to have to do).

  • Correct. There's always that desire to 'put things on the domain,' either by preference or because of internal policy. I recall working in years past for companies whose policy was 'if it's plugged into our switch, it's on the domain.' Yet magically, once a cloud provider that is NOT plugged into the same switch is introduced, those policies disappear. That is one of the advantages of cloud data protection providers: they are nearly always NOT on your network. It generally comes at an extra cost, but some can be affordable and cater to your needs.

  • @phuff  Thanks for the reply, but we are already pushing an archive to Azure. The archive runs regularly, but it is not "offline" per se. It is offsite in Azure, but anyone who gains access to the admin console could delete it at any time. What I want is an archive that is literally offline: inaccessible even from the console until required.

    Thanks!

  • @simon peart

    Your config sounds more like what I am thinking needs to be done as well, but it introduces a second backup system into the environment, and at considerable cost.

    Thanks!