
Best Practice To Free Up Space Post Upgrade

Hi All,

I just upgraded a file server and obviously RR took a new snapshot. My question is: what is the recommended method for gaining some space back on the repository? I have another protected server to upgrade soon, and I need the room for yet another new snapshot.

Do I shorten the retention period temporarily? I use the default retention schedule. If I delete my original snapshot, won't it kill the whole old chain?

Thanks

  • If you delete a base, it will also delete every incremental (INC) from that base forward to the next base, so it's not a good option. You could archive the base and its INCs before deleting, but due to performance issues that's not something I would typically consider.

    Shortening the retention may buy you a little space, but probably not enough.

    We go through this a lot, and here are my thoughts. Everything you can think of is a band-aid trying to fix a broken aspect of the product. There is really only one option in my mind:

    Add more space to the repo. That could mean either creating a new repo or adding an extent to the current one.

  • I have another repo that replicates from this one. If I wipe out the base on one repo, would it leave the other one alone? One would have all the new points and the other all the old ones.

  • If you're upgrading the OS on an agent, a base image is definitely expected. That's something we designed into the software to ensure the integrity of the backups after the OS upgrade, so I'm not at all surprised you got a base image.

    As for how to deal with the used space in the repository, I'd recommend archiving to cloud storage. You can use S3/Glacier, Azure Cool/Archive tiers, or even Google Cloud storage to archive the data out and store it long term. Depending on the storage tier you choose and how often you need access to the archived data, you could store a significant amount of data securely in the cloud for a very reasonable price. Then, depending on how long you need to keep that data, you could purge it and stop incurring cost.

    As an example, Google gives $300 in free credit when you set up a new account. If you create a bucket with Nearline storage as the default class, it's about $0.01 per GB per month. Say you need to archive 10 TB of actual used repo space: archiving that to the GCloud bucket will cost you just over $100 per month, so you could essentially store 10 TB in Google for about three months for free.

    Azure gives $200 of free credit with a new account and has similar pricing to Google, and AWS runs credit promotions too. If you're just looking for a short-term place to park data, that seems like the best option to me.

    If you need to store that data for a long period of time, you'd need to run a cost comparison of cloud versus local storage; I'd bet the break-even point is somewhere in the 24-month range (rough math in the sketch below).
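
    Here's the arithmetic above written out as a rough sketch. The 10 TB figure and Nearline-class rate come from the example; the local NAS price is a pure assumption and retrieval/egress fees are ignored, so plug in your own quotes:

    ```python
    # Back-of-the-envelope cloud archive cost, using the numbers assumed above.
    ARCHIVE_TB = 10                    # assumed amount of used repo space to archive
    PRICE_PER_GB_MONTH = 0.01          # assumed Nearline-class storage rate, $ per GB-month
    FREE_CREDIT = 300.00               # assumed new-account credit
    LOCAL_NAS_COST = 2500.00           # hypothetical one-time cost of equivalent local storage

    archive_gb = ARCHIVE_TB * 1024
    monthly_cost = archive_gb * PRICE_PER_GB_MONTH
    free_months = FREE_CREDIT / monthly_cost

    # Break-even: after how many months does the cumulative cloud bill
    # exceed the one-time local purchase?
    break_even_months = LOCAL_NAS_COST / monthly_cost

    print(f"Monthly cloud cost for {ARCHIVE_TB} TB: ${monthly_cost:,.2f}")
    print(f"Months covered by the free credit:   {free_months:.1f}")
    print(f"Cloud vs. local break-even:          ~{break_even_months:.0f} months")
    ```

    With those assumed numbers the monthly bill comes out around $102, the free credit covers roughly three months, and the break-even against a one-time local purchase lands near the 24-month mark mentioned above.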

  • The recovery point chains on the two cores can be different. As long as the replication is currently in sync, deleting the old base image and incrementals from one core would not remove them from the other.

    The only way it could cause an issue is if you break replication after the delete and then reconfigure it. In that case, if the core with the old base image chain is the source core, it will try to replicate all the missing data over to the target core so that both have the same data. But as long as you leave the replication configured, it won't be an issue.

  • Hi Tim,

    I'm forbidden by policy from storing data in the cloud.

  • Well, then I guess you're stuck with local storage.

  • So if I archive that off to a NAS, will it remove the points from the repository? Do I need to pick a timeline, or do I just include the old base image and it will take the rest? Everything up to the new base is pretty much what I need.

  • Archive does not delete anything from the repository. It makes a copy of the recovery points. So your steps would be:

    1. Archive the recovery points in the date range that you want to delete. The Create Archive wizard is pretty straightforward.
    2. Validate that the archive is successful and usable with the "check archive" function in the Core.
    3. Use the delete range function to remove the recovery points that you archived. (The order matters; see the generic sketch of the pattern below.)
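
    To be clear about why the steps go in that order: nothing gets deleted until the archived copy has been verified. In Rapid Recovery the archive, check-archive, and delete-range steps all happen in the Core console; the snippet below is only a generic, file-level illustration of the same copy/verify/delete pattern, with purely hypothetical paths (it is not how RR actually stores or removes recovery points):

    ```python
    # Generic copy -> verify -> delete pattern, NOT Rapid Recovery commands.
    import hashlib
    import shutil
    from pathlib import Path

    def sha256(path: Path) -> str:
        """Hash a file in chunks so large archives don't exhaust memory."""
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def archive_then_delete(source_dir: Path, nas_dir: Path) -> None:
        nas_dir.mkdir(parents=True, exist_ok=True)
        for src in sorted(source_dir.iterdir()):
            if not src.is_file():
                continue
            dst = nas_dir / src.name
            shutil.copy2(src, dst)                 # 1. archive (copy) the data
            if sha256(src) != sha256(dst):         # 2. validate the copy
                raise RuntimeError(f"Checksum mismatch for {src.name}; not deleting")
            src.unlink()                           # 3. only then delete the original

    # Hypothetical paths, for illustration only.
    # archive_then_delete(Path("/repo/old_chain"), Path("/mnt/nas/rr_archive"))
    ```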