
Difference in recovery points between core and replication server

I have a core server and a replication server, both running 6.1.1.137.

I started the replication process, but not all recovery points were copied.

 

Here is the list from the main core server:

 

This is my replication core:

 

 

How can I have the same recovery points on both?

  • The difference in recovery points between the two cores looks to me like a difference in how the retention policy is applied. Are the two cores in separate time zones?

    The difference in compression rate is a function of your dedupe cache sizing. By default Rapid Recovery uses a 1.5 GB dedupe cache, which can achieve 100% dedupe for about 500 GB of unique data. Once you have passed 500 GB of unique data the cache is maxed out and duplicate blocks start being written to the repository. On the source side, data is deduplicated as it is stored in the repository; when you replicate, the same dedupe process runs again on the target core and can improve dedupe even further. This tells me your dedupe cache was not large enough to get efficient dedupe on the source core, which is why replication was able to save you additional repository space.

    In this situation I would recommend increasing the dedupe cache size on both the source and target cores. We have a KB article on how to size the dedupe cache properly here - support.quest.com/.../134726. Generally we recommend 1 GB of dedupe cache per 1 TB of protected data. You only have 1.5 TB of protected data, so a 1.5 GB dedupe cache would be the nominal recommendation, but real-world data shows that you actually need more, so I would set the dedupe cache to 3 or 4 GB on each core (see the sizing sketch at the end of this reply). Please note that increasing the dedupe cache directly increases the amount of RAM consumed by the core, since the cache is loaded into RAM.

    The other thing to note is that increasing the dedupe cache does not re-deduplicate the data already in the repository; it only affects new data coming into the core, so it will probably take a few months of backups and rollup removing duplicate data before its effects are truly visible. The alternative is to increase the dedupe cache and then run the repository optimization job, which reprocesses all of the data in the repository and, with the larger cache, should decrease overall repository usage. However, that job takes a significant amount of time and blocks other jobs while it runs, so it may not be practical given your time constraints.
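    As a quick illustration of the sizing arithmetic above, here is a minimal sketch (Python, purely for the math). The 1 GB-per-TB rule and the roughly 500 GB covered by the default cache are the figures from this reply, not values pulled from Quest documentation, and the headroom factor is my own assumption based on the "real world data" comment; the actual cache size is changed in the Core settings, not by running a script.

    ```python
    # Rough dedupe cache sizing sketch. Illustrative only: the ratios below are
    # taken from this reply, not from Quest documentation, and the cache size
    # itself is changed in the Core settings, not by running this script.

    DEFAULT_CACHE_GB = 1.5             # default Rapid Recovery dedupe cache size
    UNIQUE_GB_PER_DEFAULT_CACHE = 500  # unique data the default cache can fully dedupe

    def recommended_cache_gb(protected_tb, headroom=2.0):
        """Rule of thumb from this reply: 1 GB of cache per 1 TB of protected
        data, multiplied by a headroom factor because real-world data tends to
        need more than the nominal ratio."""
        return protected_tb * 1.0 * headroom

    protected_tb = 1.5  # protected data in this thread

    print(f"Nominal (1 GB per TB):  {protected_tb * 1.0:.1f} GB")
    print(f"With 2x headroom:       {recommended_cache_gb(protected_tb, 2.0):.1f} GB")
    print(f"With 2.5x headroom:     {recommended_cache_gb(protected_tb, 2.5):.2f} GB  (~3-4 GB as suggested)")
    print(f"Default cache coverage: {DEFAULT_CACHE_GB} GB caches about "
          f"{UNIQUE_GB_PER_DEFAULT_CACHE} GB of unique data")
    ```

    Treat the KB article linked above as the authoritative sizing guide; the sketch just shows how the 3-4 GB suggestion follows from the rule of thumb plus headroom.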