recovery point check has failed

Hi all,

I'm looking for some information on how to handle recovery point check failures. We see them only rarely, but when we do, we're not sure exactly what steps we should take.

When this error comes up, does it mean just the most recent snapshot is bad, or is the entire chain all the way back to the base image bad? Will the next snapshot fix the issue, so that we only have to note that the snapshot from that particular date is bad? Does AppAssure mark that snapshot as bad?

It's rarely the data partitions such as C: or D: that are bad; it's almost always the EFI partition, the recovery partition, or the system volume. In our case we don't plan on booting these images, just restoring data, so do we even have to worry about those?

Any guidance is appreciated.

  • Hi ajns:

    Adding to fredbloggs' reply.

    The rule of thumb for determining whether a recovery point is OK is being able to mount it and read it. Sometimes the issues you see during recovery point checks are related to the load on the core (if too much is going on, the operation may time out before one or more partitions are mounted). If only one partition does not mount, the rest of the recovery point is OK. If a recovery point fails the nightly job checks but the next one passes, it usually means the chain is still OK. I have had a few cases where some recovery points, although they were backing up fine, were not mounting because of the partition type that hosted them. Changing the partition type to the regular data partition type (ebd0a0a2-b9e5-4433-87c0-68b6b72699c7) solved the issue.
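
    If you want to try that partition type change, it can be scripted with diskpart. The following is a minimal sketch; disk 0 and partition 1 are placeholders, so run "list disk" and "list partition" first to find the volume that fails to mount, and note that "set id" with a GUID only applies to GPT disks (MBR disks use a two-digit hex id instead):

        rem fix-ptype.txt - run with: diskpart /s fix-ptype.txt
        rem disk 0 and partition 1 are placeholders for the
        rem volume that fails to mount
        select disk 0
        select partition 1
        rem set the GPT partition type to the standard basic data GUID
        set id=ebd0a0a2-b9e5-4433-87c0-68b6b72699c7

    The same change can be made from PowerShell with Set-Partition -DiskNumber 0 -PartitionNumber 1 -GptType "{ebd0a0a2-b9e5-4433-87c0-68b6b72699c7}".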

