In the earlier blogs in this series, I discussed the differences between High Availability and Disaster Recovery, and how SharePlex can be used in place of RAC for High Availability.  In this blog, I show how SharePlex can be used to complement RAC and enhance your High Availability environment.

Use Cases for Oracle RAC

Oracle RAC (Real Application Clusters) can be used to support many different scenarios.  Some of these are detailed below.

High Availability

Oracle RAC is often used to provide seamless high availability for mission-critical applications.   Nodes in a RAC cluster can be configured to support failover of an application from one node to another in the event a node fails or becomes inaccessible.

Workload Isolation

RAC can also be used for workload isolation.  Each node in a RAC cluster has its own buffers and cache, so it's possible to isolate certain workloads.  For example, reporting can be isolated from your OLTP environment.

Workload Balancing and Distribution

If your workload exceeds the capacity of a single instance of Oracle, RAC can be an effective way to scale out or add additional resources.  With load balancing in either hardware or software, RAC allows transaction workload to be distributed among all or some of the nodes in the environment.

RAC Shortcomings

For all its benefits, there are a few shortcomings with RAC.  While all of these can be mitigated, the cost of mitigation, such as redundant power or redundant networks, can be prohibitive.

Single Database

Each RAC cluster has a single database.  If that database becomes unavailable, the entire cluster is unavailable.  Also, because all the data is stored in that single database, it cannot be optimized for different workloads.  For example, an index added to support reporting may adversely impact update or insert performance.

Instance Coupling

Because of the dependencies between instances, all nodes in a RAC cluster must be at the same major version, and, in some cases, zero-downtime patches and upgrades are not possible.

Block Movement Between Nodes

With only a single database, database block consistency must be maintained across all nodes.  This can increase the network traffic between nodes, and slow access as in-memory blocks are reconciled across nodes.

Infrastructure Distance Limitations

Because the nodes all share the same database, all nodes and the storage for the database typically reside in the same datacenter, sharing infrastructure like network, power and cooling.   Failure of any of these infrastructure components could render the entire RAC unavailable.

The SharePlex Solution

All the risks discussed above can easily and cost-effectively be mitigated by using SharePlex to complement your RAC environment.  Here’s how SharePlex overcomes each of the shortcomings mentioned above.

Single Database

SharePlex keeps replicas of your database synchronized in near real time.  This eliminates the database as a single point of failure.  Should your source database become unavailable, your applications can "fail over" to the target database and continue operations.  Using "reverse replication", you can capture transactions applied on the target database so they can be applied to the original source database as soon as it is available.  This ensures complete continuity of operations with little or no data loss.

The target database can also be configured independently of the source.  For example, the database block size can be different to help support reporting.  Additional indexes could also be added to the target with no impact on source performance.
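As a sketch of this idea, a reporting index could be created on the SharePlex target only; the connect string, schema, table, and index names below are all hypothetical, not from the SharePlex documentation:

```shell
# Illustrative only: add a reporting index on the target database.
# Because this index exists only on the target, source insert/update
# performance is unaffected.
sqlplus report_admin@TARGETDB <<'EOF'
CREATE INDEX sales_rpt_region_idx
  ON sales (region, sale_date);
EOF
```

The same approach applies to other target-side tuning, such as a different block size or additional materialized views, since SharePlex does not require the target to mirror the source's physical design.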

Instance Coupling

Since the SharePlex target database is a separate database instance, there’s no tight coupling between the source and target.  In fact, the target can run on different, perhaps less expensive, hardware and can even be on a different version of Oracle.

Block Movement Between Nodes

Since the SharePlex target database is separate and not coupled to the source, there is no contention between instances for use of database blocks.

Infrastructure Distance Limitations

Since SharePlex replicates changes from the source to the target and is always asynchronous, there are no distance limitations.  The target can be in the same data center, across town, or halfway around the world.

Configuring SharePlex in a RAC Environment

For this example, I’ll assume the system is configured as in this diagram:

We use SharePlex to replicate between Silo A and Silo B to provide recoverability in the event either Silo is unavailable.   In a RAC environment, SharePlex runs on one node of the cluster.  So, we’ll also need to provide failover for the SharePlex processes between the instances in each Silo.

Configure SharePlex for RAC

To configure SharePlex for failover between nodes in a RAC environment, you'll need to take a few additional steps, including setting up a shared directory to be used for the SharePlex variable directory, and setting up a virtual IP address and hostname that can move between RAC nodes.  Details for this setup are covered in the SharePlex Installation and Setup Guide for an Oracle Source.

Once you’ve completed the steps outlined in the guide, you should be able to set up CRS resources to start and stop SharePlex and move the process between nodes.
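As a rough sketch of what that CRS registration might look like, the commands below register SharePlex as a Clusterware resource; the resource name, action script path, and attribute values are illustrative, and the action script is assumed to implement the standard start/stop/check entry points:

```shell
# Sketch only: register SharePlex as an Oracle Clusterware resource.
# Paths and names are hypothetical examples, not SharePlex defaults.
crsctl add resource shareplex \
  -type cluster_resource \
  -attr "ACTION_SCRIPT=/u01/app/shareplex/bin/shareplex_action.sh, \
CHECK_INTERVAL=30, RESTART_ATTEMPTS=2"

# Start the resource, and relocate it to another node during maintenance.
crsctl start resource shareplex
crsctl relocate resource shareplex -n racnode2
```

With a resource like this in place, Clusterware restarts SharePlex automatically if its node fails, moving it (along with the virtual IP) to a surviving node.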

Configure Silo Failover

Once you have SharePlex replicating between the two silos, you’re ready to configure your applications to move between silos in the event the entire silo becomes unavailable.   As discussed in my earlier blog, this can be done using scripts to move your applications from one silo to another, or hardware or software load balancing.
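A script-based approach could be as simple as the following sketch, which checks whether Silo A is reachable and repoints the application if it is not; the TNS aliases, file paths, and failover action shown are all assumptions for illustration, and a production script would also coordinate stopping replication and enabling reverse replication back to Silo A:

```shell
#!/bin/sh
# Sketch of a silo failover check. SILOA and SILOB are hypothetical
# TNS aliases; the tnsnames symlink swap is one of several possible
# ways to repoint an application.
if tnsping SILOA > /dev/null 2>&1; then
  echo "Silo A reachable; no failover needed."
else
  echo "Silo A unreachable; repointing application to Silo B."
  ln -sf /etc/oracle/tnsnames.silob.ora /etc/oracle/tnsnames.ora
  # ...restart application services against Silo B here...
fi
```

Hardware or software load balancers can accomplish the same switch transparently, without touching client configuration.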

Once you’ve configured your failover processes, don’t neglect testing everything, preferably in the same environment that you’ll be using for production.   Proper testing will ensure that your processes will work when they’re needed.

I hope this series of blogs exploring how SharePlex can help you enhance your High Availability and Disaster Recovery processes has been helpful.  If you’d like to learn more about SharePlex and discover how it can work in your environment, please see our website.





