SharePlex Tuning 101 - A Journey, Not A Destination

In recent SharePlex blogs, my colleagues and I have discussed using the Analyze Config command to gather information for tuning SharePlex, and the importance of tuning the target database.  In this blog, I’ll attempt to put it all together.  This won’t be a “cookbook” with specifics for any particular environment or use case; I’ll leave that for future blogs.  The point of this blog is to help you develop a high-level understanding of what you need to consider as you embark on your tuning odyssey.

Set SLAs

The whole point of tuning any IT system is to achieve performance levels that are acceptable to the customer.  That means the first thing we have to do is establish Service Level Agreements (SLAs) that clearly define how to measure performance and what levels are acceptable.  With SharePlex, the most common performance measure is latency: the time between when a transaction is committed on the source system and when the results of that transaction (the changes) are visible on the target system.
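As a simple illustration of the measurement itself (this is not part of SharePlex; the timestamps are hypothetical and would in practice come from something like a heartbeat table on the source that you query on the target), latency against an SLA is just a comparison of two points in time:

```python
from datetime import datetime, timedelta

def replication_latency(source_commit: datetime, target_visible: datetime) -> timedelta:
    """Latency = time from commit on the source to visibility on the target."""
    return target_visible - source_commit

def meets_sla(latency: timedelta, sla: timedelta = timedelta(minutes=10)) -> bool:
    """True if the measured latency is within the agreed SLA window."""
    return latency <= sla

# Hypothetical timestamps for one replicated transaction
committed = datetime(2024, 6, 1, 12, 0, 0)   # commit observed on the source
visible   = datetime(2024, 6, 1, 12, 3, 30)  # change visible on the target

lat = replication_latency(committed, visible)
print(lat, meets_sla(lat))  # 0:03:30 True
```

The point of writing it down this explicitly is that both parties to the SLA agree on the two timestamps being compared, which avoids arguments later about what “latency” means.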

Tuning “Depends”

The first thing to remember about tuning is that “it depends.”  One example of “it depends” is the specific use case.  If you’re using SharePlex to replicate data to a decision support system, you probably want the most up-to-date data possible.  Many of our customers have latency SLAs for these sorts of systems in the 5 to 10 minute range.  On the other hand, if you’re doing a migration, as long as the system eventually catches up, you might not care if you had latencies of 30 minutes or more during the initial load or at times of high transaction volume.

So I Have My SLA – Now What?

Once you understand your SLAs and measures, you can begin to improve performance by making specific changes to reduce bottlenecks.   Two things to remember here are to make changes one at a time, and to make sure you have ways to measure the effect of your changes.   In the sections below, I’ll cover some of the high-level areas to consider.

Tune the Target Database

One of the biggest influences on latency is the tuning of the target database, which my colleague Mike Shurtz covers in his blog.  The target database needs to be tuned to accept the level of changes and the transaction mix that will be generated from the source.  An extreme example of this comes from a customer with an 8TB database processing up to 500GB of transactions per hour.  When we started our tuning exercise, the source database had data spread across more than 1,000 data files on 200 logical disk volumes.  The target had only 160 data files on 1 logical disk volume.  Adding volumes and data files dramatically improved performance.
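To make the kind of change involved concrete, here is a sketch of spreading a busy Oracle tablespace across additional data files on separate volumes.  The tablespace name, file paths, and sizes are all hypothetical; the right values depend entirely on your environment and storage layout.

```sql
-- Add data files on separate logical volumes so writes are spread
-- across more devices/paths; names and sizes are illustrative only.
ALTER TABLESPACE app_data
  ADD DATAFILE '/u02/oradata/TGT/app_data02.dbf' SIZE 32G;

ALTER TABLESPACE app_data
  ADD DATAFILE '/u03/oradata/TGT/app_data03.dbf' SIZE 32G;
```

The goal is simply to give the target as much parallel write capacity as the source workload demands.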

If you’re posting to a non-Oracle database, an additional factor to consider is the ODBC driver you’re using.  For some databases, there are multiple ODBC drivers available.  Make sure you’re using one that has been certified to work with SharePlex.

Tune the Network

Often, the network between source and target contributes to latency.  If it takes a byte 500 milliseconds to get from the source to the target, the latency on the target will be at least that half second, plus whatever time it takes the target database to write the transaction.  SharePlex can compress network traffic, which may be useful if you have an inherently slow or bandwidth-constrained network.
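As a sketch of how compression is turned on (confirm the parameter name, valid values, and restart requirements in the SharePlex Reference Guide for your version), you set the export compression parameter from sp_ctrl and restart the Export process so it takes effect:

```
sp_ctrl> set param SP_XPT_ENABLE_COMPRESSION 1
sp_ctrl> stop export
sp_ctrl> start export
```

Compression trades CPU on the source for reduced bytes on the wire, so it helps most when the network, not the CPU, is the bottleneck.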

Add POST Queues

After you’ve addressed the target database and the network, the next most common contributors to latency are sheer transaction volume and the presence of long-running transactions.  You can use the Analyze Config command, detailed in an earlier blog, to get a complete breakdown of transaction volume by table and by operation.  Often, putting tables with high volumes into separate POST queues can improve performance.  Details on how to set up named POST queues can be found in the SharePlex Administrator’s Guide.
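As a hedged sketch of what that looks like (the schema, table names, hosts, SIDs, and queue name here are all hypothetical; see the Administrator’s Guide for the authoritative syntax), a named post queue is assigned in the SharePlex configuration file by adding the queue name to the routing map for the busy table:

```
datasource:o.SRCSID

# most tables share the default post queue
scott.dept      scott.dept      target01@o.TGTSID

# high-volume table routed to its own named post queue "hivol"
scott.orders    scott.orders    target01:hivol@o.TGTSID
```

Splitting the hottest tables out this way lets multiple Post processes apply changes in parallel instead of serializing everything through one queue.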

Tune Memory

As systems get busier, SharePlex may need to rely on its queues, especially on the source system, as buffers during especially busy times.  If SharePlex is forced to spill queues to disk, this can have an impact on performance.  If you have additional memory available, you can increase the size of the CAPTURE and EXPORT queues by using the SP_QUE_Q_SHMSIZE parameter, which is documented in the SharePlex Reference Guide.
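As a sketch only (the value’s units, the default, and exactly which processes must be restarted vary by version, so confirm all of this in the Reference Guide before changing it), the parameter is raised from sp_ctrl and takes effect after a restart:

```
sp_ctrl> set param SP_QUE_Q_SHMSIZE <new_size>
sp_ctrl> shutdown    # then restart sp_cop so the new queue size is used
```

The trade-off is straightforward: more shared memory for the queues means less frequent spilling to disk during bursts, at the cost of memory that is then unavailable to the databases and the OS.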

Beyond the Basics

I hope this review of the basics of tuning SharePlex has been informative.  Because tuning is so specific to each site and use case, we have developed some specific Support offerings to assist if you need additional tuning.

SharePlex Health Check

The SharePlex Health Check is designed to help you better understand your SharePlex environment. During the health check, our support engineers provide a technical assessment of your SharePlex deployment to identify and prioritize system improvements. We will share expert knowledge with your IT staff members to make sure everyone is aware of the full potential of your solution and, ultimately, to ensure that your deployment is working as efficiently as possible.

Post-Performance Tuning Service

With the SharePlex Post-Performance Tuning Service, our technical experts provide basic tuning of the Post process of your SharePlex solution and make recommendations for best practices to improve your SharePlex operations. You’ll see optimal performance gains, be able to better manage your application data, and address real-time production issues.

Post-Performance Load-Splitting Tuning Service

Complex queue architecture can create challenges. With the SharePlex Post-Performance Load-Splitting Tuning Service, our technical experts will assess whether your system is properly configured and tuned for load splitting and make recommendations using load-splitting best practices for your specific environment. You’ll then have the proper setup to achieve optimal throughput for real-time SharePlex replication and be able to better manage your application load and reduce latency.

About the Author
Clay Jackson is a Database Systems Consultant for Quest, specializing in Database Performance Management and Replication Tools. Prior to joining Quest, Jackson was the DBA Manager at Darigold. He also...