What level of supplemental logging is required by SharePlex and how much overhead does it impose on the database?

SharePlex requires minimal database-level supplemental logging. This level adds row ID information to the redo logs, which eliminates the need for SharePlex to build and maintain a row ID map. Since the introduction of supplemental logging, SharePlex activation performance has improved substantially because it is no longer necessary to analyze tables for chained rows or to build the row ID map.

To check whether minimal supplemental logging is enabled in the database:

SQL> select supplemental_log_data_min from v$database;

A common question is whether supplemental logging can be enabled only on the tables that SharePlex is replicating. Minimal supplemental logging must be enabled at the database level before supplemental logging can be enabled at the table level. To enable minimal supplemental logging:

SQL> alter database add supplemental log data;
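With minimal logging in place at the database level, table-level supplemental logging can then be added for individual replicated tables. A sketch, using a hypothetical table named EMPLOYEES:

```sql
-- Database-level minimal supplemental logging must already be enabled.
-- EMPLOYEES is a hypothetical table name used for illustration.
alter table employees add supplemental log data (primary key) columns;

-- Verify which supplemental log groups now exist for the table:
select log_group_name, log_group_type
  from dba_log_groups
 where table_name = 'EMPLOYEES';
```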

Some customers have expressed concern that supplemental logging generates overhead on the database. Although Oracle should have the final say on how much overhead supplemental logging imposes, minimal-level logging simply has Oracle add head row ID information, about 20 bytes, to each change operation. The overhead is more likely to appear in redo and archive log storage than in processing, since the data is already available to Oracle. In general, if you generate 100 GB of redo per day, that might increase to 101 - 103 GB per day with minimal supplemental logging enabled. The performance impact has been found to be minimal, less than 1%.
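The back-of-envelope arithmetic behind those figures can be sketched as a query; the change rate of 100 million operations per day is an illustrative assumption, not a measurement:

```sql
-- Extra redo per day from ~20 bytes of rowid information per change.
-- 100,000,000 changes/day is a hypothetical workload for illustration.
select 100000000 * 20 / 1024 / 1024 / 1024 as extra_redo_gb
  from dual;
-- Roughly 2 GB, consistent with 100 GB/day growing to 101 - 103 GB/day.
```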

The overhead of table-level (unique key, primary key) supplemental logging depends on the key itself. A key consisting of a single numeric column is cheaper to log than, for example, a composite key of 15 columns including CHAR(2000) values; the latter is more expensive from a space perspective. Some key data will always be logged, but how much depends on the definition of the table.

Quest recommends that both primary key and unique key supplemental logging be enabled, because SharePlex can post modifications to the target more efficiently when the key information is available in the redo logs, rather than having to retrieve it from the database, which would impose additional overhead.
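Enabling both primary key and unique key supplemental logging at the database level might look like the following; querying V$DATABASE afterward confirms the settings took effect:

```sql
-- Add primary key and unique key supplemental logging database-wide.
alter database add supplemental log data (primary key, unique) columns;

-- Verify: all three columns should report YES.
select supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
  from v$database;
```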

Authored by Susan Wong, Sr. Systems Consultant