In an earlier blog, we discussed various business uses for turning Oracle database transactions into “business events” and how SharePlex can capture Oracle change data and write that data to a Kafka message queue.

Confluent has several Kafka broker offerings, both cloud and on-premises, and makes it easy to set up a full Kafka environment. In this blog, we’ll look at how to connect SharePlex to Confluent.

Set up Confluent and collect information

In order to receive or consume the data coming from SharePlex, you’ll need a Confluent cluster. See the Confluent documentation for information on how to create one, either in the cloud or on-premises. Any cloud host platform is fine, but be sure you have network connectivity to that host from your Oracle source system. After your cluster is set up, find the hostname of the bootstrap server under Settings; you’ll need it for the SharePlex setup.

After you set up your cluster, you’ll need to obtain API keys. You can add a key from the Data Integration menu. As you complete the Add Key process, note both the key and the secret; you’ll need them to set up SharePlex.
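
If you prefer the command line, you can also create a key with the ccloud CLI. A minimal sketch, assuming a hypothetical cluster ID of lkc-12345:

ccloud api-key create --resource lkc-12345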

Finally, define a topic, which you’ll use as your SharePlex topic.
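
You can define the topic in the Confluent web console, or sketch it with the ccloud CLI; the topic name shareplex-demo here is hypothetical, and flags may vary by CLI version:

ccloud kafka topic create shareplex-demo --partitions 1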

Configure SharePlex

Initial setup

Install the SharePlex binaries on your source system.

Make sure the SharePlex source server has network connectivity to your Confluent bootstrap server, using the hostname you saved earlier.
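
A quick way to test connectivity from the source server is a TCP probe against the bootstrap endpoint. A sketch assuming a hypothetical hostname and the standard Confluent Cloud port of 9092:

nc -vz pkc-12345.us-east-1.aws.confluent.cloud 9092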

Start sp_cop in the background from the Unix command line or start the SharePlex Windows Services.

The following commands are issued through the SharePlex command line interface, sp_ctrl. Start sp_ctrl from the Unix command line, or click on sp_ctrl in Windows.
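
On Unix, the sequence looks something like this; the install path shown is hypothetical, so substitute your own SharePlex product directory:

$ cd /u01/app/shareplex/bin
$ nohup ./sp_cop &
$ ./sp_ctrl
sp_ctrl>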

Create a config file

Create a config file to replicate your source tables to a Kafka target. Since the post process (which is the Kafka producer) runs on the SharePlex server, the routing map should reflect the hostname of the SharePlex server, not the Kafka broker. You can use the “copy config” and “edit config” commands in sp_ctrl to create the file. You can use the “expand” keyword and wildcards to select multiple tables, and the “not” keyword to exclude tables.

Here’s an example config file:

datasource:o.p19c
#source tables      target tables           routing map
qarun.tab1994       !kafka                  splexsource.mydomain
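
If you have many tables, a sketch of the same file using the “expand” keyword with a wildcard (the schema pattern here is hypothetical):

datasource:o.p19c
#source tables      target tables           routing map
expand qarun.%      !kafka                  splexsource.mydomain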


Save the file, then use the “verify config” command to be sure the file is formatted correctly.

Issue a “stop post” command to keep post from starting before the setup is complete, then use the “activate config” command to activate the config file.
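
For example, assuming a hypothetical config file named kafka_config:

sp_ctrl> verify config kafka_config
sp_ctrl> stop post
sp_ctrl> activate config kafka_config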

Configure the Kafka target

The SharePlex Kafka producer, or post process, is configured with the “target” command. Make sure post is stopped by using the “status” command before you proceed.

Enter each of the following commands as shown, using the names you collected during the Confluent setup.

Set the Kafka version

Target x.kafka set kafka api.version.request = true

Set the Kafka broker (Hostname)

Target x.kafka set kafka broker = <Bootstrap Server Name>

Set the Kafka encryption method and username

Target x.kafka set kafka sasl.mechanisms = PLAIN
Target x.kafka set kafka sasl.username = <API Key>

Set the Kafka password (note the quotes around the password)

Target x.kafka set kafka sasl.password = "<API Secret>"

Set the Kafka security protocol

Target x.kafka set kafka security.protocol = SASL_SSL

Set the Kafka topic name

Target x.kafka set kafka topic = <your topic>
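
Putting the settings together, the full sequence might look like this; every value shown (broker hostname, port, key, secret, and topic) is hypothetical:

sp_ctrl> target x.kafka set kafka api.version.request = true
sp_ctrl> target x.kafka set kafka broker = pkc-12345.us-east-1.aws.confluent.cloud:9092
sp_ctrl> target x.kafka set kafka sasl.mechanisms = PLAIN
sp_ctrl> target x.kafka set kafka sasl.username = ABC123XYZ
sp_ctrl> target x.kafka set kafka sasl.password = "myApiSecret"
sp_ctrl> target x.kafka set kafka security.protocol = SASL_SSL
sp_ctrl> target x.kafka set kafka topic = shareplex-demo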

Verify the Kafka settings

Use the “target x.kafka show” command to list all of the settings for the Kafka target.

Start Post

Use the “start post” command to restart the post process.   You can verify that post is running by using the “status” command.
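
For example:

sp_ctrl> start post
sp_ctrl> status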

Replicate and verify data

At this point you can make changes to your source table. After you have performed a few operations, make sure the post queue is empty using the “qstatus” command in sp_ctrl. If any data remains in the post queue, correct any issues with connectivity or names before you proceed. You can use the logs found in the <vardir>/log directory for troubleshooting.
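
A simple way to generate test changes against the example table from the config file above (this assumes the table has no constraints that would block a duplicate row):

SQL> insert into qarun.tab1994 select * from qarun.tab1994 where rownum = 1;
SQL> commit;

sp_ctrl> qstatus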

Verify the data in Confluent

The easiest way to see the data being sent to Confluent is to use the Confluent Cloud command line interface, ccloud. You can find information on downloading ccloud on the Confluent web site.

Download ccloud for your operating system and sign in using your Confluent credentials. You can then use the “kafka topic” commands to view your data.
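
A sketch of consuming from the hypothetical topic used earlier; the -b flag reads from the beginning of the topic, though flags may vary by ccloud version:

ccloud login
ccloud kafka topic consume -b shareplex-demo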

I hope you find this blog useful and that you’ll be able to replicate your Oracle changes to a Confluent Kafka cluster. To request more information or a free trial of SharePlex, click here.
