DB2 replication sources


In this section, we cover the various DB2 objects that can be used as replication sources, such as XML data types, compressed tables, and large objects.

Replicating XML data types

From DB2 9.5 onwards, we can replicate tables that contain columns of the XML data type; an example is shown in the Unidirectional replication for an XML data type section of Appendix A. We can set up unidirectional, bidirectional, and peer-to-peer replication for such tables.

From DB2 9.7 onwards, in unidirectional replication, we can use XML expressions to transform XML data between the source and target tables. Examples of supported and unsupported XML expressions are shown next.

Supported XML expressions include XMLATTRIBUTES, XMLCOMMENT, XMLCAST, XMLCONCAT, XMLDOCUMENT, XMLELEMENT, XMLFOREST, XMLNAMESPACES, XMLPARSE, XMLPI, XMLQUERY, XMLROW, XMLSERIALIZE, XMLTEXT, and XMLVALIDATE.

Unsupported XML expressions include XMLAGG, XMLGROUP, XMLTABLE, XMLXSROBJECTID, and XMLTRANSFORM.

For a complete up-to-date list, check out the DB2 Information Center at http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.swg.im.iis.repl.qrepl.doc/topics/iiyrqsubcxmlexpress.html.
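
As a minimal sketch of what this enables (the schema, table, and XPath below are hypothetical, not names required by Q replication), the source table simply declares an XML column and enables change capture; a supported expression such as XMLQUERY can then be used in a unidirectional Q subscription to transform the document between source and target:

    -- Hypothetical source table with an XML column; DATA CAPTURE CHANGES
    -- lets Q Capture read the table's changes from the DB2 log.
    CREATE TABLE SALES.ORDERS (
        ORDER_ID  INTEGER NOT NULL PRIMARY KEY,
        ORDER_DOC XML
    ) DATA CAPTURE CHANGES;

    -- A supported expression (XMLQUERY) of the kind that could transform
    -- the XML between source and target; the XPath is illustrative only.
    SELECT XMLQUERY('$d/order/items' PASSING ORDER_DOC AS "d")
      FROM SALES.ORDERS;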

Replicating compressed tables

From DB2 9.7 onwards, tables can have both the COMPRESS YES and DATA CAPTURE CHANGES table options set, which means we can now replicate compressed tables.

The issue with replicating a compressed table is what happens if the compression dictionary changes while Q Capture is down. When Q Capture restarts, it tries to read log records that were compressed with the previous compression dictionary, and fails. To address this, when a table has both the COMPRESS YES and DATA CAPTURE CHANGES options set, it can have two dictionaries: an active data compression dictionary and a historical compression dictionary.

Note

We should not create more than one data compression dictionary while Q Capture is down.
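
As a brief sketch, assuming a hypothetical SALES.TXNS table, enabling both options looks like this; the REORG that rebuilds the active dictionary is what causes the previous one to be retained as the historical dictionary:

    -- Enable both options on an existing (hypothetical) table:
    ALTER TABLE SALES.TXNS COMPRESS YES;
    ALTER TABLE SALES.TXNS DATA CAPTURE CHANGES;

    -- A classic REORG builds (or rebuilds) the active compression
    -- dictionary; the previous dictionary is kept as the historical
    -- one so that Q Capture can still decompress older log records.
    REORG TABLE SALES.TXNS;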

If a table is altered to DATA CAPTURE NONE and a second dictionary exists, the historical dictionary is removed during the next REORG TABLE operation or during table truncation operations (LOAD REPLACE, IMPORT REPLACE, or TRUNCATE TABLE).
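
Continuing the hypothetical SALES.TXNS sketch, that cleanup looks like this:

    -- Once change capture is no longer needed, the historical
    -- dictionary (if one exists) is dropped at the next REORG:
    ALTER TABLE SALES.TXNS DATA CAPTURE NONE;
    REORG TABLE SALES.TXNS;
    -- TRUNCATE TABLE SALES.TXNS IMMEDIATE would also remove it.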

Replicating large objects

If a row change involves columns with large object (LOB) data, Q Capture copies the LOB data directly from the source table to the send queue.

If we are replicating or publishing data from LOB columns in a source table, then Q Capture will automatically divide the LOB data into multiple messages to ensure that the messages do not exceed the MAX MESSAGE SIZE value of the Replication Queue Map used to transport the data.

If we are going to replicate LOB data, then we need to ensure that the MAXDEPTH value for the Transmission Queue and Administration Queue on the source system, and the Receive Queue on the target system, is large enough to accommodate the divided LOB messages.
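
As a sketch of the queue-side tuning (the queue names below are hypothetical, not ones mandated by Q replication), the depth can be raised and checked in runmqsc:

    * Raise the maximum depth on the (hypothetical) transmission queue
    * so that divided LOB messages do not fill it:
    ALTER QLOCAL('QREP.XMIT.QUEUE') MAXDEPTH(500000)

    * Check how full a queue currently is:
    DISPLAY QLOCAL('QREP.ADMIN.QUEUE') MAXDEPTH CURDEPTH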

If we select columns that contain LOB data types for a Q subscription, we need to make sure that the source table enforces at least one unique database constraint (a unique index, primary key, and so on). Note that we do not need to select the columns that make up this uniqueness property for the Q subscription.
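
A minimal sketch of that requirement, with hypothetical names: the LOB table carries a unique index on a non-LOB column, and that column need not be part of the Q subscription itself:

    -- Hypothetical LOB source table with the required uniqueness:
    CREATE TABLE DOCS.ATTACHMENTS (
        DOC_ID  INTEGER NOT NULL,
        PAYLOAD BLOB(100M)
    ) DATA CAPTURE CHANGES;

    CREATE UNIQUE INDEX DOCS.IX_ATTACHMENTS ON DOCS.ATTACHMENTS (DOC_ID);
    -- DOC_ID enforces the uniqueness; it does not have to be selected
    -- in the Q subscription's column list.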

Other DB2 objects

In addition to the previous objects, let's look at some other DB2 objects and see if they can be used as a Q replication source:

  • What about views? Not at the present time.

  • What about DB2 system tables? No.

  • What about Materialized Query Tables (MQTs)? Yes, as of DB2 9.7.

  • What about range-partitioned tables? Yes, as of DB2 9.7.

  • What about hash-partitioned tables? Yes, see the Q replication in a DPF environment section.

Now let's move on to look at Q replication filtering and transformations.