In this section, we cover the various DB2 objects that can be used as replication sources, such as XML data types, compressed tables, and large objects.
From DB2 9.5 onwards, we can replicate tables that contain columns of the XML data type; an example is shown in the Unidirectional replication for an XML data type section of Appendix A. We can set up unidirectional, bidirectional, and peer-to-peer replication.
From DB2 9.7 onwards, in unidirectional replication, we can use XML expressions to transform XML data between the source and target tables. Examples of supported and unsupported XML expressions are shown next.
Supported XML expressions include XMLATTRIBUTES, XMLCOMMENT, XMLCAST, XMLCONCAT, XMLDOCUMENT, XMLELEMENT, XMLFOREST, XMLNAMESPACES, XMLPARSE, XMLPI, XMLQUERY, XMLROW, XMLSERIALIZE, XMLTEXT, and XMLVALIDATE.
Unsupported XML expressions include XMLAGG, XMLGROUP, XMLTABLE, XMLXSROBJECTID, and XMLTRANSFORM.
For a complete up-to-date list, check out the DB2 Information Center at http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.swg.im.iis.repl.qrepl.doc/topics/iiyrqsubcxmlexpress.html.
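As an illustration of how a supported expression might be used, the following sketch shows a source table with an XML column and an expression that serializes the document into character form for the target. The table, schema, and column names here are hypothetical, not taken from the book's examples:

```sql
-- Hypothetical source table with an XML column:
CREATE TABLE SRC.ORDERS (
  ORDER_ID  INTEGER NOT NULL PRIMARY KEY,
  ORDER_DOC XML
);

-- In the Q subscription, a target column expression could use
-- XMLSERIALIZE (one of the supported expressions) to transform
-- the XML document into a character form, for example:
--   XMLSERIALIZE(ORDER_DOC AS CLOB(1M))
```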
From DB2 9.7 onwards, tables can have both the COMPRESS YES and DATA CAPTURE CHANGES table options set, which means we can now replicate compressed tables.
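A minimal sketch of such a table follows; the schema, table, and column names are illustrative:

```sql
-- A compressed table that is also eligible as a replication source,
-- because both options are set:
CREATE TABLE SRC.SALES (
  SALE_ID INTEGER NOT NULL PRIMARY KEY,
  AMOUNT  DECIMAL(10,2)
)
COMPRESS YES
DATA CAPTURE CHANGES;
```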
The issue with replicating a compressed table is what happens if the compression dictionary is changed while Q Capture is down. Once Q Capture is started again, it will try to read log records that were compressed with the previous compression dictionary, and will not succeed. To address this, when a table has both the COMPRESS YES and DATA CAPTURE CHANGES options set, it can have two dictionaries: an active data compression dictionary and a historical compression dictionary.
If a table is set to DATA CAPTURE NONE and a second dictionary exists, the second dictionary will be removed during the next REORG TABLE operation or during a table truncate operation (LOAD REPLACE, IMPORT REPLACE, or TRUNCATE TABLE).
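Continuing the earlier illustrative SRC.SALES table, the sequence that drops the historical dictionary could look like this:

```sql
-- Turn off change capture on the (hypothetical) table:
ALTER TABLE SRC.SALES DATA CAPTURE NONE;

-- The historical compression dictionary, if one exists, is removed
-- during the next REORG TABLE operation (or a table truncate):
REORG TABLE SRC.SALES;
```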
If a row change involves columns with large object (LOB) data, Q Capture copies the LOB data directly from the source table to the send queue.
If we are replicating or publishing data from LOB columns in a source table, then Q Capture will automatically divide the LOB data into multiple messages to ensure that the messages do not exceed the MAX MESSAGE SIZE value of the Replication Queue Map used to transport the data.
If we are going to replicate LOB data, then we need to ensure that the MAXDEPTH value for the Transmission Queue and Administration Queue on the source system, and the Receive Queue on the target system, is large enough to account for divided LOB messages.
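For example, the queue depth can be raised with an MQSC ALTER QLOCAL command run under runmqsc; the queue name and depth below are illustrative, not values from this setup:

```
ALTER QLOCAL(QREP.XMITQ) MAXDEPTH(500000)
```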
If we select columns that contain LOB data types for a Q subscription, we need to make sure that the source table enforces at least one unique database constraint (a unique index, primary key, and so on). Note that the columns that make up this uniqueness property do not need to be selected for the Q subscription.
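The uniqueness requirement can be sketched as follows; the table and index names are hypothetical:

```sql
-- Source table with a LOB column:
CREATE TABLE SRC.PRODUCT_IMAGES (
  IMAGE_ID INTEGER NOT NULL,
  IMAGE    BLOB(10M)
);

-- The unique index satisfies the uniqueness requirement, even if
-- IMAGE_ID is not included in the Q subscription's column list:
CREATE UNIQUE INDEX SRC.IX_PRODUCT_IMAGES
  ON SRC.PRODUCT_IMAGES (IMAGE_ID);
```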