IBM InfoSphere Replication Server and Data Event Publisher

By: Pav Kumar-Chatterjee

Overview of this book

Business planning is no longer just about defining goals, analyzing critical issues, and then creating strategies. You must aid business integration by linking changed-data events in DB2 databases on Linux, UNIX, and Windows with EAI solutions, message brokers, data transformation tools, and more. Investing in this book will save you many hours of work (and heartache) as it guides you around the many potential pitfalls to a successful conclusion. This book will accompany you throughout your Q replication journey. Compiled from many of the author's successful projects, it brings you some of the best practices for implementing your project smoothly and on time. The book has in-depth coverage of Event Publisher, which publishes changed-data events that can feed updated data into crucial applications, assisting your business integration processes. Event Publisher also eliminates the hand coding typically required to detect DB2 data changes made by operational applications. We start with a brief discussion of what replication is and the Q replication releases currently available on the market. We then explore the world of Q replication in more depth. The later chapters cover all the Q replication components and the different layers that need to be implemented: the DB2 database layer, the WebSphere MQ layer, and the Q replication layer. We conclude with a chapter on how to troubleshoot problems. The Appendix (available online) demonstrates the implementation of 13 Q replication scenarios with step-by-step instructions.
Table of Contents (12 chapters)
IBM InfoSphere Replication Server and Data Event Publisher
Credits
About the Author
About the Reviewer
Preface

Q replication in a DPF environment


Q replication works well in a Database Partitioning Feature (DPF) environment, but there are a couple of design points to be aware of. Consider the sample configuration shown in the following diagram:

We have four servers called RED01, RED02, BLUE01, and BLUE02. We want to replicate from the RED side to the BLUE side. Each side will have four data partitions and a catalog partition with one DAS instance per box. The instance name is db2i001 and the database name is TP1.

The RED side is shown next. There are five database configuration files (DB CFG) and one database manager configuration file (DBM CFG). The "Detailed" table is the table that is replicated from the RED side to the BLUE side.

Note the following:

  • MQ installed on RED01

  • Replication control tables created on partition 0

  • Q Capture and Q Apply run on RED01

  • EXPLAIN tables defined on RED01
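
The control table placement above can be verified from the DB2 catalog. The following is a minimal sketch, assuming the Q replication control tables were created under the default ASN schema; the tablespace and partition group names will be whatever was chosen at creation time:

```sql
-- List each Q replication control table with its tablespace and the
-- database partition group that tablespace belongs to. The ASN schema
-- is the default schema for Q replication control tables.
select t.tabname, t.tbspace, ts.dbpgname
from syscat.tables t
join syscat.tablespaces ts on ts.tbspace = t.tbspace
where t.tabschema = 'ASN'
order by t.tabname;
```

The partition group reported for each control table should contain only partition 0.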

Tables with referential integrity

The first design point deals with tables that have referential integrity. We need to ensure that all related parent and child tables are on the same partition. If we do not do this and start replication, then we will get ASN7628E errors.
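
One way to spot this problem before starting replication is to list each foreign-key relationship alongside the child table's distribution key, using the DB2 catalog views. The schema filter below is an illustrative assumption; substitute your own:

```sql
-- For every referential constraint, show the child table, its parent,
-- and the child's distribution (partitioning) key columns. Parent and
-- child should be in the same partition group with compatible keys.
select r.tabname    as child_table,
       r.reftabname as parent_table,
       c.colname    as dist_key_column,
       c.partkeyseq
from syscat.references r
join syscat.columns c
  on c.tabschema = r.tabschema
 and c.tabname   = r.tabname
where c.partkeyseq > 0
order by r.tabname, c.partkeyseq;
```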

Table load and insert considerations

If we want to load from the application, then the staging table should be partitioned similarly to the detailed table so that we can make use of collocation (therefore, we need the same partition group and same partition key).
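
As a sketch of what collocation implies in DDL terms (the table, tablespace, and column names here are illustrative; assume det_tab is distributed by hash on id and ts_data is a tablespace in the same partition group as det_tab's tablespace):

```sql
-- Create the staging table in a tablespace from the same partition
-- group as det_tab, with the same distribution key, so that the later
-- INSERT/SELECT from stag_tab into det_tab is collocated.
create table stag_tab (
  id         integer   not null,
  name       varchar(30),
  trans_date timestamp
) in ts_data
  distribute by hash (id);
```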

If we want to insert from the application, then the staging table should be defined on partition 1 ONLY. If we are using INSERT, then we use an INSERT/SELECT/DELETE statement to transfer data from the staging table to the detailed table. We also need to perform simple housekeeping tasks on the staging table, for example, regular online reorganizations.

An example of an INSERT/SELECT/DELETE statement is shown next:

with fred (id, name, trans_date) as
(
  select id, name, trans_date
    from old table (delete from stag_tab
                    where trans_date < current timestamp - 10 minutes)
)
select count(*)
  from new table (insert into det_tab
                  select id, name, trans_date from fred);

Every time we run the above SQL, it moves records that are more than 10 minutes old from stag_tab to det_tab, and returns the number of rows moved.