We'll pick up from the previous recipe and configure the target host to also capture and deliver changes to the same set of tables on the source host.
Repeat steps 1 to 14 followed by steps 1 to 7 from the recipe Creating One-Way Replication (Simple). The rest of the steps prepare the target host to capture changes and deliver them to the applier on the source host.
The steps for bidirectional replication are as follows:
Enable supplemental logging on the target database in order to capture appropriate database changes.
In our previous recipe, we didn't have to enable supplemental logging on the target because it was not capturing changes for propagation. However, in a two-way replication, we propagate changes in both directions. Check the current setting as follows:
SQL> select supplemental_log_data_min from v$database;
We will get the following output:
SUPPLEME
--------
NO
The next set of commands to be executed is as follows:
SQL> alter database add supplemental log data;
SQL> select supplemental_log_data_min from v$database;
We will get the following output:
SUPPLEME
--------
YES
Create a TNS entry in the database home so that the extract can connect to the ASM instance using the following command:
$ cd $ORACLE_HOME/network/admin
$ vi tnsnames.ora
Add the following TNS entry:
ASMGG =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = IPC)
      (KEY = EXTPROC1521)
    )
    (CONNECT_DATA =
      (SID = +ASM)
    )
  )
Save the file and exit.
Create a user asmgg with the sysdba privilege in the ASM instance. First, connect to the ASM instance using the following command:

$ sqlplus sys/<password>@asmgg as sysasm
The output for the preceding command will be as follows:
SQL*Plus: Release 11.2.0.3.0 Production on Thu Nov 15 14:24:20 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option
Then we execute the following two commands:
SQL> create user asmgg identified by asmgg ;
The output for the preceding command will be as follows:
User created.
Then, grant the sysdba privilege to the new user:
SQL> grant sysdba to asmgg ;
The output for the preceding command will be as follows:
Grant succeeded.
Let's add supplemental logging to the tables using the following commands:
$ cd /u01/app/oracle/goldengate
$ ./ggsci
GGSCI> add trandata scott.tcustmer
The output for the preceding commands will be as follows:
Logging of supplemental redo data enabled for table SCOTT.TCUSTMER.
GGSCI> add trandata scott.tcustord
The output for the preceding command is as follows:
Logging of supplemental redo data enabled for table SCOTT.TCUSTORD.
GGSCI> info trandata scott.tcustmer
The output for the preceding command is as follows:
Logging of supplemental redo log data is enabled for table SCOTT.TCUSTMER.
GGSCI> info trandata scott.tcustord
Create the extract parameter file for data capture using the following command:
$ cd /u01/app/oracle/goldengate/dirprm
$ vi ex01sand.prm
Add the following lines to the file:
EXTRACT ex01sand
SETENV (ORACLE_SID="TGT101")
SETENV (ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_1")
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
USERID ogg, PASSWORD ogg
TRANLOGOPTIONS EXCLUDEUSER ogg
TRANLOGOPTIONS ASMUSER asmgg@ASMGG ASMPASSWORD asmgg
-- Trail file location locally
EXTTRAIL /u01/app/oracle/goldengate/dirdat/pr
DISCARDFILE /u01/app/oracle/goldengate/dirrpt/ex01sand.dsc, PURGE
DISCARDROLLOVER AT 01:00 ON SUNDAY
TABLE SCOTT.TCUSTMER ;
TABLE SCOTT.TCUSTORD ;
Save the file and exit.
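The TRANLOGOPTIONS EXCLUDEUSER ogg line is what prevents a change loop in bidirectional replication: transactions applied by the local Replicat (which connects as the ogg user) are skipped by this extract, so they are not shipped back to the host they originally came from. The following is a minimal sketch of that filtering idea in plain Python, purely for illustration; the function name and user list are assumptions of mine, not GoldenGate's actual implementation:

```python
def should_capture(txn_owner, excluded_users=("OGG",)):
    """Loop-prevention filter: skip transactions performed by the
    replication user itself, mirroring TRANLOGOPTIONS EXCLUDEUSER."""
    return txn_owner.upper() not in excluded_users

# Application changes pass the filter; the replication user's do not.
print(should_capture("SCOTT"))  # True
print(should_capture("ogg"))    # False
```

Without this exclusion, each applied change would be captured again on the target, pumped back to the source, applied there, captured again, and so on indefinitely.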
Let's add the Extract process and start it by using the following commands:
$ cd /u01/app/oracle/goldengate
$ ./ggsci
GGSCI> add extract ex01sand tranlog begin now
The output for the preceding command is as follows:
EXTRACT added.
GGSCI> add exttrail /u01/app/oracle/goldengate/dirdat/pr extract ex01sand megabytes 2
The output for the preceding command is as follows:
EXTTRAIL added.
GGSCI> start ex01sand
The output for the preceding command is as follows:
Sending START request to MANAGER ...
EXTRACT EX01SAND starting
GGSCI> info all
The output for the preceding command is as follows:
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EX01SAND    00:00:00      00:00:06
REPLICAT    RUNNING     RE01SAND    00:00:00      00:00:07
Next we'll create the data pump parameter file using the following commands:
$ cd /u01/app/oracle/goldengate/dirprm
$ vi pp01sand.prm
Add the following lines to the file:
EXTRACT pp01sand
PASSTHRU
RMTHOST hosta MGRPORT 7809
RMTTRAIL /u01/app/oracle/gg/dirdat/pa
DISCARDFILE /u01/app/oracle/goldengate/dirrpt/pp01sand.dsc, PURGE
-- Tables for transport
TABLE SCOTT.TCUSTMER ;
TABLE SCOTT.TCUSTORD ;
Save the file and exit.
Add the data pump process and final configuration on the target host using the following commands:
GGSCI> add extract pp01sand exttrailsource /u01/app/oracle/goldengate/dirdat/pr
The output for the preceding command is as follows:
EXTRACT added.
GGSCI> add rmttrail /u01/app/oracle/gg/dirdat/pa extract pp01sand megabytes 2
The output for the preceding command is as follows:
RMTTRAIL added.
GGSCI> start pp01sand
The output for the preceding command is as follows:
Sending START request to MANAGER ...
EXTRACT PP01SAND starting
GGSCI> info all
The output for the preceding command is as follows:
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EX01SAND    00:00:00      00:00:06
EXTRACT     RUNNING     PP01SAND    00:00:00      00:00:02
REPLICAT    RUNNING     RE01SAND    00:00:00      00:00:07
Next, we'll move on to the source server and create the REPLICAT parameter file:

$ cd /u01/app/oracle/gg/dirprm
$ vi re01sand.prm
Add the following lines to the file:
REPLICAT re01sand
SETENV (ORACLE_SID="SRC100")
SETENV (ORACLE_HOME="/u01/app/oracle/product/11.1.0/db_1")
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
USERID ogg, PASSWORD ogg
DISCARDFILE /u01/app/oracle/gg/dirrpt/re01sand.dsc, APPEND
DISCARDROLLOVER AT 01:00
REPORTCOUNT EVERY 30 MINUTES, RATE
REPORTROLLOVER AT 01:30
DBOPTIONS SUPPRESSTRIGGERS DEFERREFCONST
ASSUMETARGETDEFS
MAP SCOTT.TCUSTMER, TARGET SCOTT.TCUSTMER ;
MAP SCOTT.TCUSTORD, TARGET SCOTT.TCUSTORD ;
Save the file and exit.
Now we're ready to complete our two-way replication by adding the Replicat process to apply the incoming changes.
Add and start the Replicat using the following commands:
$ cd ..
$ ./ggsci
GGSCI> add replicat re01sand exttrail /u01/app/oracle/gg/dirdat/pa checkpointtable ogg.chkpt
GGSCI> start re01sand
The output for the preceding command is as follows:
Sending START request to MANAGER ...
REPLICAT RE01SAND starting
GGSCI> info all
The output for the preceding command is as follows:
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EX01SAND    00:00:00      00:00:07
EXTRACT     RUNNING     PP01SAND    00:00:00      00:00:03
REPLICAT    RUNNING     RE01SAND    00:00:00      00:00:06
Next, let's validate that an insert will propagate from hostb to hosta. Perform the following actions on hostb:

SQL> insert into scott.tcustmer values ('Tony','Ontario Inc','Toronto','ON') ;
SQL> commit ;
SQL> select * from scott.tcustmer ;
The output for the preceding command is as follows:
CUST     NAME               CITY       ST
-------- ------------------ ---------- ----
WILL     BG SOFTWARE CO.    SEATTLE    WA
JANE     ROCKY FLYER INC.   DENVER     CO
Tony     Ontario Inc        Toronto    ON
Moving on to hosta, we should execute the following command to ensure our changes have been received and applied to the database:

SQL> select * from scott.tcustmer ;
The output for the preceding command is as follows:
CUST     NAME               CITY       ST
-------- ------------------ ---------- ----
WILL     BG SOFTWARE CO.    SEATTLE    WA
JANE     ROCKY FLYER INC.   DENVER     CO
Tony     Ontario Inc        Toronto    ON
Hence, we conclude that the insert was received and applied.
Once again, we needed to add supplemental logging at the target host to both tables, scott.tcustmer and scott.tcustord, in order to add additional data to the redo stream. This was not necessary when these tables were subject to delivery only.
At hostb we already had a Manager and a Replicat process. We needed to configure an extract for data capture, to start scanning the online redo logs and writing committed transactions out to the trail files. We've kept the same two-letter trail prefix pr as we did on hosta; remember that the prefix can be any two arbitrary letters. The name of the data capture extract is also the same as on hosta. I did this just for simplicity's sake; it does not have to be the same. You ought to come up with a naming standard in your own organization for extracts and replicats.
Now that we've started capturing data, we need a pump to ship it to hosta. Again, I've chosen the same pump name for illustration purposes. Here, we need to be a little more careful in choosing the remote trail name prefix, as follows:
add rmttrail /u01/app/oracle/gg/dirdat/pa extract pp01sand megabytes 2
I've chosen the prefix pa. You must ensure that you don't clobber any files on the remote host by reusing a prefix that is already in use, as this may corrupt the trail files there. The point is: always choose unique prefixes for pumps, especially when multiple pumps ship trail files to the same directory location.
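As a sanity check before running add rmttrail, you can list the remote dirdat directory and confirm the prefix is unused. Here is a rough sketch; the helper name and the assumption that trail files are named as the prefix followed by a numeric sequence (for example, pa000001) are mine:

```python
import os

def trail_prefix_in_use(dirdat, prefix):
    """Return True if any trail files with the given two-letter prefix
    already exist in the dirdat directory."""
    return any(
        name.startswith(prefix) and name[len(prefix):].isdigit()
        for name in os.listdir(dirdat)
    )
```

For example, trail_prefix_in_use("/u01/app/oracle/gg/dirdat", "pa") should return False before you point a new pump at that prefix.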
Finally, back on hosta, the only process missing is the Replicat, which completes the multi-master implementation.
In this illustration of a multi-master implementation, you need to consider the possibility of collisions, such as the same record being inserted or deleted simultaneously at both sites, which may violate constraints. In a real-world deployment, you might generate the primary key from one sequence on the source and a different, non-overlapping sequence on the target so that collisions are minimized. Another alternative is to assign different numeric key ranges to the source and target, so that each site only ever creates keys within its own range. Finally, if the application code can enforce a business rule guaranteeing that changes on the source and target never collide, the implementation is trivial, as the application decides and manages the conflicts. Another important design factor is to define Primary Key or Unique Key constraints on all objects being replicated; otherwise, OGG will use all table columns to determine the uniqueness of a row.
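The non-overlapping sequence approach is commonly done in Oracle with interleaved sequences, for example START WITH 1 INCREMENT BY 2 on one site and START WITH 2 INCREMENT BY 2 on the other, so the two sites can never mint the same surrogate key. A small Python sketch of the idea (the function name is mine, for illustration only):

```python
import itertools

def key_generator(start, step):
    """Yield surrogate keys from an arithmetic sequence. With step=2 and
    different starting values, two sites never generate the same key."""
    return itertools.count(start, step)

hosta_keys = key_generator(1, 2)  # 1, 3, 5, ... odd keys on the source
hostb_keys = key_generator(2, 2)  # 2, 4, 6, ... even keys on the target
```

Because the odd and even sequences are disjoint, inserts replicated in either direction can never collide on the primary key.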