Oracle Data Guard 12c Implementation with RAC (Step-by-Step)

Introduction:

This is a real Data Guard implementation case for one of my clients.
Data Guard is implemented between two clusters that reside on two different SuperCluster machines.
The procedure below applies between any two clusters; it doesn't matter whether they are SuperCluster, Exadata, or normal servers.

I tried to include as much detail as I can, so that this blog serves as a complete guide for the Data Guard implementation.
Of course, I masked any client-specific info.

0. DG Environment:

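For reference, a sanitized sketch of the environment assumed throughout this guide (all names are illustrative, not the client's real ones):

# Assumed environment (illustrative names)
Primary  : PROD   (db_unique_name PROD),   2-node RAC, instances PROD1 & PROD2
Standby  : PRODDR (db_unique_name PRODDR), 2-node RAC, instances PRODDR1 & PRODDR2
Storage  : ASM under 12c Grid Infrastructure
DG port  : 1538, dedicated listener (LISTENER_DG) on the replication interface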

To set the oracle environment:

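A minimal sketch, assuming the local instance on this node is PROD1 (hypothetical name):

# as the oracle user
export ORAENV_ASK=NO
export ORACLE_SID=PROD1     # hypothetical instance name
. oraenv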

Listener Remarks:

The local listener “LISTENER” is defined on both the public IP & VIP of its node.
The local database instance is registered with the local listener against the VIP automatically.

We will create a user-defined listener on port 1538 that uses a dedicated network interface, to isolate the Data Guard traffic from user traffic.

The client has a redundant network interface on each server that they want to use for the DG traffic.
The problem is that they don't have the required VIPs to fully implement a new network and a new SCAN listener.

After discussion with the client, they asked me to:
– Create a local DB listener on each node using the replication (DG) physical IP, dedicated to the DR traffic.
– This will be a DB listener, not using GI features (VIPs, load balancing, and failover).
– This listener will use port 1538 on all nodes.

Data Guard Implementation Procedure:

1. Install Oracle Database software binaries on the target server.
2. Configure TNS name entries/listeners on the primary & target servers (for both source and target databases, e.g., PROD & PRODDR):

To define the new database listener:

Login as oracle user
Set the DB environment

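A sketch of the listener definition, assuming the name LISTENER_DG and a hypothetical replication IP for node 1; repeat on each node with its own IP:

# $ORACLE_HOME/network/admin/listener.ora (DB home)
LISTENER_DG =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.10.11)(PORT = 1538))   # hypothetical DG IP
    )
  )

# start the new listener from the DB home
lsnrctl start LISTENER_DG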

Add the following TNS entries to the tnsnames.ora under the DB OH on the primary and DR servers:

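A sketch of the entries, with hypothetical replication IPs for the two nodes on each side:

# tnsnames.ora on both primary and DR servers
PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.10.11)(PORT = 1538))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.10.12)(PORT = 1538))
    )
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PROD))
  )

PRODDR =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.20.11)(PORT = 1538))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.20.12)(PORT = 1538))
    )
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRODDR))
  )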

3. Add static registration entries to the listener configuration file (using the grid user) on the primary & target servers:

==> In our case it will be the oracle user, not the grid user, since the new listener runs from the DB home

On the primary server:
————————
# must be logged in as oracle user

cd $ORACLE_HOME/network/admin
cp -p listener.ora listener.ora.bkup.`date +%y%m%d_%H%M%S`

vi listener.ora

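A sketch of the static entries on the primary's first node, assuming the listener LISTENER_DG and an illustrative ORACLE_HOME path; the DGMGRL service is what the broker uses during switchover:

# append to listener.ora (values are illustrative)
SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = PROD_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = PROD1)
    )
  )

# reload, then repeat on the other nodes and on the DR side
# (using PRODDR_DGMGRL / PRODDR1 there)
lsnrctl reload LISTENER_DG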

Please note that I read that these static entries are not mandatory with 12c, but I added them as we usually did with the old versions.

## Prepare for the Standby Instance on the Primary server:

4. Enable FORCE LOGGING on the primary database

SQL> select FORCE_LOGGING from v$database;
FORCE_LOGGING
—————————————
NO

==> ALTER DATABASE FORCE LOGGING;

5. Enable Archive Log Mode for the primary database

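A minimal sketch for a RAC database, assuming the cluster resource name is PROD:

# check the current mode
SQL> archive log list

# RAC: stop the database, mount a single instance, then enable archivelog mode
srvctl stop database -d PROD
sqlplus / as sysdba
SQL> startup mount
SQL> alter database archivelog;
SQL> shutdown immediate
srvctl start database -d PROD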

5(a). Increase the online redo logs to 3 groups per thread (instead of 2) (if applicable)

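To review the current online redo log configuration:

SQL> select thread#, group#, bytes/1024/1024 size_mb, status
     from v$log order by thread#, group#;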

==> I asked the client to increase the size and number of the redo log files on the primary

The new setup is as follows:

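A sketch of the add commands and the resulting check, with illustrative group numbers, sizes, and disk group:

-- add a third group per thread (values are illustrative)
SQL> alter database add logfile thread 1 group 5 ('+REDO') size 1g;
SQL> alter database add logfile thread 2 group 6 ('+REDO') size 1g;

-- verify the new setup
SQL> select thread#, group#, bytes/1024/1024 size_mb from v$log order by 1, 2;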

6. Create standby redo log groups (one more group per thread than the online redo logs) on the primary database:

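A sketch, assuming 3 online groups per thread (so 4 standby groups per thread), sized the same as the online logs; group numbers and disk group are illustrative:

SQL> alter database add standby logfile thread 1 group 11 ('+REDO') size 1g;
SQL> alter database add standby logfile thread 1 group 12 ('+REDO') size 1g;
SQL> alter database add standby logfile thread 1 group 13 ('+REDO') size 1g;
SQL> alter database add standby logfile thread 1 group 14 ('+REDO') size 1g;
-- repeat for thread 2 (e.g. groups 15-18), then verify:
SQL> select thread#, group#, bytes/1024/1024 size_mb from v$standby_log;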

7. Enable automatic standby file management on the primary database:

show parameter standby_file_management
alter system set standby_file_management='auto' scope=both;

Prepare the Standby Instance on the target server:

Login as oracle user

8. Create the adump directory

mkdir -p /u01/app/oradr/admin/PRODDR/adump

9. Set the environment

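A minimal sketch, assuming the standby instance on this node is PRODDR1:

# as the oracle software owner on the target server (oradr in this environment)
export ORAENV_ASK=NO
export ORACLE_SID=PRODDR1    # hypothetical instance name
. oraenv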

10. Create an initial pfile on the first node only:

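A minimal sketch; the RMAN duplicate will build the real spfile, so the auxiliary instance only needs db_name (which must match the primary's):

# $ORACLE_HOME/dbs/initPRODDR1.ora -- minimal auxiliary pfile
cat > $ORACLE_HOME/dbs/initPRODDR1.ora <<EOF
db_name=PROD
db_unique_name=PRODDR
EOF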

11. Copy the password file from the source

Note that in 12c, the password file is now stored by default under ASM.

To know the password file name and location:

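A sketch of locating the password file in ASM and shipping it to the standby; the ASM path, host name, and home path are illustrative:

# on the primary
srvctl config database -d PROD | grep -i password
# e.g. Password file: +DATA/PROD/PASSWORD/pwdprod.276.912345678

# copy it out of ASM (as the grid user) and ship it to the target
asmcmd pwcopy +DATA/PROD/PASSWORD/pwdprod.276.912345678 /tmp/orapwPROD
scp /tmp/orapwPROD drnode1:/u01/app/oradr/product/12.1.0/dbhome_1/dbs/orapwPRODDR1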

12. Start up the instance in NOMOUNT state

sqlplus / as sysdba
SQL> startup nomount

13. Duplicate the source database PROD to the standby PRODDR (on the target server):

==> It is highly advisable to generate a pfile from the spfile on the primary and review all parameters that need modification on the DR side, to avoid any run-time errors
==> The other option is to exclude the “spfile” clause from the RMAN duplicate script, create a pfile manually from the primary spfile, move it to the standby database, modify it, and use it to start that database.

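A sketch of what duplicate_prod.sh could contain, assuming an active-database duplicate over the TNS entries defined earlier; the parameter overrides shown are illustrative and must be reviewed for your environment:

#!/bin/sh
# duplicate_prod.sh -- illustrative standby duplicate
rman <<EOF
connect target sys/password@PROD
connect auxiliary sys/password@PRODDR
run {
  allocate channel p1 type disk;
  allocate channel p2 type disk;
  allocate auxiliary channel s1 type disk;
  duplicate target database
    for standby
    from active database
    spfile
      set db_unique_name='PRODDR'
      set fal_server='PROD'
      set log_archive_dest_2=''
      set cluster_database='false'
    nofilenamecheck;
}
EOF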

==> or you can put the duplicate command in a script and run it in the background as follows:

cd /export/home/oradr/backup
nohup ./duplicate_prod.sh &

==> Completed successfully (150 GB in 27 min)

13(a). Convert the DB to RAC, if required

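A sketch for a 2-node standby, with illustrative undo tablespace names:

SQL> alter system set cluster_database=true scope=spfile sid='*';
SQL> alter system set instance_number=1 scope=spfile sid='PRODDR1';
SQL> alter system set instance_number=2 scope=spfile sid='PRODDR2';
SQL> alter system set thread=1 scope=spfile sid='PRODDR1';
SQL> alter system set thread=2 scope=spfile sid='PRODDR2';
SQL> alter system set undo_tablespace='UNDOTBS1' scope=spfile sid='PRODDR1';
SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='PRODDR2';
SQL> shutdown immediate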

14. Create an spfile in ASM for the standby database and register the database in the cluster (if in a cluster env.):

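A sketch, with an illustrative ASM path:

SQL> create pfile='/tmp/initPRODDR.ora' from spfile;
SQL> create spfile='+DATA/PRODDR/spfilePRODDR.ora' from pfile='/tmp/initPRODDR.ora';

# point each instance's local init file at the shared spfile (on every node)
echo "SPFILE='+DATA/PRODDR/spfilePRODDR.ora'" > $ORACLE_HOME/dbs/initPRODDR1.ora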

# Create cluster resource

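A sketch, assuming hypothetical node names drnode1/drnode2 and an illustrative Oracle home:

srvctl add database -db PRODDR -oraclehome /u01/app/oradr/product/12.1.0/dbhome_1 \
  -dbname PROD -spfile '+DATA/PRODDR/spfilePRODDR.ora' \
  -role PHYSICAL_STANDBY -startoption MOUNT
srvctl add instance -db PRODDR -instance PRODDR1 -node drnode1
srvctl add instance -db PRODDR -instance PRODDR2 -node drnode2
srvctl start database -d PRODDR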

# Check cluster resource

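For example, with the database resource added above:

srvctl config database -d PRODDR
srvctl status database -d PRODDR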

Check that the database is registered with the local and remote listeners on each node:

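For example, on each node:

lsnrctl status LISTENER | grep -i proddr
lsnrctl status LISTENER_DG | grep -i proddr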

15. Configure Data Guard Broker parameters on primary and standby:

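A sketch, with illustrative broker file locations in ASM; run the equivalent on the standby with PRODDR paths:

-- on the primary (all instances)
SQL> alter system set dg_broker_config_file1='+DATA/PROD/dr1PROD.dat' scope=both sid='*';
SQL> alter system set dg_broker_config_file2='+FRA/PROD/dr2PROD.dat' scope=both sid='*';
SQL> alter system set dg_broker_start=true scope=both sid='*';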

16. Create Data Guard Broker Configuration

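A sketch, using an illustrative configuration name and the TNS entries defined earlier:

dgmgrl sys/password@PROD
DGMGRL> create configuration 'PROD_DG' as primary database is 'PROD' connect identifier is PROD;
DGMGRL> add database 'PRODDR' as connect identifier is PRODDR maintained as physical;
DGMGRL> enable configuration;
DGMGRL> show configuration;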

17. Configure Maximum Availability Protection Mode

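A sketch; SYNC redo transport is required on both databases before raising the protection mode:

DGMGRL> edit database 'PROD' set property LogXptMode='SYNC';
DGMGRL> edit database 'PRODDR' set property LogXptMode='SYNC';
DGMGRL> edit configuration set protection mode as MaxAvailability;
DGMGRL> show configuration;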

18. Test switchover and switch back

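For example:

dgmgrl sys/password@PROD
DGMGRL> switchover to 'PRODDR';

# after verifying applications and log shipping, switch back
dgmgrl sys/password@PRODDR
DGMGRL> switchover to 'PROD';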

19. Create High Availability Service (on the Primary)

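A sketch, assuming the service name PRODFO (matching the TNS entry used later in step 22); it is created with DBMS_SERVICE so the trigger in the next step can start it by role:

SQL> exec dbms_service.create_service(service_name => 'PRODFO', network_name => 'PRODFO');
SQL> exec dbms_service.start_service('PRODFO');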

20. Create a trigger to start the High Availability service for the primary database

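A sketch of the classic role-based startup trigger (trigger name and logic are illustrative):

SQL> create or replace trigger ha_service_trg
  after startup on database
declare
  v_role varchar2(30);
begin
  -- start the HA service only when this database runs in the primary role
  select database_role into v_role from v$database;
  if v_role = 'PRIMARY' then
    dbms_service.start_service('PRODFO');
  end if;
end;
/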

You will find that this service is now registered with the SCAN listeners on the primary cluster:

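For example (as the grid user, on the node hosting SCAN listener 1):

lsnrctl status LISTENER_SCAN1 | grep -i prodfo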

21. Modify the High Availability service to handle failover properly

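A sketch using DBMS_SERVICE, with illustrative retry/delay values, so that sessions reconnect transparently after a role change:

SQL> begin
  dbms_service.modify_service(
    service_name     => 'PRODFO',
    failover_method  => dbms_service.failover_method_basic,
    failover_type    => dbms_service.failover_type_select,
    failover_retries => 30,
    failover_delay   => 5);
end;
/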

22. Add entry for the High Availability service in the tnsnames.ora on the client side:

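A sketch, assuming hypothetical SCAN names prod-scan and proddr-scan; the entry lists both sites so clients can reach whichever side is currently primary:

PRODFO =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = prod-scan)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = proddr-scan)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PRODFO)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )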

23. Connect using the PRODFO tns entry, then switchover, and test again

Observer Configuration:

Assuming that you want to configure Fast-Start Failover for your DG Broker configuration, you should install the observer component on a third server (or client machine), preferably at a third site. The observer is a client component (Client Admin-Type install or Oracle Server install).

24. Prepare observer startup script

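A sketch of a startup script (script name, path, and credentials are illustrative):

#!/bin/sh
# start_observer.sh -- run on the observer host
nohup dgmgrl -silent sys/password@PROD "start observer" \
  > /home/oracle/observer.log 2>&1 &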

25. Start the observer

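For example:

chmod +x start_observer.sh
./start_observer.sh
tail -f /home/oracle/observer.log   # "Observer started" should appear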

26. Enable Fast Start Failover in Data Guard Configuration

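A sketch; the threshold value is illustrative:

dgmgrl sys/password@PROD
DGMGRL> edit configuration set property FastStartFailoverThreshold = 30;
DGMGRL> enable fast_start failover;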

{Note: Fast-Start Failover can’t be enabled unless Flashback Database is on (on both the primary and the standby) ==> alter database flashback on;}

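To verify:

DGMGRL> show fast_start failover;
DGMGRL> show configuration;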

Database Links Analysis:

– Review the database links from our database to the other databases and from other databases to this database.
– Any DB Link that is pointing to a DB in a DG environment should be modified to use the HA service of that DB.
– All TNS entries referenced in the HOST column of DBA_DB_LINKS should exist in the tnsnames.ora on both the primary and standby servers.
– Any DB Link from a DG DB to any other non-DG DB will not be impacted after switchover.
– Any DB Link from a DG DB to any other DG DB will be impacted after switchover and should be modified to use the DG service.
– Any DB Link from a non-DG DB to any DG DB will be impacted after switchover and should be modified to use the DG service.

set lines 555 pages 222
col owner for a20
col DB_LINK for a20
col USERNAME for a15
col HOST for a220
col CREATED for a12
select owner,db_link,username,host from dba_db_links;


Thanks
Ahmed
