DB2 and SAP DR Using DS8300 Global Mirror V1.1
Version: 1.1
Date: December 8, 2009
Copyright IBM Corporation, 2009
1. Introduction
This paper documents a DR demonstration executed in a PoC for an IBM customer. The configuration we used was based on the customer's requirements:
- DR must use a disk-based data copy process, not an application or DB data copy process
- The production site has a DB2 HADR cluster
- The DR site is mirrored from one of the DB2 HADR databases, and the process must work whether the mirrored DB is currently the HADR Primary or Standby
- HA is automated, and DR is a declared event
- End users must log in to DR SAP using the same SAPGUI configuration as used for production SAP
- An order entry transaction workload is running at the time of the simulated disaster
- RPO and RTO objectives
It was also based on the following project constraints:
- DR configuration and testing must be done quickly, without impacting other PoC demonstrations
- The two DS8300 disk systems for the Global Mirror source and target are in the same data center
- The production and DR LPARs run in the same CECs and share VIO servers
The configuration shown in this test is not offered as a best practice for DR in general; it was chosen to demonstrate DR using DS8300 Global Mirror within the configuration requirements and constraints listed above.
2. Trademarks
DB2, AIX, DS8000, Tivoli, and TotalStorage are registered trademarks of IBM Corporation.
SAP is a registered trademark of SAP AG in Germany and in other countries.
3. Acknowledgements
Thank you to Dale McInnis and Liwen Yeow for their guidance on configuring DR with DB2 HADR.
4. Feedback
Please send comments or suggestions to Mark Gordon ([email protected]).
5. Versions
7. Configuration
Figure 1 summarizes the HADR and DR configuration used in the test.
Global Mirror with Practice Copy was configured to mirror the LUNs of a DB in an HADR cluster (hadrdba) to LUNs on a second DS8300 system. In addition, the SAP filesystems (/sapmnt and /usr/sap/PRD) were mirrored to the second DS8300 as well.
As shown in Figure 2, the SAP ASCS ran as a two-node Tivoli SA MP cluster (ASCS with enqueue replication) on hadrascsa and hadrascsb. The SAP central instance ran on hadrappsrvra. NFS file services were also provided by hadrappsrvra for the SAP shared filesystems and the DB2 shared archive log filesystem. Tivoli SA MP controlled the DB2 HADR cluster on hadrdba and hadrdbb.
8.1. DR LPAR
As part of preparing the local rootvg in the DR LPAR, several steps were needed to copy configuration over from the production HADR cluster:
- Create the sap and db2 users on the DR LPAR with the same UID and GID as on hadrdba/hadrdbb.
- Set passwords for the users, and remove the ADMCHG flag for the users from /etc/security/passwd.
- Copy the home directories of the sap and db2 users to the DR system.
- Copy /etc/services from hadrdba.
- Create the script import_for_dr.sh (see Figure 21), which uses PVIDs to import the volume groups needed for DB2 and SAP on DR; a sketch follows this list.
- Create fsck.sh, which performs fsck on all imported filesystems before they are mounted.
- Create ifconfig_aliases.sh to set alias IP addresses for the source systems on the DR Ethernet interface.
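To illustrate the PVID-based import, here is a minimal sketch of what import_for_dr.sh might look like; the PVID and volume group names are placeholders, not the PoC values, and the actual script is shown in Figure 21.

#!/bin/ksh
# Sketch of import_for_dr.sh -- PVID/VG pairs are placeholders.
# PVIDs are preserved by Global Mirror, so each VG can be located by PVID
# on the DR LPAR and imported under its original name.
set -A PAIRS "00c0123400001111 db2datavg" "00c0123400002222 sapvg"
for pair in "${PAIRS[@]}"
do
    pvid=${pair%% *}
    vg=${pair##* }
    # find the hdisk carrying this PVID on the DR LPAR
    hdisk=$(lspv | awk -v p="$pvid" '$2 == p { print $1 }')
    if [ -n "$hdisk" ]
    then
        importvg -y $vg $hdisk
    else
        echo "PVID $pvid for $vg not found" >&2
    fi
done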
Figure 4: drdiskremoval.sh
11.2.
Again, this is a test process, used so that the Global Mirror session between the source and target volumes did not have to be suspended.
Log in to the IBM TotalStorage Productivity Center for Replication using the following URL:
o https://tpcr.ip.add.ress:3443/CSM
o Login: Administrator
o If the logon is successful, you are presented with a screen where the Sessions link should be selected.
Figure 9 shows how to initiate the FlashCopy process from I2 (the Target volumes) to H2 (the Practice volumes). Click Yes to continue.
Figure 11 shows that the next step is Preparing; Figure 12 then shows it as Prepared. The timestamp on the H2 <- I2 role pair is updated when the flash is done.
11.3.
Change to the directory where the scripts are located and execute drdiskadd.ksh (shown in Figure 17) to add the disks. Make sure you do this on both VIO servers.
# cd /poc_directory
# ./drdiskadd.ksh
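For context, a minimal sketch of what a script like drdiskadd.ksh can look like on a VIO server; the hdisk and vhost names are placeholders, and the actual script is shown in Figure 17.

# Sketch of drdiskadd.ksh (run as padmin on each VIO server).
# Map each practice-copy hdisk to the DR LPAR's virtual SCSI adapter.
for hdisk in hdisk20 hdisk21 hdisk22
do
    mkvdev -vdev $hdisk -vadapter vhost4 -dev ${hdisk}_dr
done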
BEFORE you bring up the DR LPAR, since we are simulating a disaster and will reuse the IP addresses of the production SAP and HADR systems in the DR LPAR, you must shut down all the other HADR LPARs (that is, the LPARs of the source HADR DB2 SAP system) on p570A. Shut down the production LPARs for the HADR primary, standby, ASCS, and SAP application server: hadrdba, hadrappsrvra, and hadrascs. From the HMC, get a console window for each and perform a graceful shutdown. hadrdba needs to go down first, then hadrascs, because they are NFS clients of hadrappsrvra.
# shutdown -F
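Spelled out per LPAR (NFS clients first, the NFS server last):
# (on hadrdba)        shutdown -F
# (on hadrascs)       shutdown -F
# (on hadrappsrvra)   shutdown -F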
11.4.
After booting the DR LPAR, use lspv to display the disks. Note that initially none of the PVs copied from the source DB2 and SAP systems are in VGs, since those VGs are not configured in the baseline rootvg of the DR LPAR.
On the Global Mirror source system (hadrdba), we determined the VG associated with each PV. This information was used to create import_for_dr.sh, shown in Figure 21, which imports the volume groups so that the same volume group names are used on the DR system as were used on the source system, hadrdba.
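A sketch of how that PVID-to-VG mapping can be captured on the source system; lspv prints the hdisk name, PVID, VG, and state, and the output file name here is a placeholder:
# lspv | awk '$3 != "None" && $3 != "rootvg" { print $2, $3 }' > /tmp/pvid_to_vg.txt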
Figure 21 is the script run on the DR LPAR to import the volume groups into the DR LPAR.
Note the message indicating that loglv00 on hdisk27 was renamed during the import. As part of testing this process, we checked the LVs defined on hdisk27 and the /etc/filesystems file to make sure that no filesystems reference loglv00 on hdisk27. This check is not shown here.
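A sketch of that check, using standard AIX commands:
# lspv -l hdisk27
# grep -p loglv00 /etc/filesystems
The first command lists the LVs defined on hdisk27; grep -p prints any /etc/filesystems stanza that still references loglv00.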
A few of the mount points recorded in the VGDAs were not correct for the DR system configuration, so we changed them before mounting the filesystems. Figure 26 is the script that makes these changes. Run chfs.sh.
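Each change of this kind is a chfs -m call; the mount points below are placeholders rather than the PoC values, which are in Figure 26:
# chfs -m /usr/sap/PRD /mnt/oldpoint
This changes the mount point of the filesystem currently defined at /mnt/oldpoint to /usr/sap/PRD, updating /etc/filesystems accordingly.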
Mount /usr/sap/PRD and create the subdirectory used as a mount point. Use mount -a to mount the rest of the filesystems. Note that the 0506-324 error message is normal for filesystems that are already mounted. Use the df command to check that all the necessary filesystems are mounted.
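A sketch of those steps; the subdirectory name is a placeholder:
# mount /usr/sap/PRD
# mkdir -p /usr/sap/PRD/DVEBMGS00
# mount -a
# df -k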
Since this system uses a local rootvg and has its own hostname, we will change the hostname from DR to be the same as the copied system, hadrdba. We did this so we could restart DB2 without making additional configuration changes.
This DR demonstration takes functions that ran on three different LPARs (ASCS, application server, and DB server) and consolidates them into one LPAR. In order for these three functions to run without additional configuration changes, we set aliases for all the source LPARs' IP addresses on the Ethernet interface. We created a script, ifconfig_aliases.sh, to do this; a sketch follows. The host names hadrdba, hadrdb, ascs, and hadrascsa were already defined in the hosts file in the DR LPAR.
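A minimal sketch of what ifconfig_aliases.sh can look like; the alias list and netmask are assumptions here, and the actual script is shown in Figure 32.

#!/bin/ksh
# Sketch of ifconfig_aliases.sh -- alias list and netmask are assumptions.
for name in hadrdb ascs hadrascsa hadrappsrvra
do
    # look up the address for $name in /etc/hosts, where the names are defined
    ip=$(awk -v n="$name" '!/^#/ && $2 == n { print $1 }' /etc/hosts)
    [ -n "$ip" ] && ifconfig en0 alias $ip netmask 255.255.255.0
done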
11.5. DB2 Restart
We su to db2prd to perform the DB2 startup steps. Start DB2 with db2start. Since this is a copy of half of an HADR cluster, we get SQL5043N, which is normal. Stop and start DB2 again; note that the SQL5043N message is now gone.
Since hadrdba might be in either the Primary or the Standby role at the time of the failure, and the restart actions differ depending on the role, we check the DB config for the PRD DB. In this example, hadrdba was the Standby DB, and so this copy is in rollforward pending state. Since we have a single DR DB, we stop HADR with db2 stop hadr on db PRD. Then we verify that the role is now changed to STANDARD.
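The sequence, spelled out as commands (a sketch of the steps described above):
# su - db2prd
$ db2start
$ db2stop
$ db2start
$ db2 get db cfg for PRD | grep -i "HADR database role"
$ db2 stop hadr on db PRD
$ db2 get db cfg for PRD | grep -i "HADR database role"
The first db2start reports SQL5043N; after the restart it is gone. The final command should show the role as STANDARD.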
11.6. SAP Startup
The next steps are done with the <sid>adm (here prdadm) userid on the DR LPAR (which now has the hadrdba hostname).
Start the ABAP Central Services instance with startsap ascs. Note that ascs is the alias of the IP address on which the ASCS runs. The ifconfig_aliases.sh script shown in Figure 32 added the ascs alias to the en0 interface.
Next, start the SAP application server instance with startsap hadrappsrvra DVEBMGS00. This will start the DVEBMGS00 instance, which originally ran on hadrappsrvra. The ifconfig_aliases.sh script shown in Figure 32 added the hadrappsrvra alias to en0.
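A quick way to confirm that both instances came up; the process-name patterns follow standard SAP naming and are assumptions here:
$ ps -ef | grep -E "ms\.sap|en\.sap"
$ ps -ef | grep "dw\.sap"
The first command shows the ASCS message server and enqueue server; the second shows the dispatcher and work processes of DVEBMGS00.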