EMC TimeFinder and SRDF
Best Practices for
Oracle Database 10g Automatic Storage Management
Symmetrix DMX
Authors: Nitin Vengurlekar, Bill Bridge (Oracle); Bob Goldsand (EMC). Co-author: Ara Shakian (Oracle)
Joint Engineering White Paper
Abstract: The purpose of this document is to provide a comprehensive set of best practices and procedures for deploying Oracle Database 10g and the Automatic Storage Management (ASM) feature with EMC Symmetrix storage-based replication technologies.
Table of Contents
Introduction
Related Documents
Oracle Automatic Storage Management
Oracle Recovery Manager (RMAN)
Oracle Flash Recovery Area
EMC Consistency Technology
EMC TimeFinder Overview
TimeFinder Consistent Split
EMC SRDF Overview
    SRDF Protection Modes
        SRDF Synchronous Mode
        SRDF Asynchronous Mode
Disaster Restart and Disaster Recovery
    Rebalancing and Consistency Technology
Test Cases and Best Practices
    General Tests Configuration
        Hardware
        ASM Disk Groups / Mount Points
        ASM Instance Parameter File
        Database Instance Parameter File
    Case 1: Oracle Database 10g Hot Backup with TimeFinder/Mirror
    Case 2: Database Cloning with TimeFinder/Mirror
    Case 3: Remote Database Cloning with TimeFinder/Mirror and SRDF
    Case 4: Oracle Database 10g Hot Backup with TimeFinder/Mirror and SRDF
    Case 5: Restoring a Database on the Production Host
Introduction
The purpose of this document is to provide a comprehensive set of best practices and procedures for deploying Oracle Database 10g and Automatic Storage Management (ASM) with EMC Symmetrix storage-based replication and consistency technologies. This includes EMC TimeFinder/Mirror and Symmetrix Remote Data Facility (SRDF, both synchronous and asynchronous), which have already been validated in accordance with Oracle's Storage Compatibility Program (OSCP) and are now being extended to include validation when used with Oracle Database 10g Automatic Storage Management.
This paper documents the procedures and best practices for the following use cases:
- Oracle Database 10g hot backup with TimeFinder/Mirror for database backup
- TimeFinder/Mirror for database cloning
- SRDF and TimeFinder/Mirror for DR and remote database cloning
- Oracle Database 10g hot backup with SRDF and TimeFinder/Mirror for DR and remote database backup
This document assumes the reader has a basic understanding of Oracle Database 10g Automatic Storage Management and of the EMC TimeFinder/Mirror and SRDF technologies.

Related Documents
- Using Oracle 10g's Automatic Storage Management with EMC Storage Technology
- Using SYMCLI to Perform Consistent Splits with the TimeFinder Product Family
- Understanding EMC Consistent Split with Oracle Databases
- Oracle Database 10g Automatic Storage Management Best Practices
Oracle Automatic Storage Management
Automatic Storage Management (ASM) is a storage manager that provides file system, volume management, and clustering capabilities integrated into Oracle Database 10g at no additional cost. ASM lowers your total cost of ownership and increases storage utilization without compromising performance or availability. With ASM, a fraction of the time is needed to manage your database files. ASM eliminates over-provisioning and maximizes storage resource utilization, facilitating database consolidation. The ASM self-tuning feature evenly distributes the data files across all available storage. It delivers high performance similar to raw devices, sustained over time, with the ease of use of a file system. ASM's intelligent mirroring technology enables up to triple data protection, even on non-RAID storage arrays. ASM benefits are:
- Simplify and automate storage management
- Increase storage utilization and agility
- Predictably deliver on performance and availability service level agreements
ASM simplifies storage management tasks, such as creating and laying out databases and managing disk space. Since ASM allows disk management to be done using familiar CREATE/ALTER/DROP SQL statements, DBAs do not need to learn a new skill set or make crucial decisions on provisioning. Additionally, ASM operations can be completely managed with 10g Enterprise Manager. ASM is a management tool specifically built to simplify the job of the DBA. It provides a simple storage management interface across all server and storage platforms, and gives the DBA the flexibility to manage a dynamic database environment with increased efficiency. This feature is a key aspect of Grid Computing. For more information about ASM, refer to the OTN ASM home page: http://www.oracle.com/technology/products/database/asm/index.html
Oracle Recovery Manager (RMAN)
Recovery Manager is Oracle's utility to manage the backup, and more importantly the recovery, of the database. It eliminates operational complexity while providing superior performance and availability of the database. Recovery Manager debuted with Oracle8 to provide DBAs an integrated backup and recovery solution. Recovery Manager determines the most efficient method of executing the requested backup, restore, or recovery operation, and then executes these operations in concert with the Oracle database server. Recovery Manager and the server automatically identify modifications to the structure of the database and dynamically adjust the required operation to adapt to the changes.

Oracle Flash Recovery Area
The Flash Recovery Area is a unified storage location for all recovery-related files and activities in an Oracle database. By defining one init.ora parameter, all RMAN backups, archive logs, control file autobackups, and datafile copies are automatically written to a specified file system or ASM disk group. In addition, RMAN automatically manages the files in the Flash Recovery Area by deleting obsolete backups and archive logs that are no longer required for recovery. Allocating sufficient space to the Flash Recovery Area ensures faster, simpler, and automatic recovery of the Oracle database.

EMC Consistency Technology
Beginning with EMC Solutions Enabler version 5.1, you can use the Enginuity Consistency Assist (ECA) feature to perform consistent splits on BCV pairs across multiple, heterogeneous hosts. Consistent split is an implementation of instant split that avoids the inconsistencies and restart problems that can occur if you split a database-related BCV without first quiescing the database. The difference between a normal instant split and a consistent split is that during a consistent split, database writes are held at the storage level for a very short time while the foreground split occurs, maintaining dependent-write-order consistency on the target devices.
Consistency technology, whether applied to SRDF, TimeFinder BCVs, clones, or snaps, provides the capability to create images of one or more databases that are DBMS-restartable copies. It does this by momentarily holding all write I/O to the specified Symmetrix volumes while performing the split operation.
The resultant databases on the target volumes are in a data state equivalent to the state they would be in after a power failure. In an Oracle context, a more appropriate analogy is that they look the same as if all database instances had performed a shutdown abort simultaneously.
Since restarting an aborted instance does not in any way require the database to be in Oracle's hot backup mode, this provides customers with a way to create restartable database clones without requiring the user to place the database's tablespaces in hot backup mode.

EMC TimeFinder Overview
TimeFinder software works by creating multiple, independently addressable business continuance volumes (BCVs) for independent storage. The BCV is a Symmetrix device with special attributes created when the Symmetrix is configured. It can function either as an additional mirror to a Symmetrix logical volume or as an independent, host-addressable volume. Establishing BCV devices as mirror images of active production volumes allows you to run multiple simultaneous business continuance tasks in parallel. The principal device, known as the standard device, remains online for regular Symmetrix operation from the production server. Each BCV has a unique host address, making it accessible to a separate backup/recovery server. When you establish a BCV as a mirror of a standard device, that relationship is known as a BCV pair. Any time you split one of the BCVs from the standard device, the BCV holds the mirrored data from the standard device and is available for backup, testing, analysis, or snapping (making instant copies).
TimeFinder consistent split is used to create valid point-in-time restartable images of the Oracle database. These point-in-time restartable images are not valid Oracle backups; Oracle backups require additional procedures, such as putting the tablespace(s) into hot backup mode prior to splitting the BCVs.

TimeFinder Consistent Split
TimeFinder software provides a consistent-split implementation of instant split that allows you to split off a consistent, DBMS-restartable BCV copy of your database without having to shut down the database or put the database files into hot backup mode. It is able to do this by momentarily holding all write I/O to the database devices before splitting the BCVs. After the BCVs are mounted to a host, a subsequent Oracle startup performs instance crash recovery, ensuring the integrity of the database image.
A point-in-time database image taken with a consistent split is not a valid Oracle backup without additional procedures, such as putting the database in hot backup mode prior to the split. A consistent split is used for the purpose of creating a restartable image of the database at a specific point in time. For more information about TimeFinder consistent splits, refer to the white paper Using SYMCLI to Perform Consistent Splits with the TimeFinder Product Family (P/N 300-000-283).

EMC SRDF Overview
Symmetrix Remote Data Facility (SRDF) is a Symmetrix-based business continuance and disaster restart solution. In simplest terms, SRDF is a configuration of multiple Symmetrix units whose purpose is to maintain real-time copies of logical data volumes in more than one location. The Symmetrix units can be in the same room, in different buildings within the same campus, or hundreds of miles apart. SRDF provides data mobility and disaster restart spanning multiple host platforms, operating systems, and applications.
The local SRDF device, known as the source (R1) device, is configured in a pairing relationship with a remote target (R2) device, forming an SRDF pair. While the R2 device is mirrored with the R1 device, the R2 device is write-disabled to the host. After the R2 device becomes synchronized with its R1 device, you can split the R2 device from the R1 device at any time, making the R2 device fully accessible again to its host. After the split, the target (R2) device contains the R1 data and is available for performing business continuance tasks through its original device address or for restoring (copying) data back to the source (R1) device. Figure 1 shows a typical SRDF configuration.
Figure 1 Typical SRDF Configuration
SRDF Protection Modes
SRDF currently supports the following modes of operation for database restart or database recovery solutions with Oracle databases.

SRDF Synchronous Mode
In SRDF synchronous mode, every I/O from the production host is first written to the local Symmetrix cache and is then sent over the SRDF links to the remote Symmetrix unit. Once the remote Symmetrix unit reports that the data has reached its cache successfully, the I/O is acknowledged to the production host. Synchronous mode guarantees that the remote image is a complete duplicate of the source image.

SRDF Asynchronous Mode
Many SRDF customers use synchronous mode to protect data on a primary storage system. Synchronous mode creates a consistent copy of data on the secondary storage system (R2), but carries a price, both in performance (response time on the R1-side host) and in cost (high-capacity links). The main premise of SRDF asynchronous mode is to provide a consistent, point-in-time image on the R2 side that is not too far behind the R1 side, resulting in minimal data loss in the event of a disaster at the primary site.

Disaster Restart and Disaster Recovery
Classical disaster recovery techniques have evolved over several decades. In most cases, a disaster recovery activity implies the use of data tapes that have been stored offsite in a secure location. Full backup tapes of disk data are usually taken periodically, often during a low-transaction period such as a Saturday or Sunday night. During the rest of the week, incremental backups (tapes capturing disk data changed since the last full or previous incremental backup) are gathered and sent offsite. By being stored offsite, these tapes use geographic separation to guard against any local disaster, such as a fire or a flood. In a disaster situation, the user must gather all the tapes and apply them in sequence. Considering the times associated with running the full and incremental tape backups from disk, packaging the tapes for offsite transport, gathering them in the event of a disaster, and restoring them back to disk storage at the backup site, 48 hours may be considered a realistic expectation for the duration of a disaster recovery activity. These activities are also susceptible to human error, such as tapes applied in the wrong sequence, lost tapes, damaged tapes, or incompatible tapes. Once the remote site is running, a similarly lengthy outage (usually involving tapes) occurs to go home once repairs have been made at the original data center.
Disaster restart, on the other hand, does not use computer tapes. Rather, data is transported over communications links to remote data storage. The remote replica of the data serves as the restart point, and the user may restart the application using the disk images at the remote site.
Oracle's recommended disaster restart and recovery solution is Oracle Data Guard, which is a built-in feature of the Oracle Database. For this document, however, the disaster restart and recovery discussions focus on EMC SRDF.
A major issue is whether the data in the remote location is logically consistent. EMC SRDF synchronous mode and EMC SRDF asynchronous mode ensure the dependent-write-order consistency of the replication by synchronizing each and every dependent I/O (SRDF synchronous mode) or by synchronizing delta sets of data (SRDF asynchronous mode). In a true physical disaster at the source location, database restart operations can be completed at the remote site without the delays associated with finding and applying tapes in the correct sequence. Because the remote site has physical disk replicas, the go-home activity is likewise very fast and easy.
In addition to the disaster restart benefits, SRDF significantly enhances disaster recovery operations by using fast and reliable replication technology to offload Oracle backup operations to a remote site and later return the restored data to the local site.
When a disaster recovery solution is required, additional procedures are needed to create valid Oracle backups with any split-mirror or snapshot technology, such as ensuring that the database is in hot backup mode during the split or using Oracle Recovery Manager (RMAN). Refer to the Oracle documentation for further details regarding RMAN.

Rebalancing and Consistency Technology
ASM provides a seamless and non-intrusive mechanism to expand and shrink diskgroup storage. When disk storage is added or removed, ASM performs a redistribution (rebalancing) of the striped data (see note 1). This entire rebalance operation is done while the database is online, thus providing higher availability to the database. The main objective of the rebalance operation is to always provide an even distribution of file extents and space usage across all disks in the diskgroup. A rebalance can be triggered and monitored with SQL, as shown in the example below.
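For illustration, a minimal SQL sketch of triggering and monitoring a rebalance; the disk group name and device path are hypothetical, and V$ASM_OPERATION is queried from the ASM instance:

SQL> alter diskgroup DATA_AREA add disk '/dev/rdsk/c2t0d5s4' rebalance power 4;
SQL> select group_number, operation, state, est_minutes from v$asm_operation;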
It is considered a best practice to use ASM external redundancy for data protection when using EMC arrays. The Symmetrix provides protection against loss of media, as well as transparent failover in the event of a specific disk or component failure.
The split operation of storage-based replicas is sensitive to the rebalancing process, which may cause ASM disk group inconsistencies if the disk group device members are split at slightly different times. These inconsistencies are the result of ASM metadata changes occurring while a split operation is in process. Oracle provides tools and procedural steps to avoid inconsistencies when splitting storage-based replicas; however, these procedures can be simplified and streamlined with the use of EMC Consistency Technology.
Since EMC consistent split technology suspends database I/O to preserve write-order consistency, it also has the side effect of preventing any ASM metadata changes during the split. Performing a consistent split therefore prevents ASM metadata inconsistencies during the replication process, eliminating the extra steps, or the possibly unusable replica, that could otherwise result if an ASM rebalance were active during a non-consistent split. If a rebalance operation is triggered while a consistent split is being performed, any ASM metadata changes are held until the source and target are in a synchronous state.

Test Cases and Best Practices
The following test cases and results show that an Oracle Database 10g database and Automatic Storage Management can be deployed non-disruptively with the EMC TimeFinder/Mirror and SRDF family of products.

General Tests Configuration
Host names: The term "production host" refers to the primary host where the source devices are used, and "target host" or "backup host" refers to the host where the BCV, R2, or remote BCV (RBCV) devices are used.
Assumptions:
- The target host is configured with an operating system level, user and group IDs, Oracle binaries, and a directory structure similar to production, and is also configured for ASM.
- A copy of the production init.ora files for the ASM instance and the database instance has been copied to the backup (target) host and modified, if required, to fit the target host environment (specifically, ASM_DISKSTRING contains the appropriate BCV, R2, or remote BCV devices).
- A copy of the production orapwd file is available on the target host.
- The appropriate BCV, R2, or remote BCV devices (whichever are appropriate for the test) are accessible by the target host and have Oracle permissions.
- An RMAN recovery catalog is configured and operational.
- The backup server (target host) has Oracle Net connectivity to the recovery catalog database.
- A Flash Recovery Area is used.
- The target host has connectivity to a LAN-based tape backup system (applicable to the "backup to tape" scenarios below).

Note 1: A disk failure will also trigger rebalance activity if the ASM redundancy is not external.
Test conditions were:
- An OLTP load was running during the split.
- A transaction integrity test (defined by the OSCP test kit) was running during the split.
- An ASM rebalance was active during the split.
Test success was measured by:
- The ASM and database instances opened successfully on the target host without any errors reported.
- The transaction integrity test passed.
- The rebalance operation continued automatically and completed successfully on the target host.
- The database verification utility verified the integrity of all the data files and no errors were found.
Normally, when consistent split is used, TimeFinder and SRDF commands are issued from a control host connected to the Symmetrix. However, in the following test cases, for the sake of simplicity and unless specified otherwise, they were issued from the production host.

Hardware

Host                         Model    OS             Oracle Version
Local "production" host      SUN      Solaris 2.8    10g Release 2 (10.2.0.1)
"Backup" or target host      SUN      Solaris 2.8    10g Release 2 (10.2.0.1)
Symmetrix            Name/Serial Number    Type          Enginuity Version
Local Symmetrix      000187900754          DMX 800-M2    5671
Remote Symmetrix     000187900671          DMX 800-M2    5671
ASM Disk Groups / Mount Points
In all cases, the databases were built using three distinct ASM disk groups:

- The datafile disk group, which contained all data files
- The Flash Recovery Area (FRA) disk group, which contained files such as multiplexed control files, backup sets, archive logs, and flashback logs (Oracle recommends that archive logs be placed in the Flash Recovery Area)
- The online redo disk group, which contained the online redo logs for the database

A sketch of how such disk groups might be created is shown below.
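For illustration only, a minimal sketch of creating three such disk groups with the external redundancy recommended above for EMC arrays; DATA_AREA and RECOVERY_AREA match the parameter file below, while the redo disk group name and all device paths are hypothetical:

SQL> create diskgroup DATA_AREA external redundancy disk '/dev/rdsk/c2t0d1s4', '/dev/rdsk/c2t0d2s4';
SQL> create diskgroup RECOVERY_AREA external redundancy disk '/dev/rdsk/c2t0d3s4';
SQL> create diskgroup REDO_AREA external redundancy disk '/dev/rdsk/c2t0d4s4';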
Database Instance Parameter File
(These parameters are specific to the test environment only.)

db_name = hrd10g
control_files = +DATA_AREA/control_001
DB_RECOVERY_FILE_DEST = +RECOVERY_AREA
LOG_ARCHIVE_DEST_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST'
Case 1: Oracle Database 10g Hot Backup with TimeFinder/Mirror
While the Oracle database is in hot backup mode on the production host, a TimeFinder/Mirror consistent split is performed to create an image of the active database, which can then be used to perform a backup to tape by offloading this process to a target or backup server.
Create Symmetrix Device Groups and Associate BCV Devices
Two device groups were created because hot (online) backup requires the archive logs to be split at a different time than the data files: one for the database files (DBFILES_DG) and the other for the archive logs (RECOV_DG). In general, it is best practice for online backups to include only the Oracle data files. However, with ASM it is possible that control, redo, temp, and data files may all be mixed together in a small number of ASM disk groups. Note that the Symmetrix device group should always treat ASM disk groups as a unit and include all members. A sketch of the device group creation is shown below.
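For illustration, a minimal SYMCLI sketch of creating the two device groups and associating BCVs; the Symmetrix device numbers are hypothetical, and the standard devices added must be exactly the members of the corresponding ASM disk groups:

# symdg create DBFILES_DG
# symdg create RECOV_DG
# symld -g DBFILES_DG add dev 0040
# symld -g DBFILES_DG add dev 0041
# symld -g RECOV_DG add dev 0050
# symbcv -g DBFILES_DG associate dev 0140
# symbcv -g DBFILES_DG associate dev 0141
# symbcv -g RECOV_DG associate dev 0150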
Establish (Synchronize) the Device Groups
Use the full option only with the first establish; subsequent establishes are done incrementally. Wait for the synchronization to complete before performing a TimeFinder split. An example establish sequence is sketched below.
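A minimal sketch of the establish and synchronization check, assuming the device groups sketched above; the query is repeated until the BCV pairs report the Synchronized state:

# symmir -g DBFILES_DG establish -full
# symmir -g RECOV_DG establish -full
# symmir -g DBFILES_DG query
# symmir -g RECOV_DG query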
# export ORACLE_SID=hrd10g
# sqlplus / as sysdba
SQL> alter database begin backup;
Perform a Consistent Split Snapshot for Database Files
# symmir -g DBFILES_DG split -consistent
End Backup Mode
SQL> alter database end backup;
Switch Logs and Create Backup Controlfiles
Create two copies of the control file. One copy (controlstart) will be used to start the database in mount mode on the target server. The second copy (controlbak) is a valid control file copy that will be part of the backup set used by RMAN.
SQL> alter system archive log current;
RMAN> run {
  allocate channel ctlfile type disk;
  copy current controlfile to '+RECOVERY_AREA/controlfile/controlstart';
  copy current controlfile to '+RECOVERY_AREA/controlfile/controlbak';
  release channel ctlfile;
}
Resynchronize the RMAN Catalog This adds the most recent archive log to the recovery catalog.
RMAN> resync catalog;
Perform a Consistent Split Snapshot of the Recovery Area to Capture the Archive Logs
# symmir -g RECOV_DG split -consistent
BACKUP PROCEDURES
On the backup host, the snapshot can be used as a disk backup or as a source for a tape backup. Some backup applications require the database to be mounted to perform backups.
Once the BCVs are split, check that:
- The BCVs on the backup host have the correct Oracle permissions.
- The ASM init.ora parameter ASM_DISKSTRING does not exclude the path to the BCVs.
- The ASM init.ora parameter ASM_DISKGROUPS contains the names of the disk groups.

An example of such an ASM parameter file is sketched below.
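For illustration, a minimal ASM init.ora sketch for the backup host; the BCV device path pattern is hypothetical, and the disk group names follow the test configuration shown earlier:

instance_type = asm
asm_diskstring = '/dev/rdsk/c3t1d*s4'
asm_diskgroups = 'DATA_AREA', 'RECOVERY_AREA'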
Start ASM Instance
When the ASM instance is started, since the BCV physical names are included in the ASM_DISKSTRING parameter, ASM will identify them as the disk groups from the production database. Also, since the ASM_DISKGROUPS parameter contains the disk group names, they will be mounted automatically.
# export ORACLE_SID=+ASM # sqlplus / as sysdba SQL> startup
Mount Database Instance
A database image taken in hot backup mode is valid for backup only as long as it has not been opened with the resetlogs option or opened for read/write. For that reason it should be either mounted (a prerequisite for media recovery and many backup applications) or opened read-only (after at least enough recovery has been done to allow the database to open).
Before the database is mounted, change the CONTROL_FILES parameter in the backup database instance init.ora to point to the copied control file. For example:
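A minimal sketch of such an init.ora entry, assuming the control file copy path used in the RMAN step above:

control_files = '+RECOVERY_AREA/controlfile/controlstart'

The database instance can then be mounted: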
# export ORACLE_SID=hrd10g # sqlplus / as sysdba SQL> startup mount
Backing Up the Database Instance
Perform an RMAN backup on the backup host. The control file copy that was not used to mount the instance (controlbak) should be part of the backup set. The controlstart control file should not be backed up, because once the database is mounted its SCN is updated and it is inconsistent with the production control file.
RMAN> run {
  ALLOCATE CHANNEL t1 TYPE 'SBT_TAPE';
  BACKUP FORMAT 'ctl_%d_%s_%p_%t' CONTROLFILECOPY '+RECOVERY_AREA/controlfile/controlbak';
  BACKUP FULL FORMAT 'ctl_%d_%s_%p_%t' (database);
  BACKUP FORMAT 'al_%d_%s_%p_%t' (archivelog all);
  RELEASE CHANNEL t1;
}
Note: The format specifier %d is replaced with the database name, %t is replaced with a four-byte timestamp, %s with the backup set number, and %p with the backup piece number.
Case 2: Database Cloning with TimeFinder/Mirror
While Oracle is open for read/write on the production host, a TimeFinder/Mirror consistent split is performed on established TimeFinder/Mirror BCVs. This operation creates a restartable image of the active database that can serve as a repurposed database. The Symmetrix devices included in the ALLDB_DG device group match the ASM disk groups that contain the redo logs, data files, and control files. Archive logs are not used with cloning; however, it may be beneficial to include the recovery area as well (especially if flashback logs are active). If the recovery area is to be made available to the target host, then include the recovery area devices in the ALLDB_DG Symmetrix device group so they will be part of the consistent split operation together with the database files.
Create Symmetrix Device Group and Associate BCV Devices
Create a single device group for the Oracle data files, control files, and online redo log files, because consistent split requires all the database files to be split together.
Establish (Synchronize) the Device Group
Use the full option only with the first establish; subsequent establishes are done incrementally. Wait for the synchronization to complete before performing a TimeFinder split.
Perform a Consistent Split Snapshot for Database Files
# symmir -g ALLDB_DG split -consistent
ON TARGET HOST:
On the target host, once the BCVs are split, check that:
- The BCVs have Oracle permissions.
- The ASM init.ora parameter ASM_DISKSTRING does not exclude the path to the BCVs.
- The ASM init.ora parameter ASM_DISKGROUPS contains the names of the disk groups.
Start ASM Instance
When the ASM instance is started, since the BCVs are included in the ASM_DISKSTRING parameter, ASM will identify them as the disk groups from the production database. Also, since the ASM_DISKGROUPS parameter contains the disk group names, they will be mounted automatically.
# export ORACLE_SID=+ASM # sqlplus / as sysdba SQL> startup
Start Database Instance
Start up and recover the clone database. Once the clone database is recovered, it should be assigned a new DBID and restarted with resetlogs. The following steps illustrate this.

# export ORACLE_SID=hrd10g
connect to RMAN
RMAN> startup mount
RMAN> recover database;
RMAN> exit
nid target=sys/manager1

(Optionally, the db_name can be changed as well; see the Oracle Recovery Guide for details on the nid utility.)
SQL> startup mount
SQL> alter database open resetlogs;
At the end of this step the database is opened and available for user connections.
Case 3: Remote Database Cloning with TimeFinder/Mirror and SRDF
When using SRDF/S or SRDF/A for database protection, there is an advantage to using remote BCVs. The remote BCVs allow SRDF to remain synchronized and maintain database protection, while at the same time the remote BCVs can be split to provide a database clone for test, development, or reporting. It is also possible to use them for backup in combination with Oracle hot backup (as described in the next section). In addition, they can serve as a gold copy for enhanced protection in situations where SRDF is about to start a failback operation and the remote site contains a valid image of the database. It is a best practice to split the remote BCVs before synchronizing SRDF, to cover the possibility that a second failure occurs (also referred to as a "rolling disaster") before SRDF is fully synchronized and the database regains protection.
Note: The following solution addresses the use of remote BCVs. However, to restart the database directly from the R2 devices, use the same steps as described below; the only difference is that the R2 devices, instead of the remote BCVs, are used to start the ASM and database instances. If this was not a planned failover and the R1 site is not accessible, issue a symrdf failover command from the R2 site to make the R2 devices read-writable.
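For illustration, a minimal sketch of that failover, assuming the R1/R2 pairs are grouped in a device group named ALLDB_DG that is also defined on a control host at the R2 site:

# symrdf -g ALLDB_DG failover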
Create Symmetrix Device Group and Associate Remote BCV devices
Because consistent split requires all the database files to be split together, create a single device group for the Oracle data files, control files, and online redo log files. To the same device group, add the remote BCVs (remote is indicated by using the -rdf flag, which means that the BCVs are those attached to the R2 devices on the remote Symmetrix).
Establish (Synchronize) SRDF and Remote BCV Devices
Use the full option only with the first establish; subsequent establishes are done incrementally. Wait for the synchronization to complete before performing a TimeFinder split. The synchronization of SRDF and the remote BCVs can happen simultaneously. For SRDF/A, once SRDF is in a consistent state, use the enable SRDF command to guarantee device-level consistency. A sketch of the sequence is shown below.
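A minimal sketch of this sequence, assuming ALLDB_DG is an RDF device group with the remote BCVs already associated; mapping the document's "enable SRDF" step to symrdf enable is an assumption, and the query commands are used to watch the pair states:

# symrdf -g ALLDB_DG establish
# symmir -g ALLDB_DG -rdf establish -full
# symrdf -g ALLDB_DG query
# symmir -g ALLDB_DG -rdf query
# symrdf -g ALLDB_DG enable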
For SRDF/S protection:
Note: SRDF/S is the default SRDF mode. If it was changed use: # symrdf -g ALLDB_DG set mode sync
Perform a Consistent Split Snapshot for Database Files
# symmir -g ALLDB_DG -rdf split -consistent
ON TARGET HOST:
On the target host, once the BCVs are split, check that:
- The BCVs have Oracle permissions. Note that if the same host has both the remote BCVs and the R2 devices mapped to it, then the R2 devices should not have Oracle permissions. The reason is that ASM writes disk group information to the ASM members; because both the R2 and the remote BCV devices contain exactly the same Oracle information, ASM cannot differentiate between them if both have Oracle permissions.
- The ASM init.ora parameter ASM_DISKSTRING includes the path to the BCVs.
- The ASM init.ora parameter ASM_DISKGROUPS contains the names of the disk groups.
Start ASM Instance
When the ASM instance is started, since the BCVs are included in the ASM_DISKSTRING parameter, ASM will identify them as the disk groups from the production database. Also, since the ASM_DISKGROUPS parameter contains the disk group names, they will be mounted automatically.
# export ORACLE_SID=+ASM # sqlplus / as sysdba SQL> startup
Start Database Instance
Start up and recover the clone database. Once the clone database is recovered, it should be assigned a new DBID and restarted with resetlogs. The following steps illustrate this.
# export ORACLE_SID=hrd10g
connect to RMAN
RMAN> startup mount
RMAN> recover database;
RMAN> exit
nid target=sys/manager1

(Optionally, the db_name can be changed as well; see the Oracle Recovery Guide for details on the nid utility.)
SQL> startup mount
SQL> alter database open resetlogs;

At the end of this step the database is opened and available for user connections.
Case 4: Oracle Database 10g Hot Backup with TimeFinder/Mirror and SRDF
When using SRDF/S or SRDF/A for database protection, there is an advantage to using remote BCVs. The remote BCVs allow SRDF to remain synchronized and maintain database protection, while at the same time the remote BCVs can be split to provide a database clone for test, development, or reporting. It is possible to use them for backup in combination with Oracle hot backup (as described in this section). In addition, they can serve as a gold copy for enhanced protection in situations where SRDF is about to start a failback operation and the remote site contains a valid image of the database. It is best practice to split the remote BCVs before synchronizing SRDF, to cover the possibility that a second failure occurs (also referred to as a "rolling disaster") before SRDF is fully synchronized and the database regains protection.
Create Symmetrix Device Groups and Associate Remote BCV devices
Two device groups were created for the remote TimeFinder operations because hot (online) backup requires the archive logs to be split at a different time than the data files: one device group was created for the database files (DBFILES_DG) and the other for the archive logs (RECOV_DG). In general, it is best practice for online backups to include only the Oracle data files. However, with ASM it is possible that control, redo, temp, and data files may all be mixed together in a small number of ASM disk groups. Note that the Symmetrix device group should always treat ASM disk groups as a unit and include all members.
Note that the remote BCVs are established with the R2 devices (remote is indicated by using the -rdf flag, which means that the BCVs are those attached to the R2 devices on the remote Symmetrix).
Establish (Synchronize) SRDF and Remote BCV Devices
Use the full option only with the first establish; subsequent establishes are done incrementally. Wait for the synchronization to complete before performing a TimeFinder split. The synchronization of SRDF and the remote BCVs can happen simultaneously. For SRDF/A, once SRDF is in a consistent state, use the enable SRDF command to guarantee device-level consistency.
Note: In general, SRDF is required to include all data, control, and redo log files together to create a write-order-consistent and restartable image of the database. In addition, control operations in SRDF/A mode always have to include ALL the SRDF/A devices in an RDF group together. If (as in this example) the TimeFinder operations require two device groups, DBFILES_DG containing the data files and RECOV_DG containing the archive logs, while SRDF requires a different (larger) set of devices to operate on, a device file is used for the SRDF control operations. The device file contains the list of R1 devices, including the data, redo, and control file devices. It may also contain the archive log devices if SRDF is used to replicate the archive logs; otherwise it is possible to ship archive logs over the network to the remote host. When using a device file in SRDF control commands, the Symmetrix ID and SRDF group are specified on the command line. An illustrative device file layout is sketched below.
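For illustration only, a hypothetical layout for such a device file (./dev_srdf, as used in the checkpoint command below); each line is assumed to pair a local (R1) Symmetrix device with its remote (R2) partner, all device numbers are invented, and the exact file format should be confirmed against the Solutions Enabler documentation:

0040 0140
0041 0141
0048 0148
0050 0150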
For SRDF/S protection:
Note: SRDF/S is the default SRDF mode. If it was changed, use:
# symrdf -g DBFILES_DG set mode sync
# export ORACLE_SID=hrd10g # sqlplus / as sysdba SQL> alter database begin backup;
Perform a Consistent Split Snapshot for Database Files
Note: When using SRDF/A, the R2 is always two cycles behind the R1. In order for the begin-hot-backup marker in the data files to be included in the remote BCV image (for backup and recovery purposes), we use the SRDF checkpoint command to make sure the information on the R1 has reached the R2 before issuing the remote BCV split.
# symrdf -sid 754 -rdfg 3 -file ./dev_srdf checkpoint
# symmir -g DBFILES_DG -rdf split -consistent
End Backup Mode
SQL> alter database end backup;
Switch Logs and Create Controlfiles
Create two copies of the control file. One copy (controlstart) will be used to start the database in mount mode on the backup server. The second copy (controlbak) will be used as a component of the backup set used by RMAN.
SQL> alter system archive log current;
RMAN> run {
  allocate channel ctlfile type disk;
  copy current controlfile to '+RECOVERY_AREA/controlfile/controlstart';
  copy current controlfile to '+RECOVERY_AREA/controlfile/controlbak';
}
Resynchronize the RMAN Catalog
This adds the most recent archive log to the recovery catalog.
RMAN> resync catalog;
Perform a Consistent Split Snapshot of the Recovery Area to Capture the Archive Logs
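The split command itself is not shown in the source at this step; by analogy with the database files split above, the remote split of the recovery area device group would presumably be:

# symmir -g RECOV_DG -rdf split -consistent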
On the backup host, once the BCVs are split, check that:
- The BCVs have Oracle permissions. Note that if the same host has both the remote BCVs and the R2 devices mapped to it, then the R2 devices should not have Oracle permissions. The reason is that ASM writes disk group information to the ASM members; because both the R2 and the remote BCV devices contain exactly the same Oracle information, ASM may confuse them if they both have Oracle permissions.
- The ASM init.ora parameter ASM_DISKSTRING does not exclude the path to the BCVs.
- The ASM init.ora parameter ASM_DISKGROUPS contains the names of the disk groups.
Start ASM Instance
When the ASM instance is started, since the BCVs are included in the ASM_DISKSTRING parameter, ASM will identify them as the disk groups from the production database. Also, since the ASM_DISKGROUPS parameter contains the disk group names, they will be mounted automatically.
# export ORACLE_SID=+ASM # sqlplus / as sysdba SQL> startup
Mount Database Instance
Before the database is mounted, change the CONTROL_FILES parameter in the target database instance init.ora to point to the copied control file (as in Case 1). For example:
# export ORACLE_SID=hrd10g # sqlplus / as sysdba SQL> startup mount
Back Up the Target Database Instance
Perform an RMAN backup on the backup host. The previously copied control file must be part of the backup set, because once the database is mounted the SCN is updated and no longer reflects the initial state of the control file.
RMAN> run {
  ALLOCATE CHANNEL t1 TYPE 'SBT_TAPE';
  BACKUP FORMAT 'ctl_%d_%s_%p_%t' CONTROLFILECOPY '+RECOVERY_AREA/controlfile/controlbak';
  BACKUP FULL FORMAT 'ctl_%d_%s_%p_%t' (database);
  BACKUP FORMAT 'al_%d_%s_%p_%t' (archivelog all);
  RELEASE CHANNEL t1;
}
Note: The format specifier %d is replaced with the database name, %t is replaced with a four-byte timestamp, %s with the backup set number, and %p with the backup piece number.
Case 5: Restoring a Database on the Production Host
If recovery time is critical, the recovery should be done on the production host. This method provides minimal downtime while protecting the gold copy (BCV) of the database. This case restores the database from Case 1. The technique is very fast since only the changed tracks on the LUNs that make up the data files are restored, and only the archive logs generated since the last backup need to be applied for recovery. It is recommended that the restore be done using the protect option; this ensures that any writes to the standard devices do not taint the BCVs. If corruption is reintroduced, or a mistake is made during the recovery procedure, the BCVs can once again be used to perform a quick restore of the production host's database.
Shutdown the database instance on the production host
Note: In this case we restore only the data file image to the production host. We do not want to overwrite the online redo logs (if they are still available on production), since they contain the last committed transactions. We also do not want to overwrite the Flash Recovery Area, which contains the recent archive logs. If the Flash Recovery Area was damaged as well, then dismount and restore its ASM disk groups too. (If the online redo logs are damaged on production, recreate their disk group; the logs will be recreated when the database is opened with resetlogs.)
Perform a TimeFinder Restore on the Datafiles Diskgroup
The recovery area must not be restored to the production server.
# symmir -g DBFILES_DG restore
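The preceding restore can also be issued with the recommended protect option; a minimal sketch follows, treating the -protect option as an assumption to verify against the Solutions Enabler documentation, with the query used to wait for the restore to complete before mounting the disk groups:

# symmir -g DBFILES_DG restore -protect
# symmir -g DBFILES_DG query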
Mount the ASM Diskgroups
# export ORACLE_SID=+ASM # sqlplus / as sysdba
SQL> alter diskgroup DATA_AREA mount;
Startup the Local Database in MOUNT
# export ORACLE_SID=hrd10g # sqlplus / as sysdba SQL> startup mount
Perform complete or point-in-time recovery with RMAN. If you are performing incomplete recovery, then set the "until time" or "until SCN" markers.
RMAN> run {
  SET UNTIL TIME '06-dec-05 13:00';
  RECOVER DATABASE;
}
Opening the Database
After you are sure that all files are correctly restored and recovered, you can open the database using the resetlogs option. Opening with resetlogs creates a new incarnation of the database, which must also be registered in the RMAN catalog.
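A minimal sketch of this final step; whether the new incarnation is picked up automatically at the next catalog resynchronization or needs an explicit registration step depends on the RMAN version, so the resync shown here is an assumption rather than the document's prescribed procedure:

SQL> alter database open resetlogs;
RMAN> resync catalog;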