
DRAFT

COOKBOOK
Installation Guide
Oracle 11g RAC Release 1
with Oracle Automatic Storage Management (ASM)
on IBM System p and i Running AIX 5L
with SAN Storage

Version 1.0
April 2008

ORACLE / IBM Joint Solutions Center
IBM - Thierry Plumeau, Oracle/IBM Joint Solutions Center
Oracle - Frederic Michiara, Oracle/IBM Joint Solutions Center

11gRAC/ASM/AIX

[email protected]

1 of 393

This document is based on our experiences. It is not official Oracle or IBM documentation. It will be updated regularly, and we are open to any additions or feedback from your own experiences, on the same or different storage solutions!

Document history:

Version 1.0
  January 2008 - Creation
  April 2008   - Review
Who: Frederic Michiara, Thierry Plumeau, Dider Wojciechowski, Paul Bramy
Validated by: Alain Roy, Paul Bramy

Contributors:

  Paul Bramy: Oracle Corp. (ORACLE/IBM Joint Solutions Center)
  Alain Roy: IBM France (ORACLE/IBM Joint Solutions Center)
  Fabienne Lepetit: Oracle Corp. (ORACLE/IBM Joint Solutions Center)

Special thanks to the participants of the 11gRAC workshop in Switzerland (Geneva), who used our last draft cookbook release, helped us find some typos, and validated the content of the cookbook.

Contact:

  ORACLE / IBM Joint Solutions Center
  [email protected]

1 The aim of this document ................................................ 6
2 About Oracle Clusterware ................................................ 8
3 About Oracle Automatic Storage Management .............................. 15
4 About Real Application Clusters ........................................ 17
5 About 11g RAC / ASM on AIX ............................................. 19
6 What's new with 11g RAC implementation on AIX .......................... 20
7 Infrastructure requirements ............................................ 20
  7.1 General Requirements ............................................... 22
    7.1.1 About Servers and processors ................................... 22
    7.1.2 About RAC on IBM System p ...................................... 23
    7.1.3 About Network .................................................. 24
    7.1.4 About SAN Storage .............................................. 26
    7.1.5 Proposed infrastructure with 2 servers ......................... 28
    7.1.6 What do we protect? ............................................ 29
    7.1.7 About IBM Advanced Power Virtualization and RAC ................ 30
  7.2 Cookbook infrastructure ............................................ 35
    7.2.1 IBM System p servers ........................................... 36
    7.2.2 Operating System ............................................... 44
    7.2.3 Multi-pathing and ASM .......................................... 45
    7.2.4 IBM storage and multi-pathing .................................. 46
    7.2.5 EMC storage and multi-pathing .................................. 53
    7.2.6 HITACHI storage and multi-pathing .............................. 55
    7.2.7 Others, StorageTek, HP EVA storage and multi-pathing ........... 57

8 Specific considerations for RAC/ASM setup with HACMP installed ........ 59
9 Installation steps ..................................................... 62
10 Preparing the system .................................................. 64
  10.1 Network configuration ............................................. 65
    10.1.1 Define Networks layout, Public, Virtual and Private Hostnames ... 65
    10.1.2 Identify Network Interface cards .............................. 68
    10.1.3 Update hosts file ............................................. 71
    10.1.4 Defining Default gateway on public network interface .......... 72
    10.1.5 Configure Network Tuning Parameters ........................... 74
  10.2 AIX Operating system level, required APARs and filesets ........... 76
    10.2.1 Fileset Requirements for 11g RAC R1 / ASM (no HACMP) .......... 77
    10.2.2 APAR Requirements for 11g RAC R1 / ASM (no HACMP) ............. 78
  10.3 System Requirements (Swap, temp, memory, internal disks) .......... 79
  10.4 Users and Groups .................................................. 81
  10.5 Kernel and Shell Limits ........................................... 85
    10.5.1 Configure Shell Limits ........................................ 85
    10.5.2 Set crs, asm and rdbms user capabilities ...................... 87
    10.5.3 Set NCARGS parameter .......................................... 87
    10.5.4 Configure System Configuration Parameters ..................... 88
    10.5.5 lru_file_repage setting ....................................... 89
    10.5.6 Asynchronous I/O setting ...................................... 91
  10.6 User equivalences ................................................. 92
    10.6.1 RSH implementation ............................................ 93
    10.6.2 SSH implementation ............................................ 94
  10.7 Time Server Synchronization ....................................... 98
  10.8 CRS environment setup ............................................. 99
  10.9 Oracle Software Requirements ..................................... 100

11 Preparing Storage .................................................... 101
  11.1 Required local disks (Oracle Clusterware, ASM and RAC software) ... 103
  11.2 Oracle Clusterware Disks (OCR and Voting Disks) .................. 105
    11.2.1 Required LUNs ................................................ 106
    11.2.2 How to identify if a LUN is used or not? ..................... 107
    11.2.3 Register LUNs at AIX level ................................... 110
    11.2.4 Identify LUNs and corresponding hdisk on each node ........... 111
    11.2.5 Removing reserve lock policy on hdisks from each node ........ 114
    11.2.6 Identify Major and Minor number of hdisk on each node ........ 117
    11.2.7 Create Unique Virtual Device to access same LUN from each node ... 119
    11.2.8 Set Ownership / Permissions on Virtual Devices ............... 120
    11.2.9 Formatting the virtual devices (zeroing) ..................... 122
  11.3 ASM disks ........................................................ 123
    11.3.1 Required LUNs ................................................ 125
    11.3.2 How to identify if a LUN is used or not? ..................... 125
    11.3.3 Register LUNs at AIX level ................................... 127
    11.3.4 Identify LUNs and corresponding hdisk on each node ........... 129
    11.3.5 Removing reserve lock policy on hdisks from each node ........ 132
    11.3.6 Identify Major and Minor number of hdisk on each node ........ 134
    11.3.7 Create Unique Virtual Device to access same LUN from each node ... 136
    11.3.8 Set Ownership / Permissions on Virtual Devices ............... 138
    11.3.9 Formatting the virtual devices (zeroing) ..................... 141
    11.3.10 Removing assigned PVID on hdisk ............................. 142
  11.4 Checking Shared Devices .......................................... 143
  11.5 Recommendations, hints and tips .................................. 145
    11.5.1 OCR / Voting disks ........................................... 145
    11.5.2 ASM disks .................................................... 148

12 Oracle Clusterware (CRS) Installation ................................ 151
  12.1 Cluster Verification Utility ..................................... 152
    12.1.1 Understanding and Using Cluster Verification Utility ......... 152
    12.1.2 Using CVU to Determine if Installation Prerequisites are Complete ... 152
  12.2 Installation ..................................................... 163
  12.3 Post-installation operations ....................................... 4
    12.3.1 Update the Clusterware unix user .profile ...................... 4
    12.3.2 Verify parameter CSS misscount ................................. 5
    12.3.3 Cluster Ready Services Health Check ............................ 6
    12.3.4 Adding enhanced crsstat script ................................. 9
    12.3.5 Interconnect Network configuration checkup .................... 11
    12.3.6 Oracle Cluster Registry content Check and Backup .............. 12
  12.4 Some useful commands .............................................. 19
  12.5 Accessing CRS logs ................................................ 21
  12.6 Clusterware Basic Testing ......................................... 22
  12.7 What Has Been Done? ............................................... 23
  12.8 VIP and CRS Troubleshooting ....................................... 23
  12.9 How to clean a failed CRS installation ............................ 24
13 Install Automatic Storage Management software ......................... 25
  13.1 Installation ...................................................... 26
  13.2 Update the ASM unix user .profile ................................. 36
  13.3 Checking Oracle Clusterware ....................................... 37
  13.4 Configure default node listeners .................................. 38
  13.5 Create ASM Instances .............................................. 48
    13.5.1 Through DBCA .................................................. 49
    13.5.2 Manual Steps .................................................. 69
  13.6 Configure ASM local and remote listeners .......................... 74
  13.7 Check Oracle Clusterware, and manage ASM resources ................ 80
  13.8 About ASM instance tuning ......................................... 83
  13.9 About ASMCMD (Command Utility) .................................... 85
  13.10 Useful Metalink notes ............................................ 91
  13.11 What has been done? .............................................. 91

14 Installing Oracle 11g R1 software ..................................... 92
  14.1 11g R1 RDBMS Installation ......................................... 94
  14.2 Symbolic link creation for listener.ora, tnsnames.ora and sqlnet.ora ... 102
  14.3 Update the RDBMS unix user .profile .............................. 103
15 Database Creation on ASM ............................................. 104
  15.1 Through Oracle DBCA .............................................. 105
  15.2 Manual Database Creation ......................................... 117
  15.3 Post Operations .................................................. 125
    15.3.1 Update the RDBMS unix user .profile .......................... 125
    15.3.2 Administer and Check ......................................... 125
  15.4 Setting Database Local and remote listeners ...................... 131
  15.5 Creating Oracle Services ......................................... 133
    15.5.1 Creation through srvctl command .............................. 133
  15.6 Transparent Application Failover ................................. 159

  15.7 About DBCONSOLE ................................................. 167
    15.7.1 Checking DB Console ......................................... 167
    15.7.2 Moving from dbconsole to Grid Control ....................... 171
16 ASM advanced tools ................................................... 172
  16.1 FTP and HTTP access ............................................. 172
17 Some useful commands ................................................. 173
  17.1 Oracle Cluster Registry content Check and Backup ................ 174
18 Appendix A: Oracle / IBM technical documents ......................... 175
19 Appendix B: Oracle technical notes ................................... 176
  19.1 CRS and 10g Real Application Clusters ........................... 176
  19.2 About RAC ....................................................... 186
  19.3 About CRS ....................................................... 186
  19.4 About VIP ....................................................... 187
  19.5 About manual database creation .................................. 187
  19.6 About Grid Control .............................................. 187
  19.7 About TAF ....................................................... 187
  19.8 About Adding/Removing Nodes ..................................... 187
  19.9 About ASM ....................................................... 187
  19.10 Metalink notes to use in case of problems with CRS ............. 188
20 Appendix C: Useful commands .......................................... 189
21 Appendix D: Empty tables to use for installation ..................... 199
  21.1 Network document to ease your installation ...................... 199
  21.2 Steps Checkout .................................................. 201
  21.3 Disks document to ease disk preparation in your implementation ... 202
22 Documents, books to look at .......................................... 203


1 THE AIM OF THIS DOCUMENT

This document is written to help you install Oracle 11g Real Application Clusters (11.1) Release 1 with Oracle Automatic Storage Management on IBM System p and i servers running AIX.

Within the Oracle/IBM Joint Solutions Center, we receive many requests about RAC and ASM; this is why we decided to deliver the first 11gRAC / AIX cookbook with Oracle ASM (Automatic Storage Management). AIX 6 is not covered, as 11gRAC is not yet supported on AIX 6, but certification should be available soon. Check Metalink for updates; the cookbook will be updated for AIX 6 when official certification is released.

We will describe, step by step, an architecture with Oracle CRS (Cluster Ready Services) on raw disks and the database on ASM (Automatic Storage Management).

For architectures using Oracle 11gRAC with IBM GPFS as the cluster file system, a cookbook named "Installation Guide Oracle 11g RAC Release 1 and IBM GPFS on IBM eServer System p and i running AIX with SAN Storage" will be released soon.
Metalink (http://metalink.oracle.com/metalink/plsql/ml2_gui.startup)

Title: Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide, 11g Release 1 (11.1) for AIX
Origin: http://www.oracle.com/pls/db111/portal.portal_db?selected=11&frame=#aix_installation_guides
Reference: B28252

Title: 2 Day + Real Application Clusters Guide - 11g Release 1 (11.1)
Origin: http://www.oracle.com/pls/db111/homepage?remark=tahiti
Reference: B28252

Title: Oracle Database Release Notes 11g Release 1 (11.1) for AIX 5L Based Systems (64-bit)
Origin: http://www.oracle.com/pls/db111/portal.portal_db?selected=11&frame=#aix_installation_guides
Reference: B32075

Title: Oracle Universal Installer Concepts Guide Release 2.2
Origin: http://download-east.oracle.com/docs/cd/B10501_01/em.920/a96697/preface.htm
Reference: A96697-01

Title: Oracle Database on AIX, HP-UX, Linux, Mac OS X, Solaris, Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.1)
Origin: Oracle Metalink, http://metalink.oracle.com/
Reference: 169706.1

Title: Oracle ASM and Multi-Pathing Technologies
Origin: Oracle Metalink, http://metalink.oracle.com/
Reference: 370915.1

Title: ASM FAQ (Frequently Asked Questions)
Origin: Oracle Metalink, http://metalink.oracle.com/
Reference: 294869.1

The information contained in this paper resulted from:

- Oracle and IBM documentation
- Workshop experience gained in the Oracle/IBM Joint Solutions Center
- Benchmarks and POC implementations for customers performed by PSSC Montpellier

This documentation is a joint effort of Oracle and IBM specialists.

Please also refer to the Oracle and IBM online documentation for more information:

- http://docs.oracle.com
  o Oracle Database 11g Release 1 (11.1) Documentation:
    http://www.oracle.com/technology/documentation/database.html
- http://tahiti.oracle.com
- Oracle RAC home page: http://www.oracle.com/database/rac_home.html
- For more information on IBM System p: http://www-03.ibm.com/systems/p/

Your comments are important to us, and we thank those who sent us feedback about the previous release and about how this document helped them in their implementations. We want our technical papers to be as helpful as possible. Please send your comments about this document to the Oracle/IBM Joint Solutions Center.

Email: [email protected]
Phone: +33 (0)4 67 34 67 49

2 ABOUT ORACLE CLUSTERWARE


Extract from : Oracle Database 2 Day + Real Application Clusters Guide 11g Release 1 (11.1)
Part Number B28252-02
http://www.oracle.com/pls/db111/homepage?remark=tahiti
Oracle Real Application Clusters (Oracle RAC) uses Oracle Clusterware as the infrastructure that binds together
multiple nodes that then operate as a single server. Oracle Clusterware is a portable cluster management solution that
is integrated with Oracle Database. In an Oracle RAC environment, Oracle Clusterware monitors all Oracle
components (such as instances and listeners). If a failure occurs, Oracle Clusterware automatically attempts to restart
the failed component and also redirects operations to a surviving component.
Oracle Clusterware includes a high availability framework for managing any application that runs on your cluster.
Oracle Clusterware manages applications to ensure they start when the system starts. Oracle Clusterware also
monitors the applications to make sure that they are always available. For example, if an application process fails, then
Oracle Clusterware attempts to restart the process based on scripts that you customize. If a node in the cluster fails,
then you can program application processes that typically run on the failed node to restart on another node in the
cluster.
Oracle Clusterware includes two important components: the voting disk and the OCR. The voting disk is a file that
manages information about node membership, and the OCR is a file that manages cluster and Oracle RAC database
configuration information.
The Oracle Clusterware installation process creates the voting disk and the OCR on shared storage. If you select the
option for normal redundant copies during the installation process, then Oracle Clusterware automatically maintains
redundant copies of these files to prevent the files from becoming single points of failure. The normal redundancy
feature also eliminates the need for third-party storage redundancy solutions. When you use normal redundancy,
Oracle Clusterware automatically maintains two copies of the OCR file and three copies of the voting disk file.
You will learn more about the operation of the Oracle clusterware, how to build the cluster, and the structure of an
Oracle RAC database in other sections of this guide.
See Also:

Oracle Clusterware Administration and Deployment Guide

And:
DBA Essentials, at http://www.oracle.com/pls/db111/homepage?remark=tahiti

Manage all aspects of your Oracle databases with the Enterprise Manager GUI:
- 2 Day DBA
- 2 Day + Performance Tuning Guide
- 2 Day + Real Application Clusters Guide

Oracle Clusterware, Group Membership and Heartbeats

The cluster needs to know who is a member at all times. Oracle Clusterware has two heartbeats:

- Network heartbeat: if a node does not send a heartbeat for MissCount (a time in seconds), the node is evicted from the cluster.
- Disk heartbeat: if the disk heartbeat is not updated within the I/O timeout, the node is evicted from the cluster.
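Once the clusterware is installed, the thresholds behind these two heartbeats can be inspected with crsctl. A minimal sketch (to be run as root on one cluster node; the actual values are platform- and release-dependent, and the CSS misscount section later in this cookbook covers them in detail):

```shell
# Query the CSS network heartbeat threshold (seconds)
crsctl get css misscount

# Query the CSS disk heartbeat I/O timeout (seconds)
crsctl get css disktimeout
```
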

Oracle Clusterware, OCR and Voting disks

- The Oracle Cluster Registry (OCR) contains all information on the cluster. Since 10gRAC R2, the OCR can be mirrored.
- The voting disk is the cluster heartbeat. Since 10gRAC R2, voting can have multiple copies, in an odd number, for majority.
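Both files can be checked from any node with the standard clusterware tools, a sketch of which follows (to be run once the clusterware is installed; root privileges are needed for a full OCR check):

```shell
# Check OCR integrity and show its (possibly mirrored) locations
ocrcheck

# List the voting disks; an odd number of copies is recommended
crsctl query css votedisk
```
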

Oracle Clusterware and its virtual IPs

-1- Each node participating in the Oracle cluster has:
  - 1 static IP address (10.3.25.81)
  - 1 virtual IP address (10.3.25.181); each virtual address has its home node

-2- If one node fails:
  - The static IP (10.3.25.81) of the failed node will no longer be reachable.
  - The virtual IP (10.3.25.181) of the failed node will switch to one of the remaining nodes, and will remain reachable.

-3- When the failed node returns to normal operation:
  - The static IP (10.3.25.81) will be back and reachable.
  - The virtual IP (10.3.25.181) of the failed node, hosted on one of the remaining nodes, will switch back to its home node.
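During such a failover test, the VIP state can be followed with srvctl and ifconfig. A sketch (the node names node1/node2 and the interface name en0 are placeholders for your own configuration):

```shell
# Show where the node applications (VIP, GSD, ONS, listener) currently run
srvctl status nodeapps -n node1
srvctl status nodeapps -n node2

# On AIX, a hosted VIP appears as an IP alias on the public interface
ifconfig en0
```
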


Protecting a Single Instance Database and a Third-party Application Tier with Oracle Clusterware

-1- Using Oracle Clusterware, we can have cluster databases, but also non-cluster databases. In our example, we have one single-instance database on one node. Each node has its own node VIP, and we can create an application VIP (APPS VIP) on which different resources are dependent:
  - ASM single instance
  - HR single database instance
  - Listener
  - HR application tier
  - /apps and /oracle local file systems

-2- If the first node, where the APPS VIP is hosted, fails, THEN:
  - The node VIP (10.3.25.81) will switch to another node.
  - The APPS VIP (10.3.25.200) will switch to a preferred node if one is configured, in our case the third node. If the third node is not available, the APPS VIP will switch to one available node.

-3- Once the APPS VIP has switched to the preferred third node, all resources dependent on the APPS VIP will be restarted on the third node.

-4- When node 1 comes back to normal operation, the node VIP (10.3.25.81) will switch back automatically to its home node, meaning the first node. BUT the APPS VIP will not switch back!

-5- To switch the APPS VIP and its dependent resources back to the first node, the administrator has to relocate the APPS VIP to the first node through clusterware commands.
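Such a relocation can be sketched as follows (the resource name apps.vip and node name node1 are placeholders for whatever names were used when the application VIP was created):

```shell
# As root: move the application VIP resource back to the first node
crs_relocate apps.vip -c node1

# Check where the resource now runs
crs_stat -t apps.vip
```
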
Sending commands on cluster events through Oracle Clusterware

-1- With Oracle Clusterware, all cluster event information (node, database, instance, etc.; up, down, etc.) can be obtained and used to trigger an action. In our example, we have:
  - 3 nodes, with 3 LPARs each (RAC, APPS, TEST/DEV)
  - An IBM HMC to administer the IBM POWER System p and its LPARs (cores, memory)

-2- In normal operation, all LPARs use their defined cores and memory, but could also use micro-partitioning between LPARs.

-3- In our example, if the first node fails, THEN:
  - The node VIP (10.3.25.81) of the first node will switch to one available node.
  - Oracle Clusterware will raise a FAN (Fast Application Notification) event: node1 down.
  - A unix shell script (a callout script) will analyze the information and send requests to the IBM HMC:
    o a request to remove resources from the TEST/DEV LPARs on the second and third nodes;
    o a request to assign new resources to the APPS and RAC LPARs.
  - THEN the HMC will execute the requests, and will resize the LPARs as requested on each available node.
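The parsing logic of such a callout script can be sketched as below. Oracle Clusterware executes every script placed in $CRS_HOME/racg/usrco, passing the FAN event as arguments (for example: NODE VERSION=1.0 host=node1 status=nodedown ...). The HMC command shown in the comment is hypothetical; the exact chhwres invocation depends on your HMC and LPAR names.

```shell
# Sketch of a FAN callout script; in practice this file would live in
# $CRS_HOME/racg/usrco and be executable. Names below are examples only.
handle_fan_event() {
  event_type=$1
  shift
  host=""
  status=""
  # FAN passes the event details as key=value arguments
  for arg in "$@"; do
    case $arg in
      host=*)   host=${arg#host=} ;;
      status=*) status=${arg#status=} ;;
    esac
  done
  if [ "$event_type" = "NODE" ] && [ "$status" = "nodedown" ]; then
    # A real script would contact the HMC here, e.g. (hypothetical):
    #   ssh hscroot@hmc chhwres -r mem -m p570 -o a -p RAC_LPAR -q 4096
    echo "nodedown on $host: asking HMC to resize LPARs"
  fi
}

# Example: simulate the event Oracle Clusterware would send when node1 fails
handle_fan_event NODE VERSION=1.0 host=node1 status=nodedown
```
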

For more information, see http://otn.oracle.com and
http://www.oracle.com/technology/products/database/clusterware/index.html

Oracle Clusterware
Oracle Clusterware is portable cluster software that allows clustering of single servers so that they cooperate as a single system. Oracle Clusterware also provides the required infrastructure for Oracle Real Application Clusters (RAC). In addition, Oracle Clusterware enables the protection of any Oracle application, or any other kind of application, within a cluster. In any case, Oracle Clusterware is the intelligence in those systems that ensures the required cooperation between the cluster nodes.

Oracle Clusterware
- Oracle Clusterware Whitepaper (PDF)

Oracle Clusterware Technical Articles
- Using standard NFS to support a third voting disk on an Extended Distance cluster configuration on Linux, AIX, HP-UX, or Solaris (PDF), February 2008
- Oracle Homes in an Oracle Real Application Clusters Environment (PDF), February 2008

Using Oracle Clusterware to protect any kind of application
- Using Oracle Clusterware to Protect 3rd Party Applications (PDF), February 2008
- Using Oracle Clusterware to Protect Oracle Application Server (PDF), November 2005
- Using Oracle Clusterware to Protect an Oracle Database 10g with Oracle Enterprise Manager Grid Control Integration (PDF), February 2008
- Using Oracle Clusterware to Protect A Single Instance Oracle Database 11g (PDF), February 2008

Oracle applications protected by Oracle Clusterware
- Siebel CRM Applications protected by Oracle Clusterware, January 2008

Pre-configured agents for Oracle Clusterware
- Providing High Availability for SAP Resources (PDF), March 2007


3 ABOUT ORACLE AUTOMATIC STORAGE MANAGEMENT


Extract from : Oracle Database 2 Day + Real Application Clusters Guide 11g Release 1 (11.1)
Part Number B28252-02
http://www.oracle.com/pls/db111/homepage?remark=tahiti
With Oracle RAC, each instance must have access to the datafiles and recovery files for the Oracle RAC database.
Using Automatic Storage Management (ASM) is an easy way to satisfy this requirement.
ASM is an integrated, high-performance database file system and disk manager. ASM is based on the principle that the
database should manage storage instead of requiring an administrator to do it. ASM eliminates the need for you to
directly manage potentially thousands of Oracle database files.
ASM groups the disks in your storage system into one or more disk groups. You manage a small set of disk groups and
ASM automates the placement of the database files within those disk groups.
ASM provides the following benefits:

- Striping : ASM spreads data evenly across all disks in a disk group to optimize performance and utilization.
  This even distribution of database files eliminates the need for regular monitoring and I/O performance tuning.

- Mirroring : ASM can increase data availability by optionally mirroring any file. ASM mirrors at the file level,
  unlike operating system mirroring, which mirrors at the disk level. Mirroring means keeping redundant copies,
  or mirrored copies, of each extent of the file, to help avoid data loss caused by disk failures. The mirrored copy
  of each file extent is always kept on a different disk from the original copy. If a disk fails, ASM can continue to
  access affected files by accessing mirrored copies on the surviving disks in the disk group.

- Online storage reconfiguration and dynamic rebalancing : ASM permits you to add or remove disks from
  your disk storage system while the database is operating. When you add a disk to a disk group, ASM
  automatically redistributes the data so that it is evenly spread across all disks in the disk group, including the
  new disk. The process of redistributing data so that it is also spread across the newly added disks is known as
  rebalancing. It is done in the background and with minimal impact to database performance.

- Managed file creation and deletion : ASM further reduces administration tasks by enabling files stored in
  ASM disk groups to be managed by Oracle Database. ASM automatically assigns file names when files are
  created, and automatically deletes files when they are no longer needed by the database.
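To make the disk-group idea concrete, here is a small sketch that assembles the CREATE DISKGROUP statement a DBA would run in the ASM instance. The disk-group name, the redundancy level and the /dev/rhdiskN device paths below are illustrative assumptions, not values from this guide.

```shell
# Build a CREATE DISKGROUP statement over AIX raw devices (sketch).
make_diskgroup_sql() {
    dg=$1; redundancy=$2; shift 2
    printf 'CREATE DISKGROUP %s %s REDUNDANCY\n' "$dg" "$redundancy"
    first=1
    for dev in "$@"; do
        if [ "$first" -eq 1 ]; then
            printf "  DISK '%s'" "$dev"; first=0
        else
            printf ",\n       '%s'" "$dev"
        fi
    done
    printf ';\n'
}

# e.g. (on a node with an ASM instance):
#   make_diskgroup_sql DATA NORMAL /dev/rhdisk4 /dev/rhdisk5 | sqlplus "/ as sysasm"
```

With NORMAL redundancy, ASM would mirror each file extent across the listed disks, as described above; the generated SQL is what gets fed to the ASM instance.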

ASM is implemented as a special kind of Oracle instance, with its own System Global Area and background processes.
The ASM instance is tightly integrated with the database instance. Every server running one or more database
instances that use ASM for storage has an ASM instance. In an Oracle RAC environment, there is one ASM instance
for each node, and the ASM instances communicate with each other on a peer-to-peer basis. Only one ASM instance is
required for each node regardless of the number of database instances on the node.
Oracle recommends that you use ASM for your database file storage, instead of raw devices or the operating system
file system. However, databases can have a mixture of ASM files and non-ASM files.
See Also:
- Oracle Database 2 Day DBA
- Oracle Database Storage Administrator's Guide

And : DBA Essentials, at http://www.oracle.com/pls/db111/homepage?remark=tahiti
Manage all aspects of your Oracle databases with the Enterprise Manager GUI.
- 2 Day DBA
- 2 Day + Performance Tuning Guide
- 2 Day + Real Application Clusters Guide


For more information, on http://otn.oracle.com :
http://www.oracle.com/technology/products/database/asm/index.html

Automatic Storage Management


Automatic Storage Management (ASM) is a feature in Oracle Database 10g/11g that provides the database
administrator with a simple storage management interface that is consistent across all server and storage platforms. As
a vertically integrated file system and volume manager, purpose-built for Oracle database files, ASM provides the
performance of async I/O with the easy management of a file system. ASM provides capabilities that save the DBA
time and provide the flexibility to manage a dynamic database environment with increased efficiency.
What's New for ASM in Oracle Database 11g
11g ASM New Features Technical White Paper
Oracle Database 11g ASM new features technical white paper
ASM Overview and Technical Papers
ASM and Multipathing Best Practices and Information Matrix (PDF)
Multipathing software and ASM information matrix and best practices guide
Oracle Database 10g Release 2 ASM - New Features White Paper (PDF)
This paper discusses the enhancements to ASM which are new in Release 2 of Oracle Database 10g.
ASM Overview and Technical Best Practices White Paper (PDF)
This paper discusses the basics of ASM such as the steps to add disks, create a diskgroup, and create
a database within ASM, emphasizing best practices.
Take the Guesswork Out of Database IO Tuning White Paper (PDF)
Oracle Database layout and storage configurations do not have to be complicated any more. Learn
more about best practices with ASM that reduce complexity and simplify storage management.
Database Storage Consolidation with ASM White Paper (PDF)
Database storage consolidation empowered by ASM for RAC and single instance database
environments
Automatic Storage Management Overview (PDF)
This short paper is a summary of current storage challenges and how ASM addresses them in Oracle
Database 10g.
Automatic Storage Management White Paper (PDF)
This technical overview describes the ASM architecture and its benefits.
Migration to ASM with Minimal Down Time, Technical White Paper (PDF)
Migrate your database to ASM using Oracle Data Guard and RMAN


4 ABOUT REAL APPLICATION CLUSTER


Extract from : Oracle Database 2 Day + Real Application Clusters Guide 11g Release 1 (11.1)
Part Number B28252-02
http://www.oracle.com/pls/db111/homepage?remark=tahiti
Oracle RAC extends Oracle Database so that you can store, update, and efficiently retrieve data using multiple
database instances on different servers at the same time. Oracle RAC provides the software that facilitates servers
working together in what is called a cluster. The data files that make up the database must reside on shared storage
that is accessible from all servers that are part of the cluster. Each server in the cluster runs the Oracle RAC software.
An Oracle database has a one-to-one relationship between the database and the instance. An Oracle RAC
database, however, has a one-to-many relationship between the database and its instances. In an Oracle RAC database,
multiple instances access a single set of database files. The instances can be on different servers, referred to as hosts
or nodes. The combined processing power of the multiple servers provides greater availability, throughput, and
scalability than is available from a single server.
Each database instance in an Oracle RAC database uses its own memory structures and background processes.
Oracle RAC uses Cache Fusion to synchronize the data stored in the buffer cache of each database instance. Cache
Fusion moves current data blocks (which reside in memory) between database instances, rather than having one
database instance write the data blocks to disk and requiring another database instance to reread the data blocks from
disk. When a data block located in the buffer cache of one instance is required by another instance, Cache Fusion
transfers the data block directly between the instances using the interconnect, enabling the Oracle RAC database to
access and modify data as if the data resided in a single buffer cache.
Oracle RAC is also a key component for implementing the Oracle enterprise grid computing architecture. Having multiple
database instances accessing a single set of datafiles prevents the server from being a single point of failure. Any
packaged or custom application that ran well on an Oracle Database will perform well on Oracle RAC without requiring
code changes.
You will learn more about the operation of the Oracle RAC database in a cluster, how to build the cluster, and the
structure of an Oracle RAC database in other sections of this guide.
See Also:
- Oracle Real Application Clusters Administration and Deployment Guide

And : DBA Essentials, at http://www.oracle.com/pls/db111/homepage?remark=tahiti
Manage all aspects of your Oracle databases with the Enterprise Manager GUI.
- 2 Day DBA
- 2 Day + Performance Tuning Guide
- 2 Day + Real Application Clusters Guide


For more information, on http://otn.oracle.com :
http://www.oracle.com/technology/products/database/clustering/index.html

Oracle Real Application Clusters


Oracle Real Application Clusters (RAC) is an option to the award-winning Oracle Database Enterprise Edition. Oracle
RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing
and shared-disk approaches to provide highly scalable and available database solutions for all your business
applications.
Oracle Database Standard Edition includes Real Application Clusters support for higher levels of system uptime.
Real Application Clusters 11g
- Oracle Real Application Clusters Datasheet (PDF) July 2007
- Oracle Real Application Clusters 11g Technical Overview (PDF) July 2007
- Workload Management with Oracle Real Application Clusters (FAN, FCF, Load Balancing) (PDF) July 2007
- NEW Oracle Homes in an Oracle Real Application Clusters Environment (PDF) January 2008
- UPDATED Using standard NFS to support a third voting disk on an Extended Distance cluster
  configuration on Linux, AIX, HP, or Solaris (PDF) February 2008


5 ABOUT 11G RAC / ASM ON AIX


11g RAC with OCR (Oracle Cluster Registry) disk(s) and Voting (heartbeat) disk(s) on raw disks, and the database
on Oracle ASM (Automatic Storage Management).

Oracle Automatic Storage Management (ASM) solution :

- ORACLE CLUSTERWARE MANDATORY !!!
- No need for HACMP
- No need for GPFS
- Oracle Clusterware files (OCR and Voting) are placed on raw disks.
- Only Oracle database files (datafiles, redo logs, archive logs, flash recovery area, ...) are stored on the disks
  managed by Oracle ASM. No binaries.
- ASM is provided with the Oracle software.
- IBM VIOS Virtual SCSI for ASM data storage and associated raw hdisk based Voting and OCR (NEW April 2008).
- IBM VIOS Virtual LAN for all public network and private network interconnect for supported data storage
  options (NEW April 2008).


6 WHAT'S NEW WITH 11G RAC IMPLEMENTATION ON AIX

With Oracle 11g RAC on IBM POWER System p, i running AIX :

- HACMP is no longer necessary as clusterware software, since 10gRAC Release 1.

- Oracle provides with 11gRAC its own clusterware, named Oracle Clusterware, or CRS (Oracle Cluster
  Ready Services).

- Oracle Clusterware can cohabit with HACMP under some conditions.

- With 11gRAC, Oracle Clusterware is mandatory as the clusterware, even if other vendors' clusterware
  software is installed on the same systems. You MUST check that the third-party clusterware can cohabit
  with Oracle Clusterware.
  Subject : Using Oracle Clusterware with Vendor Clusterware FAQ
  Doc ID : Note:332257.1

- With Oracle Clusterware 11g Release 1 :
  o Oracle VIPs (Virtual IP) are now configured at the end of the Clusterware installation
    (since 10gRAC Release 2).

- With Oracle Real Application Clusters 11g Release 1 :
  o Oracle Real Application Clusters 11g Technical Overview (PDF) July 2007

- With ASM 11g Release 1 :
  o ASM software can be, and should be, installed in its own ORACLE_HOME (ASM_HOME) directory,
    which will be different from the database software ORACLE_HOME.
    Oracle Homes in an Oracle Real Application Clusters Environment (PDF) January 2008
  o New features such as Fast Mirror Resync, Preferred Mirror Read, and Fast Rebalance.
  o New ASMCMD commands such as lsdsk, md_backup, md_restore, remap and cp.
  o Etc. Check the 11g ASM New Features Technical White Paper
    (Oracle Database 11g ASM new features technical white paper).
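The separate-homes recommendation above can be captured in the oracle user's environment. The paths below are illustrative assumptions (any OFA-style layout works); only the idea of three distinct homes (Clusterware, ASM, database) comes from the text.

```shell
# Illustrative environment for 11g RAC with separate homes (paths are examples).
export ORACLE_BASE=/u01/app/oracle
export CRS_HOME=/u01/app/crs                          # Oracle Clusterware home
export ASM_HOME=$ORACLE_BASE/product/11.1.0/asm       # ASM software home
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1   # database software home
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
```

Keeping ASM_HOME distinct from ORACLE_HOME lets the ASM software be patched or upgraded independently of the database software, which is the point of the recommendation.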

7 INFRASTRUCTURE REQUIREMENTS

Oracle Real Application Clusters intends to provide :

- High availability of user services, to maintain continuity of the business
- Scale-in within each node, adding resources when possible, as your business grows
- Scale-out when scale-in is no longer possible, responding to your business growth
- Workload balancing and affinity
- Etc.

Oracle RAC protects the high availability of your databases, but all hardware components of the cluster, such as :

- SAN storage attachments from the nodes to the SAN, including SAN switches and HBAs
- Cluster network interconnect for the Oracle Clusterware network heartbeat and the Oracle RAC Cache
  Fusion mechanism (private network), including network interfaces
- Storage
- Etc.

must be protected !!!


7.1 General Requirements

This chapter lists the general requirements to look at for an Oracle RAC implementation on IBM System p.
Topics covered are :

- Servers and processors
- Network
- Storage
- Storage attachments
- VIO Server

7.1.1 About Servers and processors


7.1.2 About RAC on IBM System p

Oracle Real Application Clusters can be implemented on LPARs/DLPARs from separate physical IBM System p
servers, for production or testing, to achieve :

- High availability (protecting against the loss of a physical server)
- Scalability (adding cores and memory, or adding a new server)
- Database workload affinity across servers

OR implemented between LPARs/DLPARs from a same physical IBM System p server :

- For production if high availability is not the focus, but mostly about separating the database workload.
- Or for testing or development purposes.


7.1.3 About Network

Protect the private network with an EtherChannel implementation; the following diagram shows the implementation
with 2 servers :

For the PRIVATE network (cluster/RAC interconnect) !!!

- A gigabit switch is mandatory for a production implementation, even for a 2-node architecture.
  (A cross-over cable can be used only for test purposes; it is not supported by Oracle Support,
  please read the RAC FAQ on http://metalink.oracle.com.)
- A second gigabit Ethernet interconnect, with a different network mask, can be set up for security
  purposes or performance issues.
- Network cards for the public network must have the same name on each participating node in the RAC
  cluster (for example en0 on all nodes).
- Network cards for the interconnect (private) network must have the same name on each participating node
  in the RAC cluster (for example en1 on all nodes).
- 1 virtual IP per node must be reserved, and not used on the network prior to the Oracle Clusterware
  installation. Don't set an IP alias at AIX level; Oracle Clusterware will take charge of it.

With AIX EtherChannel implemented with 2 network switches, we'll cover the loss of the interconnect network cards
(for example en2 and en3) and of the corresponding interconnect network switch.
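The naming and VIP rules above can be sanity-checked from one node before installation. This sketch assumes ssh user equivalence between the nodes (the RSH variable lets you substitute another remote shell, or a stub for a dry run); the ping flags may need adjusting on AIX.

```shell
# Pre-install network sanity checks (sketch).
same_if_on_all_nodes() {
    # Verify the same interface name (e.g. en0, en1) exists on every node.
    ifname=$1; shift
    rsh=${RSH:-ssh}
    for n in "$@"; do
        $rsh "$n" "ifconfig $ifname" >/dev/null 2>&1 || {
            echo "missing $ifname on $n"; return 1; }
    done
    echo "ok: $ifname present on all nodes"
}

vip_is_free() {
    # The VIP must NOT answer before Oracle Clusterware brings it up.
    if ping -c 1 "$1" >/dev/null 2>&1; then
        echo "ERROR: VIP $1 already in use"; return 1
    fi
    echo "ok: VIP $1 unused"
}
```

For example, `same_if_on_all_nodes en1 node1 node2` checks the private interface naming, and `vip_is_free <vip-address>` checks that a candidate VIP is still unused.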


7.1.4 About SAN Storage

When implementing RAC, you must be careful about the SAN storage to use. The SAN storage must be capable,
thru its drivers, of read/write concurrency at the same time from any member of the RAC cluster, which means that
the reserve_policy attribute of the discovered disks (hdisk, hdiskpower, dlmfdrv, etc.) must be able to be set to the
no_reserve or no_lock values.
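On AIX this attribute is set per disk with chdev and checked with lsattr; a loop like the following sketch does it for every shared disk. The device names are examples, and setting RUN=echo turns the loop into a dry run (useful since chdev/lsattr only exist on AIX).

```shell
# Set and verify no_reserve on each shared hdisk (AIX chdev/lsattr sketch).
set_no_reserve() {
    run=${RUN:-}              # RUN=echo turns this into a dry run
    for d in "$@"; do
        $run chdev -l "$d" -a reserve_policy=no_reserve
        $run lsattr -El "$d" -a reserve_policy
    done
}

# e.g.: set_no_reserve hdisk2 hdisk3 hdisk4
#   or: RUN=echo set_no_reserve hdisk2 hdisk3 hdisk4   (dry run)
```

Run it on every node of the cluster, since the attribute is local to each LPAR's ODM.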

For other storages such as EMC, HDS, HP and so on, you'll have to check that the storage to be selected is compatible
and supported with RAC. Oracle does not support or certify storage; the storage vendors will say whether they support it
or not, and with which multi-pathing tool (IBM SDDPCM, EMC PowerPath, HDLM, etc.).
Some documents to look at for Oracle on IBM storage :


7.1.5 Proposed infrastructure with 2 servers

Example of the infrastructure as it should be with 2 servers, for network and storage.


7.1.6 What do we protect?

Oracle RAC protects against node failures; the infrastructure should cover a maximum of failure cases, such as :

For the storage :
- Loss of 1 HBA on one node
- Loss of 1 SAN switch

For the network :
- Loss of 1 interconnect network card on one node
- Loss of 1 interconnect network switch

All components of the infrastructure are protected for high availability, apart from the storage if the full storage is lost.
The only way to extend the high availability to the storage level is to introduce a second storage and implement RAC
as a stretched or extended cluster.


7.1.7 About IBM Advanced Power Virtualization and RAC

Extract from Certify : RAC for Unix On IBM AIX based Systems (RAC only) (27/02/2008, check for last update)
https://metalink.oracle.com/metalink/plsql/f?p=140:1:12020374767153922780:::::

- Oracle products are certified for AIX5L on all servers that IBM supports with AIX5L. This includes IBM System i
  and System p models that use POWER5 and POWER6 processors. Please visit IBM's website at this URL for
  more information on AIX5L support for System i details.
- IBM POWER based systems that support AIX5L include machines branded as RS6000, pSeries, iSeries, System
  p and System i.
- The minimum AIX levels for POWER6 based models are AIX5L 5.2 TL10 and AIX5L 5.3 TL06.
- AIX5L certifications include AIX5L versions 5.2 and 5.3. 5.1 was desupported on 16-JUN-2006.
  o Customers should review MetaLink Note 282036.1
- 64-bit hardware is required to run Real Application Clusters.
- AIX 64-bit kernel is required for 10gR2 RAC. AIX 32-bit kernel and AIX 64-bit kernel are supported with 9.2 and
  10g.

Extract from RAC Technologies Matrix for UNIX Platforms (11/04/2008, check for last update)
http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_unix_new.html

Technology Category : Server/Processor Architecture
Platform            : IBM AIX
Technology          : IBM System p, i, BladeCenter JS20, JS21
Notes               :
- Advanced Power Virtualization hardware LPARs are supported, including Dynamic, or shared pool with
  micro partitions.
- VIOS Virtual SCSI for ASM data storage and associated raw hdisk based Voting and OCR (NEW April 2008).
- VIOS Virtual LAN for all public network and private network interconnect for supported data storage
  options (NEW April 2008).

7.1.7.1 Network and VIO Server

Implementing EtherChannel links thru the VIO servers for the public and private networks between nodes :

- Example between 2 LPARs : 1 for a RAC node, and one for an APPS node.

Implementing EtherChannel and VLAN (Virtual LAN) links thru the VIO servers for the public and private networks
between 2 RAC nodes :

7.1.7.2 Storage and VIO Server

The following implementation is now supported !!!
Certification for this architecture is done, and it's now supported.

The following implementation is also supported :

The hdisks hosting the following components are accessed thru the VIO servers :
- AIX operating system
- Oracle Clusterware ($CRS_HOME)
- ASM software ($ASM_HOME)
- RAC software ($ORACLE_HOME)
- Etc.

And the OCR, Voting and ASM disks are accessed thru HBAs directly attached to the RAC LPARs.

7.2 Cookbook infrastructure

For our infrastructure, we used a cluster composed of three partitions (IBM LPARs) on an IBM System p 570
running AIX 5L.

BUT in the real world, to achieve true high availability it's necessary to have at least two IBM System p / i servers,
as shown below :

Each component of the infrastructure must be protected :
- Disk access path (2 HBAs and multi-pathing software)
- Interconnect network
- Public network

7.2.1 IBM System p servers

This is the IBM System p server we used for our installation :
http://www-03.ibm.com/servers/eserver/pseries/hardware/highend/
http://www-03.ibm.com/systems/p/

THEN you'll need 1 AIX5L LPAR on each server for a real RAC implementation, with the necessary memory and
POWER6 CPU assigned to each LPAR.

Commands to print the config for IBM System p on AIX5L (node1) :


{node1:root}/ # prtconf
System Model: IBM,9117-MMA
Machine Serial Number: 651A260
Processor Type: PowerPC_POWER6
Number Of Processors: 1
Processor Clock Speed: 3504 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 1 JSC-11g-node1
Memory Size: 3072 MB
Good Memory Size: 3072 MB
Platform Firmware level: EM310_048
Firmware Version: IBM,EM310_048
Console Login: enable
Auto Restart: true
Full Core: false
Network Information
Host Name: node1
IP Address: 10.3.25.81
Sub Netmask: 255.255.255.0
Gateway: 10.3.25.254
Name Server:
Domain Name:
Paging Space Information
Total Paging Space: 512MB
Percent Used: 7%
Volume Groups Information
==============================================================================
rootvg:
PV_NAME         PV STATE        TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0          active          273         152         44..00..00..53..55
==============================================================================
INSTALLED RESOURCE LIST
The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
* = Diagnostic support not available.
Model Architecture: chrp
Model Implementation: Multiple Processor, PCI bus
+ sys0                                                            System Object
+ sysplanar0                                                      System Planar
* vio0                                                            Virtual I/O Bus
* vscsi0     U9117.570.65D7D3E-V1-C5-T1                           Virtual SCSI Client Adapter
* hdisk0     U9117.570.65D7D3E-V1-C5-T1-L810000000000             Virtual SCSI Disk Drive
* vsa0       U9117.570.65D7D3E-V1-C0                              LPAR Virtual Serial Adapter
* vty0       U9117.570.65D7D3E-V1-C0-L0                           Asynchronous Terminal
* ent2       U9117.570.65D7D3E-V1-C4-T1                           Virtual I/O Ethernet Adapter (l-lan)
* ent1       U9117.570.65D7D3E-V1-C3-T1                           Virtual I/O Ethernet Adapter (l-lan)
* ent0       U9117.570.65D7D3E-V1-C2-T1                           Virtual I/O Ethernet Adapter (l-lan)
* pci1       U7879.001.DQD17GX-P1                                 PCI Bus
* pci6       U7879.001.DQD17GX-P1                                 PCI Bus
+ fcs0       U7879.001.DQD17GX-P1-C2-T1                           FC Adapter
* fcnet0     U7879.001.DQD17GX-P1-C2-T1                           Fibre Channel Network Protocol Device
* fscsi0     U7879.001.DQD17GX-P1-C2-T1                           FC SCSI I/O Controller Protocol Device
* dac0       U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31         1722-600 (600) Disk Array Controller
* dac1       U7879.001.DQD17GX-P1-C2-T1-W200900A0B812AB31         1722-600 (600) Disk Array Controller
* dac2       U7879.001.DQD17GX-P1-C2-T1-W200700A0B80FD42F         1722-600 (600) Disk Array Controller
* dac3       U7879.001.DQD17GX-P1-C2-T1-W200600A0B80FD42F         1722-600 (600) Disk Array Controller
+ L2cache0                                                        L2 Cache
+ mem0                                                            Memory
+ proc0                                                           Processor
+ hdisk1     U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L0              1722-600 (600) Disk Array Device
+ hdisk2     U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L1000000000000  1722-600 (600) Disk Array Device
+ hdisk3     U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L2000000000000  1722-600 (600) Disk Array Device


Commands to print the config for IBM System p on AIX5L (node2) :


{node2:root}/ # prtconf
System Model: IBM,9117-MMA
Machine Serial Number: 651A260
Processor Type: PowerPC_POWER6
Number Of Processors: 1
Processor Clock Speed: 3504 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 2 JSC-11g-node2
Memory Size: 3072 MB
Good Memory Size: 3072 MB
Platform Firmware level: EM310_048
Firmware Version: IBM,EM310_048
Console Login: enable
Auto Restart: true
Full Core: false
Network Information
Host Name: node2
IP Address: 10.3.25.82
Sub Netmask: 255.255.255.0
Gateway: 10.3.25.254
Name Server:
Domain Name:
Paging Space Information
Total Paging Space: 512MB
Percent Used: 2%
Volume Groups Information
==============================================================================
rootvg:
PV_NAME         PV STATE        TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk1          active          273         149         54..20..00..20..55
==============================================================================
INSTALLED RESOURCE LIST
The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
* = Diagnostic support not available.
Model Architecture: chrp
Model Implementation: Multiple Processor, PCI bus
+ sys0                                                            System Object
+ sysplanar0                                                      System Planar
* pci9       U7879.001.DQD17GX-P1                                 PCI Bus
* pci10      U7879.001.DQD17GX-P1                                 PCI Bus
+ fcs0       U7879.001.DQD17GX-P1-C5-T1                           FC Adapter
* fcnet0     U7879.001.DQD17GX-P1-C5-T1                           Fibre Channel Network Protocol Device
* fscsi0     U7879.001.DQD17GX-P1-C5-T1                           FC SCSI I/O Controller Protocol Device
* dac0       U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31         1722-600 (600) Disk Array Controller
* dac1       U7879.001.DQD17GX-P1-C5-T1-W200900A0B812AB31         1722-600 (600) Disk Array Controller
* dac2       U7879.001.DQD17GX-P1-C5-T1-W200700A0B80FD42F         1722-600 (600) Disk Array Controller
* dac3       U7879.001.DQD17GX-P1-C5-T1-W200600A0B80FD42F         1722-600 (600) Disk Array Controller
* vio0                                                            Virtual I/O Bus
* vscsi0     U9117.570.65D7D3E-V2-C5-T1                           Virtual SCSI Client Adapter
* hdisk1     U9117.570.65D7D3E-V2-C5-T1-L810000000000             Virtual SCSI Disk Drive
* vsa0       U9117.570.65D7D3E-V2-C0                              LPAR Virtual Serial Adapter
* vty0       U9117.570.65D7D3E-V2-C0-L0                           Asynchronous Terminal
* ent2       U9117.570.65D7D3E-V2-C4-T1                           Virtual I/O Ethernet Adapter (l-lan)
* ent1       U9117.570.65D7D3E-V2-C3-T1                           Virtual I/O Ethernet Adapter (l-lan)
* ent0       U9117.570.65D7D3E-V2-C2-T1                           Virtual I/O Ethernet Adapter (l-lan)
+ L2cache0                                                        L2 Cache
+ mem0                                                            Memory
+ proc0                                                           Processor
+ hdisk0     U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L0              1722-600 (600) Disk Array Device
+ hdisk2     U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L1000000000000  1722-600 (600) Disk Array Device
+ hdisk3     U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L2000000000000  1722-600 (600) Disk Array Device
+ hdisk4     U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L3000000000000  1722-600 (600) Disk Array Device


Command to get information on the LPAR :


{node1:root}/ # lparstat -i
Node Name                                  : node1
Partition Name                             : JSC-11g-node1
Partition Number                           : 1
Type                                       : Shared-SMT
Mode                                       : Uncapped
Entitled Capacity                          : 0.50
Partition Group-ID                         : 32769
Shared Pool ID                             : 0
Online Virtual CPUs                        : 1
Maximum Virtual CPUs                       : 1
Minimum Virtual CPUs                       : 1
Online Memory                              : 3072 MB
Maximum Memory                             : 6144 MB
Minimum Memory                             : 1024 MB
Variable Capacity Weight                   : 128
Minimum Capacity                           : 0.10
Maximum Capacity                           : 1.00
Capacity Increment                         : 0.01
Maximum Physical CPUs in system            : 16
Active Physical CPUs in system             : 16
Active CPUs in Pool                        : 16
Shared Physical CPUs in system             :
Maximum Capacity of Pool                   :
Entitled Capacity of Pool                  :
Unallocated Capacity                       : 0.00
Physical CPU Percentage                    : 50.00%
Unallocated Weight                         : 0
{node1:root}/ #

{node2:root}/home # lparstat -i
Node Name                                  : node2
Partition Name                             : JSC-11g-node2
Partition Number                           : 2
Type                                       : Shared-SMT
Mode                                       : Uncapped
Entitled Capacity                          : 0.50
Partition Group-ID                         : 32770
Shared Pool ID                             : 0
Online Virtual CPUs                        : 1
Maximum Virtual CPUs                       : 1
Minimum Virtual CPUs                       : 1
Online Memory                              : 3072 MB
Maximum Memory                             : 6144 MB
Minimum Memory                             : 1024 MB
Variable Capacity Weight                   : 128
Minimum Capacity                           : 0.10
Maximum Capacity                           : 1.00
Capacity Increment                         : 0.01
Maximum Physical CPUs in system            : 16
Active Physical CPUs in system             : 16
Active CPUs in Pool                        : 16
Shared Physical CPUs in system             :
Maximum Capacity of Pool                   :
Entitled Capacity of Pool                  :
Unallocated Capacity                       : 0.00
Physical CPU Percentage                    : 50.00%
Unallocated Weight                         : 0
{node2:root}/home #


7.2.2 Operating System

The operating system must be installed the same way on each LPAR, with the same maintenance level, and the
same APAR and fileset levels.

Check the PREPARING THE SYSTEM chapter for the operating system requirements on AIX5L :

- AIX 5.1 is not supported
- AIX 5.2 / 5.3 are supported and certified
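The "same level everywhere" rule can be verified from one node; this sketch assumes ssh user equivalence between the nodes (the RSH variable lets you substitute another remote shell) and compares the `oslevel -s` output across the cluster.

```shell
# Verify every node reports the same AIX maintenance level (sketch).
check_oslevel() {
    rsh=${RSH:-ssh}
    ref=
    for n in "$@"; do
        lvl=$($rsh "$n" oslevel -s 2>/dev/null)
        [ -z "$ref" ] && ref=$lvl
        if [ "$lvl" != "$ref" ]; then
            echo "MISMATCH: $n reports '$lvl' (expected '$ref')"
            return 1
        fi
    done
    echo "ok: all nodes at $ref"
}

# e.g.: check_oslevel node1 node2 node3
```

The same pattern (run a command on every node, compare against the first node's answer) also works for `instfix -i` or `lslpp -L` output when checking APAR and fileset levels.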

The IBM AIX clustering layer, the HACMP filesets, MUST NOT be installed if you've chosen an implementation
without HACMP. If this layer is implemented for another purpose, the disk resources necessary to install and run
CRS data will have to be part of an HACMP volume group resource.

If you have previously installed HACMP, you must remove :

- HACMP filesets (cluster.es.*)
- rsct.hacmp.rte
- rsct.compat.basic.hacmp.rte
- rsct.compat.clients.hacmp.rte

If you did run a first installation of the Oracle Clusterware (CRS) with HACMP installed, check whether the
/opt/ORCLcluster directory exists and, if so, remove it on all nodes.
THEN REBOOT ALL NODES ...
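A quick check, run on each node before the reboot, that the filesets and directory listed above are really gone (a sketch; lslpp exists only on AIX, so on a system without it the grep simply finds nothing):

```shell
# Fail if HACMP-related filesets or the old CRS directory are still present.
check_no_hacmp_leftovers() {
    leftovers=$(lslpp -lc 2>/dev/null |
        grep -E 'cluster\.es|rsct\.hacmp\.rte|rsct\.compat\.(basic|clients)\.hacmp\.rte')
    if [ -n "$leftovers" ]; then
        echo "HACMP filesets still installed:"
        echo "$leftovers"
        return 1
    fi
    if [ -d /opt/ORCLcluster ]; then
        echo "/opt/ORCLcluster still exists - remove it on all nodes"
        return 1
    fi
    echo "ok: no HACMP leftovers"
}
```

A non-zero return here means the Oracle Clusterware installation should not proceed until the cleanup and reboot are done.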


7.2.3 Multipathing and ASM

Please check Metalink note Oracle ASM and Multi-Pathing Technologies, Doc ID: Note:294869.1.
Note that Oracle Corporation does not certify ASM against multipathing utilities. The MP utilities listed below are
known working solutions. As we do more testing, additional MP utilities will be listed here; thus, this document is an
active document.
Multi-pathing allows SAN access failover, and load balancing across the SAN Fibre Channel attachments.

OS Platform : AIX

- EMC PowerPath
  ASM device usage : use raw partitions thru the pseudo device /dev/rhdiskpowerx

- IBM SDD (Vpath) : /dev/vpathx. NOT SUPPORTED for RAC/ASM on AIX !!!
  Notes :
  o As of this writing, SDD-AIX is known to cause discovery and device handling problems for ASM,
    and thus is not a viable solution.
  o ASM needs to access the disks/vpaths thru a non-root user, which is not allowed by SDD as of
    today, 27 March 2007.
  o See the SDDPCM entry below for an alternative solution to SDD for AIX.

- IBM SDDPCM
  ASM device usage : use the /dev/rhdiskx device
  Notes :
  o You must install the SDDPCM filesets and enable SDDPCM.
  o SDDPCM cannot co-exist with SDD.
  o SDDPCM only works with the following IBM storage components: DS8000, DS6000,
    Enterprise Storage Server (ESS).
  o SDDPCM also works on top of IBM SVC (SAN Volume Controller), and on top of other supported
    storages like HDS, EMC, etc.

- IBM RDAC (Redundant Disk Array Controller)
  ASM device usage : use the /dev/rhdiskx device
  Notes :
  o RDAC is installed by default and must be used with IBM storage DS4000, and the former FAStT series.

- Hitachi Dynamic Link Manager (HDLM)
  ASM device usage : use the /dev/rdsk/cxtydz device that's generated by HDLM, or /dev/dlmfdrvx
  Notes :
  o HDLM generates a scsi (cxtydz) address where the controller is the highest unused controller number.
  o HDLM no longer requires HACMP.
  o /dev/dlmfdrvx can be used out of an HDLM volume group. Or, if using an HDLM volume group,
    logical volumes must be created with the mklv command using the -T O options.

- Fujitsu ETERNUS GR Multipath Driver
  ASM device usage : use the /dev/rhdisk device

7.2.4

IBMstorageandmultipathing

With IBM, please refer to IBM to confirm which IBM storage is supported with RAC, if it is not specified in our document.
IBM TotalStorage products for IBM System p :
- IBM DS4000, DS6000 and DS8000 series are supported with 10gRAC.
- IBM Storage DS300 and DS400 are not, and will not be, supported with 10gRAC.
- As of today, March 27, 2007, IBM Storage DS3200 and DS3400 are not yet supported with 10gRAC.
IBM System Storage and TotalStorage products :
http://www-03.ibm.com/servers/storage/product/products_pseries.html

There are 2 cases when using IBM storage :

IBM MPIO (Multi-Path I/O).
The MPIO driver is supported with IBM TotalStorage ESS, DS6000 and DS8000 series only, and with IBM SVC (SAN Volume Controller).

IBM RDAC (Redundant Disk Array Controller) for IBM TotalStorage DS4000.
The RDAC driver is supported with the IBM TotalStorage DS4000 series only, and the former FAStT.

You MUST use one or the other, depending on the storage used.

case 1: LUNs provided by the IBM storage with IBM MPIO installed as the multi-pathing driver.
Disks (LUNs) will be seen as hdisk at AIX level using the lspv command.

On node 1 :

{node1:root}/ # lspv
hdisk0          00ced22cf79098ff    rootvg    active
hdisk1          none                None
hdisk2          none                None
hdisk3          none                None
hdisk4          none                None

case 2: LUNs provided by the IBM DS4000 storage with IBM RDAC installed as the multi-pathing driver.
Disks (LUNs) will be seen as hdisk at AIX level using the lspv command.

On node 1 :

{node1:root}/ # lspv
hdisk0          00ced22cf79098ff    rootvg    active
hdisk1          none                None
hdisk2          none                None
hdisk3          none                None
hdisk4          none                None

7.2.4.1 IBM MPIO (Multi-Path I/O) Setup Procedure

AIX packages needed to install on all nodes :
- devices.sddpcm.53.2.1.0.7.bff
- devices.sddpcm.53.rte
- devices.fcp.disk.ibm.mpio.rte

devices.fcp.disk.ibm.mpio.rte download page :
http://www1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D410&q1=host+scripts&uid=ssg1S4000203&loc=en_US&cs=utf8&lang=en

MPIO for AIX 5.3 download page :
http://www-1.ibm.com/support/docview.wss?uid=ssg1S4000201
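Before going through the SMIT screens below, it can save a round trip to confirm on each node which of these filesets are already present. A minimal sketch, assuming the two runtime filesets named above; the function only reports, it installs nothing:

```shell
#!/bin/sh
# Report which of the required MPIO/SDDPCM filesets are missing on this node.
# Run it on every node; the list below matches the filesets named above.
FILESETS="devices.sddpcm.53.rte devices.fcp.disk.ibm.mpio.rte"

missing_filesets() {
    for f in $FILESETS; do
        # lslpp -L exits non-zero when the fileset is not installed
        if ! lslpp -L "$f" >/dev/null 2>&1; then
            echo "missing: $f"
        fi
    done
}

# missing_filesets   # prints nothing when everything is installed
```

Any fileset reported as missing must be installed through the SMIT procedure that follows.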

On node 1 and node 2, install the filesets :

smitty install
  -> Install and Update Software
  -> Install Software

* INPUT device / directory for software      [/mydir_with_my_filesets]
SOFTWARE to install : press F4, then select devices.fcp.disk.ibm.mpio

Install Software
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                        [Entry Fields]
* INPUT device / directory for software                 .
* SOFTWARE to install                                   [devices.fcp.disk.ibm.>  +
  PREVIEW only? (install operation will NOT occur)      no                       +
  COMMIT software updates?                              yes                      +
  SAVE replaced files?                                  no                       +
  AUTOMATICALLY install requisite software?             yes                      +
  EXTEND file systems if space needed?                  yes                      +
  OVERWRITE same or newer versions?                     no                       +
  VERIFY install and check file sizes?                  no                       +
  Include corresponding LANGUAGE filesets?              yes                      +
  DETAILED output?                                      no                       +
  Process multiple volumes?                             yes                      +
  ACCEPT new license agreements?                        no                       +
  Preview new LICENSE agreements?                       no                       +

Installation Summary
--------------------
Name                          Level     Part   Event   Result
-------------------------------------------------------------------------------
devices.fcp.disk.ibm.mpio.r   1.0.0.0   USR    APPLY   SUCCESS

Install devices.sddpcm.53, and check that the installation succeeded in the installation summary message.

Select :

devices.sddpcm.53                                       ALL
  + 2.1.0.0 IBM SDD PCM for AIX V53
  + 2.1.0.7 IBM SDD PCM for AIX V53

Install Software
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                        [Entry Fields]
* INPUT device / directory for software                 .
* SOFTWARE to install                                   [devices.sddpcm.53        +
  PREVIEW only? (install operation will NOT occur)      no                       +
  COMMIT software updates?                              yes                      +
  SAVE replaced files?                                  no                       +
  AUTOMATICALLY install requisite software?             yes                      +
  EXTEND file systems if space needed?                  yes                      +
  OVERWRITE same or newer versions?                     no                       +
  VERIFY install and check file sizes?                  no                       +
  Include corresponding LANGUAGE filesets?              yes                      +
  DETAILED output?                                      no                       +
  Process multiple volumes?                             yes                      +
  ACCEPT new license agreements?                        no                       +
  Preview new LICENSE agreements?                       no                       +

Installation Summary
--------------------
Name                     Level     Part   Event    Result
-------------------------------------------------------------------------------
devices.sddpcm.53.rte    2.1.0.0   USR    APPLY    SUCCESS
devices.sddpcm.53.rte    2.1.0.0   ROOT   APPLY    SUCCESS
devices.sddpcm.53.rte    2.1.0.7   USR    APPLY    SUCCESS
devices.sddpcm.53.rte    2.1.0.7   ROOT   APPLY    SUCCESS
devices.sddpcm.53.rte    2.1.0.7   USR    COMMIT   SUCCESS
devices.sddpcm.53.rte    2.1.0.7   ROOT   COMMIT   SUCCESS

Now you need to reboot all AIX nodes !!!


Commands to know the AIX WWPNs :

{node1:root}/ # pcmpath query wwpn
Adapter Name    PortWWN
fscsi0          10000000C935A7E7
fscsi1          10000000C93A1BF3

Commands to check disks :

{node1:root}/ # lsdev -Cc disk -t 2107
hdisk2   Available 0A-08-02  IBM MPIO FC 2107
hdisk3   Available 0A-08-02  IBM MPIO FC 2107
hdisk4   Available 0A-08-02  IBM MPIO FC 2107
hdisk5   Available 0A-08-02  IBM MPIO FC 2107
hdisk6   Available 0A-08-02  IBM MPIO FC 2107
hdisk7   Available 0A-08-02  IBM MPIO FC 2107
hdisk8   Available 0A-08-02  IBM MPIO FC 2107
hdisk9   Available 0A-08-02  IBM MPIO FC 2107
hdisk10  Available 0A-08-02  IBM MPIO FC 2107
hdisk11  Available 0A-08-02  IBM MPIO FC 2107

Commands to check the paths to the disks :

{node1:root}/ # pcmpath query device

DEV#:   2  DEVICE NAME: hdisk2  TYPE: 2107900  ALGORITHM: Load Balance
SERIAL: 75271812000
==========================================================================
Path#      Adapter/Path Name    State    Mode      Select    Errors
   0       fscsi0/path0         CLOSE    NORMAL    0         0
   1       fscsi1/path1         CLOSE    NORMAL    0         0

DEV#:   3  DEVICE NAME: hdisk3  TYPE: 2107900  ALGORITHM: Load Balance
SERIAL: 75271812001
==========================================================================
Path#      Adapter/Path Name    State    Mode      Select    Errors
   0       fscsi0/path0         CLOSE    NORMAL    0         0
   1       fscsi1/path1         CLOSE    NORMAL    0         0

DEV#:   4  DEVICE NAME: hdisk4  TYPE: 2107900  ALGORITHM: Load Balance
SERIAL: 75271812002
==========================================================================
Path#      Adapter/Path Name    State    Mode      Select    Errors
   0       fscsi0/path0         CLOSE    NORMAL    0         0
   1       fscsi1/path1         CLOSE    NORMAL    0         0
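Once the MPIO disks are visible, they still have to be prepared for ASM: PVID cleared, SCSI reservation released, and the raw device opened to the oracle user (the same steps this cookbook applies in the EMC and Hitachi sections). A minimal sketch; the disk list hdisk2..hdisk4 and the oracle:dba ownership are assumptions to adapt to your own layout, and DRYRUN=1 makes the script print the commands instead of executing them:

```shell
#!/bin/sh
# Prepare MPIO hdisks for ASM on one node: clear the PVID, drop the SCSI
# reservation, and give the oracle user access to the raw device.
# DISKS and oracle:dba are assumptions -- adjust to your setup.
DISKS="hdisk2 hdisk3 hdisk4"

run() {
    # Echo the command in dry-run mode, execute it otherwise.
    if [ "${DRYRUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi
}

prepare_disks() {
    for d in $DISKS; do
        run chdev -l "$d" -a pv=clear                    # remove any PVID
        run chdev -l "$d" -a reserve_policy=no_reserve   # allow concurrent access
        run chown oracle:dba "/dev/r$d"                  # ASM opens the raw device
        run chmod 660 "/dev/r$d"
    done
}

# prepare_disks          # uncomment to run, once per node
```

Run it (or the equivalent commands by hand) on every node of the cluster, since device attributes are local to each node.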


7.2.4.2 IBM AIX RDAC (FCP.ARRAY filesets) Setup Procedure

This ONLY applies to the use of the DS4000 storage series, NOT to DS6000, DS8000 and ESS800.
RDAC is installed by default on AIX 5L.
Each node must have 2 HBA cards for multi-pathing. With ONLY 1 HBA per node it will work, but the path to the SAN will not be protected. THEREFORE, in production, 2 HBAs per node must be used.
All AIX hosts in your storage subsystem must have the RDAC multipath driver installed.
In a single-server environment, AIX allows load sharing (also called load balancing): you can set the load balancing parameter to yes. In case of heavy workload on one path, the driver will move some LUNs to the controller with less workload and, if the workload reduces, back to the preferred controller. A problem that can occur is disk thrashing: the driver moves a LUN back and forth from one controller to the other, so the controller is more occupied moving disks around than servicing I/O. The recommendation is to NOT load balance on an AIX system: the performance increase is minimal (and performance could actually get worse).
RDAC (fcp.array filesets) for AIX supports round-robin load balancing.
Setting the attributes of the RDAC driver for AIX :
The AIX RDAC driver files are not included on the DS4000 installation CD. Either install them from the AIX Operating System CDs, if the correct version is included, or download them from one of the following Web sites :
http://techsupport.services.ibm.com/server/fixes
or http://www-304.ibm.com/jct01004c/systems/support/
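The no-load-balancing recommendation above can be checked, and enforced, on the array router device. This is a sketch, not the official procedure: dar0 and the attribute name load_balancing are assumptions to verify against lsattr -El output on your own system, and DRYRUN=1 only prints the commands:

```shell
#!/bin/sh
# Check -- and, if you choose to, disable -- RDAC load balancing on a
# dar device. DAR (dar0) and the attribute name load_balancing are
# assumptions to verify on your own system.
DAR=${DAR:-dar0}

run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

disable_load_balancing() {
    run lsattr -El "$DAR" -a load_balancing    # show the current setting
    run chdev -l "$DAR" -a load_balancing=no   # recommended setting on AIX
}

# disable_load_balancing    # uncomment to run on a node
```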

Commands to check that the necessary filesets are present for RDAC :

{node1:root}/ # lslpp -L devices.fcp.disk.array.rte
  Fileset                       Level     State  Type  Description (Uninstaller)
  ----------------------------------------------------------------------------
  devices.fcp.disk.array.rte    5.3.0.52  A      F     FC SCSI RAIDiant Array Device
                                                       Support Software

State codes:
 A -- Applied.
 B -- Broken.
 C -- Committed.
 E -- EFIX Locked.
 O -- Obsolete.  (partially migrated to newer version)
 ? -- Inconsistent State...Run lppchk -v.

Type codes:
 F -- Installp Fileset
 P -- Product
 C -- Component
 T -- Feature
 R -- RPM Package
{node1:root}/ #

{node1:root}/ # lslpp -L devices.common.IBM.fc.rte
  Fileset                       Level     State  Type  Description (Uninstaller)
  ----------------------------------------------------------------------------
  devices.common.IBM.fc.rte     5.3.0.50  C      F     Common IBM FC Software

(State and Type codes legend identical to the previous listing.)
{node1:root}/ #


Commands to check the RDAC configuration and the HBA paths to the hdisks :

On node1 :

{node1:root}/ # fget_config -v -A

---dar0---

User array name = 'DS4000_JSC'
dac0 ACTIVE dac3 ACTIVE

Disk     DAC   LUN  Logical Drive
hdisk1   dac0  0    G8_spfile
hdisk2   dac3  1    G8_OCR1
hdisk3   dac0  2    G8_OCR2
hdisk4   dac3  3    G8_Vote1
hdisk5   dac0  4    G8_Vote2
hdisk6   dac3  5    G8_Vote3
hdisk7   dac0  6    G8_Data1
hdisk8   dac3  7    G8_Data2
hdisk9   dac0  8    G8_Data3
hdisk10  dac3  9    G8_Data4
hdisk11  dac0  10   G8_Data5
hdisk12  dac3  11   G8_Data6
hdisk13  dac0  12   G8_tie
{node1:root}/ #

On node2 :

{node2:root}/ # fget_config -v -A

---dar0---

User array name = 'DS4000_JSC'
dac0 ACTIVE dac3 ACTIVE

Disk     DAC   LUN  Logical Drive
hdisk0   dac0  0    G8_spfile
hdisk1   dac3  1    G8_OCR1
hdisk2   dac0  2    G8_OCR2
hdisk3   dac3  3    G8_Vote1
hdisk4   dac0  4    G8_Vote2
hdisk5   dac3  5    G8_Vote3
hdisk6   dac0  6    G8_Data1
hdisk7   dac3  7    G8_Data2
hdisk8   dac0  8    G8_Data3
hdisk9   dac3  9    G8_Data4
hdisk10  dac0  10   G8_Data5
hdisk12  dac3  11   G8_Data6
hdisk13  dac0  12   G8_tie
{node2:root}/ #

Commands to check the RDAC configuration and the HBA paths to the hdisks for one specific dar :

{node1:root}/ # fget_config -l dar0
dac0 ACTIVE dac3 ACTIVE
hdisk1   dac0
hdisk2   dac3
hdisk3   dac0
hdisk4   dac3
hdisk5   dac0
hdisk6   dac3
hdisk7   dac0
hdisk8   dac3
hdisk9   dac0
hdisk10  dac3
hdisk11  dac0
hdisk12  dac3
hdisk13  dac0
{node1:root}/ #


fcs0 is one of the HBAs.

Commands to see the HBA fibre channel statistics :

{node1:root}/ # fcstat fcs0

FIBRE CHANNEL STATISTICS REPORT: fcs0

Device Type: FC Adapter (df1080f9)
Serial Number: 1F41709923
Option ROM Version: 02E01871
Firmware Version: H1D1.81X1
World Wide Node Name: 0x20000000C93F8E29
World Wide Port Name: 0x10000000C93F8E29
FC-4 TYPES:
  Supported: 0x0000012000000000000000000000000000000000000000000000000000000000
  Active:    0x0000010000000000000000000000000000000000000000000000000000000000
Class of Service: 3
Port Speed (supported): 2 GBIT
Port Speed (running):   2 GBIT
Port FC ID: 0x650B00
Port Type: Fabric
Seconds Since Last Reset: 2795

Transmit Statistics     Receive Statistics
-------------------     ------------------
Frames: 41615           96207
Words:  1537024         12497408

LIP Count: 0
NOS Count: 0
Error Frames: 0
Dumped Frames: 0
Link Failure Count: 269
Loss of Sync Count: 469
Loss of Signal: 466
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 51
Invalid CRC Count: 0

IP over FC Adapter Driver Information
  No DMA Resource Count: 0
  No Adapter Elements Count: 0

FC SCSI Adapter Driver Information
  No DMA Resource Count: 0
  No Adapter Elements Count: 0
  No Command Resource Count: 0

IP over FC Traffic Statistics
  Input Requests:  0
  Output Requests: 0
  Control Requests: 0
  Input Bytes: 0
  Output Bytes: 0

FC SCSI Traffic Statistics
  Input Requests:  24721
  Output Requests: 8204
  Control Requests: 252
  Input Bytes: 46814436
  Output Bytes: 4207616
{node1:root}/ #
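When hunting for flaky fibre links, only a few of the counters above matter. The small filter below pulls them out of fcstat output so successive runs can be compared; rising values point at a link problem. fcs0 in the usage comment is an assumption, substitute your own adapter name:

```shell
#!/bin/sh
# Filter the link-health counters out of `fcstat` output for one HBA,
# so they can be compared between runs.

link_errors() {
    # Reads fcstat output on stdin and prints only the error counters.
    awk -F': ' '/Link Failure Count|Loss of Sync Count|Loss of Signal|Invalid CRC Count/ { print $1 ": " $2 }'
}

# Usage on a node (fcs0 is an assumption):
#   fcstat fcs0 | link_errors
```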


7.2.5 EMC storage and multipathing

With EMC, please refer to EMC to see which EMC storage is supported with RAC.
There are 2 cases when using EMC storage :

case 1: LUNs provided by the EMC storage with IBM MPIO installed as the multi-pathing driver.
Disks (LUNs) will be seen as hdisk at AIX level using the lspv command.

On node 1 :

{node1:root}/ # lspv
hdisk0          00ced22cf79098ff    rootvg    active
hdisk1          none                None
hdisk2          none                None
hdisk3          none                None
hdisk4          none                None

Then, for the disks to be used for ASM, and on all nodes :

1. Install MPIO on all nodes, attach the LUNs to each node, discover the LUNs with cfgmgr.
2. Identify the hdisk names on each node, for a given LUN ID.
3. Remove the PVID from each hdisk and change the reserve policy to no_reserve using :
   chdev -l hdiskX -a pv=clear
   chdev -l hdiskX -a reserve_policy=no_reserve
4. Set the ownership to oracle:dba on the /dev/rhdiskX devices.
5. Set read/write permissions to 660 on the /dev/rhdiskX devices.
6. Access the disks through /dev/rhdiskX for the ASM diskgroup configuration.

case 2 : LUNs provided by the EMC storage with EMC PowerPath installed as the multi-pathing driver.
Disks (LUNs) will be seen as hdiskpower at AIX level using the lspv command.

On node 1 :

{node1:root}/ # lspv
hdiskpower0     00ced22cf79098ff    rootvg    active
hdiskpower1     none                None
hdiskpower2     none                None
hdiskpower3     none                None
hdiskpower4     none                None

Then, for the disks to be used for ASM, and on all nodes :

1. Install PowerPath on all nodes, attach the LUNs to each node, discover the LUNs with cfgmgr.
2. Identify the hdiskpower names on each node, for a given LUN ID.
3. Remove the PVID from each hdiskpower and change the reserve policy to no reserve using :
   chdev -l hdiskpowerX -a pv=clear
   chdev -l hdiskpowerX -a reserve_lock=no
4. Set the ownership to oracle:dba on the /dev/rhdiskpowerX devices.
5. Set read/write permissions to 660 on the /dev/rhdiskpowerX devices.
6. Access the disks through /dev/rhdiskpowerX for the ASM diskgroup configuration.


7.2.5.1 EMC PowerPath Setup Procedure

See the PowerPath for AIX version 4.3 Installation & Administration Guide, P/N 300-001-683, for details.
On node 1, and node 2 :
1. Install the EMC ODM drivers and necessary filesets,
   5.2.0.1 from ftp://ftp.emc.com/pub/elab/aix/ODM_DEFINITIONS/EMC.AIX.5.2.0.1.tar.Z
   Install using smit install.
2. Remove any existing devices attached to the EMC :
   {node1:root}/ # rmdev -dl hdiskX
3. Run /usr/lpp/EMC/Symmetrix/bin/emc_cfgmgr to detect devices.
4. Install PowerPath version 4.3.0 minimum using smit install.
5. Register PowerPath :
   {node1:root}/ # emcpreg -install
6. Initialize the PowerPath devices :
   {node1:root}/ # powermt config
7. Verify that all PowerPath devices are named consistently across all cluster nodes :
   {node1:root}/ # /usr/lpp/EMC/Symmetrix/bin/inq.aix64 | grep hdiskpower
   Compare the results. Consistent naming is not required for ASM devices, but the LUNs used for the OCR and VOTE functions must have the same device names on all RAC systems.
   Identify two small LUNs to be used for OCR and voting.
   If the hdiskpowerX names for the OCR and VOTE devices are different, create a new device for each of these functions as follows :
   {node1:root}/ # mknod /dev/ocr c <major # of OCR LUN> <minor # of OCR LUN>
   {node1:root}/ # mknod /dev/vote c <major # of VOTE LUN> <minor # of VOTE LUN>
   Major and minor numbers can be seen using the command ls -al /dev/hdiskpower*
8. On all hdiskpower devices to be used by Oracle for ASM, voting, or the OCR, the reserve_lock attribute must be set to "no" :
   {node1:root}/ # chdev -l hdiskpowerX -a reserve_lock=no
9. Verify that the attribute is set :
   {node1:root}/ # lsattr -El hdiskpowerX
10. Set permissions on all hdiskpower drives to be used for ASM, voting, or the OCR as follows :
   {node1:root}/ # chown oracle:dba /dev/rhdiskpowerX
   {node1:root}/ # chmod 660 /dev/rhdiskpowerX
The Oracle Installer will change these permissions and ownership as necessary during the CRS install process.
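Step 7 can be scripted: for a character device, the major and minor numbers that mknod needs appear as the 5th and 6th columns of ls -al output (the major carries a trailing comma). A sketch of that extraction; /dev/hdiskpower4 and the /dev/ocr alias in the usage comment are illustrative assumptions, and the function only prints the mknod command for you to review:

```shell
#!/bin/sh
# Build the mknod command for an alias device (e.g. /dev/ocr) from the
# major/minor numbers of a source device, as in step 7 above.

major_minor() {
    # Extract "major minor" from one line of `ls -al` output: field 5 is
    # the major number (with a trailing comma), field 6 the minor number.
    echo "$1" | awk '{ gsub(",", "", $5); print $5, $6 }'
}

make_alias_cmd() {
    # $1 = alias path, $2 = `ls -al` line of the source device.
    set -- "$1" $(major_minor "$2")
    echo "mknod $1 c $2 $3"
}

# Usage on a node (paths are assumptions):
#   line=$(ls -al /dev/hdiskpower4 | head -1)
#   make_alias_cmd /dev/ocr "$line"
```

Remember that the alias must be created with the same name on every node, each time against that node's own hdiskpower device.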


7.2.6 HITACHI storage and multipathing

With Hitachi, please refer to Hitachi to see which HDS storage is supported with RAC.
There are 3 cases when using Hitachi HDS storage :

case 1: LUNs provided by the HDS storage with IBM MPIO installed as the multi-pathing driver.
*** NOT SUPPORTED with all HDS storage, check with Hitachi ***
Disks (LUNs) will be seen as hdisk at AIX level using the lspv command.

On node 1 :

{node1:root}/ # lspv
hdisk0          00ced22cf79098ff    rootvg    active
hdisk1          none                None
hdisk2          none                None
hdisk3          none                None
hdisk4          none                None

Then, for the disks to be used for ASM, and on all nodes :

1. Install MPIO on all nodes, attach the LUNs to each node, discover the LUNs with cfgmgr.
2. Identify the hdisk names on each node, for a given LUN ID.
3. Remove the PVID from each hdisk and change the reserve policy to no_reserve using :
   chdev -l hdiskX -a pv=clear
   chdev -l hdiskX -a reserve_policy=no_reserve
4. Set the ownership to oracle:dba on the /dev/rhdiskX devices.
5. Set read/write permissions to 660 on the /dev/rhdiskX devices.
6. Access the disks through /dev/rhdiskX for the ASM diskgroup configuration.

case 2 : LUNs provided by the HDS storage with HDLM installed as the multi-pathing driver, using the dlmfdrv devices directly.
*** NOT SUPPORTED ***

Release Notes 10g Release 2 (10.2) for AIX 5L Based Systems (64-Bit)
B19074-04 / April 2006

Hitachi HDLM for Storage :
If you use Hitachi HDLM (dlmf devices) for storage, then Automatic Storage Management instances do not automatically identify the physical disks. Instead, the instances identify only the logical volume manager (LVM). This is because the physical disks can only be opened by programs running as root.
Physical disks have path names similar to the following :
/dev/rdlmfdrv8
/dev/rdlmfdrv9


Case 3 : LUNs provided by the HDS storage with HDLM as the multi-pathing driver.
Disks will be seen as dlmfdrv at AIX level using the lspv command, and will be part of HDLM VGs (volume groups).

On node 1 :

{node1:root}/ # lspv
dlmfdrv0        00ced22cf79098ff    rootvg            active
dlmfdrv1        none                vg_asm            active
dlmfdrv2        none                vg_asm            active
dlmfdrv3        none                vg_asm            active
dlmfdrv4        none                vg_asm            active
dlmfdrv5        none                vg_asm            active

or

{node1:root}/ # lspv
dlmfdrv0        00ced22cf79098ff    rootvg            active
dlmfdrv1        none                vg_ocr_disk1      active
dlmfdrv2        none                vg_voting_disk1   active
dlmfdrv3        none                vg_asm_disk1      active
dlmfdrv4        none                vg_asm_disk2      active
dlmfdrv5        none                vg_asm_disk3      active

Then, for the disks to be used for ASM, and on all nodes :

1. Install HDLM on all nodes, attach the LUNs to each node, discover the LUNs with cfgmgr.

2. Turn off reserve locking on all nodes :
   dlnkmgr set -rsv on 0 -s

3. Create the VGs (Volume Groups) and LVs (Logical Volumes). There are 2 options :

Option 1 : Create one VG with all dlmfdrv devices, and create the LVs out of it.

On node 1 :
1) Create the volume group :
   dlmmkvg -y vg_asm -B -s 256 -V 101 dlmfdrv1 dlmfdrv2 dlmfdrv3 dlmfdrv4 dlmfdrv5
2) Enable the volume group :
   dlmvaryonvg vg_asm
3) Create the logical volumes :
   mklv -y lv_ocr_disk1    -T O -w n -s n -r n vg_asm 2
   mklv -y lv_voting_disk1 -T O -w n -s n -r n vg_asm 2
   mklv -y lv_asm_disk1    -T O -w n -s n -r n vg_asm 2
   mklv -y lv_asm_disk2    -T O -w n -s n -r n vg_asm 2
   mklv -y lv_asm_disk3    -T O -w n -s n -r n vg_asm 2
4) Change the permissions, owner and group of the raw devices (see B10811-05 Oracle DB Installation Guide p2-70) :
   chown oracle:dba /dev/rlv_ocr_disk1
   chown oracle:dba /dev/rlv_voting_disk1
   chown oracle:dba /dev/rlv_asm_disk1
   chown oracle:dba /dev/rlv_asm_disk2
   chown oracle:dba /dev/rlv_asm_disk3
   chmod 660 /dev/rlv_ocr_disk1
   chmod 660 /dev/rlv_voting_disk1
   chmod 660 /dev/rlv_asm_disk1
   chmod 660 /dev/rlv_asm_disk2
   chmod 660 /dev/rlv_asm_disk3
5) Disable the volume group :
   dlmvaryoffvg vg_asm

On node 2 :
6) Identify which dlmfdrv corresponds to dlmfdrv1 from node1 :
   dlmfdrv1 on node1 = dlmfdrv0 on node2
7) Import the volume group on node2 (this will copy the VG/LV configuration that was made on node1), then enable it :
   dlmimportvg -V 101 -y vg_asm dlmfdrv0
   dlmvaryonvg vg_asm
8) Ensure the VG will not get varyon'd at boot :
   dlmchvg -a n vg_asm

On node 1 :
9) Enable the volume group on node1 :
   dlmvaryonvg vg_asm

Option 2 : Create one VG per dlmfdrv, and one LV per VG.
(Recommended for ease of administration and to avoid downtime.)

On node 1 :
1) Create the volume groups :
   dlmmkvg -y vg_ocr1 -B -s 256 -V 101 dlmfdrv1
   dlmmkvg -y vg_vot1 -B -s 256 -V 101 dlmfdrv2
   dlmmkvg -y vg_asm1 -B -s 256 -V 101 dlmfdrv3
   dlmmkvg -y vg_asm2 -B -s 256 -V 101 dlmfdrv4
   dlmmkvg -y vg_asm3 -B -s 256 -V 101 dlmfdrv5
2) Enable the volume groups :
   dlmvaryonvg vg_ocr1
   dlmvaryonvg vg_vot1
   dlmvaryonvg vg_asm1
   dlmvaryonvg vg_asm2
   dlmvaryonvg vg_asm3
3) Create the logical volumes :
   mklv -y lv_ocr_disk1    -T O -w n -s n -r n vg_ocr1 2
   mklv -y lv_voting_disk1 -T O -w n -s n -r n vg_vot1 2
   mklv -y lv_asm_disk1    -T O -w n -s n -r n vg_asm1 2
   mklv -y lv_asm_disk2    -T O -w n -s n -r n vg_asm2 2
   mklv -y lv_asm_disk3    -T O -w n -s n -r n vg_asm3 2
4) Change the permissions, owner and group of the raw devices (see B10811-05 Oracle DB Installation Guide p2-70) :
   chown oracle:dba /dev/rlv_ocr_disk1
   chown oracle:dba /dev/rlv_voting_disk1
   chown oracle:dba /dev/rlv_asm_disk1
   chown oracle:dba /dev/rlv_asm_disk2
   chown oracle:dba /dev/rlv_asm_disk3
   chmod 660 /dev/rlv_ocr_disk1
   chmod 660 /dev/rlv_voting_disk1
   chmod 660 /dev/rlv_asm_disk1
   chmod 660 /dev/rlv_asm_disk2
   chmod 660 /dev/rlv_asm_disk3
5) Disable the volume groups :
   dlmvaryoffvg vg_ocr1
   dlmvaryoffvg vg_vot1
   dlmvaryoffvg vg_asm1
   dlmvaryoffvg vg_asm2
   dlmvaryoffvg vg_asm3

On node 2 :
6) Identify which dlmfdrv corresponds to dlmfdrv1 from node1, and so on with the other dlmfdrv :
   dlmfdrv1 on node1 = dlmfdrv0 on node2
   dlmfdrv2 on node1 = dlmfdrv1 on node2
   dlmfdrv3 on node1 = dlmfdrv2 on node2
   dlmfdrv4 on node1 = dlmfdrv3 on node2
   dlmfdrv5 on node1 = dlmfdrv4 on node2
7) Import the volume groups on node2 (this will copy the VG/LV configuration that was made on node1), then enable them :
   dlmimportvg -V 101 -y vg_ocr1 dlmfdrv0
   dlmimportvg -V 101 -y vg_vot1 dlmfdrv1
   dlmimportvg -V 101 -y vg_asm1 dlmfdrv2
   dlmimportvg -V 101 -y vg_asm2 dlmfdrv3
   dlmimportvg -V 101 -y vg_asm3 dlmfdrv4
   dlmvaryonvg vg_ocr1
   dlmvaryonvg vg_vot1
   dlmvaryonvg vg_asm1
   dlmvaryonvg vg_asm2
   dlmvaryonvg vg_asm3
8) Ensure the VGs will not get varyon'd at boot :
   dlmchvg -a n vg_ocr1
   dlmchvg -a n vg_vot1
   dlmchvg -a n vg_asm1
   dlmchvg -a n vg_asm2
   dlmchvg -a n vg_asm3

On node 1 :
9) Enable the volume groups on node1 :
   dlmvaryonvg vg_ocr1
   dlmvaryonvg vg_vot1
   dlmvaryonvg vg_asm1
   dlmvaryonvg vg_asm2
   dlmvaryonvg vg_asm3

4. Remove any PVID from the dlmfdrv devices.

5. Access the disks through the /dev/rlv_asm_diskX (and OCR/voting) raw logical volumes for the ASM diskgroup configuration.

Check for the document :
Hitachi Dynamic Link Manager (HDLM) for IBM AIX Systems User's Guide :
http://sysadmin.net/uploads/SAN/hldm_admin_guide.pdf

7.2.7 Others : StorageTek, HP EVA storage and multipathing

For most of the other storage solutions, please contact the vendor for the supported configuration, as read/write concurrent access from all RAC nodes must be possible to implement a RAC solution. That means the possibility to set the disk reserve_policy to no_reserve, or equivalent.

8 SPECIFIC CONSIDERATIONS FOR RAC/ASM SETUP WITH HACMP INSTALLED

Oracle Clusterware does not require HACMP to work, but some customers may need to have HACMP installed on the RAC node cluster to protect third-party products or resources. Oracle Clusterware can replace HACMP for most operations, such as cold failover for 11g, 10g and 9i single databases, application servers or any applications.
Please check the following documents for details on
http://www.oracle.com/technology/products/database/clustering/index.html :

- Comparing Oracle Real Application Clusters to Failover Clusters for Oracle Database (PDF) December 2006
- Workload Management with Oracle Real Application Clusters (FAN, FCF, Load Balancing) (PDF) May 2005
- Using Oracle Clusterware to Protect 3rd Party Applications (PDF) May 2005
- Using Oracle Clusterware to Protect a Single Instance Oracle Database (PDF) November 2006
- Using Oracle Clusterware to Protect Oracle Application Server (PDF) November 2005
- How to Build End to End Recovery and Workload Balancing for Your Applications 10g Release 1 (PDF) Dec 2004
- Oracle Database 10g Services (PDF) Nov 2003
- Oracle Database 10g Release 2 Best Practices: Optimizing Availability During Unplanned Outages Using Oracle Clusterware and RAC

HOWEVER, if a customer still needs to have HACMP, Oracle Clusterware can cohabit with HACMP.
Please check for the last status of certification on Oracle Metalink (certify).
Extract from Metalink as of March 3rd, 2008.

11g R1 64-bit Certification Summary :

OS        Product        Certified With        Version   Status
AIX (53)  11gR1 64-bit   Oracle Clusterware    11g       Certified
AIX (53)  11gR1 64-bit   HACMP                 5.4.1     Certified
AIX (53)  11gR1 64-bit   HACMP                 5.3       Certified


Detailed information for HACMP 5.3 and 11gRAC :

Certify - Additional Info: RAC for Unix Version 11gR1 64-bit on IBM AIX based Systems (RAC only)
Operating System: IBM AIX based Systems (RAC only) Version 5.3
RAC for Unix Version 11gR1 64-bit
HACMP Version 5.3
Status: Certified
Product Version Note: None available for this product.
Certification Note: Use of HACMP 5.3 requires minimum service levels of the following :
- AIX5L 5.3 TL 6 or later, specifically bos.rte.lvm must be at least 5.3.0.60
- HACMP V5.3 with PTF5, cluster.es.clvm installed and ifix for APAR IZ01809
- RSCT (rsct.basic.rte) version 2.4.7.3 and ifix for APAR IZ01838

Detailed information for HACMP 5.4.1 and 11gRAC :

Certify - Additional Info: RAC for Unix Version 11gR1 64-bit on IBM AIX based Systems (RAC only)
Operating System: IBM AIX based Systems (RAC only) Version 5.3
RAC for Unix Version 11gR1 64-bit
HACMP Version 5.4.1
Status: Certified
Product Version Note: None available for this product.
Certification Note: Use of HACMP 5.4 requires minimum service levels of the following :
- AIX5L 5.3 TL 6 or later, specifically bos.rte.lvm must be at least 5.3.0.60
- HACMP V5.4.1 (available in media, or APAR IZ02620)
- RSCT (rsct.basic.rte) version 2.4.7.3 and ifix for APAR IZ01838 (this APAR is integrated into V2.4.8.1)


The following rules have to be applied :

1. HACMP must not take over/fail over the Oracle Clusterware resources (VIP, database, etc.)
2. The HACMP VIP must not be configured on the IP of the Public node name used by RAC (hostname), or on the Oracle Clusterware VIP.
3. With 11g, it is not necessary to declare the RAC interconnect in HACMP.
4. It is not mandatory to declare the hdisks used for ASM in HACMP as logical volumes (LV) from Volume Groups (VG). In this case, follow the cookbook to prepare the disks for the OCR, Voting and ASM disks.
5. If the choice is to declare the hdisks used by ASM in HACMP Volume Groups, THEN you will have to prepare the disks for the OCR, Voting, ASM spfile and ASM disks as described in the official Oracle document available on http://tahiti.oracle.com

Please check :
Oracle Real Application Clusters Installation Guide
11g Release 1 (11.1) for Linux and UNIX
Part Number B28264-03
http://download.oracle.com/docs/cd/B28359_01/install.111/b28264/toc.htm

And Metalink note 404474.1 :
Status of Certification of Oracle Clusterware with HACMP 5.3 & 5.4
STATUS of IBM HACMP 5.3, 5.4 Certifications with Oracle RAC 10g
What do you need to do? Configuring HACMP 5.3 or HACMP 5.4.1, and 10gR2 with Multi-Node Disk Heartbeat (MNDHB)
Even though this note is written for 10gRAC R2, it is also applicable to 11gRAC R1.

Check for the following chapters :

3.3 Configuring Storage for Oracle Clusterware Files on Raw Devices
The following subsections describe how to configure Oracle Clusterware files on raw partitions :
- Configuring Raw Logical Volumes for Oracle Clusterware
- Creating a Volume Group for Oracle Clusterware
- Configuring Raw Logical Volumes in the New Oracle Clusterware Volume Group
- Importing the Volume Group on the Other Cluster Nodes
- Activating the Volume Group in Concurrent Mode on All Cluster Nodes
For the OCR/voting disks, do create a volume group (VG), and create logical volumes (LV) with names such as /dev/Ocr_Disk1, /dev/Ocr_Disk2, /dev/Voting_Disk1, etc.
And don't forget to remove the reserve policy on all hdisks.

3.6 Configuring Database File Storage on Raw Devices
The following subsections describe how to configure raw partitions for database files :
- Configuring Raw Logical Volumes for Database File Storage
- Creating a Volume Group for Database Files
- Creating Database File Raw Logical Volumes in the New Volume Group
- Importing the Database File Volume Group on the Other Cluster Nodes
- Activating the Database File Volume Group in Concurrent Mode on All Cluster Nodes
For the ASM disks, do create a volume group (VG), and create logical volumes (LV) with names such as /dev/ASM_Disk1, /dev/ASM_Disk2, /dev/ASM_Disk3, etc.
And don't forget to remove the reserve policy on all hdisks.


9 INSTALLATION STEPS

Prior to installing and using Oracle 11g Real Application Clusters, you must :

1. Prepare the infrastructure
   a. Hardware
   b. Storage
   c. Network
   d. SAN and network connectivity
   e. Operating system

2. Prepare the systems
   a. Defining the network layout: Public, Virtual and Private hostnames
   b. AIX Operating system level, required APARs and filesets
   c. System requirements (swap, temp, memory, internal disks)
   d. Users and Groups
   e. Kernel and Shell Limits
   f. User equivalences
   g. Time Server Synchronization

3. Prepare the storage
   a. Create the LUNs for the OCR and Voting disks, and prepare them at AIX level
   b. Create the LUN for the ASM instances spfile, and prepare it at AIX level
   c. Create the LUNs for the ASM disks, and prepare them at AIX level

4. Check that all pre-installation requirements are fulfilled
   a. Using the Oracle Cluster Verification Utility (CVU)
   b. Using in-house shell scripts


When all that is done, you can process the installation in the following order :

1. Install Oracle 11g Clusterware
   a. Apply the necessary patchset and patches
   b. Check that Oracle Clusterware is working properly

2. Install Oracle 11g Automatic Storage Management
   a. Apply the necessary patchset and patches
   b. Create the default node listeners
   c. Create the ASM instances on each node
   d. Configure the ASM instances' local and remote listeners if required
   e. Change the necessary ASM instance parameters (processes, etc.)
   f. Create the ASM diskgroup(s)

3. Install Oracle 11g Real Application Clusters or/and Oracle 10g Real Application Clusters
   a. Apply the necessary patchset and patches

4. Create the database(s)
   a. Configure the database instances' local and remote listeners if required
   b. Change the necessary database instance parameters (processes, etc.)
   c. Create the database(s)' associated cluster services, and configure TAF (Transparent Application Failover) as required for your needs

5. Test your cluster with crash scenarios


10 PREPARING THE SYSTEM

Preparing the system is to be done on all servers which are planned to participate in the Oracle cluster.
All servers MUST be like clones, with ONLY different HOSTNAMEs and IP addresses !!!

IMPORTANT !!!
For ALL servers, you MUST apply the Oracle pre-requisites; those prerequisites are not optional, BUT MANDATORY !!!
For some parameters, such as tuning settings, the values specified are the minimum required, and might be increased depending on your needs !!!
PLEASE check the Oracle documentation for the last updates, and MOSTLY :
Oracle METALINK Note

Preparing the system is about :

- Defining the network layout: Public, Virtual and Private hostnames
  o Hostname and RAC Public node name
  o Network interface identification
  o Default Gateway
  o Network Tuning Parameters
- AIX Operating system level, required APARs and filesets
- System requirements (swap, temp, memory, internal disks)
- Users and Groups
- Kernel and Shell Limits
- User equivalences
- Time Server Synchronization
- Etc.


10.1 Network configuration

10.1.1 Define the networks layout: Public, Virtual and Private hostnames

We need 2 different networks, with associated network interfaces on each node participating in the RAC cluster :

- A Public Network, to be used as the users' network, or a network reserved for the application and database servers.
- A Private Network, to be used as a network reserved for Oracle Clusterware and RAC.

MANDATORY : Network interfaces must have the same name, same subnet and same usage on all nodes !!!

For each node, we have to define :

- A Public hostname and corresponding IP address on the Public Network.
- A Virtual hostname and corresponding IP address, also on the Public Network.
  The IP address must be in the same range as the Public hostname, but not used on the network prior to the Oracle Clusterware installation.
- A Private hostname and corresponding IP address on the Private Network.

Let's identify the hostnames and network interfaces.

Please make a table as follows to have a clear view of your RAC network architecture :
THE PUBLIC NODE NAME MUST BE THE NAME RETURNED BY THE hostname AIX COMMAND

         Public Hostname        VIP Hostname           Private Hostname         Not Used
                                (Virtual IP)           (RAC Interconnect)
         en ? (Public Network)  en ? (Public Network)  en ? (Private Network)   en ?
         Node Name | IP         Node Name | IP         Node Name | IP           Node Name | IP

Issue the AIX command hostname on each node to identify the default node name :

For the first server ...

{node1:root}/ # hostname
node1
{node1:root}/ # ping node1
PING node1: (10.3.25.81): 56 data bytes
64 bytes from 10.3.25.81: icmp_seq=0 ttl=255 time=0 ms
64 bytes from 10.3.25.81: icmp_seq=1 ttl=255 time=0 ms
64 bytes from 10.3.25.81: icmp_seq=2 ttl=255 time=0 ms
^C
----node1 PING Statistics----
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
{node1:root}/ #


For the second server ...

{node2:root}/ # hostname
node2
{node2:root}/ # ping node2
PING node2: (10.3.25.82): 56 data bytes
64 bytes from 10.3.25.82: icmp_seq=0 ttl=255 time=0 ms
64 bytes from 10.3.25.82: icmp_seq=1 ttl=255 time=0 ms
64 bytes from 10.3.25.82: icmp_seq=2 ttl=255 time=0 ms
^C
----node2 PING Statistics----
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
{node2:root}/ #

Another method to check the default hostname :

{node1:root}/crs/11.1.0/bin # lsattr -El inet0
authm         65536                          Authentication Methods               True
bootup_option no                             Use BSD-style Network Configuration  True
gateway                                      Gateway                              True
hostname      node1                          Host Name                            True
rout6                                        IPv6 Route                           True
route         net,-hopcount,0,,0,10.3.25.254 Route                                True
{node1:root}/crs/11.1.0/bin #

To change the default hostname :


{node1:root}/crs/11.1.0/bin # chdev -l inet0 -a hostname=node1
inet0 changed
{node1:root}/crs/11.1.0/bin #

Now we have the Public hostnames and corresponding IPs in our table :

  Public Hostname        | VIP Hostname            | Private Hostname        | Not Used
                         | (Virtual IP)            | (RAC Interconnect)      |
  en? (Public Network)   | en? (Public Network)    | en? (Private Network)   | en?
  node1 / 10.3.25.81     |                         |                         |
  node2 / 10.3.25.82     |                         |                         |

The Oracle Clusterware VIP addresses and corresponding node names must not be used on the network prior to the
Oracle Clusterware installation. Don't create any AIX alias on the public network interface; the Clusterware installation
will do it. Just reserve one VIP and its hostname per RAC node.
  Public Hostname        | VIP Hostname              | Private Hostname        | Not Used
                         | (Virtual IP)              | (RAC Interconnect)      |
  en? (Public Network)   | en? (Public Network)      | en? (Private Network)   | en?
  node1 / 10.3.25.81     | node1-vip / 10.3.25.181   |                         |
  node2 / 10.3.25.82     | node2-vip / 10.3.25.182   |                         |
The Oracle Clusterware VIP addresses and corresponding node names can be declared in the DNS, or at minimum in the local
hosts file of each node.

10.1.2 Identify Network Interface Cards

As root, issue the AIX command ifconfig -l to list the network interfaces on each node :

Result from node1 :

{node1:root}/ # ifconfig -l
en0 en1 en2 lo0
{node1:root}/ #

Result from node2 :

{node2:root}/ # ifconfig -l
en0 en1 en2 lo0
{node2:root}/ #

{node1:root}/crs/11.1.0/bin # lsdev | grep en
en0     Available   Standard Ethernet Network Interface
en1     Available   Standard Ethernet Network Interface
en2     Available   Standard Ethernet Network Interface
ent0    Available   Virtual I/O Ethernet Adapter (l-lan)
ent1    Available   Virtual I/O Ethernet Adapter (l-lan)
ent2    Available   Virtual I/O Ethernet Adapter (l-lan)
inet0   Available   Internet Network Extension
rcm0    Defined     Rendering Context Manager Subsystem
vscsi0  Available   Virtual SCSI Client Adapter
{node1:root}/crs/11.1.0/bin #


As root, issue the following shell loop to get the necessary information from the network interfaces on each node :

Result from node1 :

{node1:root}/ # for i in en0 en1 en2
do
  echo $i
  for attribute in netaddr netmask broadcast state
  do
    lsattr -El $i -a $attribute
  done
done
en0
netaddr   10.3.25.81     Internet Address          True
netmask   255.255.255.0  Subnet Mask               True
broadcast                Broadcast Address         True
state     up             Current Interface Status  True
en1
netaddr   10.10.25.81    Internet Address          True
netmask   255.255.255.0  Subnet Mask               True
broadcast                Broadcast Address         True
state     up             Current Interface Status  True
en2
netaddr   20.20.25.81    Internet Address          True
netmask   255.255.255.0  Subnet Mask               True
broadcast                Broadcast Address         True
state     up             Current Interface Status  True
{node1:root}/ #

Result from node2 :

{node2:root}/ # for i in en0 en1 en2
do
  echo $i
  for attribute in netaddr netmask broadcast state
  do
    lsattr -El $i -a $attribute
  done
done
en0
netaddr   10.3.25.82     Internet Address          True
netmask   255.255.255.0  Subnet Mask               True
broadcast                Broadcast Address         True
state     up             Current Interface Status  True
en1
netaddr   10.10.25.82    Internet Address          True
netmask   255.255.255.0  Subnet Mask               True
broadcast                Broadcast Address         True
state     up             Current Interface Status  True
en2
netaddr   20.20.25.82    Internet Address          True
netmask   255.255.255.0  Subnet Mask               True
broadcast                Broadcast Address         True
state     up             Current Interface Status  True
{node2:root}/ #

As root, issue the AIX command ifconfig -a to list the network interfaces and their addresses on each node :

Result example from node1 :


{node1:root}/ # ifconfig -a
en0:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFF
LOAD,CHAIN>
inet 10.3.25.81 netmask 0xffffff00 broadcast 10.3.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en1:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFF
LOAD,CHAIN>
inet 10.10.25.81 netmask 0xffffff00 broadcast 10.10.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en2:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFF
LOAD,CHAIN>
inet 20.20.25.81 netmask 0xffffff00 broadcast 20.20.25.255
tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node1:root}/ #


Now we have identified the following network interfaces :

- en0 is set to be the Public Network interface on all nodes.
- en0 is also set to be the VIP network interface on all nodes.
- en1 is set to be the Private Network interface, also named the RAC Interconnect, on all nodes.
- en2 is set as not used.
THEN the table looks as follows :

  Public Hostname        | VIP Hostname              | Private Hostname        | Not Used
                         | (Virtual IP)              | (RAC Interconnect)      |
  en0 (Public Network)   | en0 (Public Network)      | en1 (Private Network)   | en2
  node1 / 10.3.25.81     | node1-vip / 10.3.25.181   |                         |
  node2 / 10.3.25.82     | node2-vip / 10.3.25.182   |                         |

To see the details of a network interface :
{node1:root}/oracle -> lsattr -El en0
alias4                        IPv4 Alias including Subnet Mask            True
alias6                        IPv6 Alias including Prefix Length          True
arp            on             Address Resolution Protocol (ARP)           True
authority                     Authorized Users                            True
broadcast                     Broadcast Address                           True
mtu            1500           Maximum IP Packet Size for This Device      True
netaddr        10.3.25.81     Internet Address                            True
netaddr6                      IPv6 Internet Address                       True
netmask        255.255.255.0  Subnet Mask                                 True
prefixlen                     Prefix Length for IPv6 Internet Address     True
remmtu         576            Maximum IP Packet Size for REMOTE Networks  True
rfc1323                       Enable/Disable TCP RFC 1323 Window Scaling  True
security       none           Security Level                              True
state          up             Current Interface Status                    True
tcp_mssdflt                   Set TCP Maximum Segment Size                True
tcp_nodelay                   Enable/Disable TCP_NODELAY Option           True
tcp_recvspace                 Set Socket Buffer Space for Receiving       True
tcp_sendspace                 Set Socket Buffer Space for Sending         True
{node1:root}/oracle ->

THEN we will get the following table with our system :

  Public                 | VIP                       | RAC Interconnect          | Not Used
  en0 (Public Network)   | en0                       | en1 (Private Network)     | en2
  node1 / 10.3.25.81     | node1-vip / 10.3.25.181   | node1-rac / 10.10.25.81   |
  node2 / 10.3.25.82     | node2-vip / 10.3.25.182   | node2-rac / 10.10.25.82   |

Within our infrastructure for the cookbook, we have the following layout :

10.1.3 Update hosts file

You should have the following entries in /etc/hosts on each node. Update/check the entries in the hosts file on each node :

{node1:root}/ # cat /etc/hosts
# Oracle Clusterware Public nodes list
10.3.25.81     node1
10.3.25.82     node2
# Oracle Clusterware Virtual IP nodes list
10.3.25.181    node1-vip
10.3.25.182    node2-vip
# Oracle Clusterware and RAC Interconnect
10.10.25.81    node1-rac
10.10.25.82    node2-rac

NOTA : The Oracle Clusterware virtual IPs are just free, available IPs, ONLY declared in the hosts file
of each node prior to the Oracle Clusterware installation.
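Since every node must carry the same hosts entries, it is worth checking the file mechanically. This sketch assumes this cookbook's node/node-vip/node-rac naming convention; the sample file path is illustrative:

```shell
#!/bin/sh
# Sanity-check a RAC-style hosts file: every public node name must have
# matching -vip and -rac entries, and no IP address may appear twice.
check_rac_hosts() {
    awk '
        /^#/ || NF < 2 { next }                    # skip comments and blanks
        {
            ip[$2] = $1
            if (seen[$1]++) { print "duplicate IP: " $1; bad = 1 }
        }
        END {
            for (h in ip) {
                if (h ~ /-vip$/ || h ~ /-rac$/) continue
                if (!((h "-vip") in ip)) { print "missing VIP entry for " h; bad = 1 }
                if (!((h "-rac") in ip)) { print "missing interconnect entry for " h; bad = 1 }
            }
            exit bad
        }' "$1"
}

# Build a sample matching the layout above and check it
cat > /tmp/hosts.sample <<'EOF'
# Oracle Clusterware Public nodes list
10.3.25.81    node1
10.3.25.82    node2
# Oracle Clusterware Virtual IP nodes list
10.3.25.181   node1-vip
10.3.25.182   node2-vip
# Oracle Clusterware and RAC Interconnect
10.10.25.81   node1-rac
10.10.25.82   node2-rac
EOF
check_rac_hosts /tmp/hosts.sample && echo "hosts file is consistent"
```

On the real cluster you would run `check_rac_hosts /etc/hosts` on every node and compare the results.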


10.1.4 Defining the Default Gateway on the Public Network Interface

Now, if you want to set the default gateway on the public network interface, be careful about the impact it
may have if you already have a default gateway set on a different network interface, and multiple network
interfaces not used for Oracle Clusterware and RAC purposes.

First check whether a default gateway is set. Use netstat -r on both nodes :
{node1:root}/ # netstat -r
Routing tables
Destination    Gateway        Flags  Refs  Use      If   Exp  Groups

Route Tree for Protocol Family 2 (Internet):
default        10.3.25.254    UG      1    73348    en0
default        9.212.131.254  UG      0    435      en0
10.3.25.0      node1          UHSb    0    0        en0
10.3.25/24     node1          U      11    6113590  en0
node1          loopback       UGHS   37    1034401  lo0
node1-vip      loopback       UGHS    8    80831    lo0
10.3.25.255    node1          UHSb    0    4        en0
10.10.25.0     node1-rac      UHSb    0    0        en1
10.10.25/24    node1-rac      U      25    350557   en1
node1-rac      loopback       UGHS   16    481      lo0
10.10.25.255   node1-rac      UHSb    0    4        en1
20.20.25.0     node1-rac-b    UHSb    0    0        en2
20.20.25/24    node1-rac-b    U      16    176379   en2
node1-rac-b    loopback       UGHS    5    392      lo0
20.20.25.255   node1-rac-b    UHSb    0    4        en2
127/8          loopback       U      49    187105   lo0

Route Tree for Protocol Family 24 (Internet v6):
::1            ::1            UH      0    32       lo0
{node1:root}/ #

If not set :
Using route add, set the default gateway on the public network interface, en0 in our case, on all nodes.

MUST BE DONE on each node !!!

To establish a default gateway (10.3.25.254 in our case), type :

{node1:root}/ # route add 0 10.3.25.254
{node1:root}/crs/11.1.0/bin # netstat -r | grep 10.3.25.254
default        10.3.25.254    UG      2    1050     en0
{node1:root}/crs/11.1.0/bin #

{node2:root}/ # route add 0 10.3.25.254
{node2:root}/ # netstat -r | grep 10.3.25.254
default        10.3.25.254    UG           7324     en0
{node2:root}/ #

The value 0, or the default keyword, for the Destination parameter means that any packets sent to destinations not
previously defined and not on a directly connected network go through the default gateway. The 10.3.25.254 address
is that of the gateway chosen to be the default.


10.1.5 Configure Network Tuning Parameters

Verify that the network tuning parameters shown in the following table are set to the values shown or to higher values.
The procedure following the table describes how to verify and set the values.

Network Tuning Parameter   Recommended Minimum Value on All Nodes
ipqmaxlen                  512
rfc1323                    1
sb_max                     1310720
tcp_recvspace              65536
tcp_sendspace              65536
udp_recvspace              655360
                           Note: The recommended value of this parameter is 10 times the value of the
                           udp_sendspace parameter. The value must be less than the value of the sb_max
                           parameter.
udp_sendspace              65536
                           Note: This value is suitable for a default database installation. For production
                           databases, the minimum value for this parameter is 4 KB plus the value of the
                           database DB_BLOCK_SIZE initialization parameter multiplied by the value of the
                           DB_MULTIBLOCK_READ_COUNT initialization parameter:
                           (DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB
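The production formula above is quick to evaluate with shell arithmetic. The block size and multiblock read count below are example figures, not values taken from any particular database:

```shell
#!/bin/sh
# Minimum udp_sendspace for a production database, per the formula above:
# (DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB.
db_block_size=8192               # example: 8 KB blocks
db_multiblock_read_count=16      # example multiblock read count
min_udp_sendspace=$(( db_block_size * db_multiblock_read_count + 4096 ))
echo "minimum udp_sendspace: $min_udp_sendspace"
# With these example values the result is 135168, above the 65536 default minimum.
```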

To check values
{node1:root}/ # for i in ipqmaxlen rfc1323 sb_max tcp_recvspace tcp_sendspace
udp_recvspace udp_sendspace
do
no -a |grep $i
done
ipqmaxlen = 512
rfc1323 = 1
sb_max = 1310720
tcp_recvspace = 65535
tcp_sendspace = 65535
udp_recvspace = 655350
udp_sendspace = 65535
{node1:root}/ #
{node2:root}/ # for i in ipqmaxlen rfc1323 sb_max tcp_recvspace tcp_sendspace
udp_recvspace udp_sendspace
do
no -a |grep $i
done
ipqmaxlen = 512
rfc1323 = 1
sb_max = 1310720
tcp_recvspace = 65535
tcp_sendspace = 65535
udp_recvspace = 655350
udp_sendspace = 65535
{node2:root}/ #


To change the current values to the required ones, if necessary, follow these steps :

1. If you must change the value of any parameter, enter the following command to
determine whether the system is running in compatibility mode:

# /usr/sbin/lsattr -E -l sys0 -a pre520tune

If the system is running in compatibility mode, the output is similar to the
following, showing that the value of the pre520tune attribute is enable:

pre520tune enable Pre-520 tuning compatibility mode True

By default, with AIX 5L, compatibility mode is disabled, which gives:

pre520tune disable Pre-520 tuning compatibility mode True

Change it to enable ONLY if necessary !!! If you want to enable compatibility mode,
issue the following command:

# chdev -l sys0 -a pre520tune=enable

2. IF the system is running in compatibility mode, THEN follow these steps to change
the parameter values:

Enter commands similar to the following to change the value of each parameter:

# /usr/sbin/no -o parameter_name=value

For example:

# /usr/sbin/no -o udp_recvspace=655360

Then add entries similar to the following to the /etc/rc.net file for each parameter
that you changed in the previous step:

if [ -f /usr/sbin/no ] ; then
        /usr/sbin/no -o udp_sendspace=65536
        /usr/sbin/no -o udp_recvspace=655360
        /usr/sbin/no -o tcp_sendspace=65536
        /usr/sbin/no -o tcp_recvspace=65536
        /usr/sbin/no -o rfc1323=1
        /usr/sbin/no -o sb_max=1310720
        /usr/sbin/no -o ipqmaxlen=512
fi

By adding these lines to the /etc/rc.net file, the values persist when the system restarts.
THEN you need to modify the file and REBOOT all nodes !

ELSE, if the system is not running in compatibility mode, enter commands similar to the
following to change the parameter values:

For the ipqmaxlen parameter:

/usr/sbin/no -r -o ipqmaxlen=512

For each of the other parameters:

/usr/sbin/no -p -o udp_sendspace=65536
/usr/sbin/no -p -o udp_recvspace=655360
/usr/sbin/no -p -o tcp_sendspace=65536
/usr/sbin/no -p -o tcp_recvspace=65536
/usr/sbin/no -p -o rfc1323=1
/usr/sbin/no -p -o sb_max=1310720

Note: If you modify the ipqmaxlen parameter, you must restart the system.
These commands modify the /etc/tunables/nextboot file, causing the attribute values to persist when the
system restarts.
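All of the minimums above can be verified in one pass by filtering the `param = value` lines that the AIX `no -a` command prints. This is a sketch; it reads from stdin so it can be fed either real `no -a` output or a sample:

```shell
#!/bin/sh
# Compare "param = value" lines (the format printed by "no -a") against the
# required minimums from the table above; print any parameter that is too low.
check_net_tunables() {
    awk '
        BEGIN {
            min["ipqmaxlen"]     = 512
            min["rfc1323"]       = 1
            min["sb_max"]        = 1310720
            min["tcp_recvspace"] = 65536
            min["tcp_sendspace"] = 65536
            min["udp_recvspace"] = 655360
            min["udp_sendspace"] = 65536
        }
        $1 in min && $3 + 0 < min[$1] {
            printf "%s = %s is below the %d minimum\n", $1, $3, min[$1]
            bad = 1
        }
        END { exit bad }'
}

# On AIX you would pipe the real output:  no -a | check_net_tunables
printf 'sb_max = 1048576\nrfc1323 = 1\n' | check_net_tunables || echo "tuning needed"
```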


10.2 AIX Operating System Level, Required APARs and Filesets

To have the latest information, please refer to Metalink Note 282036.1 on
http://metalink.oracle.com; this note includes the latest updates. Also check the certification
status on metalink.oracle.com or otn.oracle.com.

AIX release supported with Oracle 11g RAC R1, as of February 3rd, 2008 :
AIX 5L version 5.3, Maintenance Level 5 or later.

To determine which version of AIX is installed, enter the following command :

{node1:root}/ # oslevel -s
5300-07-01-0748

The output gives the level of the TL and the sublevel of the service pack : 5300-07-01-0748 means AIX 5L TL7 SP1.

If the operating system version is lower than the minimum required, upgrade your operating system to this level. AIX 5L
maintenance packages are available from the following Web site :
http://www-912.ibm.com/eserver/support/fixes/


10.2.1 Filesets Requirements for 11g RAC R1 / ASM (NO HACMP)

AIX filesets required on ALL nodes for an 11g RAC Release 1 implementation with ASM !!!
Check that the required filesets are installed on the system.
(Note: If a PTF is not downloadable, customers should request an efix through AIX customer support.)

AIX 5L version 5.3, Maintenance Level 6 or later :

bos.adt.base
bos.adt.lib
bos.adt.libm
bos.perf.libperfstat
bos.perf.perfstat
bos.perf.proctools
rsct.basic.rte
rsct.compat.clients.rte
xlC.aix50.rte 8.0.0.5
xlC.rte 8.0.0.5

AIX 6 : NOT YET SUPPORTED.
Check Metalink and Certify for the latest update on certification status.

Specific Filesets :
For EMC Symmetrix : EMC.Symmetrix.aix.rte 5.2.0.1 or higher
For EMC CLARiiON  : EMC.CLARiiON.fcp.rte 5.2.0.1 or higher

Depending on the AIX level that you intend to install, verify that the required filesets are installed on the system. The
following procedure describes how to check these requirements.

To ensure that the system meets these requirements, check that the required filesets are all installed, and that
xlC.aix50.rte and xlC.rte are at minimum releases 8.0.0.4 and 8.0.0.0 :

{node1:root}/ # lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix50.rte xlC.rte
  Fileset                      Level     State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  bos.adt.base               5.3.7.0   COMMITTED  Base Application Development Toolkit
  bos.adt.lib                5.3.0.60  COMMITTED  Base Application Development Libraries
  bos.adt.libm               5.3.7.0   COMMITTED  Base Application Development Math Library
  bos.perf.libperfstat       5.3.7.0   COMMITTED  Performance Statistics Library Interface
  bos.perf.perfstat          5.3.7.0   COMMITTED  Performance Statistics Interface
  bos.perf.proctools         5.3.7.0   COMMITTED  Proc Filesystem Tools
  rsct.basic.rte             2.4.8.0   COMMITTED  RSCT Basic Function
  rsct.compat.clients.rte    2.4.8.0   COMMITTED  RSCT Event Management Client Function
  xlC.aix50.rte              9.0.0.1   COMMITTED  XL C/C++ Runtime for AIX 5.2
  xlC.rte                    9.0.0.1   COMMITTED  XL C/C++ Runtime
Path: /etc/objrepos
  bos.perf.libperfstat       5.3.0.0   COMMITTED  Performance Statistics Library Interface
  bos.perf.perfstat          5.3.7.0   COMMITTED  Performance Statistics Interface
  rsct.basic.rte             2.4.8.0   COMMITTED  RSCT Basic Function
{node1:root}/ #

If a fileset is not installed and committed, then install it. Refer to your operating system or software documentation for
information about installing filesets. If a fileset is only APPLIED, it's not mandatory to commit it.


10.2.2 APARs Requirements for 11g RAC R1 / ASM (NO HACMP)

Check that the required APARs are installed on the system.

AIX patches (APARs) required on ALL nodes for an 11g RAC R1 implementation with ASM !!!
(Note: If a PTF is not downloadable, customers should request an efix through AIX customer support.)

AIX 5L version 5.3, Maintenance Level 6 or later :

IY89080
IY92037
IY94343
IZ01060 or efix for IZ01060
IZ03260 or efix for IZ03260

AIX 6 : NOT YET SUPPORTED.

JDK (not mandatory for the installation; these APARs are required only if you are using the
associated JDK version) :

IY58350 Patch for SDK 1.3.1.16 (32-bit)
IY63533 Patch for SDK 1.4.2.1 (64-bit)
IY65305 Patch for SDK 1.4.2.2 (32-bit)
IBM JDK 1.5.0.06 (IA64 - mixed mode) is installed

Check Metalink and Certify for the latest update on certification status.

To ensure that the system meets these requirements, follow these steps.

To determine whether an APAR is installed, enter a command similar to the following
(this is an example for AIX 5L 5.3, with TL07) :

{node1:root}/ # /usr/sbin/instfix -i -k "IY89080 IY92037 IY94343 IZ01060 IZ03260"
All filesets for IY89080 were found.
All filesets for IY92037 were found.
All filesets for IY94343 were found.
All filesets for IZ01060 were found.
All filesets for IZ03260 were found.
{node1:root}/ #

If an APAR is not installed, download it from the following Web site and install it :
http://www-912.ibm.com/eserver/support/fixes/


10.3 System Requirements (Swap, Temp, Memory, Internal Disks)

Requirements to meet on ALL nodes !!!

RAM >= 512 MB minimum. Command to check the physical memory : lsattr -El sys0 -a realmem

{node1:root}/ # lsattr -El sys0 -a realmem
realmem 3145728 Amount of usable physical memory in Kbytes False
{node1:root}/ #
{node2:root}/home # lsattr -El sys0 -a realmem
realmem 3145728 Amount of usable physical memory in Kbytes False
{node2:root}/home #

Internal disk >= 12 GB for the Oracle code (CRS_HOME, ASM_HOME, ORACLE_HOME).
This part is detailed in the chapter "Required local disks (Oracle Clusterware, ASM and RAC software)" !!!

{node1:root}/ # df -k
Filesystem                   1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4                          262144    207816   21%    13591    23% /
/dev/hd2                         4718592   2697520   43%    46201     7% /usr
/dev/hd9var                       262144    233768   11%      565     2% /var
/dev/hd3                         1310720   1248576    5%      255     1% /tmp
/dev/hd1                          262144    261760    1%        5     1% /home
/proc                                  -         -    -         -     -  /proc
/dev/hd10opt                      524288    283400   46%     5663     9% /opt
/dev/oraclelv                   15728640  15506524    2%       73     1% /oracle
/dev/crslv                       5242880   5124692    3%       41     1% /crs
fas960c2:/vol/VolDistribJSC    276824064  71068384   75%   144273     2% /distrib
{node1:root}/ #

{node2:root}/home # df -k
Filesystem                   1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4                          262144    208532   21%    13564    23% /
/dev/hd2                         4718592   2697360   43%    46201     7% /usr
/dev/hd9var                       262144    236060   10%      485     1% /var
/dev/hd3                         2097152     29636   99%     4812    35% /tmp
/dev/hd1                          262144    261760    1%        5     1% /home
/proc                                  -         -    -         -     -  /proc
/dev/hd10opt                      524288    283436   46%     5663     9% /opt
/dev/oraclelv                   15728640  15540012    2%       64     1% /oracle
/dev/crslv                       5242880   5119460    3%       43     1% /crs
fas960c2:/vol/VolDistribJSC    276824064  71068392   75%   144273     2% /distrib
{node2:root}/home #


Paging space = 2 x RAM, with a minimum of 400 MB and a maximum of 2 GB.
To check the configured paging space : lsps -a

{node1:root}/ # lsps -a
Page Space  Physical Volume  Volume Group  Size   %Used  Active  Auto  Type
hd6         hdisk0           rootvg        512MB      7  yes     yes   lv
{node1:root}/ #

{node2:root}/home # lsps -a
Page Space  Physical Volume  Volume Group  Size   %Used  Active  Auto  Type
hd6         hdisk1           rootvg        512MB      2  yes     yes   lv
{node2:root}/home #
Temporary disk space : the Oracle Universal Installer requires up to 400 MB of free space in the /tmp directory.
To check the free temporary space available : df -k /tmp

{node1:root}/ # df -k /tmp
Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd3          1310720   1248576    5%      255     1% /tmp
{node1:root}/ #

{node2:root}/home # df -k /tmp
Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd3          2097152     29636   99%     4812    35% /tmp
{node2:root}/home #

You can use another filesystem instead of /tmp.
Set the TEMP environment variable (used by the crs, asm and rdbms users and/or the oracle user)
and the TMPDIR environment variable to the new location.
For example :
export TEMP=/new_tmp
export TMPDIR=/new_tmp
export TMP=/new_tmp


10.4 Users And Groups

Two options are possible :

OPTION 1 : One user for each installation, for example :
- a unix crs user for the CRS_HOME installation
- a unix asm user for the ASM_HOME installation
- a unix rdbms user for the ORACLE_HOME installation

OPTION 2 : One user for all installations, for example an oracle unix user for the
CRS_HOME, ASM_HOME and ORACLE_HOME installations.

For the cookbook's purpose, and for ease of administration, we'll implement the first option, with crs, asm and
rdbms users.
We'll also create an oracle user to own /oracle ($ORACLE_BASE), which will be shared by asm (/oracle/asm) and
rdbms (/oracle/rdbms).
This oracle user (UID=500) will be part of the following groups : oinstall, dba, oper, asm, asmdba.


This setup has to be done on all the nodes of the cluster.

The default user home for the crs, asm and rdbms users MUST be in /home, otherwise you may have trouble
with the ssh setup (user equivalence).

To create the groups and users, be sure that all the group and user numbers are identical across the nodes.

On node1 :
mkgroup -'A' id='500' adms='root' oinstall
mkgroup -'A' id='501' adms='root' crs
mkgroup -'A' id='502' adms='root' dba
mkgroup -'A' id='503' adms='root' oper
mkgroup -'A' id='504' adms='root' asm
mkgroup -'A' id='505' adms='root' asmdba
mkuser id='500' pgrp='oinstall' groups='crs,dba,oper,asm,asmdba,staff' home='/home/oracle' oracle
mkuser id='501' pgrp='oinstall' groups='crs,dba,oper,staff' home='/home/crs' crs
mkuser id='502' pgrp='oinstall' groups='dba,oper,asm,asmdba,staff' home='/home/asm' asm
mkuser id='503' pgrp='oinstall' groups='dba,oper,staff' home='/home/rdbms' rdbms

THEN run the same commands on node2.


The crs, asm and rdbms users must have oinstall as their primary group and dba as a secondary group.
Verification: check that the /etc/group file contains lines such as the following
(the numbers could be different) :

{node1:root}/ # cat /etc/group | grep oracle
staff:!:1:ipsec,sshd,oracle,crs,asm,rdbms
oinstall:!:500:oracle,crs,asm,rdbms
crs:!:501:oracle,crs
dba:!:502:oracle,crs,asm,rdbms
oper:!:503:oracle,crs,asm,rdbms
asm:!:504:oracle,asm
asmdba:!:505:oracle,asm
{node1:root}/ #
{node1:root}/ # su - crs
{node1:crs}/crs # id
uid=501(crs) gid=500(oinstall) groups=1(staff),501(crs),502(dba),503(oper)
{node1:crs}/crs #
{node1:root}/ # su - asm
{node1:asm}/oracle/asm # id
uid=502(asm) gid=500(oinstall) groups=1(staff),502(dba),503(oper),504(asm),505(asmdba)
{node1:asm}/oracle/asm #
{node1:root}/ # su - rdbms
{node1:rdbms}/oracle/rdbms # id
uid=503(rdbms) gid=500(oinstall) groups=1(staff),502(dba),503(oper)
{node1:rdbms}/oracle/rdbms #
{node1:root}/ # su - oracle
{node1:oracle}/oracle # id
uid=500(oracle) gid=500(oinstall)
groups=1(staff),501(crs),502(dba),503(oper),504(asm),505(asmdba)
{node1:oracle}/oracle #
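Group membership strings like the `id` output above can be checked mechanically on every node. This sketch parses an `id`-style line and verifies the expected groups; the group names are the ones created earlier in this chapter:

```shell
#!/bin/sh
# Verify that an "id"-style output line contains every required group.
has_groups() {
    line=$1; shift
    for g in "$@"
    do
        case "$line" in
            *"($g)"*) ;;                              # group is present
            *) echo "missing group: $g"; return 1 ;;
        esac
    done
    echo "all required groups present"
}

# On a node you would use:  has_groups "$(su - crs -c id)" oinstall crs dba oper
id_line='uid=501(crs) gid=500(oinstall) groups=1(staff),501(crs),502(dba),503(oper)'
has_groups "$id_line" oinstall crs dba oper
```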


Set a password for the crs, asm and rdbms users. Set the same password on all the nodes of the
cluster, with the following commands :

{node1:root}/ # passwd crs
Changing password for "crs"
crs's New password:
Enter the new password again:
{node1:root}/ #

{node2:root}/ # passwd crs
Changing password for "crs"
crs's New password:
Enter the new password again:
{node2:root}/ #

{node1:root}/ # passwd asm
Changing password for "asm"
asm's New password:
Enter the new password again:
{node1:root}/ #

{node2:root}/ # passwd asm
Changing password for "asm"
asm's New password:
Enter the new password again:
{node2:root}/ #

{node1:root}/ # passwd rdbms
Changing password for "rdbms"
rdbms's New password:
Enter the new password again:
{node1:root}/ #

{node2:root}/ # passwd rdbms
Changing password for "rdbms"
rdbms's New password:
Enter the new password again:
{node2:root}/ #

And connect at least once to each node with each user (crs, asm, rdbms) to validate the password,
as the first connection will ask to change the password :

{node1:root}/ # telnet node1
.
User : crs
Password : *****

{node2:root}/ # telnet node2
.
User : crs
Password : *****

10.5 Kernel and Shell Limits

Configuring shell limits and system configuration (extract from the Oracle documentation).

Note:
The parameter and shell limit values shown in this section are minimum recommended
values only. For production database systems, Oracle recommends that you tune these values
to optimize the performance of the system. Refer to your operating system documentation for
more information about tuning kernel parameters.

Oracle recommends that you set shell limits for the root and oracle users, and set the system
configuration parameters as described in this section, on all cluster nodes.

10.5.1 Configure Shell Limits

Verify that the shell limits shown in the following table are set to the values shown. The procedure following the table
describes how to verify and set the values.

Shell Limit (as shown in smit)   Recommended Value for root user   Recommended Value for oracle user
                                                                   (in our case: crs, asm and rdbms users)
Soft FILE size                   -1 (Unlimited)                    -1 (Unlimited)
Soft CPU time                    -1 (Unlimited)                    -1 (Unlimited)
                                 Note: this is the default value.  Note: this is the default value.
Soft DATA segment                -1 (Unlimited)                    -1 (Unlimited)
Soft STACK size                  -1 (Unlimited)                    -1 (Unlimited)

To view the current value specified for these shell limits, and to change them if necessary,
follow these steps:

1. Enter the following command:
# smit chuser
2. In the User NAME field, enter the user name of the Oracle software owner, for example crs, asm or rdbms.
3. Scroll down the list and verify that the value shown for the soft limits
listed in the previous table is -1. If necessary, edit the existing value.
4. When you have finished making changes, press F10 to exit.


OR, for the root and oracle users on each node, check through the ulimit command :
{node1:root}/ # ulimit -a
time(seconds)
unlimited
file(blocks)
unlimited
data(kbytes)
unlimited
stack(kbytes)
4194304
memory(kbytes)
unlimited
coredump(blocks)
unlimited
nofiles(descriptors) unlimited
{node1:root}/

{node1:root}/ # su - crs
{node1:crs}/crs/11.1.0 # ulimit -a
time(seconds)
unlimited
file(blocks)
unlimited
data(kbytes)
unlimited
stack(kbytes)
4194304
memory(kbytes)
unlimited
coredump(blocks)
unlimited
nofiles(descriptors) unlimited
{node1:crs}/crs/11.1.0 #
{node1:root}/ # su - asm
{node1:asm}/oracle/asm/11.1.0 # ulimit -a
time(seconds)
unlimited
file(blocks)
unlimited
data(kbytes)
unlimited
stack(kbytes)
4194304
memory(kbytes)
unlimited
coredump(blocks)
unlimited
nofiles(descriptors) unlimited
{node1:asm}/oracle/asm/11.1.0 #
{node1:root}/ # su - rdbms
{node1:rdbms}/oracle/rdbms/11.1.0 # ulimit -a
time(seconds)
unlimited
file(blocks)
unlimited
data(kbytes)
unlimited
stack(kbytes)
4194304
memory(kbytes)
unlimited
coredump(blocks)
unlimited
nofiles(descriptors) unlimited
{node1:rdbms}/oracle/rdbms/11.1.0 #

Add the following lines to the /etc/security/limits file on each node (set unlimited with -1) :
default:
fsize = -1
core = -1
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
Or set at minimum Oracle specified values as follow :
default:
fsize = -1
core = -1
cpu = -1
data = 512000
rss = 512000
stack = 512000
nofiles = 2000
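A quick way to confirm the default stanza carries the intended values is to parse the file. This sketch reads any limits-style "attr = value" stanza from stdin and flags values that are not -1:

```shell
#!/bin/sh
# Report any attribute in the "default:" stanza of an AIX
# /etc/security/limits-style file whose value is not -1 (unlimited).
check_default_unlimited() {
    awk '
        /^default:/ { in_default = 1; next }
        /^[^ \t]/   { in_default = 0 }          # a new stanza begins
        in_default && NF >= 3 && $3 != -1 {
            print $1 " = " $3
            bad = 1
        }
        END { exit bad }'
}

# On a node you would run:  check_default_unlimited < /etc/security/limits
printf 'default:\n\tfsize = -1\n\tcore = -1\n\tdata = 512000\n' |
    check_default_unlimited || echo "some limits are not unlimited"
```

If you chose the "minimum Oracle specified values" variant above, the non-unlimited lines it prints are expected.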


10.5.2 Set crs, asm and rdbms Users' Capabilities

Add the capabilities CAP_NUMA_ATTACH, CAP_BYPASS_RAC_VMM and CAP_PROPAGATE to the users crs, asm, rdbms
and oracle.

To set the crs, asm, rdbms and oracle users' capabilities on each node :

{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE crs
{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE asm
{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE rdbms
{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE crs
{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE asm
{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE rdbms
{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

To check the user capabilities on each node, with the crs user for example :

{node1:root}/crs/11.1.0/bin # lsuser -f crs | grep capabilities
        capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
{node1:root}/crs/11.1.0/bin #
{node2:root}/crs/11.1.0/bin # lsuser -f crs | grep capabilities
        capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
{node2:root}/crs/11.1.0/bin #

10.5.3 Set NCARGS Parameter

Change the ncargs parameter to 128.

To set the ncargs attribute on each node :
{node1:root}/ # chdev -l sys0 -a ncargs=128
sys0 changed
{node1:root}/ #
{node2:root}/ # chdev -l sys0 -a ncargs=128
sys0 changed
{node2:root}/ #

To check the ncargs attribute on each node :
{node1:root}/crs/11.1.0/bin # lsattr -El sys0 -a ncargs
ncargs 128 ARG/ENV list size in 4K byte blocks True
{node1:root}/crs/11.1.0/bin #
{node2:root}/crs/11.1.0/bin # lsattr -El sys0 -a ncargs
ncargs 128 ARG/ENV list size in 4K byte blocks True
{node2:root}/crs/11.1.0/bin #
10.5.4 Configure System Configuration Parameters

Verify that the maximum number of processes allowed per user is set to 16384 or greater :

Note:
For production systems, this value should be at least 128 plus the sum of the PROCESSES and
PARALLEL_MAX_SERVERS initialization parameters for each database running on the system.

To check the value :

{node1:root}/ # lsattr -El sys0 -a maxuproc
maxuproc 1024 Maximum number of PROCESSES allowed per user True
{node1:root}/ #

To edit and modify the value, setting it to 16384 :

{node1:root}/ # chdev -l sys0 -a maxuproc='16384'
{node1:root}/ #
{node1:root}/ # lsattr -El sys0 -a maxuproc
maxuproc 16384 Maximum number of PROCESSES allowed per user True
{node1:root}/ #

OR

1. Enter the following command:
# smit chgsys
2. Verify that the value shown for Maximum number of PROCESSES allowed per user
is greater than or equal to 16384. If necessary, edit the existing value.
3. When you have finished making changes, press F10 to exit.
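The sizing rule in the note above is plain arithmetic: 128 plus PROCESSES + PARALLEL_MAX_SERVERS for every database instance on the node. A sketch with hypothetical per-database values:

```shell
# Hypothetical init.ora values for two databases sharing this node.
db1_processes=1500; db1_pms=80     # PROCESSES / PARALLEL_MAX_SERVERS
db2_processes=600;  db2_pms=40

# Minimum maxuproc per the rule above.
min_maxuproc=$((128 + db1_processes + db1_pms + db2_processes + db2_pms))
echo "minimum maxuproc for this node: $min_maxuproc"
```

If the computed value ever exceeds the current setting, raise it with `chdev -l sys0 -a maxuproc=<value>` as shown above.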
10.5.5 lru_file_repage setting

Verify that the lru_file_repage parameter is set to 0.

The default value is "1", but it is recommended to set this to "0". This setting hints to the VMM to steal only file
pages (from the AIX file buffer cache) and to leave the computational pages (from the SGA) alone.
By setting "lru_file_repage=0", AIX only frees file cache memory. This guarantees that working storage stays in
memory, and allows the file cache to grow.
So in the past you might have set maxclient at 20% for database servers. Today you could set maxclient at 90% with
"lru_file_repage=0". The exact setting will vary based on your application and amount of memory. Contact IBM Support
if you need help determining the optimum setting.
The lru_file_repage parameter is only available on AIX 5.2 ML04+ and AIX 5.3 ML01+.
To check the value on each node :

{node1:root}/ # vmo -L lru_file_repage
NAME            CUR DEF BOOT MIN MAX UNIT    TYPE DEPENDENCIES
--------------------------------------------------------------------------------
lru_file_repage 1   1   1    0   1   boolean D
--------------------------------------------------------------------------------
{node1:root}/ #
Change the value to 0 on each node :

{node1:root}/ # vmo -p -o lru_file_repage=0
Setting lru_file_repage to 0 in nextboot file
Setting lru_file_repage to 0
{node1:root}/ #
{node1:root}/ # vmo -L lru_file_repage
NAME            CUR DEF BOOT MIN MAX UNIT    TYPE DEPENDENCIES
--------------------------------------------------------------------------------
lru_file_repage 0   1   0    0   1   boolean D
--------------------------------------------------------------------------------
{node1:root}/ #
Typical vmo settings for Oracle :

- lru_file_repage = 0 (default 1) (AIX 5.2 ML04 or later)
  Forces file pages to be repaged before computational pages
- minperm% = 5 (default 20)
  Target for minimum % of physical memory to be used for file system cache
- maxperm% = 90 (default 80)
- strict_maxperm = 1
- strict_maxclient = 1
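The typical settings above can be applied in one pass. A sketch that echoes the vmo commands for review (drop the echo to run them as root on each node; `-p` makes the change persistent in /etc/tunables/nextboot):

```shell
# Generate the vmo commands for the typical Oracle settings listed above.
cmds=$(for opt in lru_file_repage=0 minperm%=5 maxperm%=90; do
  echo "vmo -p -o $opt"
done)
printf '%s\n' "$cmds"
```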
Content of /etc/tunables/nextboot for node1 :

{node1:root}/ # cat /etc/tunables/nextboot
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos530 src/bos/usr/sbin/perf/tune/nextboot 1.1
#
# Licensed Materials - Property of IBM
#
# (C) COPYRIGHT International Business Machines Corp. 2002
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# IBM_PROLOG_END_TAG
vmo:
        strict_maxperm = "0"
        minperm% = "5"
        maxperm% = "90"
        maxclient% = "90"
        lru_file_repage = "0"
        strict_maxclient = "1"
        minfree = "3000"
        maxfree = "4000"
nfso:
        nfs_rfc1323 = "1"
no:
        ipqmaxlen = "512"
        udp_sendspace = "65536"
        udp_recvspace = "655360"
        tcp_sendspace = "262144"
        tcp_recvspace = "262144"
        sb_max = "1310720"
        rfc1323 = "1"
ioo:
        pv_min_pbuf = "512"
        numfsbufs = "2048"
        maxrandwrt = "32"
        maxpgahead = "16"
        lvm_bufcnt = "64"
        j2_nBufferPerPagerDevice = "1024"
        j2_maxRandomWrite = "32"
        j2_maxPageReadAhead = "128"
        j2_dynamicBufferPreallocation = "256"
{node1:root}/ #
10.5.6 Asynchronous I/O setting

Setting Asynchronous I/O. To check and change the value on each node :

{node1:root}/ # smitty aio

Select "Change / Show Characteristics of Asynchronous I/O", and set STATE to "available".

OR

{node1:root}/ # lsattr -El aio0
autoconfig available STATE to be configured at system restart True
fastpath   enable    State of fast path                       True
kprocprio  39        Server PRIORITY                          True
maxreqs    16384     Maximum number of REQUESTS               True
maxservers 300       MAXIMUM number of servers per cpu        True
minservers 150       MINIMUM number of servers                True
{node1:root}/ #
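The lsattr output above can be checked mechanically. A sketch that parses a captured sample (on a live node you would pipe the output of `lsattr -El aio0` instead of the here-document):

```shell
# Extract the autoconfig state from lsattr-style output and verify that AIO
# will be configured at system restart.
aio_state=$(awk '$1 == "autoconfig" { print $2 }' <<'EOF'
autoconfig available STATE to be configured at system restart True
fastpath   enable    State of fast path                       True
maxreqs    16384     Maximum number of REQUESTS               True
EOF
)
echo "aio0 autoconfig: $aio_state"
```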

10.6 User equivalences

Before installing the Oracle Clusterware, ASM software and Real Application Clusters software, you must configure
user equivalence for the crs, asm and rdbms users on all cluster nodes.
There are two types of user equivalence implementation :

- RSH (Remote Shell)
- SSH (Secure Shell)

When SSH is not available, the Installer uses the rsh and rcp commands instead of ssh and scp.

You have to choose one or the other, but don't implement both at the same time.
Usually, customers will implement SSH. If SSH is started and used, do configure SSH.

On the public node name returned by the AIX command hostname :

- node1 must ssh or rsh to node1, as crs, asm and rdbms user.
- node1 must ssh or rsh to node2, as crs, asm and rdbms user.
- node2 must ssh or rsh to node1, as crs, asm and rdbms user.
- node2 must ssh or rsh to node2, as crs, asm and rdbms user.

10.6.1 RSH implementation

Set up user equivalence for the oracle and root accounts, to enable the rsh, rcp and rlogin commands.
On each node you should have the entries for /etc/hosts, /etc/hosts.equiv, and $HOME/.rhosts in the home
directory of each user (root, crs, asm, rdbms).

Update/check entries in the /etc/hosts.equiv file on each node :

{node1:root}/ # pg /etc/hosts.equiv
....
node1 root
node2 root
node1 crs
node2 crs
....
{node1:root}/ #

Update/check entries in the .rhosts file on each node for the root user :

{node1:root}/ # su root
{node1:root}/ # cd
{node1:root}/ # pg $HOME/.rhosts
node1 root
node2 root
{node1:root}/ #

Update/check entries in the .rhosts file on each node for the crs, asm and rdbms users :

{node1:root}/ # su - crs
{node1:crs}/crs # pg .rhosts
node1 root
node2 root
node1 crs
node2 crs
{node1:crs}/crs #

{node1:root}/ # su - asm
{node1:asm}/oracle/asm # pg .rhosts
node1 root
node2 root
node1 asm
node2 asm
{node1:asm}/oracle/asm #

{node1:root}/ # su - rdbms
{node1:rdbms}/oracle/rdbms # pg .rhosts
node1 root
node2 root
node1 rdbms
node2 rdbms
{node1:rdbms}/oracle/rdbms #

Note : It is possible, but not advised for security reasons, to put a "+" in the hosts.equiv and .rhosts files.

Test that the user equivalence is correctly set up (node2 is the secondary cluster machine). Test for the crs,
asm and rdbms users. For example, logged on node1 as root :

{node1:root}/ # rsh node2 (=> no password)
{node2:root}/ # rcp /tmp/toto node1:/tmp/toto
{node2:root}/ # su - crs
{node2:crs}/crs/11.1.0 # rsh node1 date
Mon Apr 23 17:26:27 DFT 2007
{node2:crs}/crs/11.1.0 #
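The .rhosts entries above all follow one pattern (node name, then user name). A sketch that generates them for a list of nodes and users; the node and user names match this guide's example cluster:

```shell
# Generate .rhosts lines for every node/user combination. Review the output,
# then append it to the .rhosts file of the matching user on each node.
nodes="node1 node2"
rhosts=$(for u in root crs; do
  for n in $nodes; do
    echo "$n $u"
  done
done)
printf '%s\n' "$rhosts"
```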

10.6.2 SSH implementation

Before you install and use Oracle Real Application Clusters, you must configure secure shell (SSH) for the crs, asm
and rdbms users on all cluster nodes. Oracle Universal Installer uses the ssh and scp commands during installation to
run remote commands on and copy files to the other cluster nodes. You must configure SSH so that these commands
do not prompt for a password.

Note:
This section describes how to configure OpenSSH version 3. If SSH is not available, then
Oracle Universal Installer attempts to use rsh and rcp instead.
To determine if SSH is running, enter the following command:
$ ps -ef | grep sshd
If SSH is running, then the response to this command is a list of process ID numbers. To find out more
about SSH, enter the following command:
$ man ssh

Repeat the following steps for each user : crs, asm and rdbms.

Configuring SSH on Cluster Member Nodes
To configure SSH, you must first create RSA and DSA keys on each cluster node, and then copy the keys from all
cluster node members into an authorized keys file on each node. To do this task, complete the following steps.

Create RSA and DSA keys on each node. Complete the following steps on each node:

1. Log in as the oracle user.
2. If necessary, create the .ssh directory in the oracle user's home directory and set
the correct permissions on it:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
3. Enter the following command to generate an RSA key for version 2 of the SSH
protocol:
$ /usr/bin/ssh-keygen -t rsa
4. At the prompts:
Accept the default location for the key file.
Enter and confirm a pass phrase that is different from the oracle user's password.
This command writes the public key to the ~/.ssh/id_rsa.pub file and the
private key to the ~/.ssh/id_rsa file. Never distribute the private key to anyone.
5. Enter the following command to generate a DSA key for version 2 of the SSH
protocol:
$ /usr/bin/ssh-keygen -t dsa
6. At the prompts:
Accept the default location for the key file.
Enter and confirm a pass phrase that is different from the oracle user's password.
This command writes the public key to the ~/.ssh/id_dsa.pub file and the private
key to the ~/.ssh/id_dsa file. Never distribute the private key to anyone.
Add keys to an authorized key file. Complete the following steps:

Note: Repeat this process for each node in the cluster !!!

1. On the local node, determine if you have an authorized key file
(~/.ssh/authorized_keys). If the authorized key file already exists, then proceed
to step 2. Otherwise, enter the following commands:
$ touch ~/.ssh/authorized_keys
$ cd ~/.ssh
$ ls
You should see the id_dsa.pub and id_rsa.pub keys that you have created.
2. Using SSH, copy the contents of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub
files to the file ~/.ssh/authorized_keys, and provide the Oracle user password
as prompted. This process is illustrated in the following syntax example with a
two-node cluster, with nodes node1 and node2, where the Oracle user path is
/home/oracle:
[oracle@node1 .ssh]$ ssh node1 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
oracle@node1's password:
[oracle@node1 .ssh]$ ssh node1 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
[oracle@node1 .ssh]$ ssh node2 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
oracle@node2's password:
[oracle@node1 .ssh]$ ssh node2 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
oracle@node2's password:
3. Use SCP (Secure Copy) or SFTP (Secure FTP) to copy the authorized_keys file
to the Oracle user .ssh directory on a remote node. The following example uses
SCP, on a node called node2, where the Oracle user path is /home/oracle:
[oracle@node1 .ssh]$ scp authorized_keys node2:/home/oracle/.ssh/
4. Repeat steps 2 and 3 for each cluster node member. When you have added keys
from each cluster node member to the authorized_keys file on the last node you
want to have as a cluster node member, then use SCP to copy the complete
authorized_keys file back to each cluster node member.
Note:
The Oracle user's ~/.ssh/authorized_keys file on every node must
contain the contents from all of the ~/.ssh/id_rsa.pub and
~/.ssh/id_dsa.pub files that you generated on all cluster nodes.
5. Change the permissions on the Oracle user's ~/.ssh/authorized_keys file on all
cluster nodes:
$ chmod 600 ~/.ssh/authorized_keys

At this point, if you use ssh to log in to or run a command on another node, you are prompted for the pass
phrase that you specified when you created the DSA key.
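The key-generation and key-collection steps above can be exercised locally. A minimal sketch, assuming OpenSSH's ssh-keygen is installed; a real cluster setup would run this per user on each node and append every node's public keys over ssh as shown above:

```shell
# Generate an RSA key pair in a scratch directory and collect the public key
# into an authorized_keys file, as in steps 2 and 5 above.
dir=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$dir/id_rsa" -q   # empty pass phrase for the demo
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
chmod 600 "$dir/authorized_keys"
nkeys=$(grep -c '^ssh-' "$dir/authorized_keys")
echo "public keys collected: $nkeys"
rm -r "$dir"
```

In a real two-node setup the authorized_keys file would end up holding one RSA and one DSA public key per node.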
Enabling SSH User Equivalency on Cluster Member Nodes

To enable Oracle Universal Installer to use the ssh and scp commands without being prompted for a pass
phrase, follow these steps:
1. On the system where you want to run Oracle Universal Installer, log in as the
oracle user.
2. Enter the following commands:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
3. At the prompts, enter the pass phrase for each key that you generated.
If you have configured SSH correctly, then you can now use the ssh or scp
commands without being prompted for a password or a pass phrase.
4. If you are on a remote terminal, and the local node has only one visual (which is
typical), then use the following syntax to set the DISPLAY environment variable:
Bourne, Korn, and Bash shells
$ export DISPLAY=hostname:0
C shell:
$ setenv DISPLAY hostname:0
For example, if you are using the Bash shell, and if your hostname is node1, then enter
the following command:
$ export DISPLAY=node1:0
5. To test the SSH configuration, enter the following commands from the same
terminal session, testing the configuration of each cluster node, where nodename1,
nodename2, and so on, are the names of nodes in the cluster:
$ ssh nodename1 date
$ ssh nodename2 date
These commands should display the date set on each node.
If any node prompts for a password or pass phrase, then verify that the
~/.ssh/authorized_keys file on that node contains the correct public keys.
If you are using a remote client to connect to the local node, and you see a message
similar to "Warning: No xauth data; using fake authentication data for X11 forwarding,"
then this means that your authorized keys file is configured correctly, but your ssh
configuration has X11 forwarding enabled. To correct this, proceed to step 6.
Note:
The first time you use SSH to connect to a node from a particular system,
you may see a message similar to the following:
The authenticity of host 'node1 (140.87.152.153)' can't be established.
RSA key fingerprint is 7z:ez:e7:f6:f4:f2:4f:8f:9z:79:85:62:20:90:92:z9.
Are you sure you want to continue connecting (yes/no)?
Enter yes at the prompt to continue. You should not see this message again
when you connect from this system to that node.
If you see any other messages or text, apart from the date, then the
installation can fail. Make any changes required to ensure that only the date
is displayed when you enter these commands.
You should ensure that any parts of login scripts that generate any output, or
ask any questions, are modified so that they act only when the shell is an
interactive shell.

6. To ensure that X11 forwarding will not cause the installation to fail, create a
user-level SSH client configuration file for the Oracle software owner user, as
follows:
a. Using any text editor, edit or create the ~oracle/.ssh/config file.
b. Make sure that the ForwardX11 attribute is set to no. For example:
Host *
ForwardX11 no
7. You must run Oracle Universal Installer from this session or remember to
repeat steps 2 and 3 before you start Oracle Universal Installer from a
different terminal session.
Preventing Oracle Clusterware Installation Errors Caused by stty Commands
During an Oracle Clusterware installation, Oracle Universal Installer uses SSH (if available) to run commands and copy
files to the other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause
installation errors if they contain stty commands.
To avoid this problem, you must modify these files to suppress all output on STDERR, as in the
following examples:

Bourne, Bash, or Korn shell:

if [ -t 0 ]; then
  stty intr ^C
fi

C shell:

test -t 0
if ($status == 0) then
  stty intr ^C
endif
10.7 Time Server Synchronization

Time synchronization is MANDATORY. To ensure that RAC operates efficiently, you must synchronize the
system time on all cluster nodes. Oracle recommends that you use xntpd for this purpose. xntpd is a complete
implementation of the Network Time Protocol (NTP) version 3 standard and is more accurate than timed.

To configure xntpd, follow these steps on each cluster node :

1 / Enter the following command to create the required files, if necessary:

# touch /etc/ntp.drift /etc/ntp.trace /etc/ntp.conf

2 / Using any text editor, edit the /etc/ntp.conf file:

# vi /etc/ntp.conf

3 / Add entries similar to the following to the file:

# Sample NTP Configuration file
# Specify the IP Addresses of three clock server systems.
server 10.3.25.101    # timeserver1
server 10.3.25.102    # timeserver2
server 10.3.25.103    # timeserver3
# Most of the routers are broadcasting NTP time information. If your
# router is broadcasting, then the following line enables xntpd
# to listen for broadcasts.
broadcastclient
# Write clock drift parameters to a file. This enables the system
# clock to quickly synchronize to the true time on restart.
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace

4 / To start xntpd, follow these steps:

A - Enter the following command: # /usr/bin/smitty xntpd
B - Choose Start Using the xntpd Subsystem, then choose BOTH.
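The ntp.conf contents above can be generated from a list of time servers. A sketch writing to a scratch path (on a node the target would be /etc/ntp.conf, and the server IPs are the example addresses used above):

```shell
# Build an ntp.conf like the sample above from a list of time servers.
conf=$(mktemp)
{
  echo "# Sample NTP Configuration file"
  for ip in 10.3.25.101 10.3.25.102 10.3.25.103; do
    echo "server $ip"
  done
  echo "broadcastclient"
  echo "driftfile /etc/ntp.drift"
  echo "tracefile /etc/ntp.trace"
} > "$conf"
nservers=$(grep -c '^server' "$conf")
echo "configured $nservers time servers"
rm -f "$conf"
```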
10.8 crs environment setup

Oracle environment : set up the $HOME/.profile file in the crs user's home directory.

To be done on each node :

export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S        # S for system-wide thread scope
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022

if [ -t 0 ]; then
  stty intr ^C
fi

Notes:
On AIX, when using multithreaded applications or LAN-free, especially when running on machines with multiple
CPUs, we strongly recommend setting AIXTHREAD_SCOPE=S in the environment before starting the application,
for better performance and more solid scheduling.
For example:
export AIXTHREAD_SCOPE=S

Setting AIXTHREAD_SCOPE=S means that user threads created with default attributes will be placed into
system-wide contention scope. If a user thread is created with system-wide contention scope, it is bound to a
kernel thread and it is scheduled by the kernel. The underlying kernel thread is not shared with any other user
thread.

AIXTHREAD_SCOPE (AIX 4.3.1 and later)

Purpose:
Controls contention scope. P signifies process-based contention scope (M:N). S signifies system-based
contention scope (1:1).
Values:
Possible values: P or S. The default is P.
Display:
echo $AIXTHREAD_SCOPE (this is turned on internally, so the initial default value will not be seen with the
echo command)
Change:
AIXTHREAD_SCOPE={P|S}; export AIXTHREAD_SCOPE. The change takes effect immediately in this shell and
is effective until logging out of this shell. A permanent change is made by adding the
AIXTHREAD_SCOPE={P|S} setting to the /etc/environment file.
Diagnosis:
If fewer threads are being dispatched than expected, then system scope should be tried.
10.9 Oracle Software Requirements

Link to download the code from : http://otn.oracle.com/
http://www.oracle.com/technology/software/products/database/oracle10g/htdocs/10201aixsoft.html

Oracle CDs needed for the RAC installation :

- Oracle Database 11g Release 1 (11.1.0.6.0) Enterprise/Standard Edition for AIX5L - Disk1
  http://download.oracle.com/otn/aix/oracle11g/aix.ppc64_11gR1_database_disk1.zip
  (650,429,692 bytes) (cksum - 2885574993)
- Oracle Database 11g Release 1 (11.1.0.6.0) Enterprise/Standard Edition for AIX5L - Disk2
  http://download.oracle.com/otn/aix/oracle11g/aix.ppc64_11gR1_database_disk2.zip
  (1,867,281,765 bytes) (cksum - 2533343969)

Clusterware and Database patchset needed :
Applying a patchset to the database software depends on whether your application has been tested with it;
it is your choice, your decision. Best is to implement the latest patchset for the Oracle Clusterware and
ASM software, if available, and as needed for your project.

Extra Clusterware patches needed :
If available, and as needed for your project.

Extra ASM patches needed :
- 6468666 Oracle Database Family: Patch
  HIGH REDUNDANCY DISKGROUPS DO NOT TOLERATE LOSS OF DISKS IN TWO FAILGROUPS
- 6494961 Oracle Database Family: Patch
  ASM DISKGROUP MOUNT INCURS ORA_600 KFCEMK10
If available, and as needed for your project.

Extra Database patches needed :
If available, and as needed for your project.

Directions to extract the contents of .gz files :

1. Unzip the file: gunzip <filename>
2. Extract the file: cpio -idcmv < <filename>
3. Installation guides and general Oracle Database 10g documentation can be found here.
4. Review the certification matrix for this product here.
11 PREPARING STORAGE

Storage is to be prepared for :

- Local disks on each node to install the Oracle Clusterware, ASM and RAC software - MANDATORY
- Shared disks for the Oracle Clusterware disks - MANDATORY
- A shared disk for the Oracle ASM instance parameters (SPFILE) - recommended but not MANDATORY
- Shared disks for the Oracle ASM disks - MANDATORY

11.1 Required local disks (Oracle Clusterware, ASM and RAC software)

The Oracle code (Clusterware, ASM and RAC) can be located on an internal disk and propagated to the other
machines of the cluster. The Oracle Universal Installer manages the cluster-wide installation, which is done
only once. Regular file systems are used for the Oracle code.

NOTA : You can also use virtual I/O disks for :
- the AIX 5L operating system
- Oracle Clusterware ($CRS_HOME)
- Oracle ASM ($ASM_HOME)
- the Oracle RAC software ($ORACLE_HOME)

On each node, create a volume group oraclevg, or use available space from rootvg to create the
logical volumes.

To list the internal disks :

{node1:root}/ # lsdev -Cc disk | grep SCSI
hdisk0 Available  Virtual SCSI Disk Drive

Create a volume group called oraclevg :

{node1:root}/ # mkvg -f -y'oraclevg' -S hdisk0

On each node /crs must be created to host the Oracle Clusterware software :
- /crs must be about 4 to 6 GB in size.
- /crs must be owned by oracle:oinstall or by crs:oinstall.

On node 1, create a 6GB file system /crs in the previous volume group (large file enabled) :

{node1:root}/ # crfs -v jfs2 -g'oraclevg' -a size=6G -m'/crs' -A'yes' -p'rw'
{node1:root}/ # mount /crs
{node1:root}/ # chown -R crs:oinstall /crs

THEN do the same on node 2.

On each node /oracle must be created to host :
- Oracle Automatic Storage Management (ASM) on /oracle/asm
- the Oracle RAC software on /oracle/rdbms

- /oracle must be about 10 to 12 GB in size.
- /oracle must be owned by oracle:oinstall.
- /oracle/asm must be owned by asm:oinstall or oracle:oinstall.
- /oracle/rdbms must be owned by rdbms:oinstall or oracle:oinstall.

On node 1, create a 12GB file system /oracle in the previous volume group (large file enabled) :

{node1:root}/ # crfs -v jfs2 -g'oraclevg' -a size='12G' -m'/oracle' -A'yes' -p'rw'
{node1:root}/ # mount /oracle
{node1:root}/ # chown oracle:oinstall /oracle
{node1:root}/ # mkdir /oracle/asm /oracle/rdbms
{node1:root}/ # chown -R asm:oinstall /oracle/asm
{node1:root}/ # chown -R rdbms:oinstall /oracle/rdbms

THEN do the same on node 2, and for each extra node.
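Once user equivalence is in place, the per-node setup can be driven from node1. A sketch that only echoes the remote commands for review rather than running them (hostnames match this guide's example cluster):

```shell
# Generate the remote commands that would replay part of the setup on each
# node. Drop the echo (and add the crfs step) to actually push them via ssh.
cmds=$(for node in node1 node2; do
  for c in "mount /crs" "chown -R crs:oinstall /crs"; do
    echo "ssh $node \"$c\""
  done
done)
printf '%s\n' "$cmds"
```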
11.2 Oracle Clusterware Disks (OCR and Voting Disks)

The Oracle Cluster Registry (OCR) stores cluster and database configuration information. You must have a shared
raw device containing at least 256 MB of free space that is accessible from all of the nodes in the cluster.

- OCR protected by an external mechanism
- OR, OCR mirrored by Oracle Clusterware

The Oracle Clusterware voting disk contains cluster membership information and arbitrates cluster ownership
among the nodes of your cluster in the event of network failures. You must have a shared raw device containing at
least 256 MB of free space that is accessible from all of the nodes in the cluster.

- Voting disk protected by an external mechanism
- OR, voting disks protected by Oracle Clusterware. It is always an odd number of copies (1, 3, 5 and so on);
  3 is sufficient in most cases.
11.2.1 Required LUNs

Using the storage administration console, you have to create :

Either
- 2 LUNs for the OCR disks (300MB each)
- 3 LUNs for the Voting disks (300MB each)
to implement normal redundancy of Oracle Clusterware, to protect the OCR and Voting disks.

Or
- 1 LUN for the OCR disk (300MB)
- 1 LUN for the Voting disk (300MB)
to implement external redundancy of Oracle Clusterware, and let the storage mirroring mechanism protect the
OCR and Voting disks.

The following table shows the LUN mapping for the nodes used in our cluster. The LUNs for the OCR and Voting
disks have IDs 1 to 5. These IDs will help us identify which hdisk will be used.

Disks      LUN ID Number   LUN Size
OCR1       L1              300 MB
OCR2       L2              300 MB
Voting1    L3              300 MB
Voting2    L4              300 MB
Voting3    L5              300 MB
11.2.2 How to identify if a LUN is used or not?

Get the list of the hdisks on node1, for example.

List of available hdisks on node 1 (rootvg is hdisk0) :

{node1:root}/ # lspv
hdisk0    00cd7d3e6d2fa8db    rootvg    active
hdisk1    none                None
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
hdisk7    none                Nsd
hdisk8    none                Nsd1
hdisk9    none                Nsd2
hdisk10   none                Nsd3
hdisk11   none                Nsd4
hdisk12   none                None
hdisk13   none                None
hdisk14   none                None
{node1:root}/ #

NO PVID is assigned apart from the rootvg hdisk.
When an hdisk is marked rootvg, the header of the disk might look like this. Use the following command to read
the hdisk header, on node 1 for hdisk0 (do the same on node 2) :

{node1:root}/ # lspv | grep rootvg
hdisk0    00cd7d3e6d2fa8db    rootvg    active
{node1:root}/ #

{node1:root}/ # lquerypv -h /dev/rhdisk0
00000000   C9C2D4C1 00000000 00000000 00000000  |................|
00000010   00000000 00000000 00000000 00000000  |................|
00000020   00000000 00000000 00000000 00000000  |................|
00000030   00000000 00000000 00000000 00000000  |................|
00000040   00000000 00000000 00000000 00000000  |................|
00000050   00000000 00000000 00000000 00000000  |................|
00000060   00000000 00000000 00000000 00000000  |................|
00000070   00000000 00000000 00000000 00000000  |................|
00000080   00CD7D3E 6D2FA8DB 00000000 00000000  |..}>m/..........|
00000090   00000000 00000000 00000000 00000000  |................|
000000A0   00000000 00000000 00000000 00000000  |................|
000000B0   00000000 00000000 00000000 00000000  |................|
000000C0   00000000 00000000 00000000 00000000  |................|
000000D0   00000000 00000000 00000000 00000000  |................|
000000E0   00000000 00000000 00000000 00000000  |................|
000000F0   00000000 00000000 00000000 00000000  |................|
{node1:root}/ #
When an hdisk is marked None rather than rootvg, it is important to check that it is not used at all.

Use the following command to read the hdisk header, on node 1 for hdisk2 (do the same on node 2) :

{node1:root}/ # lquerypv -h /dev/rhdisk2
00000000   00000000 00000000 00000000 00000000  |................|
00000010   00000000 00000000 00000000 00000000  |................|
00000020   00000000 00000000 00000000 00000000  |................|
00000030   00000000 00000000 00000000 00000000  |................|
00000040   00000000 00000000 00000000 00000000  |................|
00000050   00000000 00000000 00000000 00000000  |................|
00000060   00000000 00000000 00000000 00000000  |................|
00000070   00000000 00000000 00000000 00000000  |................|
00000080   00000000 00000000 00000000 00000000  |................|
00000090   00000000 00000000 00000000 00000000  |................|
000000A0   00000000 00000000 00000000 00000000  |................|
000000B0   00000000 00000000 00000000 00000000  |................|
000000C0   00000000 00000000 00000000 00000000  |................|
000000D0   00000000 00000000 00000000 00000000  |................|
000000E0   00000000 00000000 00000000 00000000  |................|
000000F0   00000000 00000000 00000000 00000000  |................|
{node1:root}/ #

If all lines are ONLY full of 0, THEN the hdisk is free to be used.
If that is not the case, first check which LUN is mapped to the hdisk (next pages); if it is the LUN you
should use, you must then zero the header with dd on /dev/rhdisk2.

BUT BE CAREFUL !!! Any hdisk not marked None in the lspv output may also show a blank header (a
GPFS-used hdisk for example), so make sure that the hdisk has the value None in the lspv output.

List of all hdisks not marked rootvg or None on node 1 :

{node1:root}/ # lspv
...
hdisk7    none    Nsd
hdisk8    none    Nsd1
hdisk9    none    Nsd2
hdisk10   none    Nsd3
hdisk11   none    Nsd4
...
{node1:root}/ #

On node 1 for hdisk7, all lines are ONLY full of 0, BUT the hdisk is NOT free to be used, because it is
used by IBM GPFS in our example :

{node1:root}/ # lquerypv -h /dev/rhdisk7
00000000   00000000 00000000 00000000 00000000  |................|
(... all 16 lines full of zeros ...)
000000F0   00000000 00000000 00000000 00000000  |................|
{node1:root}/ #
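The "all lines full of 0" test above can be automated by comparing the start of the device against /dev/zero. A sketch demonstrated on a scratch file standing in for /dev/rhdiskN (on AIX you would point it at the raw device; GNU cmp's -n option is assumed, and remember the GPFS caveat above: a blank header alone does not prove the disk is free):

```shell
# Check whether the first 256 bytes of a "disk" are all zeros.
disk=$(mktemp)
dd if=/dev/zero of="$disk" bs=512 count=1 2>/dev/null  # stand-in blank disk
if cmp -s -n 256 "$disk" /dev/zero; then
  msg="header is all zeros - candidate for use (still verify the LUN mapping)"
else
  msg="header contains data - disk may be in use"
fi
echo "$msg"
rm -f "$disk"
```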
11.2.3 Register LUNs at AIX level

Before registration of the LUNs :

On node 1
{node1:root}/ # lspv
hdisk0    00cd7d3e6d2fa8db    rootvg    active
{node1:root}/ #

On node 2
{node2:root}/ # lspv
hdisk1    00cd7d3e7349e441    rootvg    active
{node2:root}/ #

As root on each node, update the ODM repository using the following command : "cfgmgr".
You need to register and identify the LUNs at AIX level; the LUNs will be mapped to hdisks and registered
in the AIX ODM.

After registration of the LUNs in the ODM through the cfgmgr command :

On node 1
{node1:root}/ # lspv
hdisk0    00cd7d3e6d2fa8db    rootvg    active
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
{node1:root}/ #

On node 2
{node2:root}/ # lspv
hdisk1    00cd7d3e7349e441    rootvg    active
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
{node2:root}/ #
11.2.4

IdentifyLUNsandcorrespondinghdiskoneachnode
Disks
OCR 1
OCR 2
Voting 1
Voting 2
Voting 3

We know the LUNs available for OCR


and Voting disks are L1 to L5.

LUNs ID Number
L1
L2
L3
L4
L5

Identify the disks available for OCR and Voting disks, on each node, knowing the LUNs numbers.
Knowing the LUNs
number to use, we
know need to identify
the corresponding
hdisks on each node of
the cluster as detailed
in the following table :

Disks

LUNs ID
Number

OCR 1
OCR 2
Voting 1
Voting 2
Voting 3

L1
L2
L3
L4
L5

Node 1
Corresponding
hdisk

Node 2
Corresponding
hdisk

Node .
Corresponding
hdisk

There are two methods to identify the corresponding hdisks :

  Identify the LUN ID assigned to each hdisk, using the lscfg -vl hdisk? command

  Identify hdisks by temporarily assigning a PVID to each hdisk not having one

We strongly recommend NOT using the PVID method to identify hdisks.
If an hdisk is already in use, setting a PVID on it can corrupt the
hdisk header and cause issues such as losing the data stored on the hdisk.
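The lscfg method can be scripted. The sketch below (plain POSIX sh; the function name is ours, not from the guide) derives the short LUN number from the physical location code printed by lscfg -vl, assuming the "...-W<WWPN>-L<lun>" format shown in the outputs that follow:

```shell
# A sketch, assuming the location code format shown in this guide
# (…-W<WWPN>-L<lun_with_zero_padding>).
lun_from_loc() {
  lun=${1##*-L}                               # keep the text after the last "-L"
  lun=$(printf '%s' "$lun" | sed 's/0*$//')   # drop the trailing zero padding
  printf '%s\n' "${lun:-0}"                   # a bare L0 becomes empty, restore it
}
lun_from_loc "U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L1000000000000"   # prints 1
lun_from_loc "U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LA000000000000"   # prints A
lun_from_loc "U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L0"               # prints 0
```

On a real node you would feed this function the second field of each "lscfg -vl hdiskN" first line; here the location codes are inlined so the sketch can be tried anywhere.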

11gRAC/ASM/AIX

[email protected]

111 of 393

Get the list of the hdisks on node1.

List of available hdisks on node 1 (rootvg is hdisk0 !!!) :

{node1:root}/ # lspv
hdisk0          00cd7d3e6d2fa8db          rootvg          active
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
{node1:root}/ #

NO PVID is assigned apart from the rootvg hdisk.

Using the lscfg command, identify the hdisks in the list generated by lspv on node1.
Identify the LUN ID assigned to each hdisk, using the lscfg -vl hdisk? command :

On node 1
{node1:root}/ # for i in 2 3 4 5 6
> do
> lscfg -vl hdisk$i
> done
hdisk2  U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L1000000000000  1722-600 (600) Disk Array Device
hdisk3  U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L2000000000000  1722-600 (600) Disk Array Device
hdisk4  U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L3000000000000  1722-600 (600) Disk Array Device
hdisk5  U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L4000000000000  1722-600 (600) Disk Array Device
hdisk6  U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L5000000000000  1722-600 (600) Disk Array Device
{node1:root}/ #
We then get the following table :

  Disks       LUNs ID    Node 1                 Node 2                 Node ...
              Number     Corresponding hdisk    Corresponding hdisk    Corresponding hdisk
  OCR 1       L1         hdisk2                                        hdisk?
  OCR 2       L2         hdisk3                                        hdisk?
  Voting 1    L3         hdisk4                                        hdisk?
  Voting 2    L4         hdisk5                                        hdisk?
  Voting 3    L5         hdisk6                                        hdisk?

No need to assign a PVID when using this method.


Get the list of the hdisks on node2.

List of available hdisks on node 2 :
rootvg is hdisk1, which is not the same hdisk as on node 1. The same may be true for
each hdisk: on both nodes, the same hdisk name might not be attached to the same LUN.

On node 2
{node2:root}/ # lspv
hdisk1          00cd7d3e7349e441          rootvg          active
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
{node2:root}/ #

NO PVID is assigned apart from the rootvg hdisk.

Using the lscfg command, identify the hdisks in the list generated by lspv on node2.

Be careful: hdisk2 on node1 is not necessarily hdisk2 on node2.

Identify the LUN ID assigned to each hdisk, using the lscfg -vl hdisk? command :

On node 2
{node2:root}/ # for i in 2 3 4 5 6
> do
> lscfg -vl hdisk$i
> done
hdisk2  U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L1000000000000  1722-600 (600) Disk Array Device
hdisk3  U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L2000000000000  1722-600 (600) Disk Array Device
hdisk4  U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L3000000000000  1722-600 (600) Disk Array Device
hdisk5  U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L4000000000000  1722-600 (600) Disk Array Device
hdisk6  U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L5000000000000  1722-600 (600) Disk Array Device
{node2:root}/ #

We then get the following table :

  Disks       LUNs ID    Node 1                 Node 2                 Node ...
              Number     Corresponding hdisk    Corresponding hdisk    Corresponding hdisk
  OCR 1       L1         hdisk2                 hdisk2                 hdisk?
  OCR 2       L2         hdisk3                 hdisk3                 hdisk?
  Voting 1    L3         hdisk4                 hdisk4                 hdisk?
  Voting 2    L4         hdisk5                 hdisk5                 hdisk?
  Voting 3    L5         hdisk6                 hdisk6                 hdisk?

Do the same again for any extra node.

No need to assign a PVID when using this method.


11.2.5  Removing reserve lock policy on hdisks from each node

Why is it MANDATORY to change the reserve policy on disks from each node accessing the
same LUNs ? By default, AIX opens each hdisk with a single_path SCSI reservation: the
first node that accesses the disk locks it, and the other nodes can no longer read or
write the same LUN. Oracle Clusterware, ASM and RAC need concurrent access to these
disks from all nodes, so the reservation must be removed on every node.


Setup reserve_policy on the OCR and voting hdisks, on each node.

Example for one hdisk :
Issue the command lsattr -El hdisk2 to visualize all attributes for hdisk2 :

{node1:root}/ # lsattr -El hdisk2
PR_key_value   none                              Persistant Reserve Key Value            True
cache_method   fast_write                        Write Caching method                    False
ieee_volname   600A0B800012AB30000017F64787273E  IEEE Unique volume name                 False
lun_id         0x0001000000000000                Logical Unit Number                     False
max_transfer   0x100000                          Maximum TRANSFER Size                   True
prefetch_mult  1                                 Multiple of blocks to prefetch on read  False
pvid           none                              Physical volume identifier              False
q_type         simple                            Queuing Type                            False
queue_depth    10                                Queue Depth                             True
raid_level     5                                 RAID Level                              False
reassign_to    120                               Reassign Timeout value                  True
reserve_policy single_path                       Reserve Policy                          True
rw_timeout     30                                Read/Write Timeout value                True
scsi_id        0x660500                          SCSI ID                                 False
size           300                               Size in Mbytes                          False
write_cache    yes                               Write Caching enabled                   False
{node1:root}/ #

Or only lsattr -El hdisk2 | grep reserve :

{node1:root}/ # lsattr -El hdisk2 | grep reserve
reserve_policy single_path                       Reserve Policy                          True
{node1:root}/ #

On IBM storage (ESS, FAStT, DSxxxx) : change the reserve_policy attribute to no_reserve
    chdev -l hdisk? -a reserve_policy=no_reserve

On EMC storage : change the reserve_lock attribute to no
    chdev -l hdisk? -a reserve_lock=no

On HDS storage with HDLM driver, and no disks in a Volume Group : change the dlmrsvlevel
attribute to no_reserve
    chdev -l dlmfdrv? -a dlmrsvlevel=no_reserve

Change the reserve_policy attribute for each disk dedicated to the OCR and voting disks,
on each node of the cluster. In our case, we have an IBM storage !!!

On node 1
{node1:root}/ # for i in 2 3 4 5 6
> do
> chdev -l hdisk$i -a reserve_policy=no_reserve
> done
changed
changed
changed
changed
changed
{node1:root}/ #

On node 2
{node2:root}/ # for i in 2 3 4 5 6
> do
> chdev -l hdisk$i -a reserve_policy=no_reserve
> done
changed
changed
changed
changed
changed
{node2:root}/ #

Example for one hdisk on node1 :

Issue the command lsattr -El hdisk2 | grep reserve to visualize the modified attribute for hdisk2 :

{node1:root}/ # lsattr -El hdisk2 | grep reserve
reserve_policy no_reserve                        Reserve Policy                          True
{node1:root}/ #

11.2.6  Identify Major and Minor number of hdisk on each node

As described before, disks might have different names from one node to another: for
example, hdisk2 on node1 might be hdisk3 on node2, etc.

  Disks       LUNs ID   Device Name          Node 1          Major  Minor   Node 2          Major  Minor
              Number                         Corresp. hdisk  Num.   Num.    Corresp. hdisk  Num.   Num.
  OCR 1       L1        /dev/ocr_disk1       hdisk2                         hdisk2
  OCR 2       L2        /dev/ocr_disk2       hdisk3                         hdisk3
  Voting 1    L3        /dev/voting_disk1    hdisk4                         hdisk4
  Voting 2    L4        /dev/voting_disk2    hdisk5                         hdisk5
  Voting 3    L5        /dev/voting_disk3    hdisk6                         hdisk6

The next steps explain the procedure to identify the Major and Minor numbers necessary
to create the virtual devices pointing to the right hdisk on each node.


Identify the minor and major number for each hdisk, on each node.

On node1 :

{node1:root}/ # for i in 2 3 4 5 6
> do
> ls -la /dev/*hdisk$i
> done
brw-------   1 root  system  21,  6 Jan 11 15:58 /dev/hdisk2
crw-------   1 root  system  21,  6 Jan 11 15:58 /dev/rhdisk2
brw-------   1 root  system  21,  7 Jan 11 15:58 /dev/hdisk3
crw-------   1 root  system  21,  7 Jan 11 15:58 /dev/rhdisk3
brw-------   1 root  system  21,  8 Jan 11 15:58 /dev/hdisk4
crw-------   1 root  system  21,  8 Jan 11 15:58 /dev/rhdisk4
brw-------   1 root  system  21,  9 Jan 11 15:58 /dev/hdisk5
crw-------   1 root  system  21,  9 Jan 11 15:58 /dev/rhdisk5
brw-------   1 root  system  21, 10 Jan 14 21:18 /dev/hdisk6
crw-------   1 root  system  21, 10 Jan 14 21:18 /dev/rhdisk6
{node1:root}/ #

Values such as "21, 6" are the major and minor numbers for hdisk2 on node1 :

  21 is the major number, corresponding to a type of disk.
  6 is the minor number, corresponding to an order number.

  Disks       LUNs ID   Device Name          Node 1          Major  Minor   Node 2          Major  Minor
              Number                         Corresp. hdisk  Num.   Num.    Corresp. hdisk  Num.   Num.
  OCR 1       L1        /dev/ocr_disk1       hdisk2          21     6
  OCR 2       L2        /dev/ocr_disk2       hdisk3          21     7
  Voting 1    L3        /dev/voting_disk1    hdisk4          21     8
  Voting 2    L4        /dev/voting_disk2    hdisk5          21     9
  Voting 3    L5        /dev/voting_disk3    hdisk6          21     10
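Filling the table by hand is error-prone, so here is a sketch (variable names are ours) of how the major and minor numbers can be pulled out of one "ls -la" line for a device special file: the 5th field is the major number with a trailing comma, the 6th field is the minor number.

```shell
# A sample ls -la line, inlined so the sketch runs anywhere; on a node you
# would feed it real output from "ls -la /dev/*hdiskN".
line='crw-------   1 root  system  21, 10 Jan 14 21:18 /dev/rhdisk6'
set -- $line            # split the line into positional fields
major=${5%,}            # "21," -> 21 (strip the trailing comma)
minor=$6                # the minor number follows
echo "major=$major minor=$minor"   # prints: major=21 minor=10
```

The same split works for both the block (/dev/hdiskN) and character (/dev/rhdiskN) entries, since ls prints the "major, minor" pair in place of the file size for device files.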

On node2 :

{node2:root}/ # for i in 2 3 4 5 6
> do
> ls -la /dev/*hdisk$i
> done
brw-------   1 root  system  21,  6 Jan 11 16:12 /dev/hdisk2
crw-------   1 root  system  21,  6 Jan 11 16:12 /dev/rhdisk2
brw-------   1 root  system  21,  7 Jan 11 16:12 /dev/hdisk3
crw-------   1 root  system  21,  7 Jan 11 16:12 /dev/rhdisk3
brw-------   1 root  system  21,  8 Jan 11 16:12 /dev/hdisk4
crw-------   1 root  system  21,  8 Jan 11 16:12 /dev/rhdisk4
brw-------   1 root  system  21,  9 Jan 11 16:12 /dev/hdisk5
crw-------   1 root  system  21,  9 Jan 11 16:12 /dev/rhdisk5
brw-------   1 root  system  21, 10 Jan 14 22:20 /dev/hdisk6
crw-------   1 root  system  21, 10 Jan 14 22:20 /dev/rhdisk6
{node2:root}/ #

  Disks       LUNs ID   Device Name          Node 1          Major  Minor   Node 2          Major  Minor
              Number                         Corresp. hdisk  Num.   Num.    Corresp. hdisk  Num.   Num.
  OCR 1       L1        /dev/ocr_disk1       hdisk2          21     6       hdisk2          21     6
  OCR 2       L2        /dev/ocr_disk2       hdisk3          21     7       hdisk3          21     7
  Voting 1    L3        /dev/voting_disk1    hdisk4          21     8       hdisk4          21     8
  Voting 2    L4        /dev/voting_disk2    hdisk5          21     9       hdisk5          21     9
  Voting 3    L5        /dev/voting_disk3    hdisk6          21     10      hdisk6          21     10

Disks may have different names from one node to another: for example, L1 could
correspond to hdisk2 on node1, and hdisk1 on node2, etc.


11.2.7  Create Unique Virtual Device to access same LUN from each node

From the table, for node1 we need to create the virtual devices :

  Disks       LUNs ID   Device Name          Node 1          Major  Minor   Node 2          Major  Minor
              Number                         Corresp. hdisk  Num.   Num.    Corresp. hdisk  Num.   Num.
  OCR 1       L1        /dev/ocr_disk1       hdisk2          21     6       hdisk2          21     6
  OCR 2       L2        /dev/ocr_disk2       hdisk3          21     7       hdisk3          21     7
  Voting 1    L3        /dev/voting_disk1    hdisk4          21     8       hdisk4          21     8
  Voting 2    L4        /dev/voting_disk2    hdisk5          21     9       hdisk5          21     9
  Voting 3    L5        /dev/voting_disk3    hdisk6          21     10      hdisk6          21     10

To create the same virtual devices on each node, called :
  /dev/ocr_disk1
  /dev/ocr_disk2
  /dev/voting_disk1
  /dev/voting_disk2
  /dev/voting_disk3

we need to use the major and minor numbers of the hdisks, which make the link between
the virtual devices and the hdisks, using the command :

  mknod Device_Name c MajNum MinNum

For the first node, as root user :

{node1:root}/ # mknod /dev/ocr_disk1 c 21 6
{node1:root}/ # mknod /dev/ocr_disk2 c 21 7
{node1:root}/ # mknod /dev/voting_disk1 c 21 8
{node1:root}/ # mknod /dev/voting_disk2 c 21 9
{node1:root}/ # mknod /dev/voting_disk3 c 21 10

From the table, for node2 we also need to create the virtual devices.

By chance the major and minor numbers are the same on both nodes for the corresponding
hdisks, but they could be different.

To create the same virtual devices on node2, we again use the major and minor numbers of
the hdisks, which make the link between the virtual devices and the hdisks, using the
command :

  mknod Device_Name c MajNum MinNum

For the second node, as root user :

{node2:root}/ # mknod /dev/ocr_disk1 c 21 6
{node2:root}/ # mknod /dev/ocr_disk2 c 21 7
{node2:root}/ # mknod /dev/voting_disk1 c 21 8
{node2:root}/ # mknod /dev/voting_disk2 c 21 9
{node2:root}/ # mknod /dev/voting_disk3 c 21 10
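The per-node mknod commands can also be generated from a small "device major minor" mapping taken from the table. This is only a sketch: the commands are echoed rather than executed, so it is safe to dry-run; pipe the output to sh as root on the right node to really create the devices.

```shell
# Generate mknod commands from a device/major/minor mapping (values from
# the table above; on another node, substitute that node's numbers).
while read dev major minor; do
  echo "mknod $dev c $major $minor"
done <<'EOF'
/dev/ocr_disk1 21 6
/dev/ocr_disk2 21 7
/dev/voting_disk1 21 8
/dev/voting_disk2 21 9
/dev/voting_disk3 21 10
EOF
```

Keeping one mapping file per node makes it obvious when the major/minor pairs differ between nodes, which is exactly the case this section warns about.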

11.2.8  Set Ownership / Permissions on Virtual Devices

For each node in the cluster :

Set ownership of the created virtual devices :

{node1:root}/ # chown root:oinstall /dev/ocr_disk1
{node1:root}/ # chown root:oinstall /dev/ocr_disk2
{node1:root}/ # chown crs:dba /dev/voting_disk1
{node1:root}/ # chown crs:dba /dev/voting_disk2
{node1:root}/ # chown crs:dba /dev/voting_disk3

Then do the same on node2.

THEN set read/write permissions on the created virtual devices :

{node1:root}/ # chmod 640 /dev/ocr_disk1
{node1:root}/ # chmod 640 /dev/ocr_disk2
{node1:root}/ # chmod 660 /dev/voting_disk1
{node1:root}/ # chmod 660 /dev/voting_disk2
{node1:root}/ # chmod 660 /dev/voting_disk3

Then do the same on node2.
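The same settings can be applied in one loop per device group (user/group/mode values as in the guide). The commands are echoed here so the sketch can be dry-run; drop the echo and run as root on each node to apply them for real.

```shell
# OCR devices: owned root:oinstall, mode 640.
for d in /dev/ocr_disk1 /dev/ocr_disk2; do
  echo "chown root:oinstall $d && chmod 640 $d"
done
# Voting devices: owned crs:dba, mode 660.
for d in /dev/voting_disk1 /dev/voting_disk2 /dev/voting_disk3; do
  echo "chown crs:dba $d && chmod 660 $d"
done
```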


THEN check that the settings are applied on node1.

Checking the modifications: after the Oracle Clusterware installation, ownership and
permissions of the virtual devices may change.

{node1:root}/ # ls -la /dev/* | grep "21, 6"
brw-------   1 root  system    21,  6 Jan 11 15:58 /dev/hdisk2
crw-r-----   1 root  oinstall  21,  6 Feb 06 17:43 /dev/ocr_disk1
crw-------   1 root  system    21,  6 Jan 11 15:58 /dev/rhdisk2
{node1:root}/ #

Check for each disk using the following command :

{node1:root}/ # for i in 6 7 8 9 10
> do
> ls -la /dev/* | grep "21, "$i
> done
brw-------   1 root  system    21,  6 Jan 11 15:58 /dev/hdisk2
crw-r-----   1 root  oinstall  21,  6 Feb 06 17:43 /dev/ocr_disk1
crw-------   1 root  system    21,  6 Jan 11 15:58 /dev/rhdisk2
brw-------   1 root  system    21,  7 Jan 11 15:58 /dev/hdisk3
crw-r-----   1 root  oinstall  21,  7 Feb 06 17:43 /dev/ocr_disk2
crw-------   1 root  system    21,  7 Jan 11 15:58 /dev/rhdisk3
brw-------   1 root  system    21,  8 Jan 11 15:58 /dev/hdisk4
crw-------   1 root  system    21,  8 Jan 11 15:58 /dev/rhdisk4
crw-rw----   1 crs   dba       21,  8 Feb 06 17:43 /dev/voting_disk1
brw-------   1 root  system    21,  9 Jan 11 15:58 /dev/hdisk5
crw-------   1 root  system    21,  9 Jan 11 15:58 /dev/rhdisk5
crw-rw----   1 crs   dba       21,  9 Feb 06 17:43 /dev/voting_disk2
brw-------   1 root  system    21, 10 Jan 11 15:58 /dev/hdisk6
crw-------   1 root  system    21, 10 Jan 11 15:58 /dev/rhdisk6
crw-rw----   1 crs   dba       21, 10 Feb 06 17:43 /dev/voting_disk3
{node1:root}/ #

Then on node2.

Checking the modifications: after the Oracle Clusterware installation, ownership and
permissions of the virtual devices may change.

{node2:root}/ # ls -la /dev/* | grep "21, 6"
brw-------   1 root  system    21,  6 Jan 11 15:58 /dev/hdisk2
crw-r-----   1 root  oinstall  21,  6 Feb 06 17:43 /dev/ocr_disk1
crw-------   1 root  system    21,  6 Jan 11 15:58 /dev/rhdisk2
{node2:root}/ #

Check for each disk using the following command :

{node2:root}/ # for i in 6 7 8 9 10
> do
> ls -la /dev/* | grep "21, "$i
> done
brw-------   1 root  system    21,  6 Jan 11 15:58 /dev/hdisk2
crw-r-----   1 root  oinstall  21,  6 Feb 06 17:43 /dev/ocr_disk1
crw-------   1 root  system    21,  6 Jan 11 15:58 /dev/rhdisk2
brw-------   1 root  system    21,  7 Jan 11 15:58 /dev/hdisk3
crw-r-----   1 root  oinstall  21,  7 Feb 06 17:43 /dev/ocr_disk2
crw-------   1 root  system    21,  7 Jan 11 15:58 /dev/rhdisk3
brw-------   1 root  system    21,  8 Jan 11 15:58 /dev/hdisk4
crw-------   1 root  system    21,  8 Jan 11 15:58 /dev/rhdisk4
crw-rw----   1 crs   dba       21,  8 Feb 06 17:43 /dev/voting_disk1
brw-------   1 root  system    21,  9 Jan 11 15:58 /dev/hdisk5
crw-------   1 root  system    21,  9 Jan 11 15:58 /dev/rhdisk5
crw-rw----   1 crs   dba       21,  9 Feb 06 17:43 /dev/voting_disk2
brw-------   1 root  system    21, 10 Jan 11 15:58 /dev/hdisk6
crw-------   1 root  system    21, 10 Jan 11 15:58 /dev/rhdisk6
crw-rw----   1 crs   dba       21, 10 Feb 06 17:43 /dev/voting_disk3
{node2:root}/ #


11.2.9  Formatting the virtual devices (zeroing)

Now we format (zero) the virtual devices, and verify that we can read and write the
disks from each node :

On node 1

{node1:root}/ # for i in 1 2
> do
> dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
{node1:root}/ #
{node1:root}/ # for i in 1 2 3
> do
> dd if=/dev/zero of=/dev/voting_disk$i bs=1024 count=300
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
{node1:root}/ #
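To see what the dd counters mean, here is the same pattern run against an ordinary temporary file instead of a device: "300+0 records in/out" is 300 full blocks of bs=1024 bytes, i.e. 307200 bytes written at the start of the target.

```shell
# Sketch on a temp file, safe to run anywhere (no real hdisk involved).
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1024 count=300 2>/dev/null
wc -c < "$tmp"          # prints 307200 (= 300 * 1024)
rm -f "$tmp"
```

Only the first 300 KB of each device is zeroed: enough to wipe any stale header so that Oracle Clusterware sees the disks as clean.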

Verify concurrent read/write access to the devices by running the dd command from each
node at the same time.

At the same time, on node1 and on node2 :

On node 1
{node1:root}/ # for i in 1 2
> do
> dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300 &
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.

On node 2
{node2:root}/ # for i in 1 2
> do
> dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300 &
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.

Do the same for the voting disks.
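The shape of this concurrency test can be seen with ordinary files: "&" puts each dd in the background and "wait" collects them, mirroring the simultaneous runs from node1 and node2 above. This is only a local sketch; the real test must of course target the shared devices from two different nodes.

```shell
# Two background writers against temp files, then wait for both.
t1=$(mktemp); t2=$(mktemp)
dd if=/dev/zero of="$t1" bs=1024 count=300 2>/dev/null &
dd if=/dev/zero of="$t2" bs=1024 count=300 2>/dev/null &
wait                     # both writers have finished here
echo "$(wc -c < "$t1") $(wc -c < "$t2")"
rm -f "$t1" "$t2"
```

If the reserve policy were still single_path, the dd started from the second node would fail or hang, which is exactly what this check is designed to catch.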


11.3 ASM disks

About ASM instance parameter files: pfile or spfile ?

About ASM disks: using rhdisks or virtual devices ?

11.3.1  Required LUNs

Using the storage administration console, you have to create :

  1 LUN for the ASM spfile (100 MB size)
    o This disk will contain the shared ASM instance spfile, meaning all the ASM
      instance parameters that you'll be able to modify in a dynamic way (only possible
      with a 1-storage implementation; with 2 storages you'll have to use a local ASM
      instance pfile).

  A bunch of LUNs for disks to be used with ASM.
    o Disks/LUNs for ASM can be either
        /dev/rhdisk? (raw disks)
      or
        /dev/ASM_DISK1 (virtual devices) for ease of administration.

The following table shows the LUN mapping for the nodes used in our cluster.
These IDs will help us identify which hdisk will be used.

  Disks            LUNs ID Number        LUNs Size
  ASM spfile       L0                    100 MB
  Disk 1 for ASM   L6                    4 GB
  Disk 2 for ASM   L7                    4 GB
  Disk 3 for ASM   L8                    4 GB
  Disk 4 for ASM   L9                    4 GB
  Disk 5 for ASM   LA (meaning LUN 10)   4 GB
  Disk 6 for ASM   LB (meaning LUN 11)   4 GB
  Disk 7 for ASM   LC (meaning LUN 12)   4 GB
  Disk 8 for ASM   LD (meaning LUN 13)   4 GB
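The storage console labels LUNs in hexadecimal, which is why LA means LUN 10 and LD means LUN 13. For the single-character IDs used in this table, printf can do the conversion; a quick sketch:

```shell
# Convert single-character hex LUN IDs to decimal LUN numbers.
for id in 9 A B C D; do
  printf 'L%s -> LUN %d\n' "$id" "0x$id"
done
```

This prints L9 through LD with their decimal equivalents (9 through 13), matching the "(meaning LUN nn)" notes in the table.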

11.3.2  How to identify if a LUN is used or not ?

When an hdisk is marked rootvg, the header of the disk might look like this.
Use the following command to read the hdisk header :

On node 1, for hdisk0 :

{node1:root}/ # lspv | grep rootvg
hdisk0          00cd7d3e6d2fa8db          rootvg          active
{node1:root}/ #

{node1:root}/ # lquerypv -h /dev/rhdisk0
00000000   C9C2D4C1 00000000 00000000 00000000  |................|
00000010   00000000 00000000 00000000 00000000  |................|
00000020   00000000 00000000 00000000 00000000  |................|
00000030   00000000 00000000 00000000 00000000  |................|
00000040   00000000 00000000 00000000 00000000  |................|
00000050   00000000 00000000 00000000 00000000  |................|
00000060   00000000 00000000 00000000 00000000  |................|
00000070   00000000 00000000 00000000 00000000  |................|
00000080   00CD7D3E 6D2FA8DB 00000000 00000000  |..}>m/..........|
00000090   00000000 00000000 00000000 00000000  |................|
000000A0   00000000 00000000 00000000 00000000  |................|
000000B0   00000000 00000000 00000000 00000000  |................|
000000C0   00000000 00000000 00000000 00000000  |................|
000000D0   00000000 00000000 00000000 00000000  |................|
000000E0   00000000 00000000 00000000 00000000  |................|
000000F0   00000000 00000000 00000000 00000000  |................|
{node1:root}/ #

Do the same on node 2.

When an hdisk is not marked rootvg but None, it is important to check that it is not
used at all. Use the following command to read the hdisk header. If all lines are ONLY
full of 0, THEN the hdisk is free to be used. If it is not the case, first check which
LUN is mapped to the hdisk (next pages); if it is the LUN you should use, you must then
dd (zero) the /dev/rhdisk2 raw disk.

On node 1, for hdisk2 :

{node1:root}/ # lquerypv -h /dev/rhdisk2
00000000   00000000 00000000 00000000 00000000  |................|
00000010   00000000 00000000 00000000 00000000  |................|
00000020   00000000 00000000 00000000 00000000  |................|
00000030   00000000 00000000 00000000 00000000  |................|
00000040   00000000 00000000 00000000 00000000  |................|
00000050   00000000 00000000 00000000 00000000  |................|
00000060   00000000 00000000 00000000 00000000  |................|
00000070   00000000 00000000 00000000 00000000  |................|
00000080   00000000 00000000 00000000 00000000  |................|
00000090   00000000 00000000 00000000 00000000  |................|
000000A0   00000000 00000000 00000000 00000000  |................|
000000B0   00000000 00000000 00000000 00000000  |................|
000000C0   00000000 00000000 00000000 00000000  |................|
000000D0   00000000 00000000 00000000 00000000  |................|
000000E0   00000000 00000000 00000000 00000000  |................|
000000F0   00000000 00000000 00000000 00000000  |................|
{node1:root}/ #

Do the same on node 2.
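The "is it all zeros?" check can be automated. A sketch on an ordinary file (lquerypv -h is AIX-specific, so od stands in here): od prints the hex bytes, and after stripping spaces, zeros and newlines, nothing should remain if the header is untouched.

```shell
# Sketch: create a zeroed 256-byte file and test that its dump is all zeros.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=256 count=1 2>/dev/null
# -v stops od from collapsing repeated lines into "*".
leftover=$(od -v -An -tx1 "$tmp" | tr -d ' 0\n')
[ -z "$leftover" ] && echo "all zeros: free" || echo "data present: check before use"
rm -f "$tmp"
```

On a node you would point the dump command at /dev/rhdiskN (reading only, which is harmless) and apply the same emptiness test to its output.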


11.3.3  Register LUNs at AIX level

Before registration of the new LUNs for the ASM disks :

On node 1
{node1:root}/ # lspv
hdisk0          00cd7d3e6d2fa8db          rootvg          active
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
{node1:root}/ #

On node 2
{node2:root}/ # lspv
hdisk1          00cd7d3e7349e441          rootvg          active
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
{node2:root}/ #

As root on each node, update the ODM repository using the following command : "cfgmgr"


You need to register and identify the LUNs at AIX level: each LUN will be mapped to an
hdisk and registered in the AIX ODM.

After registration of LUNs in ODM thru the cfgmgr command :

On node 1
{node1:root}/ # lspv
hdisk0          00cd7d3e6d2fa8db          rootvg          active
hdisk1          none                      None
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
hdisk7          none                      None
hdisk8          none                      None
hdisk9          none                      None
hdisk10         none                      None
hdisk11         none                      None
hdisk12         none                      None
hdisk13         none                      None
hdisk14         none                      None
{node1:root}/ #

On node 2
{node2:root}/ # lspv
hdisk0          none                      None
hdisk1          00cd7d3e7349e441          rootvg          active
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
hdisk7          none                      None
hdisk8          none                      None
hdisk9          none                      None
hdisk10         none                      None
hdisk11         none                      None
hdisk12         none                      None
hdisk13         none                      None
hdisk14         none                      None
{node2:root}/ #


11.3.4  Identify LUNs and corresponding hdisk on each node

We know the LUNs to use for the ASM spfile disk and the ASM disks :

  Disks            LUNs Number   Node 1    Node 2    Node ...
  ASM spfile disk  L0
  Disk 1 for ASM   L6
  Disk 2 for ASM   L7
  Disk 3 for ASM   L8
  Disk 4 for ASM   L9
  Disk 5 for ASM   LA
  Disk 6 for ASM   LB
  Disk 7 for ASM   LC
  Disk 8 for ASM   LD

Identify the disks available for ASM and the ASM spfile, on each node, knowing the LUN
numbers.

There are two methods to identify the corresponding hdisks :

  Identify the LUN ID assigned to each hdisk, using the lscfg -vl hdisk? command

  Identify hdisks by temporarily assigning a PVID to each hdisk not having one

We strongly recommend NOT using the PVID method to identify hdisks.
If an hdisk is already in use, setting a PVID on it can corrupt the
hdisk header and cause issues such as losing the data stored on the hdisk.


Get the list of the hdisks on node1.

List of available hdisks on node 1 (rootvg is hdisk0 !!!) :

{node1:root}/ # lspv
hdisk0          00cd7d3e6d2fa8db          rootvg          active
hdisk1          none                      None
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
hdisk7          none                      None
hdisk8          none                      None
hdisk9          none                      None
hdisk10         none                      None
hdisk11         none                      None
hdisk12         none                      None
hdisk13         none                      None
hdisk14         none                      None
{node1:root}/ #

NO PVID is assigned apart from the rootvg hdisk.

Using the lscfg command, identify the hdisks in the list generated by lspv on node1.
Identify the LUN ID assigned to each hdisk, using the lscfg -vl hdisk? command :

On node 1
{node1:root}/ # for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14
> do
> lscfg -vl hdisk$i
> done
hdisk1   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L0              3552 (500) Disk Array Device
hdisk2   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L1000000000000  3552 (500) Disk Array Device
hdisk3   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L2000000000000  3552 (500) Disk Array Device
hdisk4   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L3000000000000  3552 (500) Disk Array Device
hdisk5   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L4000000000000  3552 (500) Disk Array Device
hdisk6   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L5000000000000  3552 (500) Disk Array Device
hdisk7   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L6000000000000  3552 (500) Disk Array Device
hdisk8   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L7000000000000  3552 (500) Disk Array Device
hdisk9   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L8000000000000  3552 (500) Disk Array Device
hdisk10  U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L9000000000000  3552 (500) Disk Array Device
hdisk11  U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LA000000000000  3552 (500) Disk Array Device
hdisk12  U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LB000000000000  3552 (500) Disk Array Device
hdisk13  U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LC000000000000  3552 (500) Disk Array Device
hdisk14  U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LD000000000000  3552 (500) Disk Array Device
{node1:root}/ #

We then get the following table :

  Disks            LUNs Number   Node 1    Node 2    Node ...
  ASM spfile disk  L0            hdisk1              hdisk?
  Disk 1 for ASM   L6            hdisk7              hdisk?
  Disk 2 for ASM   L7            hdisk8              hdisk?
  Disk 3 for ASM   L8            hdisk9              hdisk?
  Disk 4 for ASM   L9            hdisk10             hdisk?
  Disk 5 for ASM   LA            hdisk11             hdisk?
  Disk 6 for ASM   LB            hdisk12             hdisk?
  Disk 7 for ASM   LC            hdisk13             hdisk?
  Disk 8 for ASM   LD            hdisk14             hdisk?

No need to assign a PVID when using this method.


Get the list of the hdisks on node2.

List of available hdisks on node 2 :
rootvg is hdisk1, which is not the same hdisk as on node 1. The same may be true for
each hdisk: on both nodes, the same hdisk name might not be attached to the same LUN.

{node2:root}/ # lspv
hdisk0          none                      None
hdisk1          00cd7d3e7349e441          rootvg          active
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
hdisk7          none                      None
hdisk8          none                      None
hdisk9          none                      None
hdisk10         none                      None
hdisk11         none                      None
hdisk12         none                      None
hdisk13         none                      None
hdisk14         none                      None
{node2:root}/ #

NO PVID is assigned apart from the rootvg hdisk.

Using the lscfg command, identify the hdisks in the list generated by lspv on node2.

Be careful: hdisk7 on node1 is not necessarily hdisk7 on node2.

Identify the LUN ID assigned to each hdisk, using the lscfg -vl hdisk? command :

On node 2
{node2:root}/ # for i in 0 1 2 3 4 5 6 7 8 9 10 12 13 14
> do
> lscfg -vl hdisk$i
> done
hdisk0   U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L0              3552 (500) Disk Array Device
hdisk1   U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L1000000000000  3552 (500) Disk Array Device
hdisk2   U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L2000000000000  3552 (500) Disk Array Device
hdisk3   U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L3000000000000  3552 (500) Disk Array Device
hdisk4   U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L4000000000000  3552 (500) Disk Array Device
hdisk5   U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L5000000000000  3552 (500) Disk Array Device
hdisk6   U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L6000000000000  3552 (500) Disk Array Device
hdisk7   U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L7000000000000  3552 (500) Disk Array Device
hdisk8   U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L8000000000000  3552 (500) Disk Array Device
hdisk9   U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L9000000000000  3552 (500) Disk Array Device
hdisk10  U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-LA000000000000  3552 (500) Disk Array Device
hdisk12  U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-LB000000000000  3552 (500) Disk Array Device
hdisk13  U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-LC000000000000  3552 (500) Disk Array Device
hdisk14  U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-LD000000000000  3552 (500) Disk Array Device
{node2:root}/ #

We then get the following table :

  Disks            LUNs Number   Node 1    Node 2    Node ...
  ASM spfile disk  L0            hdisk1    hdisk0    hdisk?
  Disk 1 for ASM   L6            hdisk7    hdisk6    hdisk?
  Disk 2 for ASM   L7            hdisk8    hdisk7    hdisk?
  Disk 3 for ASM   L8            hdisk9    hdisk8    hdisk?
  Disk 4 for ASM   L9            hdisk10   hdisk9    hdisk?
  Disk 5 for ASM   LA            hdisk11   hdisk10   hdisk?
  Disk 6 for ASM   LB            hdisk12   hdisk12   hdisk?
  Disk 7 for ASM   LC            hdisk13   hdisk13   hdisk?
  Disk 8 for ASM   LD            hdisk14   hdisk14   hdisk?

No need to assign a PVID when using this method.


11.3.5  Removing reserve lock policy on hdisks from each node

As for the OCR and voting disks, set up reserve_policy on the ASM spfile and ASM
hdisks, on each node.

Example for one hdisk :
Issue the command lsattr -El hdisk7 to visualize all attributes for hdisk7 :

{node1:root}/ # lsattr -El hdisk7
PR_key_value   none                              Persistant Reserve Key Value            True
cache_method   fast_write                        Write Caching method                    False
ieee_volname   600A0B80000C54180000026345EE3CCC  IEEE Unique volume name                 False
lun_id         0x0006000000000000                Logical Unit Number                     False
max_transfer   0x100000                          Maximum TRANSFER Size                   True
prefetch_mult  1                                 Multiple of blocks to prefetch on read  False
pvid           none                              Physical volume identifier              False
q_type         simple                            Queuing Type                            False
queue_depth    10                                Queue Depth                             True
raid_level     0                                 RAID Level                              False
reassign_to    120                               Reassign Timeout value                  True
reserve_policy single_path                       Reserve Policy                          True
rw_timeout     30                                Read/Write Timeout value                True
scsi_id        0x690600                          SCSI ID                                 False
size           4096                              Size in Mbytes                          False
write_cache    yes                               Write Caching enabled                   False
{node1:root}/ #

Or only lsattr -El hdisk7 | grep reserve :

{node1:root}/ # lsattr -El hdisk7 | grep reserve
reserve_policy single_path                       Reserve Policy                          True
{node1:root}/ #

On IBM storage (ESS, FAStT, DSxxxx) : change the reserve_policy attribute to no_reserve
    chdev -l hdisk? -a reserve_policy=no_reserve

On EMC storage : change the reserve_lock attribute to no
    chdev -l hdisk? -a reserve_lock=no

On HDS storage with HDLM driver, and no disks in a Volume Group : change the dlmrsvlevel
attribute to no_reserve
    chdev -l dlmfdrv? -a dlmrsvlevel=no_reserve

Change the reserve_policy attribute for each disk dedicated to ASM, on each node of the
cluster. In our case, we have an IBM storage !!!

On node 1
{node1:root}/ # for i in 1 7 8 9 10 11 12 13 14
> do
> chdev -l hdisk$i -a reserve_policy=no_reserve
> done
changed
changed
changed
...
{node1:root}/ #

On node 2
{node2:root}/ # for i in 0 6 7 8 9 10 12 13 14
> do
> chdev -l hdisk$i -a reserve_policy=no_reserve
> done
changed
changed
changed
...
{node2:root}/ #

Example for one hdisk on node1 :

Issue the command lsattr -El hdisk7 | grep reserve to visualize the modified attribute for hdisk7 :

{node1:root}/ # lsattr -El hdisk7 | grep reserve
reserve_policy no_reserve                        Reserve Policy                          True
{node1:root}/ #

As described before, disks might have different names from one node to another: for
example, hdisk7 on node1 might be hdisk8 on node2, etc.


11.3.6  Identify Major and Minor number of hdisk on each node
  Disks            LUNs ID  Device Name            Node 1          Major  Minor   Node 2          Major  Minor
                   Number   (node1 and node2)      Corresp. hdisk  Num.   Num.    Corresp. hdisk  Num.   Num.
  ASM spfile disk  L0       /dev/ASMspf_disk       hdisk1                         hdisk0
  Disk 1 for ASM   L6       /dev/rhdisk?           hdisk7                         hdisk6
  Disk 2 for ASM   L7       /dev/rhdisk?           hdisk8                         hdisk7
  Disk 3 for ASM   L8       /dev/rhdisk?           hdisk9                         hdisk8
  Disk 4 for ASM   L9       /dev/rhdisk?           hdisk10                        hdisk9
  Disk 5 for ASM   LA       /dev/rhdisk?           hdisk11                        hdisk10
  Disk 6 for ASM   LB       /dev/rhdisk?           hdisk12                        hdisk12
  Disk 7 for ASM   LC       /dev/rhdisk?           hdisk13                        hdisk13
  Disk 8 for ASM   LD       /dev/rhdisk?           hdisk14                        hdisk14
We need to get major and minor number for each hdisk of node1 :

To obtain minor
and major
numbers of each
hdisk, on node1
We need to issue
the command :
ls la /dev/hdisk?
On node1 :

{node1:root}/ # for i in 1 7 8 9 10 11 12 13 14
do
ls -la /dev/*hdisk$i
done
brw------1 root
system
21, 5 Mar
crw------1 root
system
21, 5 Mar
brw------1 root
system
21, 11 Mar
crw------1 root
system
21, 11 Mar
brw------1 root
system
21, 12 Mar
crw------1 root
system
21, 12 Mar
brw------1 root
system
21, 13 Mar
crw------1 root
system
21, 13 Mar
brw------1 root
system
21, 14 Mar
crw------1 root
system
21, 14 Mar
brw------1 root
system
21, 15 Mar
crw------1 root
system
21, 15 Mar
brw------1 root
system
21, 16 Mar
crw------1 root
system
21, 16 Mar
brw------1 root
system
21, 17 Mar
crw------1 root
system
21, 17 Mar
brw------1 root
system
21, 18 Mar
crw------1 root
system
21, 18 Mar
{node1:root}/ #

12
12
07
07
07
07
07
07
07
07
07
07
07
07
27
27
27
27

12:18
12:18
10:31
10:31
10:31
10:31
10:31
10:31
10:31
10:31
10:31
10:31
10:31
10:31
16:13
16:13
16:13
16:13

To fill in the following table : taking the result from the command for hdisk1, we get 21 as the major number and 5 as the minor number.

Disks            Device names on node1
ASM Spfile Disk  /dev/hdisk1  and /dev/rhdisk1
Disk 1 for ASM   /dev/hdisk7  and /dev/rhdisk7
Disk 2 for ASM   /dev/hdisk8  and /dev/rhdisk8
Disk 3 for ASM   /dev/hdisk9  and /dev/rhdisk9
Disk 4 for ASM   /dev/hdisk10 and /dev/rhdisk10
Disk 5 for ASM   /dev/hdisk11 and /dev/rhdisk11
Disk 6 for ASM   /dev/hdisk12 and /dev/rhdisk12
Disk 7 for ASM   /dev/hdisk13 and /dev/rhdisk13
Disk 8 for ASM   /dev/hdisk14 and /dev/rhdisk14

Disks            LUN ID   Device Name          Node 1    Major  Minor   Node 2    Major  Minor
                          (seen on node1       hdisk     Num.   Num.    hdisk     Num.   Num.
                          and node2)
ASM Spfile Disk    L0     /dev/ASMspf_disk     hdisk1     21     5      hdisk0
Disk 1 for ASM     L6     /dev/rhdisk?         hdisk7     21     11     hdisk6
Disk 2 for ASM     L7     /dev/rhdisk?         hdisk8     21     12     hdisk7
Disk 3 for ASM     L8     /dev/rhdisk?         hdisk9     21     13     hdisk8
Disk 4 for ASM     L9     /dev/rhdisk?         hdisk10    21     14     hdisk9
Disk 5 for ASM     LA     /dev/rhdisk?         hdisk11    21     15     hdisk10
Disk 6 for ASM     LB     /dev/rhdisk?         hdisk12    21     16     hdisk12
Disk 7 for ASM     LC     /dev/rhdisk?         hdisk13    21     17     hdisk13
Disk 8 for ASM     LD     /dev/rhdisk?         hdisk14    21     18     hdisk14

We need to get the major and minor number for each hdisk of node2.

To obtain the minor and major numbers of each hdisk on node2, we issue the command : ls -la /dev/*hdisk?

On node2 :

{node2:root}/ # for i in 0 6 7 8 9 10 12 13 14
> do
> ls -la /dev/*hdisk$i
> done
brw-------   1 root     system       21,  5 Mar 12 12:17 /dev/hdisk0
crw-------   1 root     system       21,  5 Mar 12 12:17 /dev/rhdisk0
brw-------   1 root     system       21, 11 Mar 07 10:32 /dev/hdisk6
crw-------   1 root     system       21, 11 Mar 07 10:32 /dev/rhdisk6
brw-------   1 root     system       21, 12 Mar 07 10:32 /dev/hdisk7
crw-------   1 root     system       21, 12 Mar 07 10:32 /dev/rhdisk7
brw-------   1 root     system       21, 13 Mar 07 10:32 /dev/hdisk8
crw-------   1 root     system       21, 13 Mar 07 10:32 /dev/rhdisk8
brw-------   1 root     system       21, 14 Mar 07 10:32 /dev/hdisk9
crw-------   1 root     system       21, 14 Mar 12 13:55 /dev/rhdisk9
brw-------   1 root     system       21, 15 Mar 07 10:32 /dev/hdisk10
crw-------   1 root     system       21, 15 Mar 07 10:32 /dev/rhdisk10
brw-------   1 root     system       21, 16 Mar 07 10:32 /dev/hdisk12
crw-------   1 root     system       21, 16 Mar 07 10:32 /dev/rhdisk12
brw-------   1 root     system       21, 17 Mar 27 16:13 /dev/hdisk13
crw-------   1 root     system       21, 17 Mar 27 16:13 /dev/rhdisk13
brw-------   1 root     system       21, 18 Mar 27 16:13 /dev/hdisk14
crw-------   1 root     system       21, 18 Mar 27 16:13 /dev/rhdisk14
{node2:root}/ #
To fill in the following table : taking the result from the command for hdisk0, we get 21 as the major number and 5 as the minor number.

Disks            Device names on node2
ASM Spfile Disk  /dev/hdisk0  and /dev/rhdisk0
Disk 1 for ASM   /dev/hdisk6  and /dev/rhdisk6
Disk 2 for ASM   /dev/hdisk7  and /dev/rhdisk7
Disk 3 for ASM   /dev/hdisk8  and /dev/rhdisk8
Disk 4 for ASM   /dev/hdisk9  and /dev/rhdisk9
Disk 5 for ASM   /dev/hdisk10 and /dev/rhdisk10
Disk 6 for ASM   /dev/hdisk12 and /dev/rhdisk12
Disk 7 for ASM   /dev/hdisk13 and /dev/rhdisk13
Disk 8 for ASM   /dev/hdisk14 and /dev/rhdisk14

Disks            LUN ID   Device Name          Node 1    Major  Minor   Node 2    Major  Minor
                          (seen on node1       hdisk     Num.   Num.    hdisk     Num.   Num.
                          and node2)
ASM Spfile Disk    L0     /dev/ASMspf_disk     hdisk1     21     5      hdisk0     21     5
Disk 1 for ASM     L6     /dev/rhdisk?         hdisk7     21     11     hdisk6     21     11
Disk 2 for ASM     L7     /dev/rhdisk?         hdisk8     21     12     hdisk7     21     12
Disk 3 for ASM     L8     /dev/rhdisk?         hdisk9     21     13     hdisk8     21     13
Disk 4 for ASM     L9     /dev/rhdisk?         hdisk10    21     14     hdisk9     21     14
Disk 5 for ASM     LA     /dev/rhdisk?         hdisk11    21     15     hdisk10    21     15
Disk 6 for ASM     LB     /dev/rhdisk?         hdisk12    21     16     hdisk12    21     16
Disk 7 for ASM     LC     /dev/rhdisk?         hdisk13    21     17     hdisk13    21     17
Disk 8 for ASM     LD     /dev/rhdisk?         hdisk14    21     18     hdisk14    21     18

11.3.7  Create Unique Virtual Device to access same LUN from each node

As disks could have different names from one node to another (for example, L0 corresponds to hdisk1 on node1, and could be hdisk0 or another name on node2, etc.), for the ASM spfile disk it is mandatory to create a virtual device if the hdisk name is different on each node. Even if the hdisk name is the same on both nodes, adding an extra node could introduce a different hdisk name for the same LUN on the new node, so it is best to create a virtual device for the ASM spfile disk !!!

THEN from the following table, for the ASM spfile disk :

Disks            LUN ID   Device Name          Node 1    Major  Minor   Node 2    Major  Minor
                                               hdisk     Num.   Num.    hdisk     Num.   Num.
ASM Spfile Disk    L0     /dev/ASMspf_disk     hdisk1     21     5      hdisk0     21     5
Disk 1 for ASM     L6     /dev/rhdisk?         hdisk7     21     11     hdisk6     21     11
Disk 2 for ASM     L7     /dev/rhdisk?         hdisk8     21     12     hdisk7     21     12
Disk 3 for ASM     L8     /dev/rhdisk?         hdisk9     21     13     hdisk8     21     13
Disk 4 for ASM     L9     /dev/rhdisk?         hdisk10    21     14     hdisk9     21     14
Disk 5 for ASM     LA     /dev/rhdisk?         hdisk11    21     15     hdisk10    21     15
Disk 6 for ASM     LB     /dev/rhdisk?         hdisk12    21     16     hdisk12    21     16
Disk 7 for ASM     LC     /dev/rhdisk?         hdisk13    21     17     hdisk13    21     17
Disk 8 for ASM     LD     /dev/rhdisk?         hdisk14    21     18     hdisk14    21     18

We need to create a virtual device for the ASM spfile disk.

To create the same virtual device on each node, called /dev/ASMspf_disk, we use the major and minor numbers of the hdisks, which make the link between the virtual device and the hdisks.

Using the command : mknod Device_Name c MajorNum MinorNum

For first node, as root user :
{node1:root}/ # mknod /dev/ASMspf_disk c 21 5

For second node, as root user :
{node2:root}/ # mknod /dev/ASMspf_disk c 21 5
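Before re-running mknod (for example after a partial setup), it can help to check whether the device already exists. A minimal sketch (assumption: POSIX shell; on the real nodes mknod itself must be run as root — here we only print the command that would be needed when the device is missing):

```shell
# Print a diagnostic for a virtual device: either it already exists as a
# character device, or the mknod command that would create it is shown.
check_dev() {
  if [ -c "$1" ]; then
    echo "$1 exists (character device)"
  else
    echo "$1 missing: run  mknod $1 c $2 $3"
  fi
}
msg=$(check_dev /dev/ASMspf_disk 21 5)
echo "$msg"
```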

NOW for the ASM disks, we have 2 possible options :

   OPTION 1 : use the /dev/rhdisk? devices directly; then we don't need to create virtual devices. We'll just have to set the right user ownership and unix read/write permissions. For this option, move to the next chapter and follow option 1.

Or

   OPTION 2 : create virtual devices like /dev/ASM_Disk?, if wanted for human/administrator convenience.

With option 2, we'll have the following table :

Disks            LUN ID   Device Name          Node 1    Major  Minor   Node 2    Major  Minor
                          (seen on node1       hdisk     Num.   Num.    hdisk     Num.   Num.
                          and node2)
ASM Spfile Disk    L0     /dev/ASMspf_disk     hdisk1     21     5      hdisk0     21     5
Disk 1 for ASM     L6     /dev/ASM_Disk1       hdisk7     21     11     hdisk6     21     11
Disk 2 for ASM     L7     /dev/ASM_Disk2       hdisk8     21     12     hdisk7     21     12
Disk 3 for ASM     L8     /dev/ASM_Disk3       hdisk9     21     13     hdisk8     21     13
Disk 4 for ASM     L9     /dev/ASM_Disk4       hdisk10    21     14     hdisk9     21     14
Disk 5 for ASM     LA     /dev/ASM_Disk5       hdisk11    21     15     hdisk10    21     15
Disk 6 for ASM     LB     /dev/ASM_Disk6       hdisk12    21     16     hdisk12    21     16
Disk 7 for ASM     LC     /dev/ASM_Disk7       hdisk13    21     17     hdisk13    21     17
Disk 8 for ASM     LD     /dev/ASM_Disk8       hdisk14    21     18     hdisk14    21     18

We need to create a virtual device for each ASM disk.

By chance, the major and minor numbers are the same on both nodes for corresponding hdisks, but they could be different.

To create the same virtual devices on each node, called /dev/ASM_Disk?, we use the major and minor numbers of the hdisks, which make the link between the virtual devices and the hdisks.

Using the command : mknod Device_Name c MajorNum MinorNum

For first node, as root user :
{node1:root}/ # mknod /dev/ASM_Disk1 c 21 11
{node1:root}/ # mknod /dev/ASM_Disk2 c 21 12
{node1:root}/ # mknod /dev/ASM_Disk3 c 21 13
{node1:root}/ # mknod /dev/ASM_Disk4 c 21 14
.....

For second node, as root user :
{node2:root}/ # mknod /dev/ASM_Disk1 c 21 11
{node2:root}/ # mknod /dev/ASM_Disk2 c 21 12
{node2:root}/ # mknod /dev/ASM_Disk3 c 21 13
{node2:root}/ # mknod /dev/ASM_Disk4 c 21 14
.....
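The repetitive mknod commands can be generated from the mapping table. A dry-run sketch (assumption: POSIX shell; the "name major minor" values below come from the table above and must be adjusted to your own major/minor numbers — nothing is actually created here, the commands are only printed for review):

```shell
# Generate the mknod commands for the ASM virtual devices from a
# "name major minor" mapping, so they can be reviewed before running as root.
cmds=$(while read name maj min; do
  echo "mknod /dev/$name c $maj $min"
done <<EOF
ASM_Disk1 21 11
ASM_Disk2 21 12
ASM_Disk3 21 13
ASM_Disk4 21 14
EOF
)
printf '%s\n' "$cmds"
```

Piping the reviewed output to a root shell would then create the devices on each node.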

11.3.8  Set Ownership / Permissions on Virtual Devices

For ASM spfile :

THEN set ownership of the created virtual device :

For first node, as root user :
{node1:root}/ # chown asm:oinstall /dev/ASMspf_disk
For second node, as root user :
{node2:root}/ # chown asm:oinstall /dev/ASMspf_disk

THEN set read/write permissions of the created virtual device to 660 :

For first node, as root user :
{node1:root}/ # chmod 660 /dev/ASMspf_disk
For second node, as root user :
{node2:root}/ # chmod 660 /dev/ASMspf_disk

Checking the modifications :

For first node, as root user :
{node1:root}/ # ls -la /dev/* | grep "21, 5"
crw-rw----   1 asm      dba          21,  5 Mar 12 15:06 /dev/ASMspf_disk
brw-------   1 root     system       21,  5 Mar 12 12:18 /dev/hdisk1
crw-------   1 root     system       21,  5 Mar 12 12:18 /dev/rhdisk1
{node1:root}/ #

For second node, as root user :
{node2:root}/ # ls -la /dev/* | grep "21, 5"
crw-rw----   1 asm      dba          21,  5 Mar 12 15:06 /dev/ASMspf_disk
brw-------   1 root     system       21,  5 Mar 12 12:17 /dev/hdisk0
crw-------   1 root     system       21,  5 Mar 12 12:17 /dev/rhdisk0
{node2:root}/ #
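This check can be automated by parsing the listing. A minimal sketch (assumption: POSIX shell; the sample `ls -la` line is inlined — on a real node you would use `ls -la /dev/ASMspf_disk` instead):

```shell
# Verify owner, group and mode on a device listing line.
line='crw-rw----   1 asm      dba          21,  5 Mar 12 15:06 /dev/ASMspf_disk'
set -- $line            # word-split the line into positional parameters
perms=$1; owner=$3; group=$4
if [ "$perms" = "crw-rw----" ] && [ "$owner" = "asm" ]; then
  echo "OK: $owner:$group $perms"
else
  echo "WRONG settings: $owner:$group $perms"
fi
```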

For ASM disks :

With option 1, we'll have the following table, BUT we'll not use major and minor numbers :

Disks            LUN ID   Device Name seen on node1 (1) and node2 (2)   Node 1    Node 2
                                                                        hdisk     hdisk
ASM Spfile Disk    L0     /dev/ASMspf_disk                              hdisk1    hdisk0
Disk 1 for ASM     L6     /dev/rhdisk7 on 1  --- /dev/rhdisk6 on 2      hdisk7    hdisk6
Disk 2 for ASM     L7     /dev/rhdisk8 on 1  --- /dev/rhdisk7 on 2      hdisk8    hdisk7
Disk 3 for ASM     L8     /dev/rhdisk9 on 1  --- /dev/rhdisk8 on 2      hdisk9    hdisk8
Disk 4 for ASM     L9     /dev/rhdisk10 on 1 --- /dev/rhdisk9 on 2      hdisk10   hdisk9
Disk 5 for ASM     LA     /dev/rhdisk11 on 1 --- /dev/rhdisk10 on 2     hdisk11   hdisk10
Disk 6 for ASM     LB     /dev/rhdisk12 on 1 --- /dev/rhdisk12 on 2     hdisk12   hdisk12
Disk 7 for ASM     LC     /dev/rhdisk13 on 1 --- /dev/rhdisk13 on 2     hdisk13   hdisk13

We just need to set ownership on the /dev/rhdisk? devices mapped to the LUNs for ASM, and to set 660 read/write permissions on these rhdisk? devices.

THEN set ownership of the devices :

For first node, as root user :
{node1:root}/ # for i in 7 8 9 10 11 12 13
> do
> chown asm:dba /dev/rhdisk$i
> done
{node1:root}/ #

For second node, as root user :
{node2:root}/ # for i in 6 7 8 9 10 12 13
> do
> chown asm:dba /dev/rhdisk$i
> done
{node2:root}/ #

THEN set read/write permissions of the devices to 660 :

For first node, as root user :
{node1:root}/ # for i in 7 8 9 10 11 12 13 14
> do
> chmod 660 /dev/rhdisk$i
> done
{node1:root}/ #

For second node, as root user :
{node2:root}/ # for i in 6 7 8 9 10 12 13 14
> do
> chmod 660 /dev/rhdisk$i
> done
{node2:root}/ #

Checking the modifications (check also on the second node) :

For first node, as root user :
{node1:root}/ # ls -la /dev/rhdisk* | grep asm
crw-rw----   1 asm      dba          21, 11 Mar 07 10:31 /dev/rhdisk7
crw-rw----   1 asm      dba          21, 12 Mar 07 10:31 /dev/rhdisk8
crw-rw----   1 asm      dba          21, 13 Mar 07 10:31 /dev/rhdisk9
crw-rw----   1 asm      dba          21, 14 Mar 07 10:31 /dev/rhdisk10
crw-rw----   1 asm      dba          21, 15 Mar 07 10:31 /dev/rhdisk11
crw-rw----   1 asm      dba          21, 16 Mar 07 10:31 /dev/rhdisk12
crw-rw----   1 asm      dba          21, 17 Mar 07 10:31 /dev/rhdisk13
{node1:root}/ #

With option 2, we'll have the following table :

Disks            LUN ID   Device Name          Node 1    Major  Minor   Node 2    Major  Minor
                          (seen on node1       hdisk     Num.   Num.    hdisk     Num.   Num.
                          and node2)
ASM Spfile Disk    L0     /dev/ASMspf_disk     hdisk1     21     5      hdisk0     21     5
Disk 1 for ASM     L6     /dev/ASM_Disk1       hdisk7     21     11     hdisk6     21     11
Disk 2 for ASM     L7     /dev/ASM_Disk2       hdisk8     21     12     hdisk7     21     12
Disk 3 for ASM     L8     /dev/ASM_Disk3       hdisk9     21     13     hdisk8     21     13
Disk 4 for ASM     L9     /dev/ASM_Disk4       hdisk10    21     14     hdisk9     21     14
Disk 5 for ASM     LA     /dev/ASM_Disk5       hdisk11    21     15     hdisk10    21     15
Disk 6 for ASM     LB     /dev/ASM_Disk6       hdisk12    21     16     hdisk12    21     16
Disk 7 for ASM     LC     /dev/ASM_Disk7       hdisk13    21     17     hdisk13    21     17
THEN set ownership of the created virtual devices :

For first node, as root user :
{node1:root}/ # for i in 1 2 3 4 5
> do
> chown asm:dba /dev/ASM_Disk$i
> done
{node1:root}/ #

For second node, as root user :
{node2:root}/ # for i in 1 2 3 4 5
> do
> chown asm:dba /dev/ASM_Disk$i
> done
{node2:root}/ #

THEN set read/write permissions of the created virtual devices to 660 :

For first node, as root user :
{node1:root}/ # for i in 1 2 3 4 5
> do
> chmod 660 /dev/ASM_Disk$i
> done
{node1:root}/ #

For second node, as root user :
{node2:root}/ # for i in 1 2 3 4 5
> do
> chmod 660 /dev/ASM_Disk$i
> done
{node2:root}/ #

Checking the modifications :

For first node, as root user :
{node1:root}/ # ls -la /dev/* | grep "21, 11"
brw-------   1 root     system       21, 11 Mar 07 10:31 /dev/hdisk7
crw-------   1 root     system       21, 11 Mar 07 10:31 /dev/rhdisk7
crw-rw-r--   1 asm      dba          21, 11 Apr 03 14:24 /dev/ASM_Disk1

Check also for all disks, and on the second node ...

11.3.9  Formatting the virtual devices (zeroing)

Now we need to format the virtual devices, or rhdisks. In both options, zeroing the rhdisk is sufficient, for :

   the ASM spfile disk
   the ASM disks

Format (zeroing) and verify that you can read the disks from each node :

On node 1 :
{node1:root}/ # for i in 7 8 9 10 11 12 13 14
> do
> dd if=/dev/zero of=/dev/rhdisk$i bs=8192 count=25000 &
> done
25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.
...

On node 2 :
{node2:root}/ # for i in 6 7 8 9 10 12 13 14
> do
> dd if=/dev/zero of=/dev/rhdisk$i bs=8192 count=25000 &
> done
25000+0 records in.
25000+0 records out.
...

Verify devices concurrent read/write access by running the dd command at the same time from each node.

At the same time, on node 1 :
{node1:root}/ # for i in 7 8 9 10 11 12 13 14
> do
> dd if=/dev/zero of=/dev/rhdisk$i bs=8192 count=25000 &
> done
25000+0 records in.
25000+0 records out.
...

and on node 2 :
{node2:root}/ # for i in 6 7 8 9 10 12 13 14
> do
> dd if=/dev/zero of=/dev/rhdisk$i bs=8192 count=25000 &
> done
25000+0 records in.
25000+0 records out.
...
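The effect of the zeroing can be checked by counting non-zero bytes in the written area. A sketch (assumption: an ordinary temp file stands in for the real /dev/rhdisk device, which requires root to write):

```shell
# Zero a small area with dd, then count bytes that are NOT zero --
# a cleanly zeroed header yields 0.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=8192 count=4 2>/dev/null
nonzero=$(tr -d '\0' < "$f" | wc -c)
echo "non-zero bytes: $nonzero"
rm -f "$f"
```

On a real node, the same `tr | wc` pipeline over the first blocks of /dev/rhdiskN gives a quick pass/fail complement to inspecting `lquerypv -h` output.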

11.3.10  Removing assigned PVID on hdisk

A PVID MUST NOT BE SET on hdisks used for ASM, and should not be set on the OCR, voting and ASM spfile disks !!!!
From the following table, make sure that no PVID is assigned to the hdisks mapped to the LUNs on each node.
Disks            LUN ID   Device Name          Node 1    Major  Minor   Node 2    Major  Minor
                          (seen on node1       hdisk     Num.   Num.    hdisk     Num.   Num.
                          and node2)
OCR 1              L1     /dev/ocr_disk1       hdisk2     21     6      hdisk1     21     6
OCR 2              L2     /dev/ocr_disk2       hdisk3     21     7      hdisk2     21     7
Voting 1           L3     /dev/voting_disk1    hdisk4     21     8      hdisk3     21     8
Voting 2           L4     /dev/voting_disk2    hdisk5     21     9      hdisk4     21     9
Voting 3           L5     /dev/voting_disk3    hdisk6     21     10     hdisk5     21     10
ASM Spfile Disk    L0     /dev/asmspf_disk     hdisk1     21     5      hdisk0     21     5
Disk 1 for ASM     L6     /dev/rhdisk?         hdisk7     21     11     hdisk6     21     11
Disk 2 for ASM     L7     /dev/rhdisk?         hdisk8     21     12     hdisk7     21     12
Disk 3 for ASM     L8     /dev/rhdisk?         hdisk9     21     13     hdisk8     21     13
Disk 4 for ASM     L9     /dev/rhdisk?         hdisk10    21     14     hdisk9     21     14
Disk 5 for ASM     LA     /dev/rhdisk?         hdisk11    21     15     hdisk10    21     15
Disk 6 for ASM     LB     /dev/rhdisk?         hdisk12    21     16     hdisk12    21     16
Disk 7 for ASM     LC     /dev/rhdisk?         hdisk13    21     17     hdisk13    21     17
Disk 8 for ASM     LD     /dev/rhdisk?         hdisk14    21     18     hdisk14    21     18

To remove the PVID from an hdisk, we will use the chdev command. The PVID must be removed from the hdisks on each node.

IMPORTANT !!!!! Don't remove the PVID from hdisks which are not yours !!!!

Using the command : chdev -l hdisk? -a pv=clear

For first node, as root user :
{node1:root}/ # for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14
> do
> chdev -l hdisk$i -a pv=clear
> done
hdisk1 changed
hdisk2 changed
hdisk3 changed
hdisk4 changed
...

For second node, as root user :
{node2:root}/ # for i in 0 1 2 3 4 5 6 7 8 9 10 12 13 14
> do
> chdev -l hdisk$i -a pv=clear
> done
hdisk0 changed
hdisk1 changed
hdisk2 changed
hdisk3 changed
...

Check with the lspv command as root on each node whether PVIDs are still assigned or not !!!
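The lspv check can be scripted too. A minimal sketch (assumption: POSIX shell and awk; the sample `lspv`-style output is inlined — on a real node pipe `lspv` instead; column 2 is the PVID or the literal string "none"):

```shell
# Flag hdisks that still carry a PVID.
sample='hdisk0          00c478de00044e5a  rootvg  active
hdisk7          none              None
hdisk8          none              None'
flagged=$(printf '%s\n' "$sample" | awk '$2 != "none" {print $1}')
echo "still carrying a PVID: $flagged"
```

Any disk listed here that is meant for OCR, voting or ASM still needs its PVID cleared (note that rootvg disks like hdisk0 on a real node legitimately keep theirs).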


11.4 Checking Shared Devices

Checking for reserve_policy and PVID settings : make sure that no PVID is assigned to the hdisks, and that no single-path reserve policy is set.

As root user, using the command : lsattr -El hdisk1

{node1:root}/ # lsattr -El hdisk1
PR_key_value   none                              Persistant Reserve Key Value            True
cache_method   fast_write                        Write Caching method                    False
ieee_volname   600A0B80000C54030000022D45F4DF5F  IEEE Unique volume name                 False
lun_id         0x0000000000000000                Logical Unit Number                     False
max_transfer   0x100000                          Maximum TRANSFER Size                   True
prefetch_mult  1                                 Multiple of blocks to prefetch on read  False
pvid           none                              Physical volume identifier              False
q_type         simple                            Queuing Type                            False
queue_depth    10                                Queue Depth                             True
raid_level     0                                 RAID Level                              False
reassign_to    120                               Reassign Timeout value                  True
reserve_policy no_reserve                        Reserve Policy                          True
rw_timeout     30                                Read/Write Timeout value                True
scsi_id        0x690600                          SCSI ID                                 False
size           100                               Size in Mbytes                          False
write_cache    yes                               Write Caching enabled                   False
{node1:root}/ #

To check all hdisks in one shot, use the following shell script :

for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
do
lsattr -El hdisk$i | grep reserve_policy | awk '{print $1,$2 }'| read rp1 rp2
lsattr -El hdisk$i | grep pvid | awk '{print $1,$2 }'| read pv1 pv2
lsattr -El hdisk$i | grep lun_id | awk '{print $1,$2 }'| read li1 li2
if [ "$li1" != "" ]
then
echo hdisk$i' -> '$li1' = '$li2' / '$rp1' = '$rp2' / '$pv1' = '$pv2
fi
done
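The same validation can print explicit failures instead of raw values. A sketch (assumption: POSIX shell and awk; sample "attribute value" pairs are inlined — on a real node pipe `lsattr -El hdiskN` instead):

```shell
# Validate lsattr-style output against the expected shared-disk settings:
# reserve_policy must be no_reserve, and pvid must be none.
sample='reserve_policy single_path
pvid none'
fails=$(printf '%s\n' "$sample" | awk '
  $1 == "reserve_policy" && $2 != "no_reserve" {print "FAIL: reserve_policy is " $2}
  $1 == "pvid"           && $2 != "none"       {print "FAIL: pvid is set"}')
echo "$fails"
```

An empty result means the disk passes both checks; here the sample deliberately fails on reserve_policy.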


For first node, as root user, running the shell script :


{node1:root}/ # for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
do
lsattr -El hdisk$i | grep reserve_policy | awk '{print $1,$2 }'| read rp1 rp2
lsattr -El hdisk$i | grep pvid | awk '{print $1,$2 }'| read pv1 pv2
lsattr -El hdisk$i | grep lun_id | awk '{print $1,$2 }'| read li1 li2
if [ "$li1" != "" ]
then
echo hdisk$i' -> '$li1' = '$li2' / '$rp1' = '$rp2' / '$pv1' = '$pv2
fi
done
hdisk1 -> lun_id = 0x0000000000000000 / reserve_policy = no_reserve / pvid = none
hdisk2 -> lun_id = 0x0001000000000000 / reserve_policy = no_reserve / pvid = none
hdisk3 -> lun_id = 0x0002000000000000 / reserve_policy = no_reserve / pvid = none
hdisk4 -> lun_id = 0x0003000000000000 / reserve_policy = no_reserve / pvid = none
hdisk5 -> lun_id = 0x0004000000000000 / reserve_policy = no_reserve / pvid = none
hdisk6 -> lun_id = 0x0005000000000000 / reserve_policy = no_reserve / pvid = none
hdisk7 -> lun_id = 0x0006000000000000 / reserve_policy = no_reserve / pvid = none
hdisk8 -> lun_id = 0x0007000000000000 / reserve_policy = no_reserve / pvid = none
hdisk9 -> lun_id = 0x0008000000000000 / reserve_policy = no_reserve / pvid = none
hdisk10 -> lun_id = 0x0009000000000000 / reserve_policy = no_reserve / pvid = none
hdisk11 -> lun_id = 0x000a000000000000 / reserve_policy = no_reserve / pvid = none
hdisk12 -> lun_id = 0x000b000000000000 / reserve_policy = no_reserve / pvid = none
hdisk13 -> lun_id = 0x000c000000000000 / reserve_policy = no_reserve / pvid = none
{node1:root}/ #

For Second node, as root user, running same shell script :


{node2:root}/ # for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
do
lsattr -El hdisk$i | grep reserve_policy | awk '{print $1,$2 }'| read rp1 rp2
lsattr -El hdisk$i | grep pvid | awk '{print $1,$2 }'| read pv1 pv2
lsattr -El hdisk$i | grep lun_id | awk '{print $1,$2 }'| read li1 li2
if [ "$li1" != "" ]
then
echo hdisk$i' -> '$li1' = '$li2' / '$rp1' = '$rp2' / '$pv1' = '$pv2
fi
done
hdisk0 -> lun_id = 0x0000000000000000 / reserve_policy = no_reserve / pvid = none
hdisk1 -> lun_id = 0x0001000000000000 / reserve_policy = no_reserve / pvid = none
hdisk2 -> lun_id = 0x0002000000000000 / reserve_policy = no_reserve / pvid = none
hdisk3 -> lun_id = 0x0003000000000000 / reserve_policy = no_reserve / pvid = none
hdisk4 -> lun_id = 0x0004000000000000 / reserve_policy = no_reserve / pvid = none
hdisk5 -> lun_id = 0x0005000000000000 / reserve_policy = no_reserve / pvid = none
hdisk6 -> lun_id = 0x0006000000000000 / reserve_policy = no_reserve / pvid = none
hdisk7 -> lun_id = 0x0007000000000000 / reserve_policy = no_reserve / pvid = none
hdisk8 -> lun_id = 0x0008000000000000 / reserve_policy = no_reserve / pvid = none
hdisk9 -> lun_id = 0x0009000000000000 / reserve_policy = no_reserve / pvid = none
hdisk10 -> lun_id = 0x000a000000000000 / reserve_policy = no_reserve / pvid = none
hdisk12 -> lun_id = 0x000b000000000000 / reserve_policy = no_reserve / pvid = none
hdisk13 -> lun_id = 0x000c000000000000 / reserve_policy = no_reserve / pvid = none
{node2:root}/ #


11.5 Recommendations, hints and tips

11.5.1  OCR / Voting disks

!!! IN ANY case, DON'T assign a PVID to the OCR / Voting disks once Oracle
Clusterware has been installed, whether in test or in production !!!
Assigning a PVID will erase the hdisk header, with the risk of losing its content.

AFTER CRS installation, all hdisks prepared for the OCR and voting disks have the dba or oinstall group assigned.

How to identify hdisks used as OCR and voting disks, for OCR :

{node1:root}/ # ls -la /dev/ocr*disk*
crw-r-----   1 root     dba          20,  6 Mar 12 15:03 /dev/ocr_disk1
crw-r-----   1 root     dba          20,  7 Mar 12 15:03 /dev/ocr_disk2
{node1:root}/ #

Then :

{node1:root}/ # ls -la /dev/* | grep "20, 6"
brw-------   1 root     system       20,  6 Mar 07 10:31 /dev/hdisk2
crw-r-----   1 root     dba          20,  6 Mar 12 15:03 /dev/ocr_disk1
crw-------   1 root     system       20,  6 Mar 07 10:31 /dev/rhdisk2
{node1:root}/ #

And using the AIX lquerypv command, example with OCR and corresponding rhdisk2 on node1 :

{node1:crs}/crs/11.1.0/bin ->lquerypv -h /dev/rhdisk2|grep 'z{|}'
00000010   C2BA0000 00001000 00012BFF 7A7B7C7D  |..........+.z{|}|
{node1:root}/ #

OR

{node1:crs}/crs/11.1.0/bin ->lquerypv -h /dev/rhdisk2|grep '00820000 FFC00000 00000000 00000000'
00000000   00820000 FFC00000 00000000 00000000  |................|
{node1:root}/ #

OR

{node1:root}/ # lquerypv -h /dev/rhdisk2
00000000   00820000 FFC00000 00000000 00000000  |................|
00000010   C2BA0000 00001000 00012BFF 7A7B7C7D  |..........+.z{|}|
00000020   00000000 00000000 00000000 00000000  |................|
...
000000F0   00000000 00000000 00000000 00000000  |................|
{node1:root}/ #

AFTER CRS installation, for Voting :

How to identify hdisks used as voting disks :

{node1:root}/ # ls -la /dev/vot*_disk*
crw-r--r--   1 crs      dba          21,  8 Apr 05 14:12 /dev/voting_disk1
crw-r--r--   1 crs      dba          21,  9 Apr 05 14:12 /dev/voting_disk2
crw-r--r--   1 crs      dba          21, 10 Apr 05 14:12 /dev/voting_disk3
{node1:root}/ #

Then :

{node1:root}/ # ls -la /dev/* | grep "21, 8"
brw-------   1 root     system       21,  8 Mar 07 10:31 /dev/hdisk4
crw-------   1 root     system       21,  8 Mar 07 10:31 /dev/rhdisk4
crw-r--r--   1 crs      dba          21,  8 Apr 05 14:13 /dev/voting_disk1
{node1:root}/ #

And using the AIX lquerypv command, example with voting disk 1 and corresponding rhdisk4 on node1 :

{node1:crs}/crs/11.1.0/bin ->lquerypv -h /dev/rhdisk4|grep 'z{|}'
00000010   A4120000 00000200 00095FFF 7A7B7C7D  |.........._.z{|}|
{node1:root}/ #

OR

{node1:crs}/crs/11.1.0/bin ->lquerypv -h /dev/rhdisk4|grep '00220000 FFC00000 00000000 00000000'
00000000   00220000 FFC00000 00000000 00000000  |."..............|
{node1:root}/ #

OR

{node1:root}/ # lquerypv -h /dev/rhdisk4
00000000   00220000 FFC00000 00000000 00000000  |."..............|
00000010   A4120000 00000200 00095FFF 7A7B7C7D  |.........._.z{|}|
00000020   00000000 00000000 00000000 00000000  |................|
...
000000F0   00000000 00000000 00000000 00000000  |................|
{node1:root}/ #

You must reset the hdisk header having the OCR or voting disk stamp.

How to free an hdisk at AIX level, when the hdisk is not used anymore as an OCR or voting disk, or needs to be reset after a failed CRS installation :

{node1:root}/ # dd if=/dev/zero of=/dev/rhdisk2 bs=8192 count=25000 &
25000+0 records in.
25000+0 records out.

THEN a query on the hdisk header will return nothing more than lines full of 0 :

{node1:root}/ # lquerypv -h /dev/rhdisk2
00000000   00000000 00000000 00000000 00000000  |................|
00000010   00000000 00000000 00000000 00000000  |................|
...
000000F0   00000000 00000000 00000000 00000000  |................|
{node1:root}/ #

11.5.2  ASM disks

!!! IN ANY case, DON'T assign a PVID to the ASM disks once ASM has been
installed, whether in test or in production !!!
Assigning a PVID will erase the hdisk header, with the risk of losing its content.

AFTER ASM installation and ASM diskgroup creation, how to identify hdisks used by ASM :

All hdisks prepared for ASM are owned by the asm user and the dba group :

{node1:root}/ # ls -la /dev/*hdisk* | grep asm
brw-rw----   1 asm      dba          21, 11 Mar 07 10:31 /dev/hdisk7
crw-rw----   1 asm      dba          21, 11 Mar 07 10:31 /dev/rhdisk7
brw-rw----   1 asm      dba          21, 12 Mar 07 10:31 /dev/hdisk8
crw-rw----   1 asm      dba          21, 12 Mar 07 10:31 /dev/rhdisk8
brw-rw----   1 asm      dba          21, 13 Mar 07 10:31 /dev/hdisk9
crw-rw----   1 asm      dba          21, 13 Mar 07 10:31 /dev/rhdisk9
brw-rw----   1 asm      dba          21, 14 Mar 07 10:31 /dev/hdisk10
crw-rw----   1 asm      dba          21, 14 Mar 07 10:31 /dev/rhdisk10
brw-rw----   1 asm      dba          21, 15 Mar 07 10:31 /dev/hdisk11
crw-rw----   1 asm      dba          21, 15 Mar 07 10:31 /dev/rhdisk11
brw-rw----   1 asm      dba          21, 16 Mar 07 10:31 /dev/hdisk12
crw-rw----   1 asm      dba          21, 16 Mar 07 10:31 /dev/rhdisk12
brw-rw----   1 asm      dba          21, 17 Mar 27 16:13 /dev/hdisk13
crw-rw----   1 asm      dba          21, 17 Mar 27 16:13 /dev/rhdisk13
brw-rw----   1 asm      dba          21, 18 Mar 27 16:13 /dev/hdisk14
crw-rw----   1 asm      dba          21, 18 Mar 27 16:13 /dev/rhdisk14
{node1:root}/ #

Or, if using virtual devices :

{node1:root}/ # ls -la /dev/ASM*Disk*
crw-rw----   1 asm      dba          21, 11 Apr 03 14:24 /dev/ASM_Disk1
crw-rw----   1 asm      dba          21, 12 Apr 03 14:24 /dev/ASM_Disk2
crw-rw----   1 asm      dba          21, 13 Apr 03 14:24 /dev/ASM_Disk3
crw-rw----   1 asm      dba          21, 14 Apr 03 14:24 /dev/ASM_Disk4
crw-rw----   1 asm      dba          21, 15 Apr 03 14:24 /dev/ASM_Disk5
crw-rw----   1 asm      dba          21, 16 Apr 03 14:24 /dev/ASM_Disk6
crw-rw----   1 asm      dba          21, 17 Apr 03 14:24 /dev/ASM_Disk7
crw-rw----   1 asm      dba          21, 18 Apr 03 14:24 /dev/ASM_Disk8
{node1:root}/ #

THEN, for example with /dev/ASM_Disk1, we use the major/minor numbers to identify the matching rhdisk :

{node1:root}/ # ls -la /dev/* | grep "21, 11"
brw-rw----   1 asm      dba          21, 11 Mar 07 10:31 /dev/hdisk7
crw-rw----   1 asm      dba          21, 11 Mar 07 10:31 /dev/rhdisk7
crw-rw----   1 oracle   dba          21, 11 Mar 07 10:31 /dev/ASM_Disk1
{node1:root}/ #

And using the AIX lquerypv command, example with ASM_Disk1 and corresponding rhdisk7 on node1 :

{node1:crs}/crs/11.1.0/bin ->lquerypv -h /dev/rhdisk7|grep ORCLDISK
00000020   4F52434C 4449534B 00000000 00000000  |ORCLDISK........|
{node1:root}/ #

OR

{node1:root}/ # lquerypv -h /dev/rhdisk7
00000000   00820101 00000000 80000001 D12A3D5B  |.............*=[|
00000010   00000000 00000000 00000000 00000000  |................|
00000020   4F52434C 4449534B 00000000 00000000  |ORCLDISK........|
00000030   00000000 00000000 00000000 00000000  |................|
00000040   0A100000 00010203 41534D44 425F4752  |........ASMDB_GR|
00000050   4F55505F 30303031 00000000 00000000  |OUP_0001........|
00000060   00000000 00000000 41534D44 425F4752  |........ASMDB_GR|
00000070   4F555000 00000000 00000000 00000000  |OUP.............|
00000080   00000000 00000000 41534D44 425F4752  |........ASMDB_GR|
00000090   4F55505F 30303031 00000000 00000000  |OUP_0001........|
000000A0   00000000 00000000 00000000 00000000  |................|
000000B0   00000000 00000000 00000000 00000000  |................|
000000C0   00000000 00000000 01F5874B ED6CE000  |...........K.l..|
000000D0   01F588CA 150BA800 02001000 00100000  |................|
000000E0   0001BC80 00001400 00000002 00000001  |................|
000000F0   00000002 00000002 00000000 00000000  |................|
{node1:root}/ #

ORCLDISK stands for an Oracle ASM disk. ASMDB_GROUP / ASMDB_GROUP_0001 stand for the ASM disk group used for the ASMDB database we have created in our example (the one you will create later).

A NON used ASM disk : the query on the hdisk header will return nothing more than lines full of 0 :

{node1:root}/ # lquerypv -h /dev/rhdisk2
00000000   00000000 00000000 00000000 00000000  |................|
00000010   00000000 00000000 00000000 00000000  |................|
...
000000F0   00000000 00000000 00000000 00000000  |................|
{node1:root}/ #
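The ORCLDISK tag makes this decision easy to script. A minimal sketch (assumption: POSIX shell; a sample lquerypv-style line is inlined — on a real node you would capture `lquerypv -h /dev/rhdiskN` instead):

```shell
# Decide from a header dump whether a disk is stamped as an ASM disk,
# by looking for the ORCLDISK tag in the dump text.
header='00000020   4F52434C 4449534B 00000000 00000000  |ORCLDISK........|'
case "$header" in
  *ORCLDISK*) verdict="ASM disk" ;;
  *)          verdict="not an ASM disk" ;;
esac
echo "$verdict"
```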

Example of output with another rhdisk :

{node1:root}/ # lquerypv -h /dev/rhdisk8
00000000   00820101 00000000 80000000 DEC8B940  |...............@|
00000010   00000000 00000000 00000000 00000000  |................|
00000020   4F52434C 4449534B 00000000 00000000  |ORCLDISK........|
00000030   00000000 00000000 00000000 00000000  |................|
00000040   0A100000 00000103 41534D44 425F464C  |........ASMDB_FL|
00000050   41534852 45434F56 4552595F 30303030  |ASHRECOVERY_0000|
00000060   00000000 00000000 41534D44 425F464C  |........ASMDB_FL|
00000070   41534852 45434F56 45525900 00000000  |ASHRECOVERY.....|
00000080   00000000 00000000 41534D44 425F464C  |........ASMDB_FL|
00000090   41534852 45434F56 4552595F 30303030  |ASHRECOVERY_0000|
000000A0   00000000 00000000 00000000 00000000  |................|
000000B0   00000000 00000000 00000000 00000000  |................|
000000C0   00000000 00000000 01F5874E 4E47F800  |...........NNG..|
000000D0   01F588CA 14FA8C00 02001000 00100000  |................|
000000E0   0001BC80 00001400 00000002 00000001  |................|
000000F0   00000002 00000002 00000000 00000000  |................|
{node1:root}/ #

ORCLDISK stands for an Oracle ASM disk. ASMDB_FLASHRECOVERY stands for the ASM disk group used for the Flash Recovery Area of the ASMDB database we have created in our example (the one you will create later).

How to free an hdisk at AIX level, when the hdisk is no longer used for the OCR or voting disks, or needs to be reset after a failed CRS installation?

You must reset the hdisk header carrying the ASM disk stamp:

{node1:root}/ # dd if=/dev/zero of=/dev/rhdisk2 bs=8192 count=25000 &

25000+0 records in.
25000+0 records out.

THEN a query on the hdisk header will return nothing more than lines full of 0:

{node1:root}/ # lquerypv -h /dev/rhdisk2
00000000   00000000 00000000 00000000 00000000  |................|
00000010   00000000 00000000 00000000 00000000  |................|
00000020   00000000 00000000 00000000 00000000  |................|
00000030   00000000 00000000 00000000 00000000  |................|
00000040   00000000 00000000 00000000 00000000  |................|
00000050   00000000 00000000 00000000 00000000  |................|
00000060   00000000 00000000 00000000 00000000  |................|
00000070   00000000 00000000 00000000 00000000  |................|
00000080   00000000 00000000 00000000 00000000  |................|
00000090   00000000 00000000 00000000 00000000  |................|
000000A0   00000000 00000000 00000000 00000000  |................|
000000B0   00000000 00000000 00000000 00000000  |................|
000000C0   00000000 00000000 00000000 00000000  |................|
000000D0   00000000 00000000 00000000 00000000  |................|
000000E0   00000000 00000000 00000000 00000000  |................|
000000F0   00000000 00000000 00000000 00000000  |................|
{node1:root}/ #

Beware: assigning a PVID to an hdisk used by ASM will have the same effect on the disk header (the ASM stamp is overwritten) !!!!
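Before reusing candidate disks, it can help to check which ones still carry the ASM stamp. The helper below is our own sketch (not an Oracle tool): it reads an `lquerypv -h` dump on stdin and reports whether the ORCLDISK stamp is still present.

```shell
# Sketch of a helper (not an Oracle tool): report whether an "lquerypv -h"
# dump, read on stdin, still carries the ORCLDISK stamp in the header.
is_asm_stamped() {
    if grep -q "ORCLDISK"; then
        echo "stamped: reset the header with dd before reusing this disk"
    else
        echo "clean"
    fi
}

# Example on header lines captured earlier in this chapter:
printf '00000020   4F52434C 4449534B 00000000 00000000  |ORCLDISK........|\n' \
    | is_asm_stamped    # prints: stamped: reset the header with dd ...
printf '00000020   00000000 00000000 00000000 00000000  |................|\n' \
    | is_asm_stamped    # prints: clean
```

On a node you would pipe the live dump into it, e.g. `lquerypv -h /dev/rhdisk8 | is_asm_stamped`.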


12 ORACLE CLUSTERWARE (CRS) INSTALLATION

The Oracle Cluster Ready Services installation is necessary and mandatory.
Oracle Clusterware will be installed in /crs/11.1.0 ($CRS_HOME, $ORA_CRS_HOME) on each node.

On each node, Oracle Clusterware (CRS) will be started, with:
o Public, private and virtual hostnames defined
o Public and private networks defined
o OCR and voting disks configured
o Cluster interconnect configured (private network)
o Virtual IP (VIP) configured and started
o Oracle Notification Server (ONS) started
o Global Services Daemon (GSD) started


12.1 Cluster Verification Utility

12.1.1 Understanding and Using Cluster Verification Utility

NEW since 10gRAC R2 !!!!

Cluster Verification Utility (CVU) is a tool that performs system checks. CVU commands assist you in confirming that your system is properly configured for:

o Oracle Clusterware
o Oracle Real Application Clusters installation

Introduction to Installing and Configuring Oracle Clusterware and Oracle Real Application Clusters
http://download-uk.oracle.com/docs/cd/B19306_01/install.102/b14201/intro.htm#i1026198
Oracle Clusterware and Oracle Real Application Clusters Pre-Installation Procedures
http://download-uk.oracle.com/docs/cd/B19306_01/install.102/b14201/part2.htm
12.1.2 Using CVU to Determine if Installation Prerequisites are Complete

On both nodes, using the Oracle Clusterware Disk1, as root user, DO run:

/clusterware/Disk1/rootpre/rootpre.sh

CVU uses libraries installed by the rootpre.sh script to run properly.
On node1:

{node1:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_08-02-07.10:14:39
Kernel extension /etc/pw-syscall.64bit_kernel is loaded.
Unloading the existing extension: /etc/pw-syscall.64bit_kernel....

Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation

Unconfigured the kernel extension successfully
Unloaded the kernel extension successfully
Saving the original files in /etc/ora_save_08-02-07.10:14:39....
Copying new kernel extension to /etc....
Loading the kernel extension from /etc

Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation

Successfully loaded /etc/pw-syscall.64bit_kernel with kmid: 0x4245600
Successfully configured /etc/pw-syscall.64bit_kernel with kmid: 0x4245600
The kernel extension was successfuly loaded.

Configuring Asynchronous I/O....
Asynchronous I/O is already defined

Configuring POSIX Asynchronous I/O....
Posix Asynchronous I/O is already defined

Checking if group services should be configured....
Nothing to configure.
{node1:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware #

On node2:

{node2:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_08-02-07.10:15:22
Kernel extension /etc/pw-syscall.64bit_kernel is loaded.
Unloading the existing extension: /etc/pw-syscall.64bit_kernel....

Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation

Unconfigured the kernel extension successfully
Unloaded the kernel extension successfully
Saving the original files in /etc/ora_save_08-02-07.10:15:22....
Copying new kernel extension to /etc....
Loading the kernel extension from /etc

Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation

Successfully loaded /etc/pw-syscall.64bit_kernel with kmid: 0x419f800
Successfully configured /etc/pw-syscall.64bit_kernel with kmid: 0x419f800
The kernel extension was successfuly loaded.

Configuring Asynchronous I/O....
Asynchronous I/O is already defined

Configuring POSIX Asynchronous I/O....
Posix Asynchronous I/O is already defined

Checking if group services should be configured....
Nothing to configure.
{node2:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware #


IMPORTANT extract from:

Oracle Database Release Notes
10g Release 2 (10.2) for AIX 5L Based Systems (64-Bit)
Part Number B19074-03

(These 10g release notes also apply to the CVU shipped with 11g !!!)

http://download-uk.oracle.com/docs/cd/B19306_01/relnotes.102/b19074/toc.htm
Third Party Clusterware

If your deployment environment does not use HACMP, ignore the HACMP version and patches
errors reported by Cluster Verification Utility (CVU). On AIX 5L version 5.2, the expected patch for
HACMP v5.2 is IY60759. On AIX 5L version 5.3, the expected patches for HACMP v5.2 are
IY60759, IY61034, IY61770, and IY62191.

If your deployment environment does not use GPFS, ignore the GPFS version and patches errors
reported by Cluster Verification Utility (CVU). On AIX 5L version 5.2 and version 5.3, the expected
patches for GPFS 2.3.0.3 are IY63969, IY69911, and IY70276.

Check Kernel Parameter Settings


CVU does not check kernel parameter settings.
This issue is tracked with Oracle bug 4565046.
Missing Patch Error Message
When CVU finds a missing patch, it reports an "xxxx patch is unknown" error. This should be read as "xxxx patch is missing".
This issue is tracked with Oracle bug 4566437.
Verify GPFS is Installed
Use the following commands to check for GPFS :
cluvfy stage -pre cfs -n node_list -s storageID_list [-verbose]
cluvfy stage -post cfs -n node_list -f file_system [-verbose]
This issue is tracked with Oracle bug 456039.
Oracle Cluster Verification Utility
Cluster Verification Utility (CVU) is a utility that is distributed with Oracle Clusterware 10g. It was developed to assist in
the installation and configuration of Oracle Clusterware as well as Oracle Real Application Clusters 10g (RAC). The
wide domain of deployment of CVU ranges from initial hardware setup through fully operational cluster for RAC
deployment and covers all the intermediate stages of installation and configuration of various components.
Cluster Verification Utility (CVU) download for all supported RAC 10g platforms will be posted as they become
available.
Cluster Verification Utility Frequently Asked Questions (PDF)
http://www.oracle.com/technology/products/database/clustering/cvu/faq/cvu_faq.pdf
Download CVU for Oracle RAC 10g :
http://www.oracle.com/technology/products/database/clustering/cvu/cvu_download_homepage.html
Download CVU readme file for Oracle RAC 10g on AIX5L :
http://www.oracle.com/technology/products/database/clustering/cvu/readme/AIX_readme.pdf
Subject: Shared disk check with the Cluster Verification Utility Doc ID: Note:372358.1


MAKE SURE you have the unzip tool, or a symbolic link to unzip, in /usr/bin on both nodes.

On node1, as the oracle user, set up and export your TMP, TEMP and TMPDIR variables, pointing to /tmp or any other destination having enough free space:

export TMP=/tmp
export TEMP=/tmp
export TMPDIR=/tmp

{node1:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware/cluvfy # ls
cvupack.zip jrepack.zip
{node1:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware/cluvfy #

Then execute the following script:

./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

If you want the result in a text file, do the following:

./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose > /tmp/cluvfy_oracle.txt
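Because the cluvfy report is long, a small filter of our own (not part of CVU) is handy to list just the summary lines of the checks that failed, assuming you saved the report to /tmp/cluvfy_oracle.txt as shown above:

```shell
# Sketch: print only the "Result: ... failed ..." summary lines of a saved
# cluvfy report (the file name is the one used above; adjust as needed).
failed_checks() {
    grep "^Result:.*failed" "$1"
}

if [ -f /tmp/cluvfy_oracle.txt ]; then
    failed_checks /tmp/cluvfy_oracle.txt
fi
```

Cross-check each reported failure against the notes in this chapter before worrying about it; many of them are expected on an ASM-only configuration.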

And analyze the results :


At this stage, node reachability and user equivalence are checked !!!

Performing pre-checks for cluster services setup

Checking node reachability...

This is equivalent to a hostname ping:

Check: Node reachability from node "node1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  node1                                 yes
  node2                                 yes
Result: Node reachability check passed from node "node1".

Checking user equivalence...

These are ssh or rsh tests for the oracle user (from node1 to node2, node1 to node1, node2 to node1, and node2 to node2):

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  node2                                 passed
  node1                                 passed
Result: User equivalence check passed for user "oracle".

Checking administrative privileges...

These are user and group existence tests on node1 and node2:

Check: Existence of user "oracle"
  Node Name     User Exists               Comment
  ------------  ------------------------  ------------------------
  node1         yes                       passed
  node2         yes                       passed
Result: User existence check passed for "oracle".

Check: Existence of group "oinstall"
  Node Name     Status                    Group ID
  ------------  ------------------------  ------------------------
  node1         exists                    501
  node2         exists                    501
Result: Group existence check passed for "oinstall".

The following message is not a big issue:

Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name     User Exists  Group Exists  User in Group  Primary  Comment
  ------------  -----------  ------------  -------------  -------  ----------
  tbas11b       yes          yes           yes            no       failed
  tbas11a       yes          yes           yes            no       failed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] failed.

Administrative privileges check failed

If instead you get "Result: Group existence check failed for "oinstall"", just create the oinstall group at AIX level, using smitty as root.
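The failed membership check can also be confirmed directly on each node. The helper below is our own sketch (not a CVU command): it prints a user's primary group by joining /etc/passwd against /etc/group, which works on AIX as well as other UNIX systems. If it does not print oinstall for the oracle user, fix it as root (on AIX: `chuser pgrp=oinstall oracle`, after `mkgroup id=501 oinstall` if the group is missing).

```shell
# Sketch: print the primary group of a user by joining /etc/passwd
# (field 4, the gid) against /etc/group (field 3). Not a CVU command.
primary_group() {
    gid=$(awk -F: -v u="$1" '$1 == u { print $4 }' /etc/passwd)
    if [ -n "$gid" ]; then
        awk -F: -v g="$gid" '$3 == g { print $1; exit }' /etc/group
    fi
}

primary_group oracle   # on the cluster nodes this should print "oinstall"
```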

At this stage, node connectivity is checked !!!


Checking node connectivity...

These are the network interface tests on node1 and node2:

Interface information for node "node1"
  Interface Name                  IP Address                      Subnet
  ------------------------------  ------------------------------  ----------------
  en0                             10.3.25.81                      10.3.25.0
  en1                             10.10.25.81                     10.10.25.0
  en2                             20.20.25.81                     20.20.25.0

Interface information for node "node2"
  Interface Name                  IP Address                      Subnet
  ------------------------------  ------------------------------  ----------------
  en0                             10.3.25.82                      10.3.25.0
  en1                             10.10.25.82                     10.10.25.0
  en2                             20.20.25.82                     20.20.25.0

Node connectivity is then checked between node1 and node2, for each subnet / network interface:

Check: Node connectivity of subnet "10.3.25.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node1:en0                       node2:en0                       yes
Result: Node connectivity check passed for subnet "10.3.25.0" with node(s) node1,node2.

Check: Node connectivity of subnet "10.10.25.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node1:en1                       node2:en1                       yes
Result: Node connectivity check passed for subnet "10.10.25.0" with node(s) node1,node2.

Check: Node connectivity of subnet "20.20.25.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node1:en2                       node2:en2                       yes
Result: Node connectivity check passed for subnet "20.20.25.0" with node(s) node1,node2.

CVU then tests which interfaces are suitable for the VIP (public network) and for the private network:

Suitable interfaces for VIP on subnet "20.20.25.0":
node1 en2:20.20.25.81
node2 en2:20.20.25.82

Suitable interfaces for the private interconnect on subnet "10.3.25.0":
node1 en0:10.3.25.81
node2 en0:10.3.25.82

Suitable interfaces for the private interconnect on subnet "10.10.25.0":
node1 en1:10.10.25.81
node2 en1:10.10.25.82

Result: Node connectivity check passed.

Don't worry if you get a message such as:

ERROR: Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.

Just carry on (it will not affect the installation).

"ERROR: Could not find a suitable set of interfaces for VIPs."

This is due to a CVU issue, as explained below:

Metalink Note ID 338924.1
CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs
Per BUG:4437727, cluvfy makes an incorrect assumption based on RFC 1918 that any IP address that begins with any of the following octets is non-routable and hence may not be fit for being used as a VIP: 172.16.x.x, 192.168.x.x, 10.x.x.x. However ...
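What CVU is doing in the subnet checks can be reproduced by hand: two interfaces belong to the same subnet when (IP AND netmask) gives the same network address on both sides. A minimal sketch of ours, assuming the /24 netmask implied by the subnets listed above:

```shell
# Sketch: check that two dotted-quad addresses fall in the same subnet for a
# given netmask, mirroring CVU's node connectivity grouping.
ip_to_int() {
    # convert a dotted-quad address to a single integer
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

same_subnet() {
    # $1, $2: addresses; $3: netmask -- true when the network parts match
    m=$(ip_to_int "$3")
    [ $(( $(ip_to_int "$1") & m )) -eq $(( $(ip_to_int "$2") & m )) ]
}

same_subnet 10.3.25.81 10.3.25.82  255.255.255.0 && echo "en0 pair: same subnet"
same_subnet 10.3.25.81 10.10.25.81 255.255.255.0 || echo "different subnets"
```

This prints "en0 pair: same subnet" for the two en0 addresses, and "different subnets" when comparing en0 against en1.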

11gRAC/ASM/AIX

[email protected]

155 of 393

At this stage, node system requirements for 'crs' are checked !!!

Checking system requirements for 'crs'...

Check: Kernel version  (detecting the AIX release)
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         AIX 5.3                   AIX 5.2                   passed
  node2         AIX 5.3                   AIX 5.2                   passed
Result: Kernel version check passed.

Check: System architecture  (detecting the Power processor)
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         powerpc                   powerpc                   passed
  node2         powerpc                   powerpc                   passed
Result: System architecture check passed.

Check: Total memory
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         2GB (2097152KB)           512MB (524288KB)          passed
  node2         2GB (2097152KB)           512MB (524288KB)          passed
Result: Total memory check passed.

Check: Swap space
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         1024GB (1073741824KB)     1GB (1048576KB)           passed
  node2         1024GB (1073741824KB)     1GB (1048576KB)           passed
Result: Swap space check passed.

Check: Free disk space in "/tmp" dir
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         400.35MB (409960KB)       400MB (409600KB)          passed
  node2         400.35MB (409960KB)       400MB (409600KB)          passed
Result: Free disk space check passed.

Check: Free disk space in "/oracle" dir  (oracle user home directory)
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         4.63GB (4860044KB)        4GB (4194304KB)           passed
  node2         4.63GB (4860044KB)        4GB (4194304KB)           passed
Result: Free disk space check passed.
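The free-space checks are easy to re-run by hand before launching the installer. A sketch of ours (not an Oracle tool), using `df -P` so the column layout is the portable POSIX one with the available KB in field 4:

```shell
# Sketch: verify /tmp has at least the 400 MB (409600 KB) CVU requires.
# "df -P" prints the POSIX column layout, so the available-KB figure is
# field 4 of the data line on AIX as well as on other UNIX systems.
REQUIRED_KB=409600
avail_kb=$(df -Pk /tmp | awk 'NR == 2 { print $4 }')
if [ "$avail_kb" -ge "$REQUIRED_KB" ]; then
    echo "/tmp free space check passed ($avail_kb KB available)"
else
    echo "/tmp free space check failed ($avail_kb KB available)"
fi
```

The same pattern applies to the oracle user home directory check (4 GB, i.e. 4194304 KB, against /oracle in our example).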

Checking system requirements for 'crs'... (continued)


Check: Package existence for "vacpp.cmp.core:7.0.0.2"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           vacpp.cmp.core:6.0.0.0          failed
  node2                           vacpp.cmp.core:6.0.0.0          failed
Result: Package existence check failed for "vacpp.cmp.core:7.0.0.2".

Check: Operating system patch for "IY65361"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         unknown                   IY65361                   failed
  node2         unknown                   IY65361                   failed
Result: Operating system patch check failed for "IY65361".

NOTE: CVU tests the existence of ALL prerequisites needed for ALL implementations (RAC, ASM, GPFS, and HACMP with concurrent raw devices, or cohabitation of ASM or GPFS with HACMP). Check ONLY the requirements necessary for ASM, as explained in the cookbook (chapter 7).

Check: Package existence for "vac.C:7.0.0.2"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           vac.C:7.0.0.2                   passed
  node2                           vac.C:7.0.0.2                   passed
Result: Package existence check passed for "vac.C:7.0.0.2".

Check: Package existence for "xlC.aix50.rte:7.0.0.4"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           xlC.aix50.rte:7.0.0.4           passed
  node2                           xlC.aix50.rte:7.0.0.4           passed
Result: Package existence check passed for "xlC.aix50.rte:7.0.0.4".

NOTE: xlC MUST be at minimum release 7.0.0.1. With release 6, Oracle Clusterware will not start.

Check: Package existence for "xlC.rte:7.0.0.1"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           xlC.rte:7.0.0.1                 passed
  node2                           xlC.rte:7.0.0.1                 passed
Result: Package existence check passed for "xlC.rte:7.0.0.1".
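The fileset-level comparisons in these tables follow plain dotted-version ordering, which is why xlC.aix50.rte 7.0.0.4 satisfies a 7.0.0.1 requirement while vacpp.cmp.core 6.0.0.0 fails a 7.0.0.2 one. A sketch of the comparison (our own helper; on a node you would feed it levels taken from `lslpp -l` output):

```shell
# Sketch: compare two dotted fileset levels, component by component, the
# way the Available/Required columns are judged. Returns 0 if $1 >= $2.
version_ge() {
    awk -v a="$1" -v b="$2" 'BEGIN {
        split(a, x, "."); split(b, y, ".")
        for (i = 1; i <= 4; i++) {
            if (x[i] + 0 > y[i] + 0) exit 0
            if (x[i] + 0 < y[i] + 0) exit 1
        }
        exit 0   # equal levels satisfy the requirement
    }'
}

version_ge 7.0.0.4 7.0.0.1 && echo "xlC.aix50.rte level is sufficient"
version_ge 6.0.0.0 7.0.0.2 || echo "vacpp.cmp.core level is too low"
```

Missing components are treated as 0, so 5.3 compares like 5.3.0.0.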

Checking system requirements for 'crs'... (continued)


Check: Package existence for "gpfs.base:2.3.0.3"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           gpfs.base:2.3.0.5               passed
  node2                           gpfs.base:2.3.0.5               passed
Result: Package existence check passed for "gpfs.base:2.3.0.3".

Check: Operating system patch for "IY63969"
  Node Name     Applied                                                        Required   Comment
  ------------  -------------------------------------------------------------  ---------  ----------
  node1         IY63969:gpfs.base IY63969:gpfs.docs.data IY63969:gpfs.msg.en_US  IY63969    passed
  node2         IY63969:gpfs.base IY63969:gpfs.docs.data IY63969:gpfs.msg.en_US  IY63969    passed
Result: Operating system patch check passed for "IY63969".

Check: Operating system patch for "IY69911"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         IY69911:gpfs.base         IY69911                   passed
  node2         IY69911:gpfs.base         IY69911                   passed
Result: Operating system patch check passed for "IY69911".

Check: Operating system patch for "IY70276"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         IY70276:gpfs.base         IY70276                   passed
  node2         IY70276:gpfs.base         IY70276                   passed
Result: Operating system patch check passed for "IY70276".

Check: Package existence for "cluster.license:5.2.0.0"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           missing                         failed
  node2                           missing                         failed
Result: Package existence check failed for "cluster.license:5.2.0.0".

Check: Operating system patch for "IY60759"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         unknown                   IY60759                   failed
  node2         unknown                   IY60759                   failed
Result: Operating system patch check failed for "IY60759".

Check: Operating system patch for "IY61034"
  Node Name     Applied                           Required   Comment
  ------------  --------------------------------  ---------  ----------
  node1         IY61034:bos.mp IY61034:bos.mp64   IY61034    passed
  node2         IY61034:bos.mp IY61034:bos.mp64   IY61034    passed
Result: Operating system patch check passed for "IY61034".

Checking system requirements for 'crs'... (continued)

Check: Operating system patch for "IY61770"
  Node Name     Applied                                                        Required   Comment
  ------------  -------------------------------------------------------------  ---------  ----------
  node1         IY61770:rsct.basic.rte IY61770:rsct.core.errm IY61770:rsct.core.hostrm IY61770:rsct.core.rmc IY61770:rsct.core.sec IY61770:rsct.core.sensorrm IY61770:rsct.core.utils  IY61770  passed
  node2         IY61770:rsct.basic.rte IY61770:rsct.core.errm IY61770:rsct.core.hostrm IY61770:rsct.core.rmc IY61770:rsct.core.sec IY61770:rsct.core.sensorrm IY61770:rsct.core.utils  IY61770  passed
Result: Operating system patch check passed for "IY61770".

Check: Operating system patch for "IY62191"
  Node Name     Applied                                            Required   Comment
  ------------  -------------------------------------------------  ---------  ----------
  node1         IY62191:bos.adt.prof IY62191:bos.rte.libpthreads   IY62191    passed
  node2         IY62191:bos.adt.prof IY62191:bos.rte.libpthreads   IY62191    passed
Result: Operating system patch check passed for "IY62191".

Check: Package existence for "ElectricFence-2.2.2-1:2.2.2"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           missing                         failed
  node2                           missing                         failed
Result: Package existence check failed for "ElectricFence-2.2.2-1:2.2.2".

Check: Package existence for "xlfrte:9.1"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           xlfrte:8.1.1.4                  failed
  node2                           xlfrte:8.1.1.4                  failed
Result: Package existence check failed for "xlfrte:9.1".

Check: Package existence for "gdb-6.0-1:6.0"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           missing                         failed
  node2                           missing                         failed
Result: Package existence check failed for "gdb-6.0-1:6.0".

Check: Package existence for "make-3.80-1:3.80"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           missing                         failed
  node2                           missing                         failed
Result: Package existence check failed for "make-3.80-1:3.80".

Check: Package existence for "freeware.gnu.tar.rte:1.13.0.0"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           missing                         failed
  node2                           missing                         failed
Result: Package existence check failed for "freeware.gnu.tar.rte:1.13.0.0".

Check: Package existence for "Java14_64.sdk:1.4.2.1"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           missing                         failed
  node2                           missing                         failed
Result: Package existence check failed for "Java14_64.sdk:1.4.2.1".

Check: Package existence for "Java131.rte.bin:1.3.1.16"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           missing                         failed
  node2                           missing                         failed
Result: Package existence check failed for "Java131.rte.bin:1.3.1.16".

Checking system requirements for 'crs'... (continued)

Check: Package existence for "Java14.sdk:1.4.2.2"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           Java14.sdk:1.4.2.10             failed
  node2                           Java14.sdk:1.4.2.10             failed
Result: Package existence check failed for "Java14.sdk:1.4.2.2".

Check: Operating system patch for "IY65305"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         IY65305:Java14.sdk        IY65305                   passed
  node2         IY65305:Java14.sdk        IY65305                   passed
Result: Operating system patch check passed for "IY65305".

Check: Operating system patch for "IY58350"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         unknown                   IY58350                   failed
  node2         unknown                   IY58350                   failed
Result: Operating system patch check failed for "IY58350".

Check: Operating system patch for "IY63533"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         unknown                   IY63533                   failed
  node2         unknown                   IY63533                   failed
Result: Operating system patch check failed for "IY63533".

Check: Package existence for "mqm.server.rte:5.3"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           missing                         failed
  node2                           missing                         failed
Result: Package existence check failed for "mqm.server.rte:5.3".

Check: Package existence for "mqm.client.rte:5.3"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           missing                         failed
  node2                           missing                         failed
Result: Package existence check failed for "mqm.client.rte:5.3".

Check: Package existence for "sna.rte:6.1.0.4"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           missing                         failed
  node2                           missing                         failed
Result: Package existence check failed for "sna.rte:6.1.0.4".

Check: Package existence for "bos.net.tcp.server"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           bos.net.tcp.server:5.3.0.30     passed
  node2                           bos.net.tcp.server:5.3.0.30     passed
Result: Package existence check passed for "bos.net.tcp.server".

Check: Operating system patch for "IY44599"
  Node Name     Applied                   Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1         unknown                   IY44599                   failed
  node2         unknown                   IY44599                   failed
Result: Operating system patch check failed for "IY44599".

Checking system requirements for 'crs'... (continued)

Check: Operating system patch for "IY60930"
  Node Name     Applied                           Required   Comment
  ------------  --------------------------------  ---------  ----------
  node1         IY60930:bos.mp IY60930:bos.mp64   IY60930    passed
  node2         IY60930:bos.mp IY60930:bos.mp64   IY60930    passed
Result: Operating system patch check passed for "IY60930".

Check: Operating system patch for "IY58143"
  Node Name     Applied                                                        Required   Comment
  ------------  -------------------------------------------------------------  ---------  ----------
  node1         IY58143:X11.Dt.lib IY58143:X11.base.rte IY58143:bos.acct IY58143:bos.adt.include IY58143:bos.adt.libm IY58143:bos.adt.prof IY58143:bos.alt_disk_install.rte IY58143:bos.diag.com IY58143:bos.mp64 IY58143:bos.mp IY58143:bos.net.ewlm.rte IY58143:bos.net.ipsec.keymgt IY58143:bos.net.ipsec.rte IY58143:bos.net.nfs.client IY58143:bos.net.nfs.server IY58143:bos.net.tcp.client IY58143:bos.net.tcp.server IY58143:bos.net.tcp.smit IY58143:bos.perf.libperfstat IY58143:bos.perf.perfstat IY58143:bos.perf.tools IY58143:bos.rte.archive IY58143:bos.rte.bind_cmds IY58143:bos.rte.boot IY58143:bos.rte.control IY58143:bos.rte.filesystem IY58143:bos.rte.install IY58143:bos.rte.libc IY58143:bos.rte.lvm IY58143:bos.rte.man IY58143:bos.rte.methods IY58143:bos.rte.security IY58143:bos.rte.serv_aid IY58143:bos.sysmgt.nim.client IY58143:bos.sysmgt.quota IY58143:bos.sysmgt.serv_aid IY58143:bos.sysmgt.sysbr IY58143:devices.chrp.base.rte IY58143:devices.chrp.pci.rte IY58143:devices.chrp.vdevice.rte IY58143:devices.common.IBM.atm.rte IY58143:devices.common.IBM.ethernet.rte IY58143:devices.common.IBM.fc.rte IY58143:devices.common.IBM.fda.diag IY58143:devices.common.IBM.mpio.rte IY58143:devices.fcp.disk.rte IY58143:devices.pci.00100f00.rte IY58143:devices.pci.14100401.diag IY58143:devices.pci.14103302.rte IY58143:devices.pci.14106602.rte IY58143:devices.pci.14106902.rte IY58143:devices.pci.14107802.rte IY58143:devices.pci.1410ff01.rte IY58143:devices.pci.22106474.rte IY58143:devices.pci.33103500.rte IY58143:devices.pci.4f111100.com IY58143:devices.pci.77101223.com IY58143:devices.pci.99172704.rte IY58143:devices.pci.c1110358.rte IY58143:devices.pci.df1000f7.com IY58143:devices.pci.df1000f7.diag IY58143:devices.pci.df1000fa.rte IY58143:devices.pci.e414a816.rte IY58143:devices.scsi.disk.rte IY58143:devices.vdevice.IBM.l-lan.rte IY58143:devices.vdevice.IBM.vscsi.rte IY58143:devices.vdevice.hvterm1.rte IY58143:devices.vtdev.scsi.rte IY58143:sysmgt.websm.apps IY58143:sysmgt.websm.framework IY58143:sysmgt.websm.rte IY58143:sysmgt.websm.webaccess  IY58143  passed
  node2         (same fileset list as node1)                                    IY58143    passed
Result: Operating system patch check passed for "IY58143".

Check: Operating system patch for "IY66513"
  Node Name     Applied                           Required   Comment
  ------------  --------------------------------  ---------  ----------
  node1         IY66513:bos.mp IY66513:bos.mp64   IY66513    passed
  node2         IY66513:bos.mp IY66513:bos.mp64   IY66513    passed
Result: Operating system patch check passed for "IY66513".

Check: Operating system patch for "IY70159"
  Node Name     Applied                           Required   Comment
  ------------  --------------------------------  ---------  ----------
  node1         IY70159:bos.mp IY70159:bos.mp64   IY70159    passed
  node2         IY70159:bos.mp IY70159:bos.mp64   IY70159    passed
Result: Operating system patch check passed for "IY70159".
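In raw cluvfy reports the Applied column may run the per-fileset entries together without separators (e.g. `IY62191:bos.adt.profIY62191:bos.rte.libpthreads`). A tiny filter of our own (not a CVU tool) splits such a string into one entry per line for easier reading:

```shell
# Sketch: insert a line break before each "IY<digit>" APAR tag that directly
# follows another non-space character, turning a run-together Applied column
# value into a readable one-entry-per-line list.
split_apars() {
    printf '%s\n' "$1" | sed 's/\([^ ]\)\(IY[0-9]\)/\1\
\2/g'
}

split_apars "IY62191:bos.adt.profIY62191:bos.rte.libpthreads"
```

This prints the two IY62191 entries on separate lines; you can then cross-check each APAR on AIX with `instfix -ik <APAR>`.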

Checking system requirements for 'crs'... (continued)

Check: Operating system patch for "IY59386"
  Node Name     Applied                     Required   Comment
  ------------  --------------------------  ---------  ----------
  node1         IY59386:bos.rte.bind_cmds   IY59386    passed
  node2         IY59386:bos.rte.bind_cmds   IY59386    passed
Result: Operating system patch check passed for "IY59386".

Check: Package existence for "bos.adt.base"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  node1                           bos.adt.base:5.3.0.30           passed
  node2                           bos.adt.base:5.3.0.30           passed
Result: Package existence check passed for "bos.adt.base".


Check: Package existence for "bos.adt.lib"
Node Name
Status
Comment
------------------------------ ------------------------------ ---------------node1
bos.adt.lib:5.3.0.30
passed
node2
bos.adt.lib:5.3.0.30
passed
Result: Package existence check passed for "bos.adt.lib".
Check: Package existence for "bos.adt.libm"
Node Name
Status
Comment
------------------------------ ------------------------------ ---------------node1
bos.adt.libm:5.3.0.30
passed
node2
bos.adt.libm:5.3.0.30
passed
Result: Package existence check passed for "bos.adt.libm".
Check: Package existence for "bos.perf.libperfstat"
Node Name
Status
Comment
------------------------------ ------------------------------ ---------------node1
bos.perf.libperfstat:5.3.0.30 passed
node2
bos.perf.libperfstat:5.3.0.30 passed
Result: Package existence check passed for "bos.perf.libperfstat".
Check: Package existence for "bos.perf.perfstat"
Node Name
Status
Comment
------------------------------ ------------------------------ ---------------node1
bos.perf.perfstat:5.3.0.30
passed
node2
bos.perf.perfstat:5.3.0.30
passed
Result: Package existence check passed for "bos.perf.perfstat".
Check: Package existence for "bos.perf.proctools"
  Node Name     Status                       Comment
  ------------  ---------------------------  ----------
  node1         bos.perf.proctools:5.3.0.30  passed
  node2         bos.perf.proctools:5.3.0.30  passed
Result: Package existence check passed for "bos.perf.proctools".
Check: Package existence for "rsct.basic.rte"
  Node Name     Status                     Comment
  ------------  -------------------------  ----------
  node1         rsct.basic.rte:2.4.3.0     passed
  node2         rsct.basic.rte:2.4.3.0     passed
Result: Package existence check passed for "rsct.basic.rte".
Check: Package existence for "perl.rte:5.0005"
  Node Name     Status                     Comment
  ------------  -------------------------  ----------
  node1         perl.rte:5.8.2.30          passed
  node2         perl.rte:5.8.2.30          passed
Result: Package existence check passed for "perl.rte:5.0005".
Check: Package existence for "perl.rte:5.6"
  Node Name     Status                     Comment
  ------------  -------------------------  ----------
  node1         perl.rte:5.8.2.30          passed
  node2         perl.rte:5.8.2.30          passed
Result: Package existence check passed for "perl.rte:5.6".

Check: Package existence for "perl.rte:5.8"
  Node Name     Status                     Comment
  ------------  -------------------------  ----------
  node1         perl.rte:5.8.2.30          passed
  node2         perl.rte:5.8.2.30          passed
Result: Package existence check passed for "perl.rte:5.8".
Check: Package existence for "python-2.2-4:2.2"
  Node Name     Status                     Comment
  ------------  -------------------------  ----------
  node1         missing                    failed
  node2         missing                    failed
Result: Package existence check failed for "python-2.2-4:2.2".
Check: Package existence for "freeware.zip.rte:2.3"
  Node Name     Status                     Comment
  ------------  -------------------------  ----------
  node1         missing                    failed
  node2         missing                    failed
Result: Package existence check failed for "freeware.zip.rte:2.3".
Check: Package existence for "freeware.gcc.rte:3.3.2.0"
  Node Name     Status                     Comment
  ------------  -------------------------  ----------
  node1         missing                    failed
  node2         missing                    failed
Result: Package existence check failed for "freeware.gcc.rte:3.3.2.0".
Check: Group existence for "dba"
  Node Name     Status                     Comment
  ------------  -------------------------  ----------
  node1         exists                     passed
  node2         exists                     passed
Result: Group existence check passed for "dba".
Check: User existence for "nobody"
  Node Name     Status                     Comment
  ------------  -------------------------  ----------
  node1         exists                     passed
  node2         exists                     passed
Result: User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was unsuccessful on all the nodes.

Don't worry about "Pre-check for cluster services setup was unsuccessful on all the nodes."
This is a normal message, as we do not want all of the APARs and filesets to be installed.
12.2 Installation

This installation has to be done from one node only. Once the first node is installed, the Oracle OUI automatically starts copying the mandatory files to the other nodes, using the rcp command. This step should not last long; in any case, don't assume the OUI has stalled: look at the network traffic before canceling the installation!

As root user on each node, DO create a symbolic link from /usr/sbin/lsattr to /etc/lsattr:
ln -s /usr/sbin/lsattr /etc/lsattr
/etc/lsattr is used in the VIP check action.
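If the installation is rerun, a plain `ln -s` fails once the link already exists. A minimal idempotent wrapper avoids that; `ensure_link` is a hypothetical helper (not part of the Oracle tooling), demonstrated here on temporary paths, the real call on each node being `ensure_link /usr/sbin/lsattr /etc/lsattr`:

```shell
# ensure_link SRC DST: create the symlink only when missing; no-op if already correct.
ensure_link() {
  src=$1 dst=$2
  if [ -L "$dst" ] && [ "$(ls -l "$dst" | awk '{print $NF}')" = "$src" ]; then
    return 0                                   # link already points at src
  fi
  if [ -e "$dst" ]; then
    echo "$dst exists and is not the expected link" >&2
    return 1                                   # refuse to clobber a real file
  fi
  ln -s "$src" "$dst"
}

# demonstration on a temporary path; on AIX run: ensure_link /usr/sbin/lsattr /etc/lsattr
tmpdir=$(mktemp -d)
ensure_link /usr/sbin/lsattr "$tmpdir/lsattr"
ensure_link /usr/sbin/lsattr "$tmpdir/lsattr"   # second run changes nothing
```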

On each node:

Run the AIX command "/usr/sbin/slibclean" as "root" to clean all unreferenced libraries from memory !!!

{node1:root}/ # /usr/sbin/slibclean
{node1:root}/ #

Then the same on node2.

From the first node, as root user, under a VNC client session or other graphical interface, execute:

{node1:root}/ # xhost +
access control disabled, clients can connect from any hosts
{node1:root}/ #

On each node, set the right ownership and permissions on the following directories:

{node1:root}/ # chown crs:oinstall /crs
{node1:root}/ # chmod 665 /crs
{node1:root}/ # chown oracle:oinstall /oracle
{node1:root}/ # chmod 665 /oracle

Login as crs or oracle user (crs in our case) and follow the procedure hereunder.

Set up and export your DISPLAY, TMP and TEMP variables, with /tmp or another destination having enough free space, about 500 MB on each node:

{node1:crs}/ # export DISPLAY=node1:1
{node1:crs}/ # export TMP=/tmp
{node1:crs}/ # export TEMP=/tmp
{node1:crs}/ # export TMPDIR=/tmp

If AIX 5L release 5.3 is used, modify the files oraparam.ini and cluster.ini in Disk1/installer: update the entries AIX5200 to AIX5300 in both files, then execute:

$ /<cdrom_mount_point>/runInstaller

Or execute: ./runInstaller -ignoreSysPrereqs
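The note above asks for about 500 MB free in the TMP location on each node. On AIX, `df -m` reports free megabytes in the third column; the parser below is a sketch of that check (the sample output layout is an assumption about the AIX `df -m` format), demonstrated on canned output since it may not be run on AIX here:

```shell
# free_mb: print the whole free megabytes from `df -m <fs>` output
# (AIX layout, where "Free" is column 3 of the data line).
free_mb() {
  awk 'NR==2 {print int($3)}'
}

# illustrative AIX `df -m /tmp` output; on a node you would run: df -m /tmp | free_mb
sample='Filesystem    MB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd3        2048.00   1219.50   41%     1523     1% /tmp'

free=$(printf '%s\n' "$sample" | free_mb)
if [ "$free" -ge 500 ]; then
  echo "/tmp: ${free} MB free, enough for the OUI"
else
  echo "/tmp: only ${free} MB free, choose another TMP/TEMP location"
fi
```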


Make sure to make an NFS mount of the CRS Disk1 on the other nodes, or remote copy the files to the other nodes, THEN run rootpre.sh on each node before you click to the next step (if not done yet with the CVU).

{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ls
cluvfy  doc  install  response  rootpre  rootpre.sh  rpm  runcluvfy.sh  runInstaller  stage  upgrade  welcome.html
{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware #
{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ./runInstaller
********************************************************************************
Your platform requires the root user to perform certain pre-installation
OS preparation. The root user should run the shell script 'rootpre.sh' before
you proceed with Oracle installation. rootpre.sh can be found at the top level
of the CD or the stage area.

Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle
installation.
Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.
********************************************************************************
Has 'rootpre.sh' been run by root? [y/n] (n)

Starting Oracle Universal Installer...


Checking Temp space: must be greater than 190 MB. Actual 1219 MB Passed
Checking swap space: must be greater than 150 MB. Actual 512 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-02-07_11-04-35AM. Please wait
...{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # Oracle Universal Installer, Version 11.1.0.6.0
Production
Copyright (C) 1999, 2007, Oracle. All rights reserved.

At the OUI Welcome screen:

You can check whether any Oracle product is already installed by clicking on Installed Products.

Just click Next ...


By default, the Oracle Universal Installer may try to create the /oraInventory directory, but the OUI has no rights to do so; you may then get this message. Just click OK to modify the inventory location destination.

Specify Inventory directory and credentials:
o Specify where you want to create the inventory directory: /oracle/oraInventory
o The operating system group name should be set to oinstall.

Then click Next ... to continue ...

Specify Home Details:
Specify an ORACLE_HOME name and a destination directory for the CRS installation. The destination directory should be outside of $ORACLE_BASE.
Add Product Languages if necessary.
Then click Next ...


Product-Specific Prerequisite Checks:

All checks should be in status Succeeded.
Then click Next ...

This step will:
o Check operating system requirements ...
o Check operating system package requirements ...
o Check recommended operating system patches ...
o Check physical memory requirements ...
o Check for Oracle Home incompatibilities ...
o Check Oracle Home path for spaces ...
o Check Cluster files ...
o Check kernel ...
o Check uid/gid ...
o Check maximum command line length argument, ncargs ...
o Check local Cluster Synchronization Services (CSS) status ...
o Check whether Oracle 9.2 RAC is available on all selected nodes
o Check Oracle 9i OCR partition size ...


Details of the prerequisite checks done by runInstaller:

Checking operating system requirements ...
Expected result: One of 5300.05,6100.00
Actual Result: 5300.07
Check complete. The overall result of this check is: Passed
========================================================
Checking operating system package requirements ...
Checking for bos.adt.base(0.0); found bos.adt.base(5.3.7.0). Passed
Checking for bos.adt.lib(0.0); found bos.adt.lib(5.3.0.60). Passed
Checking for bos.adt.libm(0.0); found bos.adt.libm(5.3.7.0). Passed
Checking for bos.perf.libperfstat(0.0); found bos.perf.libperfstat(5.3.7.0). Passed
Checking for bos.perf.perfstat(0.0); found bos.perf.perfstat(5.3.7.0). Passed
Checking for bos.perf.proctools(0.0); found bos.perf.proctools(5.3.7.0). Passed
Checking for rsct.basic.rte(0.0); found rsct.basic.rte(2.4.8.0). Passed
Checking for rsct.compat.clients.rte(0.0); found rsct.compat.clients.rte(2.4.8.0). Passed
Checking for bos.mp64(5.3.0.56); found bos.mp64(5.3.7.1). Passed
Checking for bos.rte.libc(5.3.0.55); found bos.rte.libc(5.3.7.1). Passed
Checking for xlC.aix50.rte(8.0.0.7); found xlC.aix50.rte(9.0.0.1). Passed
Checking for xlC.rte(8.0.0.7); found xlC.rte(9.0.0.1). Passed
Check complete. The overall result of this check is: Passed
========================================================

Checking recommended operating system patches


Checking for IY89080(bos.rte.aio,5.3.0.51); found (bos.rte.aio,5.3.7.0). Passed
Checking for IY92037(bos.rte.aio,5.3.0.52); found (bos.rte.aio,5.3.7.0). Passed
Checking for IY94343(bos.rte.lvm,5.3.0.55); found (bos.rte.lvm,5.3.7.0). Passed
Check complete. The overall result of this check is: Passed
========================================================
Checking physical memory requirements ...
Expected result: 922MB
Actual Result: 2048MB
Check complete. The overall result of this check is: Passed
========================================================
Checking for Oracle Home incompatibilities ....
Actual Result: NEW_HOME
Check complete. The overall result of this check is: Passed
========================================================
Checking Oracle Home path for spaces...
Check complete. The overall result of this check is: Passed
========================================================
Checking Cluster files...
Check complete. The overall result of this check is: Passed
========================================================
Checking kernel...
Check complete. The overall result of this check is: Passed
========================================================
Checking uid/gid...
Check complete. The overall result of this check is: Passed
========================================================
Checking maxmimum command line length argument, ncarg...
Check complete. The overall result of this check is: Passed
========================================================
Checking local Cluster Synchronization Services (CSS) status ...
Check complete. The overall result of this check is: Passed
========================================================
Checking whether Oracle 9.2 RAC is available on all selected nodes
Check complete. The overall result of this check is: Passed
========================================================
Checking Oracle 9i OCR partition size ...
Check complete. The overall result of this check is: Passed
========================================================


Just to remember !!! Public, Private, and Virtual Host Name layout:

  Network    Host Type          Defined    Assigned IP                    Observation
  Interface                     Name
  ---------  -----------------  ---------  -----------------------------  --------------------------
  en0        Public Hostname    node1      10.3.25.81  (Public Network)   RAC Public node name
  en0        Virtual Hostname   node1-vip  10.3.25.181 (Public Network)   RAC VIP node name
  en1        Private Hostname   node1-rac  10.10.25.81 (Private Network)  RAC Interconnect node name
  en0        Public Hostname    node2      10.3.25.82  (Public Network)   RAC Public node name
  en0        Virtual Hostname   node2-vip  10.3.25.182 (Public Network)   RAC VIP node name
  en1        Private Hostname   node2-rac  10.10.25.82 (Private Network)  RAC Interconnect node name
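This name and IP layout would typically be backed by /etc/hosts entries like the following on every node (addresses and names are this example's; adapt them to your site):

```
# public network (en0)
10.3.25.81     node1
10.3.25.82     node2
# virtual IPs, on the public subnet, managed by Oracle Clusterware
10.3.25.181    node1-vip
10.3.25.182    node2-vip
# private interconnect (en1)
10.10.25.81    node1-rac
10.10.25.82    node2-rac
```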

Specify Cluster configuration:

You must:
o Give a unique name to the cluster: crs_cluster
o Edit the default node to validate the entries
o Add the entries for any new node (node2 in our case)

For each cluster node, you must specify, one by one:

o Public Node Name: corresponds to the IP address linked to the public network, usually the IP linked to the hostname.
    Public Node Name = node1
    Public Node Name = node2

o Private Node Name: corresponds to the IP address linked to the RAC interconnect.
    Private Node Name = node1-rac
    Private Node Name = node2-rac

o Virtual Host Name: corresponds to a free, reserved IP address on the public network, to be used for the Oracle Clusterware VIP.
    Virtual Host Name = node1-vip
    Virtual Host Name = node2-vip

When done !!! click Next ...

If you have problems at this stage when clicking on Next (error messages), check your network configuration and user equivalence.

Just to remember !!! Public, Private, and Virtual Host Name layout:

  Network    Host Type          Defined    Assigned IP                    SUBNET      Observation
  Interface                     Name
  ---------  -----------------  ---------  -----------------------------  ----------  --------------------------
  en0        Public Hostname    node1      10.3.25.81  (Public Network)   10.3.25.0   RAC Public node name
  en0        Virtual Hostname   node1-vip  10.3.25.181 (Public Network)   10.3.25.0   RAC VIP node name
  en1        Private Hostname   node1-rac  10.10.25.81 (Private Network)  10.10.25.0  RAC Interconnect node name
  en0        Public Hostname    node2      10.3.25.82  (Public Network)   10.3.25.0   RAC Public node name
  en0        Virtual Hostname   node2-vip  10.3.25.182 (Public Network)   10.3.25.0   RAC VIP node name
  en1        Private Hostname   node2-rac  10.10.25.82 (Private Network)  10.10.25.0  RAC Interconnect node name
Specify Network Interface Usage:

For each entry (en0, en1, en2), click Edit to specify the Interface Type: which network card corresponds to the public network, and which one corresponds to the private network (RAC interconnect).

In our example, with or without a RAC interconnect backup implementation:
o en0 (10.3.25.0) must exist as Public on each node.
o en1 (10.10.25.0) must exist as Private on each node.
o en2 (20.20.25.0): Do Not Use.
o Any other en?: Do Not Use.

Check, and then click Next ...

At this stage, you must have already configured the shared storage, at least for:

o 1 Oracle Cluster Registry disk and 1 Voting disk, with the OCR and Voting disks protected by an external mechanism (no Oracle Clusterware redundancy mechanism):
    /dev/ocr_disk1
    /dev/voting_disk1

OR

o 2 Oracle Cluster Registry disks (OCR mirroring) and 3 Voting disks (Voting redundancy), with the Oracle Clusterware redundancy mechanism for the OCR and Voting disks:
    /dev/ocr_disk1
    /dev/ocr_disk2
    /dev/voting_disk1
    /dev/voting_disk2
    /dev/voting_disk3

In our example, we will implement 2 Oracle Cluster Registry disks and 3 Voting disks.
ALL virtual devices must be reachable by all nodes participating in the RAC cluster in concurrent mode.
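Before reaching the OCR and Voting screens, it can save time to confirm that the expected device files exist on every node. The sketch below uses a hypothetical helper and this example's device names; it is demonstrated with temporary stand-in files, since the real /dev entries only exist on the cluster nodes:

```shell
# missing_devices PATH...: print each path that does not exist.
missing_devices() {
  for d in "$@"; do
    [ -e "$d" ] || echo "$d"
  done
}

# real call on a node:
#   missing_devices /dev/ocr_disk1 /dev/ocr_disk2 /dev/voting_disk1 /dev/voting_disk2 /dev/voting_disk3
# demonstration with stand-in files:
tmpdir=$(mktemp -d)
touch "$tmpdir/ocr_disk1" "$tmpdir/voting_disk1"
missing_devices "$tmpdir/ocr_disk1" "$tmpdir/ocr_disk2" "$tmpdir/voting_disk1"
```

An empty result means every listed device is present on that node.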

Specify OCR Configuration, by selecting:

o External Redundancy (no OCR mirroring by Oracle Clusterware; protection should be provided by other options: disk management, etc.). If External Redundancy is selected, specify the raw disk location as follows:
    /dev/ocr_disk1

OR

o Normal Redundancy (OCR mirroring by Oracle Clusterware). In our case, we will specify NORMAL Redundancy, as described below. If Normal Redundancy is selected, specify the raw disk locations as follows:
    /dev/ocr_disk1
    /dev/ocr_disk2

Specify the OCR location: this must be a shared location on the shared storage, reachable from all nodes, and you must have read/write permissions on this shared location from all nodes.
Then click Next ...
If problems happen at this stage, do verify that the location specified does exist and is reachable from each AIX node, with the right read/write access and user/group owner.

Specify Voting Disk Configuration, by selecting:

o External Redundancy (no Voting copies managed by Oracle Clusterware; protection should be provided by other options: disk management, etc.). If External Redundancy is selected, specify the raw disk location as follows:
    /dev/voting_disk1

OR

o Normal Redundancy (Voting copies managed by Oracle Clusterware). In our case, we will specify Normal Redundancy, as described below. If Normal Redundancy is selected, specify the raw disk locations as follows:
    /dev/voting_disk1
    /dev/voting_disk2
    /dev/voting_disk3

Specify the Voting Disk location: this must be a shared location on the shared storage, reachable from all nodes, and you must have read/write permissions on this shared location from all nodes.
Then click Next ...
If problems happen at this stage, do verify that the location specified does exist and is reachable from each AIX node, with the right read/write access and user/group owner.

Summary:
Check the Cluster Nodes and Remote Nodes lists. The OUI will install the Oracle CRS software onto the local node, and then copy this information to the other selected nodes.
Check that all nodes are listed in Cluster Nodes.
Then click Install ...

Install:
The Oracle Universal Installer will proceed with the installation on the first node, then will automatically copy the code onto the other selected nodes.
Just wait for the next screen ...


Execute Configuration Scripts:
KEEP THIS WINDOW OPEN, and do execute the scripts in the following order, waiting for each one to succeed before running the next one !!!
AS root:
1. On node1, execute orainstRoot.sh
2. On node2, execute orainstRoot.sh
3. On node1, execute root.sh
4. On node2, execute root.sh

orainstRoot.sh:
Execute orainstRoot.sh on all nodes. The file is located in $ORACLE_BASE/oraInventory (OraInventory home) on each node, /oracle/oraInventory in our case.

On node1 as root, execute /oracle/oraInventory/orainstRoot.sh:

{node1:root}/oracle/oraInventory # ./orainstRoot.sh
Changing permissions of /oracle/oraInventory to 770.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete
{node1:root}/oracle/oraInventory #

THEN on node2 as root, execute /oracle/oraInventory/orainstRoot.sh:

{node2:root}/oracle/oraInventory # ./orainstRoot.sh
Changing permissions of /oracle/oraInventory to 770.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete
{node2:root}/oracle/oraInventory #


Before running the root.sh script on each node, please check the following:
Check whether your public network card (en0 in our case) is a standard network adapter or a virtual Ethernet adapter. Issue the following command as root:

{node1:root} / -> entstat -d en0

You should get the following if en0 is a normal network interface, on each node (node1, node2):

-------------------------------------------------------------
ETHERNET STATISTICS (en0) :
Device Type: 2-Port Gigabit Ethernet-SX PCI-X Adapter (14108802)
Hardware Address: 00:09:6b:ee:61:fc
Elapsed Time: 0 days 20 hours 3 minutes 49 seconds

Transmit Statistics:
--------------------
Packets: 0
Bytes: 0
Interrupts: 0
Transmit Errors: 0
Packets Dropped: 0
Max Packets on S/W Transmit Queue: 1
S/W Transmit Queue Overflow: 0
Current S/W+H/W Transmit Queue Length: 1
Broadcast Packets: 0
Multicast Packets: 0
No Carrier Sense: 0
DMA Underrun: 0
Lost CTS Errors: 0
Max Collision Errors: 0
Late Collision Errors: 0
Deferred: 0
SQE Test: 0
Timeout Errors: 0
Single Collision Count: 0
Multiple Collision Count: 0
Current HW Transmit Queue Length: 1

Receive Statistics:
-------------------
Packets: 0
Bytes: 0
Interrupts: 0
Receive Errors: 0
Packets Dropped: 0
Bad Packets: 0
Broadcast Packets: 0
Multicast Packets: 0
CRC Errors: 0
DMA Overrun: 0
Alignment Errors: 0
No Resource Errors: 0
Receive Collision Errors: 0
Packet Too Short Errors: 0
Packet Too Long Errors: 0
Packets Discarded by Adapter: 0
Receiver Start Count: 0

General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 2000
Driver Flags: Up Broadcast Simplex
        Limbo 64BitSupport ChecksumOffload
        PrivateSegment LargeSend DataRateSet

2-Port Gigabit Ethernet-SX PCI-X Adapter (14108802) Specific Statistics:
------------------------------------------------------------------------
Link Status : Up
Media Speed Selected: Auto negotiation
Media Speed Running: Unknown
PCI Mode: PCI-X (100-133)
PCI Bus Width: 64-bit
Latency Timer: 144
Cache Line Size: 128
Jumbo Frames: Disabled
TCP Segmentation Offload: Enabled
TCP Segmentation Offload Packets Transmitted: 0
TCP Segmentation Offload Packet Errors: 0
Transmit and Receive Flow Control Status: Disabled
Transmit and Receive Flow Control Threshold (High): 45056
Transmit and Receive Flow Control Threshold (Low): 24576
Transmit and Receive Storage Allocation (TX/RX): 16/48

If en0 (the public network interface) is a normal network interface, you can check just the link status:

{node1:root}/ # entstat -d en0 | grep -iE ".*link.*status.*:.*up.*"
Link Status : Up
{node1:root}/ #

If so, go to the next step and execute the root.sh script.

OR, you should get the following if en0 is a virtual network interface:

{node1:root} / -> entstat -d en0

ETHERNET STATISTICS (en0) :
Device Type: Virtual I/O Ethernet Adapter (l-lan)
Hardware Address: ee:51:60:00:10:02
Elapsed Time: 0 days 20 hours 3 minutes 24 seconds

Transmit Statistics:
--------------------
Packets: 136156
Bytes: 19505561
Interrupts: 0
Transmit Errors: 0
Packets Dropped: 0
Max Packets on S/W Transmit Queue: 0
S/W Transmit Queue Overflow: 0
Current S/W+H/W Transmit Queue Length: 0
Broadcast Packets: 208
Multicast Packets: 2
No Carrier Sense: 0
DMA Underrun: 0
Lost CTS Errors: 0
Max Collision Errors: 0
Late Collision Errors: 0
Deferred: 0
SQE Test: 0
Timeout Errors: 0
Single Collision Count: 0
Multiple Collision Count: 0
Current HW Transmit Queue Length: 0

Receive Statistics:
-------------------
Packets: 319492
Bytes: 142069339
Interrupts: 285222
Receive Errors: 0
Packets Dropped: 0
Bad Packets: 0
Broadcast Packets: 241831
Multicast Packets: 0
CRC Errors: 0
DMA Overrun: 0
Alignment Errors: 0
No Resource Errors: 0
Receive Collision Errors: 0
Packet Too Short Errors: 0
Packet Too Long Errors: 0
Packets Discarded by Adapter: 0
Receiver Start Count: 0

General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 20000
Driver Flags: Up Broadcast Running
        Simplex 64BitSupport DataRateSet

Virtual I/O Ethernet Adapter (l-lan) Specific Statistics:
---------------------------------------------------------
RQ Length: 4481
No Copy Buffers: 0
Trunk Adapter: False
Filter MCast Mode: False
Filters: 255 Enabled: 1 Queued: 0 Overflow: 0
LAN State: Operational

Buffers        Reg   Alloc   Min    Max    MaxA   LowReg
  tiny         512   512     512    2048   512    509
  small        512   512     512    2048   553    502
  medium       128   128     128    256    128    128
  large        24    24      24     64     24     24
  huge         24    24      24     64     24     24

If en0 is a virtual network interface, don't worry: the 11g racgvip script has been updated to take care of this case.

To check just the LAN state:

{node1:root}/ # entstat -d en0 | grep -iE ".*lan state:.*operational.*"
LAN State: Operational
{node1:root}/ #
In the $CRS_HOME/bin/racgvip script, you will have to modify the following:

$ENTSTAT -d $_IF | $GREP -iEq ".*link.*status.*:.*up.*"

to be replaced by:

$ENTSTAT -d $_IF | $GREP -iEq ".*lan state:.*operational.*"

With 10g RAC this was known as Bug 4437469: RACGVIP NOT WORKING ON SERVER WITH ETHERNET VIRTUALISATION (this was only required if using AIX virtual interfaces for the Oracle Database 10.2.0.1 RAC public network). It is fixed since 10g RAC release 2 patchset 10.2.0.3.
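The two grep patterns used in this guide can be wrapped in a small classifier that reads entstat output on stdin and reports which case applies. `classify_if` is a hypothetical helper, demonstrated on the sample lines quoted above, since entstat itself only exists on AIX:

```shell
# classify_if: read `entstat -d <if>` output on stdin and report the adapter case,
# using the physical pattern ("Link Status : Up") and the virtual pattern
# ("LAN State: Operational") from this guide.
classify_if() {
  out=$(cat)
  if printf '%s\n' "$out" | grep -iEq "link.*status.*:.*up"; then
    echo "physical adapter, link up"
  elif printf '%s\n' "$out" | grep -iEq "lan state:.*operational"; then
    echo "virtual adapter, operational"
  else
    echo "interface not up or unrecognized output"
  fi
}

# on AIX:  entstat -d en0 | classify_if
printf 'Link Status : Up\n' | classify_if        # physical case
printf 'LAN State: Operational\n' | classify_if  # virtual case
```

The "virtual adapter" result is the case where racgvip needs the modified pattern described above.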


At this stage, you should execute the root.sh script:
Start with node 1 and wait for the result before executing it on node 2.
This file is located in the $CRS_HOME directory on each node (/crs/11.1.0 in our case; the rootconfig.sh sub-script is under /crs/11.1.0/install).

For information only: the root.sh script executes two sub-scripts, one of which is the rootconfig.sh script, which has interesting information to have a look at.
Do not modify the file before running the root.sh script unless it's necessary !!!
If you have to make a modification, do it on all nodes !!!

What the rootconfig.sh script executed by root.sh does:

# rootconfig.sh for Oracle CRS homes
#
# This is run once per node during the Oracle CRS install.
# This script does the following:
# 1) Stop if any GSDs are running from 9.x oracle homes
# 2) Initialize new OCR device or upgrade the existing OCR device
# 3) Setup OCR for running CRS stack
# 4) Copy the CRS init script to init.d for init process to start
# 5) Start the CRS stack
# 6) Configure NodeApps if CRS is up and running on all nodes

Variables used by the root.sh script; the values are the result of your inputs in the Oracle Clusterware Universal Installer. You can check the values to see if they are OK:
SILENT=false
ORA_CRS_HOME=/crs/11.1.0
CRS_ORACLE_OWNER=crs
CRS_DBA_GROUP=dba
CRS_VNDR_CLUSTER=false
CRS_OCR_LOCATIONS=/dev/ocr_disk1,/dev/ocr_disk2
CRS_CLUSTER_NAME=crs
CRS_HOST_NAME_LIST=node1,1,node2,2
CRS_NODE_NAME_LIST=node1,1,node2,2
CRS_PRIVATE_NAME_LIST=node1-rac,1,node2-rac,2
CRS_LANGUAGE_ID='AMERICAN_AMERICA.WE8ISO8859P1'
CRS_VOTING_DISKS=/dev/voting_disk1,/dev/voting_disk2,/dev/voting_disk3
CRS_NODELIST=node1,node2
CRS_NODEVIPS='node1/node1-vip/255.255.255.0/en0,node2/node2-vip/255.255.255.0/en0'
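To review these values on a node before running root.sh, grepping the assignments out of the rootconfig script is enough. The path below is this guide's example and the exact file name may differ per release, so the sketch is demonstrated against a sample copy of the assignments:

```shell
# on a node you would run something like:
#   grep -E '^(SILENT|ORA_CRS_HOME|CRS_[A-Z_]+)=' /crs/11.1.0/install/rootconfig.sh
# demonstration against a sample copy of the assignments:
conf=$(mktemp)
cat > "$conf" <<'EOF'
SILENT=false
ORA_CRS_HOME=/crs/11.1.0
CRS_CLUSTER_NAME=crs
echo "not an assignment"
EOF
grep -E '^(SILENT|ORA_CRS_HOME|CRS_[A-Z_]+)=' "$conf"   # prints only the three assignments
```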


FIRST on node1:
As root, execute /crs/11.1.0/root.sh.
When finished, the CSS daemon should be active on node 1. Check for the line "Cluster Synchronization Services is active on these nodes." followed by node1.

{node1:root}/crs/11.1.0 # ./root.sh
WARNING: directory '/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/crs' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-rac node1
node 2: node2 node2-rac node2
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
Now formatting voting device: /dev/voting_disk1
Now formatting voting device: /dev/voting_disk2
Now formatting voting device: /dev/voting_disk3
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
node1
Cluster Synchronization Services is inactive on these nodes.
node2
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
{node1:root}/crs/11.1.0 #

Don't worry about "WARNING: directory '/crs' is not owned by root".
This is just a message to ignore.

IF CSS is not active at the end of the root.sh script:
Check your network and shared disks configuration, and the owner and access permissions (read/write) on the OCR and Voting disks from each participating node. Then execute the root.sh script again on the node having the problem.
If CSS starts on one node but not on the others, check the shared disks (OCR/Voting) for concurrent read/write access from all nodes, using the Unix dd command.
If ASM or GPFS is implemented with HACMP installed and configured for purposes other than having the database on concurrent raw devices, you must declare the disk resources in HACMP to be able to start the CRS (CSS).
If ASM or GPFS is implemented and HACMP is installed but not used at all, THEN remove HACMP, or declare the disk resources in HACMP to be able to start the CRS (CSS).
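The dd check suggested above can be scripted as a quick read probe to run as root on every node. `probe_read` is a sketch using this example's device names; it is demonstrated with a temporary file standing in for a device:

```shell
# probe_read PATH...: try to read the first 4 KB of each path with dd
# and report whether the read succeeded.
probe_read() {
  for d in "$@"; do
    if dd if="$d" of=/dev/null bs=4096 count=1 2>/dev/null; then
      echo "$d: readable"
    else
      echo "$d: NOT readable"
    fi
  done
}

# real call on each node:
#   probe_read /dev/ocr_disk1 /dev/ocr_disk2 /dev/voting_disk1 /dev/voting_disk2 /dev/voting_disk3
# demonstration:
stub=$(mktemp)
probe_read "$stub" "$stub.missing"
```

Any "NOT readable" line on any node points at the device (or its permissions) to fix before rerunning root.sh.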


The IBM AIX clustering layer (the HACMP filesets) MUST NOT be installed if you've chosen an implementation without HACMP. If this layer is implemented for another purpose, the disk resources necessary to install and run CRS will have to be part of an HACMP volume group resource.
If you have previously installed HACMP, you must remove:
o HACMP filesets (cluster.es.*)
o rsct.hacmp.rte
o rsct.compat.basic.hacmp.rte
o rsct.compat.clients.hacmp.rte

If you did run a first installation of the Oracle Clusterware (CRS) with HACMP installed, check whether the /opt/ORCLcluster directory exists, and if so, remove it on all nodes.
TO BE ABLE TO RUN the root.sh script AGAIN on the node, you must either:

Clean the failed CRS installation and start the CRS installation procedure again, following Metalink Note 239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install (same procedure for 10g and 11g). This is the only method supported by Oracle.

OR

Do the following just to find and solve the problem without reinstalling at each try. When solved, follow Metalink Note 239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install to clean the system properly, and start the installation again as supported by Oracle.
As root user on each node, execute:

{node1:root}/ # $CRS_HOME/bin/crsctl stop crs     (to stop any remaining CRS daemons)
{node1:root}/ # rmitab h1                         (this removes the Oracle CRS entries from /etc/inittab)
{node1:root}/ # rmitab h2
{node1:root}/ # rmitab h3
{node1:root}/ # rm -Rf /opt/ORCL*
{node1:root}/ # rm -R /etc/oracle/*

Kill any remaining processes found in the output of: ps -ef | grep crs and ps -ef | grep d.bin

OR execute on each node:

{node1:root}/ # $CRS_HOME/install/rootdelete.sh

THEN:

Change the owner, group (oracle:dba) and permissions (660) for all OCR and Voting disks (/dev/ocr_disk1, /dev/ocr_disk2, /dev/voting_disk1, /dev/voting_disk2, /dev/voting_disk3) on each node of the cluster (node1 and node2).

Erase the OCR and Voting disks content by formatting (zeroing) the disks from one node:

{node1:root}/ # dd if=/dev/zero of=/dev/ocr_disk1 bs=1024 count=300 &
{node1:root}/ # dd if=/dev/zero of=/dev/ocr_disk2 bs=1024 count=300 &
{node1:root}/ # dd if=/dev/zero of=/dev/voting_disk1 bs=1024 count=300 &
{node1:root}/ # dd if=/dev/zero of=/dev/voting_disk2 bs=1024 count=300 &
{node1:root}/ # dd if=/dev/zero of=/dev/voting_disk3 bs=1024 count=300 &

When finished, the CSS daemon should be active on nodes 1 and 2.

THEN on node2, as root, execute /crs/11.1.0/root.sh.

You should have the following final result:

CSS is active on these nodes.
  node1
  node2

If CSS is not active on all nodes, or on one of the nodes, you could have a problem with the network configuration, or with the shared disks configuration for accessing the OCR and Voting disks.
Check your network and shared disks configuration, and the owner and access permissions (read/write) on the OCR and Voting disks from each participating node. Then execute the root.sh script again on the node having the problem.
Also check, as the crs user, the following command from each node:

{node1:root}/ # su - crs
{node1:crs}/crs/11.1.0 # /crs/11.1.0/bin/olsnodes
node1
node2
{node1:crs}/crs/11.1.0 # rsh node2
{node2:crs}/crs/11.1.0 # olsnodes
node1
node2
{node2:crs}/crs/11.1.0 #


THEN on node2:
As root, execute /crs/11.1.0/root.sh.
When finished, the CSS daemon should be active on nodes 1 and 2. Check for the line "Cluster Synchronization Services is active on these nodes." followed by node1 and node2.

{node2:root}/crs/11.1.0 # ./root.sh
WARNING: directory '/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/crs' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-rac node1
node 2: node2 node2-rac node2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
node1
node2
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
{node2:root}/crs/11.1.0 #

Don't worry about WARNING: directory '/crs' is not owned by root.
This is just an informational message that can safely be ignored.


On the second node, at the end of the root.sh script :

You should have the following lines :


...
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
{node2:root}/crs/11.1.0 #
IF NOT, check for the line :
...
Running vipca(silent) for configuring nodeapps
"en0 is not public. Public interfaces should be used to configure virtual IPs"
{node2:root}/crs/11.1.0 #

If you have :
"en0 is not public. Public interfaces should be used to configure virtual IPs"
THEN
Read the next pages to understand and solve it !!!
RUN VIPCA manually as explained on the next page !!!
OTHERWISE, skip the next VIPCA pages.


At this stage, if you get "en0 is not public. Public interfaces should be used to configure virtual IPs"
following the execution of root.sh on the last node (node2 in our case), THEN read the following :

Note 316583.1 VIPCA FAILS COMPLAINING THAT INTERFACE IS NOT PUBLIC

Symptoms
During CRS install, while running root.sh, the following messages are displayed :
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "en0" is not public. Public interfaces should be used to configure virtual IPs.

The VIP should have been configured in silent mode by the root.sh script executed on node2,
but THIS IS NOT THE CASE.

Cause
When verifying the IP addresses, VIPCA uses calls to determine whether an IP address is valid or
not. In this case, VIPCA finds that the IPs are non-routable (for example, IP addresses like
192.168.* and 10.10.*). Oracle is aware that these IPs can be made public, but since such IPs are
mostly used for private networks, it displays this error message.

Solution
The workaround is to re-run vipca manually as root :
# ./vipca
or add the VIP using srvctl add nodeapps.
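The srvctl alternative mentioned above can be sketched as follows. This is not from the original guide: the VIP names, netmask and interface are this guide's example values, so substitute your own, and verify the exact syntax with `srvctl add nodeapps -h` on your release. The commands are only echoed (dry run); run them as root for real.

```shell
# Dry-run generation of the 'srvctl add nodeapps' commands for both nodes.
gen_nodeapps_cmds() {
  for n in node1 node2; do
    # -A takes <vip name or IP>/<netmask>[/<interface list>]
    echo "srvctl add nodeapps -n $n -o \$ORACLE_HOME -A ${n}-vip/255.255.255.0/en0"
  done
}
gen_nodeapps_cmds
```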
YOU MUST CONFIGURE the VIP by running the vipca script as root user.

On the first or second node, as root user, you must set up the DISPLAY before running
the vipca script located in /crs/11.1.0/bin :

{node1:root}/ # export DISPLAY
{node1:root}/ # cd $CRS_HOME/bin
{node1:root}/crs/11.1.0/bin # ./vipca

The VIP Welcome graphical screen will appear.
Then click Next ...


Just to remember : Public, Private, and Virtual Host Name layout.
Network cards on each node : en0 carries the Public network and the VIP, en1 carries the
RAC Interconnect (private network), and en2 carries the Backup network.

1 of 2 : Select one and only one network interface.

Select the network interface corresponding to the Public Network.
Remember that each public network card on each node must have the same name
(en0, for example, in our case). en1 is the RAC Interconnect, or private network.
Please check with ifconfig -a on each node as root.
Select en0 in our case.
Then click Next ...

Just to remember : Public, Private, and Virtual Host Name layout.

Public                   VIP                        RAC Interconnect (Private Network)
Node Name   IP           Node Name   IP             Node Name   IP
node1       10.3.25.81   node1-vip   10.3.25.181    node1-rac   10.10.25.81
node2       10.3.25.82   node2-vip   10.3.25.182    node2-rac   10.10.25.82

2 of 2 :
In the Virtual IPs for cluster nodes screen, you must provide the VIP node name
for node1 and press the TAB key to automatically fill in the rest.
Check the validity of the entries before proceeding.
Then click Next ...


The Summary screen will appear; please validate the entries, or go back to modify them.
Then click Finish ...

The VIP Configuration Assistant will proceed with the creation, configuration and
startup of all application resources (VIP, GSD and ONS) on all selected nodes.
Wait while it progresses ...

If you don't get any errors, you'll be prompted to click OK once the configuration
is 100% completed.
Then click OK ...

Check the Configuration Results.
Then click Exit ...


Using ifconfig -a on each node, check that each network card configured for the Public
network is mapping a virtual IP.

On node 1 :
{node1:root}/ # ifconfig -a
en0: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 10.3.25.81 netmask 0xffffff00 broadcast 10.3.25.255
        inet 10.3.25.181 netmask 0xffffff00 broadcast 10.3.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
en1: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 10.10.25.81 netmask 0xffffff00 broadcast 10.10.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
en2: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 20.20.25.81 netmask 0xffffff00 broadcast 20.20.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node1:root}/ #
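The check above (a public interface carrying more than one inet address, i.e. a VIP alias) can be automated. This is not from the original guide; the inlined sample mimics the trimmed AIX `ifconfig -a` output shown above, and on a live node you would pipe the real output through the same awk filter.

```shell
# List interfaces that carry more than one inet address (VIP alias present).
ifconfig_sample='en0: flags=1e080863,80<UP>
        inet 10.3.25.81 netmask 0xffffff00 broadcast 10.3.25.255
        inet 10.3.25.181 netmask 0xffffff00 broadcast 10.3.25.255
en1: flags=1e080863,80<UP>
        inet 10.10.25.81 netmask 0xffffff00 broadcast 10.10.25.255'

vip_ifaces=$(printf '%s\n' "$ifconfig_sample" | awk '
  /^[a-z]/ { if (n > 1) print iface; iface = $1; sub(/:.*/, "", iface); n = 0 }
  / inet /  { n++ }
  END      { if (n > 1) print iface }')
echo "$vip_ifaces"   # -> en0
```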

11gRAC/ASM/AIX

[email protected]

188 of 393

The Oracle Clusterware VIP for node1 is assigned/mapped to network interface en0,
corresponding to the public network.
The VIP for node1 is 10.3.25.181.

On node 2 :
{node2:root}/ # ifconfig -a
en0:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAI
N>
inet 10.3.25.82 netmask 0xffffff00 broadcast 10.3.25.255
inet 10.3.25.182 netmask 0xffffff00 broadcast 10.3.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en1:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAI
N>
inet 10.10.25.82 netmask 0xffffff00 broadcast 10.10.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en2:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAI
N>
inet 20.20.25.82 netmask 0xffffff00 broadcast 20.20.25.255
tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node2:root}/ #

Using ifconfig -a on each node, check that each network card configured for the Public
network is mapping a virtual IP.

The Oracle Clusterware VIP for node2 is assigned/mapped to network interface en0,
corresponding to the public network.
The VIP for node2 is 10.3.25.182.

If node1 is rebooted or fails :

THEN on node2 we'll see the VIP from node1; the VIP from node1 is then still reachable.
{node2:root}/ # ifconfig -a
en0:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAI
N>
inet 10.3.25.82 netmask 0xffffff00 broadcast 10.3.25.255
inet 10.3.25.182 netmask 0xffffff00 broadcast 10.3.25.255
inet 10.3.25.181 netmask 0xffffff00 broadcast 10.3.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en1:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAI
N>
inet 10.10.25.82 netmask 0xffffff00 broadcast 10.10.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en2:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAI
N>
inet 20.20.25.82 netmask 0xffffff00 broadcast 20.20.25.255
tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node2:root}/ #


The Oracle Clusterware VIP for node1 will be switched to node2, and still
assigned/mapped to network interface en0, corresponding to the public network.
The VIP from node1 is 10.3.25.181.
The VIP from node2 is 10.3.25.182.
When node1 comes back to normal operations, the VIP from node1, hosted temporarily
on node2, will switch back to its home node, node1.

Coming back to this previous screen, just click OK to continue ...

Configuration Assistants :

3 configuration assistants will be executed automatically :

Oracle Notification Server Configuration Assistant
Oracle Private Interconnect Configuration Assistant
Oracle Cluster Verification Utility

Then click Next ...

If successful, the next screen, End of Installation, will appear automatically !!!
If not, check each assistant's result for success.


Output generated from configuration assistants


Output generated from configuration assistant "Oracle Notification Server Configuration Assistant":
Command = /crs/11.1.0/install/onsconfig add_config node1:6251 node2:6251
The ONS configuration is created successfully
Stopping ONS resource 'ora.node1.ons'
Attempting to stop `ora.node1.ons` on member `node1`
Stop of `ora.node1.ons` on member `node1` succeeded.
The resource ora.node1.ons stopped successfully for restart
Attempting to start `ora.node1.ons` on member `node1`
Start of `ora.node1.ons` on member `node1` succeeded.
The resource ora.node1.ons restarted successfully
Stopping ONS resource 'ora.node2.ons'
Attempting to stop `ora.node2.ons` on member `node2`
Stop of `ora.node2.ons` on member `node2` succeeded.
The resource ora.node2.ons stopped successfully for restart
Attempting to start `ora.node2.ons` on member `node2`
Start of `ora.node2.ons` on member `node2` succeeded.
The resource ora.node2.ons restarted successfully
Configuration assistant "Oracle Notification Server Configuration Assistant" succeeded
-----------------------------------------------------------------------------
Output generated from configuration assistant "Oracle Private Interconnect Configuration Assistant":
Command = /crs/11.1.0/bin/oifcfg setif -global en0/10.3.25.0:public en1/10.10.25.0:cluster_interconnect
Configuration assistant "Oracle Private Interconnect Configuration Assistant" succeeded
-----------------------------------------------------------------------------
Output generated from configuration assistant "Oracle Cluster Verification Utility":
Command = /crs/11.1.0/bin/cluvfy stage -post crsinst -n node1,node2
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "node1".
Checking user equivalence...
User equivalence check passed for user "crs".
Checking Cluster manager integrity...
Checking CSS daemon...
Daemon status check passed for "CSS daemon".
Cluster manager integrity check passed.
Checking cluster integrity...
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.
Checking CRS integrity...
Checking daemon liveness...
Liveness check passed for "CRS daemon".
Checking daemon liveness...
Liveness check passed for "CSS daemon".
Checking daemon liveness...
Liveness check passed for "EVM daemon".
Checking CRS health...
CRS health check passed.
CRS integrity check passed.
Checking node application existence...
Checking existence of VIP node application (required)

Check passed.
Checking existence of ONS node application (optional)
Check passed.
Checking existence of GSD node application (optional)
Check passed.
Post-check for cluster services setup was successful.
Configuration assistant "Oracle Cluster Verification Utility" succeeded
-----------------------------------------------------------------------------
The "/crs/11.1.0/cfgtoollogs/configToolAllCommands" script contains all commands to be executed by the
configuration assistants. This file may be used to run the configuration assistants outside of OUI. Note
that you may have to update this script with passwords (if any) before executing the same.
-----------------------------------------------------------------------------

If all or part of the assistants failed or were not executed, check for problems in the log
/oracle/oraInventory/logs/installActions?????.log (as shown on the runInstaller window) and solve them.
Check also /crs/11.1.0/cfgtoollogs/configToolFailedCommands
{node1:crs}/crs/11.1.0/cfgtoollogs # cat configToolAllCommands
# Copyright (c) 1999, 2005, Oracle. All rights reserved.
/crs/11.1.0/bin/racgons add_config node1:6200 node2:6200
/crs/11.1.0/bin/oifcfg setif -global en0/10.3.25.0:public en1/10.10.25.0:cluster_interconnect
/crs/11.1.0/bin/cluvfy stage -post crsinst -n node1,node2
{node1:crs}/crs/11.1.0/cfgtoollogs #

If you missed those steps or closed the previous runInstaller screen, you will have to run them manually before
moving to the next step. Just adapt the lines to your own settings (node names, public/private network).
On one node, as crs user :
For the Oracle Notification Server Configuration Assistant :
/crs/11.1.0/bin/racgons add_config node1:6200 node2:6200
For the Oracle Private Interconnect Configuration Assistant :
/crs/11.1.0/bin/oifcfg setif -global en0/10.3.25.0:public en1/10.10.25.0:cluster_interconnect
For the Oracle Cluster Verification Utility :
/crs/11.1.0/bin/cluvfy stage -post crsinst -n node1,node2

End of Installation

Then click Exit ...


12.3 Post Installation operations

12.3.1 Update the Clusterware unix user .profile

To be done on each node for the crs (in our case) or oracle unix user.
vi the $HOME/.profile file in crs's home directory, and add the following entries :
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
export PATH

if [ -s "$MAIL" ]           # This is at Shell startup. In normal
then echo "$MAILMSG"        # operation, the Shell checks
fi                          # periodically.

ENV=$HOME/.kshrc
export ENV

# The following line is added by License Use Management installation
export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin
export MANPATH=$MANPATH:/usr/local/man

export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORA_CRS_HOME=$CRS_HOME
export ORACLE_HOME=$ORA_CRS_HOME
export LD_LIBRARY_PATH=$CRS_HOME/lib:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$CRS_HOME/bin:$PATH

if [ -t 0 ]; then
   stty intr ^C
fi
Disconnect from the crs user, and reconnect to load the modified $HOME/.profile.

Content of the .kshrc file :

{node1:crs}/home/crs # cat .kshrc
export VISUAL=vi
export PS1='{'$(hostname)':'$LOGIN'}$PWD # '
{node1:crs}/home/crs #


12.3.2 Verify parameter CSS misscount

MISSCOUNT DEFINITION AND DEFAULT VALUES

The CSS misscount parameter represents the maximum time, in seconds, that a heartbeat can be missed before
entering into a cluster reconfiguration to evict the node. The following are the default values for the misscount
parameter and their respective versions when using Oracle Clusterware* :

10gR1 & 10gR2 :
Linux     60 Seconds
Unix      30 Seconds
VMS       30 Seconds
Windows   30 Seconds

* The CSS misscount default value when using vendor (non-Oracle) clusterware is 600
seconds. This is to allow the vendor clusterware ample time to resolve any possible
split brain scenarios.

Subject: CSS Timeout Computation in RAC 10g (10g Release 1 and 10g Release 2)
Doc ID: Note: 294430.1

Check css misscount :

{node1:root}/crs/11.1.0 # crsctl get css misscount
Configuration parameter misscount is not defined.

We should have a defined value.

Check css disktimeout :

{node1:root}/crs/11.1.0 # crsctl get css disktimeout
200

Check css reboottime :

{node1:root}/crs/11.1.0 # crsctl get css reboottime

To compute the right values, read Metalink note 294430.1, and use the following note to change the value :
Subject: 10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout  Doc ID: Note: 284752.1

Set css misscount and check :

Keep only one node up and running; stop the others.
Backup the content of your OCR !!!
Then modify the CSS parameters with the crsctl command as root user :

{node1:root}/crs/11.1.0 # crsctl set css misscount 30
Configuration parameter misscount is now set to 30.
{node1:root}/crs/11.1.0 # crsctl get css misscount
30

Restart all the other nodes !!!
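The three CSS timing parameters can be queried in one loop. This sketch is not from the original guide; it only echoes the `crsctl` calls (a dry run) so the loop can be shown without a live cluster. Remove the `echo` and run as root, with $CRS_HOME/bin in PATH, to execute for real.

```shell
# Build the list of 'crsctl get css' commands for the three CSS parameters.
css_cmds=$(for p in misscount disktimeout reboottime; do
  echo "crsctl get css $p"   # dry run: print instead of execute
done)
echo "$css_cmds"
```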


12.3.3 Cluster Ready Services Health Check

Check the CRS processes on each node :

{node1:crs}/crs # ps -ef | grep crs
  crs     90224  393346   1   Mar 07      - 25:17 /crs/11.1.0/bin/ocssd.bin
  crs    188644  237604   0   Mar 07      -  0:05 /crs/11.1.0/bin/evmlogger.bin -o /crs/11.1.0/evm/log/evmlogger.info -l /crs/11.1.0/evm/log/evmlogger.log
  crs    225408  246014   0   Mar 07      -  1:57 /crs/11.1.0/bin/oclsomon.bin
  crs    237604  327796   0   Mar 07      -  1:22 /crs/11.1.0/bin/evmd.bin
  crs    246014  262190   0   Mar 07      -  0:00 /bin/sh -c cd /crs/11.1.0/log/node1/cssd/oclsomon; ulimit -c unlimited; /crs/11.1.0/bin/oclsomon || exit $?
  root   282844  290996   5   Mar 07      - 251:00 /crs/11.1.0/bin/crsd.bin reboot
  root   290996       1   0   Mar 07      -  0:00 /bin/sh /etc/init.crsd run
  root   315392  331980   0   Mar 07      -  0:21 /crs/11.1.0/bin/oprocd run -t 1000 -m 500 -f
  crs    393346  319628   0   Mar 07      -  0:00 /bin/sh -c ulimit -c unlimited; cd /crs/11.1.0/log/node1/cssd; /crs/11.1.0/bin/ocssd || exit $?
  crs    397508       1   0               0:00 <defunct>
  asm    544778       1   0   Mar 10      -  0:00 /crs/11.1.0/bin/oclskd.bin
  crs    622672       1   0   Mar 15      -  0:00 /crs/11.1.0/opmn/bin/ons -d
  crs    626764  622672   0   Mar 15      -  0:01 /crs/11.1.0/opmn/bin/ons -d
  rdbms 4673770       1   0   Mar 15      -  0:00 /crs/11.1.0/bin/oclskd.bin
  crs   4681948 4767816   0 16:34:59 pts/0  0:00 grep crs
  crs   4767816  675942   0 16:06:06 pts/0  0:00 -ksh
  crs   4796524 4767816   7 16:34:59 pts/0  0:00 ps -ef
{node1:crs}/crs # ps -ef | grep d.bin
  crs     90224  393346   1   Mar 07      - 25:17 /crs/11.1.0/bin/ocssd.bin
  crs    237604  327796   0   Mar 07      -  1:22 /crs/11.1.0/bin/evmd.bin
  root   282844  290996   7   Mar 07      - 251:00 /crs/11.1.0/bin/crsd.bin reboot
  asm    544778       1   0   Mar 10      -  0:00 /crs/11.1.0/bin/oclskd.bin
  rdbms 4673770       1   0   Mar 15      -  0:00 /crs/11.1.0/bin/oclskd.bin
  crs   4796528 4767816   0 16:35:11 pts/0  0:00 grep d.bin
{node1:crs}/crs #
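The process check above can be scripted. This sketch is not from the original guide; `ps_out` holds a trimmed sample of the AIX output shown above, and on a live node you would use `ps_out=$(ps -ef)` instead.

```shell
# Confirm the three main clusterware daemons appear in the process list.
ps_out='crs   90224  393346  1  Mar 07  - 25:17 /crs/11.1.0/bin/ocssd.bin
root 282844  290996  5  Mar 07  - 251:00 /crs/11.1.0/bin/crsd.bin reboot
crs  237604  327796  0  Mar 07  -  1:22 /crs/11.1.0/bin/evmd.bin'

for d in ocssd.bin crsd.bin evmd.bin; do
  if echo "$ps_out" | grep -q "/bin/$d"; then
    echo "$d running"
  else
    echo "$d NOT running"
  fi
done
```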

You have completed the CRS install. Now you want to verify that the install is valid.
To ensure that the CRS install is valid on all the nodes, the following should be checked on all the nodes.

1. Ensure that you successfully completed running root.sh on all nodes during the install. (Please
do not re-run root.sh; this is very dangerous and might corrupt your installation. The object of this step is only
to confirm that root.sh ran successfully after the install.)

2. Run the command $ORA_CRS_HOME/bin/crs_stat. Ensure that this command does not error out but
dumps the information for each resource. It does not matter what crs_stat returns for each resource. If
crs_stat exits after printing information about each resource, it means that the CRSD daemon is up and
the client crs_stat utility can communicate with it.

This also indicates that CRSD can read the OCR.

If crs_stat errors out with CRS-0202: No resources are registered, there are no resources
registered, mostly because at this stage you missed the VIP configuration. This is not an error.

If crs_stat errors out with CRS-0184: Cannot communicate with the CRS daemon, the CRS
daemons are not started.
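The two failure cases above can be mapped to a one-line diagnosis. This helper is not from the original guide (the function name and advice wording are ours); it only classifies the error strings that `crs_stat` prints.

```shell
# Classify crs_stat output into the two common failure cases described above.
interpret_crs_stat() {
  case "$1" in
    *CRS-0202*) echo "no resources registered: VIP configuration probably missed" ;;
    *CRS-0184*) echo "CRS daemons are not started" ;;
    *)          echo "crs_stat output looks normal" ;;
  esac
}
interpret_crs_stat "CRS-0184: Cannot communicate with the CRS daemon"
# -> CRS daemons are not started
```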


Execute crs_stat -t on one node as oracle user :

{node2:crs}/crs/11.1.0/bin # crs_stat -t
Name           Type          Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application   ONLINE    ONLINE    node1
ora.node1.ons  application   ONLINE    ONLINE    node1
ora.node1.vip  application   ONLINE    ONLINE    node1
ora.node2.gsd  application   ONLINE    ONLINE    node2
ora.node2.ons  application   ONLINE    ONLINE    node2
ora.node2.vip  application   ONLINE    ONLINE    node2
{node2:crs}/crs/11.1.0/bin #

Execute crs_stat -ls on one node as oracle user :

{node2:crs}/crs/11.1.0/bin # crs_stat -ls
Name           Owner   Primary PrivGrp   Permission
-----------------------------------------------------------------
ora.node1.gsd  crs     oinstall          rwxr-xr--
ora.node1.ons  crs     oinstall          rwxr-xr--
ora.node1.vip  root    oinstall          rwxr-xr--
ora.node2.gsd  crs     oinstall          rwxr-xr--
ora.node2.ons  crs     oinstall          rwxr-xr--
ora.node2.vip  root    oinstall          rwxr-xr--
{node2:crs}/crs/11.1.0/bin #

{node1:crs}/crs # crs_stat -help
Usage: crs_stat [resource_name [...]] [-v] [-l] [-q] [-c cluster_member]
       crs_stat [resource_name [...]] -t [-v] [-q] [-c cluster_member]
       crs_stat -p [resource_name [...]] [-q]
       crs_stat [-a] application -g
       crs_stat [-a] application -r [-c cluster_member]
       crs_stat -f [resource_name [...]] [-q] [-c cluster_member]
       crs_stat -ls [resource_name [...]] [-q]
{node1:crs}/crs #
3. Run the command $ORA_CRS_HOME/bin/olsnodes. This should return all the nodes of the cluster.
A successful run of this command means that CSS is up and running, and that the CSS on each node
can talk to the CSS on the other nodes.

Execute olsnodes on both nodes as oracle user :

{node1:root}/ # su - crs
{node1:crs}/crs # olsnodes
node1
node2
{node1:crs}/crs # rsh node2
{node2:crs}/crs # olsnodes
node1
node2
{node2:crs}/crs #


{node1:crs}/crs # olsnodes -help
Usage: olsnodes [-n] [-p] [-i] [<node> | -l] [-g] [-v]
where
  -n      print node number with the node name
  -p      print private interconnect name with the node name
  -i      print virtual IP name with the node name
  <node>  print information for the specified node
  -l      print information for the local node
  -g      turn on logging
  -v      run in verbose mode
{node1:crs}/crs #
{node1:crs}/crs # olsnodes -n -p -i
node1   1   node1-rac   node1-vip
node2   2   node2-rac   node2-vip
{node1:crs}/crs #
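A quick sanity check on the `olsnodes -n -p -i` output is to verify that every line has all four columns (node name, node number, private name, VIP name). This sketch is not from the original guide; the sample variable holds the output shown above, and real output can be piped through the same awk filter.

```shell
# Verify each olsnodes line carries all four expected columns.
olsnodes_out='node1   1   node1-rac   node1-vip
node2   2   node2-rac   node2-vip'

result=$(echo "$olsnodes_out" | awk 'NF != 4 { bad = 1 }
  END { print (bad ? "incomplete olsnodes output" : "all nodes fully described") }')
echo "$result"   # -> all nodes fully described
```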

4. The output of crsctl check crs / cssd / crsd / evmd returns "... appears healthy".

CRS health check :

{node1:crs}/crs # crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
{node1:crs}/crs #
{node2:crs}/crs # crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
{node2:crs}/crs #

cssd, crsd, evmd health check :

{node1:crs}/crs # crsctl check cssd
Cluster Synchronization Services appears healthy
{node1:crs}/crs #
{node1:crs}/crs # crsctl check crsd
Cluster Ready Services appears healthy
{node1:crs}/crs #
{node1:crs}/crs # crsctl check evmd
Event Manager appears healthy
{node1:crs}/crs #
{node2:crs}/crs # crsctl check cssd
Cluster Synchronization Services appears healthy
{node2:crs}/crs #
{node2:crs}/crs # crsctl check crsd
Cluster Ready Services appears healthy
{node2:crs}/crs #
{node2:crs}/crs # crsctl check evmd
Event Manager appears healthy
{node2:crs}/crs #

CRS software version query :

{node1:root}/crs # crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.6.0]
{node1:root}/crs #
{node2:crs}/crs # crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.6.0]
{node2:crs}/crs #


12.3.4 Adding an enhanced crs stat script

Add the following shell script in the $ORA_CRS_HOME/bin directory.

As crs user :

{node1:crs}/crs # cd $ORA_CRS_HOME/bin
{node1:crs}/crs/11.1.0/bin # vi crsstat

Add the following lines :

#---------------------------- Begin Shell Script ----------------------------
#!/usr/bin/ksh
#
# Sample 10g CRS resource status query script
#
# Description:
#    - Returns formatted version of crs_stat -t, in tabular
#      format, with the complete rsc names and filtering keywords
#    - The argument, $RSC_KEY, is optional and if passed to the script, will
#      limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
#    - $ORA_CRS_HOME should be set in your environment

RSC_KEY=$1
QSTAT=-u
AWK=/usr/bin/awk    # if not available use /usr/bin/awk

# Table header:
echo ""
$AWK \
  'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
          printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'

# Table body:
$ORA_CRS_HOME/bin/crs_stat $QSTAT | $AWK \
  'BEGIN { FS="="; state = 0; }
  $1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
  state == 0 {next;}
  $1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
  $1~/STATE/ && state == 2 {appstate = $2; state=3;}
  state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'
#---------------------------- End Shell Script ------------------------------

Check that AWK=/usr/bin/awk is the right path on your system !!!
Check that ORA_CRS_HOME is set in the crs user's .profile file on each node !!!

Save the file and exit, then make it executable :

{node1:crs}/crs/11.1.0/bin # chmod +x crsstat
{node1:crs}/crs/11.1.0/bin #

Remote copy the file to all nodes :

{node1:crs}/crs/11.1.0/bin # rcp crsstat node2:/crs/11.1.0/bin
{node1:crs}/crs/11.1.0/bin # crsstat
HA Resource                Target    State
-----------                ------    -----
ora.node1.gsd              ONLINE    ONLINE on node1
ora.node1.ons              ONLINE    ONLINE on node1
ora.node1.vip              ONLINE    ONLINE on node1
ora.node2.gsd              ONLINE    ONLINE on node2
ora.node2.ons              ONLINE    ONLINE on node2
ora.node2.vip              ONLINE    ONLINE on node2
{node1:crs}/crs/11.1.0/bin #

12.3.5 Interconnect Network Configuration Checkup

After CRS installation is completed, verify that the public and cluster interconnect interfaces
have been set to the desired values by entering the following commands as root :

Note : oifcfg is found in <CRS HOME>/bin/oifcfg

{node1:crs}/crs # oifcfg
Name:
        oifcfg - Oracle Interface Configuration Tool.
Usage:
        oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
        oifcfg [-help]

        <nodename>  name of the host, as known to a communications network
        <if_name>   name by which the interface is configured in the system
        <subnet>    subnet address of the interface
        <if_type>   type of the interface { cluster_interconnect | public | storage }
{node1:crs}/crs #

oifcfg getif
Subject: How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster  Doc ID: Note: 283684.1

This command should return values for global public and global cluster_interconnect; for example :

{node1:root}/crs/11.1.0/bin # oifcfg getif
en0 10.3.25.0 global public
en1 10.10.25.0 global cluster_interconnect
{node1:root}/crs/11.1.0/bin #

If the command does not return a value for global cluster_interconnect, enter the following commands :

{node1:crs}/crs/11.1.0/bin # oifcfg delif -global
# oifcfg setif -global <interface name>/<subnet>:public
# oifcfg setif -global <interface name>/<subnet>:cluster_interconnect

For example :

{node1:crs}/crs/11.1.0/bin # oifcfg delif -global
{node1:crs}/crs/11.1.0/bin # oifcfg setif -global en0/10.3.25.0:public
{node1:crs}/crs/11.1.0/bin # oifcfg setif -global en1/10.10.25.0:cluster_interconnect

If necessary, and only for troubleshooting purposes, disable the automatic reboot of AIX nodes
when a node fails to communicate with the CRS daemons, or fails to access the OCR and Voting disks :

Subject: 10g RAC: Stopping Reboot Loops When CRS Problems Occur  Doc ID: Note: 239989.1
Subject: 10g RAC: Troubleshooting CRS Reboots  Doc ID: Note: 265769.1

If one node crashed after running the dbca or netca tools, with a CRS core dump and an
Authentication OSD error, check the crsd.log file for a missing $CRS_HOME/crs/crs/auth directory.
THEN you need to re-create the missing auth directory manually, with the correct owner, group,
and permissions, using the following Metalink note :

Subject: Crs Crashed With Authentication Osd Error  Doc ID: Note: 358400.1


12.3.6 Oracle Cluster Registry content Check and Backup

Check the Oracle Cluster Registry integrity. As oracle user, execute ocrcheck :

{node1:crs}/crs # ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     306972
         Used space (kbytes)      :       5716
         Available space (kbytes) :     301256
         ID                       : 1928316120
         Device/File Name         : /dev/ocr_disk1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/ocr_disk2
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded
{node1:crs}/crs #

Check the OCR disks locations :

{node1:crs}/crs # cat /etc/oracle/ocr.loc
ocrconfig_loc=/dev/ocr_disk1
ocrmirrorconfig_loc=/dev/ocr_disk2
local_only=FALSE
{node1:crs}/crs #
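The OCR device paths can also be pulled out of ocr.loc programmatically. This sketch is not from the original guide; the file body is inlined to match the output above, and on a real node you would run `awk -F= '/^ocr/ { print $2 }' /etc/oracle/ocr.loc` instead.

```shell
# Extract the OCR and OCR-mirror device paths from an ocr.loc-style body.
ocr_loc='ocrconfig_loc=/dev/ocr_disk1
ocrmirrorconfig_loc=/dev/ocr_disk2
local_only=FALSE'

devices=$(echo "$ocr_loc" | awk -F= '/^ocr/ { print $2 }')
echo "$devices"
```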

Check the Voting Disks. As oracle user, execute crsctl query css votedisk :

{node1:crs}/crs # crsctl query css votedisk
 0.     0    /dev/voting_disk1
 1.     0    /dev/voting_disk2
 2.     0    /dev/voting_disk3
Located 3 voting disk(s).
{node1:crs}/crs #

Using the ocrconfig crs tool to export the OCR content :

{node1:crs}/crs # ocrconfig
Name:
        ocrconfig - Configuration tool for Oracle Cluster Registry.
Synopsis:
        ocrconfig [option]
        option:
        -export <filename> [-s online]         - Export cluster registry contents to a file
        -import <filename>                     - Import cluster registry contents from a file
        -upgrade [<user> [<group>]]            - Upgrade cluster registry from previous version
        -downgrade [-version <version string>] - Downgrade cluster registry to the specified version
        -backuploc <dirname>                   - Configure periodic backup location
        -showbackup [auto|manual]              - Show backup information
        -manualbackup                          - Perform OCR backup
        -restore <filename>                    - Restore from physical backup
        -replace ocr|ocrmirror [<filename>]    - Add/replace/remove an OCR device/file
        -overwrite                             - Overwrite OCR configuration on disk
        -repair ocr|ocrmirror <filename>       - Repair local OCR configuration
        -help                                  - Print out this help information
Note:
        A log file will be created in
        $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
        you have file creation privileges in the above directory before
        running this tool.
{node1:crs}/crs #
Export content of OCR :

{node1:crs}/crs/11.1.0/bin # su
root's Password:
{node1:root}/crs/11.1.0/bin # ocrconfig -export /oracle/ocr_export.dmp -s online
{node1:root}/crs/11.1.0/bin # ls -la /oracle/*.dmp
-rw-r--r--    1 root     system       106420 Jan 30 18:30 /oracle/ocr_export.dmp
{node1:root}/crs/11.1.0/bin #

Note : you must not edit/modify this exported file.
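Because the exported file must not be edited, it is convenient to keep each export under a unique, timestamped name. A hedged wrapper sketch — the ocrconfig command is only echoed so the script can be dry-run outside a cluster; remove the echo and run it as root on a real node (the /oracle export directory is an assumption matching the example above):

```shell
# Build a timestamped target name for an OCR export, then print the
# command that would be executed. Drop "echo" to really export (root).
EXPORT_DIR=${EXPORT_DIR:-/oracle}        # assumption: same directory as above
STAMP=$(date +%Y%m%d_%H%M%S)             # e.g. 20080224_080848
TARGET="$EXPORT_DIR/ocr_export_$STAMP.dmp"
echo ocrconfig -export "$TARGET" -s online
```

Run from cron, this gives you a dated series of export files alongside the automatic physical backups.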


Using the ocrconfig CRS tool to back up the OCR content :


{node1:crs}/crs # ocrconfig -showbackup
{node1:crs}/crs #
{node1:crs}/crs # ocrconfig -manualbackup
PROT-20: Insufficient permission to proceed. Require privileged user
{node1:crs}/crs # su
root's Password:
{node1:root}/crs # ocrconfig -manualbackup

node1     2008/02/24 08:08:48     /crs/11.1.0/cdata/crs_cluster/backup_20080224_080848.ocr

{node1:root}/crs #
View the automatic periodic OCR backups managed by Oracle Clusterware. After a few hours, days or weeks, ocrconfig -showbackup could display the following :
{node1:crs}/crs # ocrconfig -showbackup

node2     2008/03/16 14:45:48     /crs/11.1.0/cdata/crs_cluster/backup00.ocr
node2     2008/03/16 10:45:47     /crs/11.1.0/cdata/crs_cluster/backup01.ocr
node2     2008/03/16 06:45:47     /crs/11.1.0/cdata/crs_cluster/backup02.ocr
node2     2008/03/15 06:45:46     /crs/11.1.0/cdata/crs_cluster/day.ocr
node2     2008/03/07 02:52:46     /crs/11.1.0/cdata/crs_cluster/week.ocr
node1     2008/02/24 08:09:21     /crs/11.1.0/cdata/crs_cluster/backup_20080224_080921.ocr
node1     2008/02/24 08:08:48     /crs/11.1.0/cdata/crs_cluster/backup_20080224_080848.ocr

{node1:crs}/crs #
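To see at a glance which of the .ocr files under the cdata backup directory is the most recent, a simple ls -t is enough. A sketch using a scratch directory and fabricated timestamps in place of the real $ORA_CRS_HOME/cdata/<cluster_name> path (an assumption for illustration only):

```shell
# Simulate a cdata backup directory with the file names shown above,
# giving each a distinct modification time, then list newest first.
BKDIR=$(mktemp -d)
i=0
for f in week.ocr day.ocr backup02.ocr backup01.ocr backup00.ocr; do
    i=$((i + 1))
    # touch -t CCYYMMDDhhmm: later files get later dates (Mar 11..15 2008)
    touch -t "2008031${i}0000" "$BKDIR/$f"
done

ls -t "$BKDIR"/*.ocr | head -n 1    # newest backup file
```

On a real node, `ls -t $ORA_CRS_HOME/cdata/<cluster_name>/*.ocr | head -1` picks the candidate for an `ocrconfig -restore`.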


List of options for the srvctl command :


{node1:root}/crs/11.1.0/bin # srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d <name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-A <name|ip>/netmask] [-r {PRIMARY
| PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl add instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl add service -d <name> -s <service_name> -r "<preferred_list>" [-a "<available_list>"] [-P <TAF_policy>]
Usage: srvctl add service -d <name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
Usage: srvctl add nodeapps -n <node_name> -A <name|ip>/netmask[/if1[|if2|...]]
Usage: srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home> [-p <spfile>]
Usage: srvctl add listener -n <node_name> -o <oracle_home> [-l <listener_name>]
Usage: srvctl config database
Usage: srvctl config database -d <name> [-a] [-t]
Usage: srvctl config service -d <name> [-s <service_name>] [-a] [-S <level>]
Usage: srvctl config nodeapps -n <node_name> [-a] [-g] [-s] [-l] [-h]
Usage: srvctl config asm -n <node_name>
Usage: srvctl config listener -n <node_name>
Usage: srvctl disable database -d <name>
Usage: srvctl disable instance -d <name> -i "<inst_name_list>"
Usage: srvctl disable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl disable asm -n <node_name> [-i <inst_name>]
Usage: srvctl enable database -d <name>
Usage: srvctl enable instance -d <name> -i "<inst_name_list>"
Usage: srvctl enable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl enable asm -n <node_name> [-i <inst_name>]
Usage: srvctl getenv database -d <name> [-t "<name_list>"]
Usage: srvctl getenv instance -d <name> -i <inst_name> [-t "<name_list>"]
Usage: srvctl getenv service -d <name> -s <service_name> [-t "<name_list>"]
Usage: srvctl getenv nodeapps -n <node_name> [-t "<name_list>"]
Usage: srvctl modify database -d <name> [-n <db_name>] [-o <ohome>] [-m <domain>] [-p <spfile>] [-r {PRIMARY |
PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl modify instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
Usage: srvctl modify service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <name> -s <service_name> -n -i <preferred_list> [-a <available_list>] [-f]
Usage: srvctl modify asm -n <node_name> -i <asm_inst_name> [-o <oracle_home>] [-p <spfile>]
Usage: srvctl relocate service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl remove database -d <name> [-f]
Usage: srvctl remove instance -d <name> -i <inst_name> [-f]
Usage: srvctl remove service -d <name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl remove nodeapps -n "<node_name_list>" [-f]
Usage: srvctl remove asm -n <node_name> [-i <asm_inst_name>] [-f]
Usage: srvctl remove listener -n <node_name> [-l <listener_name>]
Usage: srvctl setenv database -d <name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl setenv instance -d <name> [-i <inst_name>] {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl setenv service -d <name> [-s <service_name>] {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl setenv nodeapps -n <node_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl start database -d <name> [-o <start_options>]
Usage: srvctl start instance -d <name> -i "<inst_name_list>" [-o <start_options>]
Usage: srvctl start service -d <name> [-s "<service_name_list>" [-i <inst_name>]] [-o <start_options>]
Usage: srvctl start nodeapps -n <node_name>
Usage: srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]
Usage: srvctl start listener -n <node_name> [-l <lsnr_name_list>]
Usage: srvctl status database -d <name> [-f] [-v] [-S <level>]
Usage: srvctl status instance -d <name> -i "<inst_name_list>" [-f] [-v] [-S <level>]
Usage: srvctl status service -d <name> [-s "<service_name_list>"] [-f] [-v] [-S <level>]
Usage: srvctl status nodeapps -n <node_name>
Usage: srvctl status asm -n <node_name>
Usage: srvctl stop database -d <name> [-o <stop_options>]
Usage: srvctl stop instance -d <name> -i "<inst_name_list>" [-o <stop_options>]
Usage: srvctl stop service -d <name> [-s "<service_name_list>" [-i <inst_name>]] [-f]
Usage: srvctl stop nodeapps -n <node_name> [-r]
Usage: srvctl stop asm -n <node_name> [-i <asm_inst_name>] [-o <stop_options>]
Usage: srvctl stop listener -n <node_name> [-l <lsnr_name_list>]
Usage: srvctl unsetenv database -d <name> -t "<name_list>"
Usage: srvctl unsetenv instance -d <name> [-i <inst_name>] -t "<name_list>"
Usage: srvctl unsetenv service -d <name> [-s <service_name>] -t "<name_list>"
Usage: srvctl unsetenv nodeapps -n <node_name> -t "<name_list>"
{node1:root}/crs/11.1.0/bin #
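A typical administrative sequence built from the usage above is stop, modify, start; wrapping the calls lets you review the exact commands before running them. A dry-run sketch — the database name RACDB is made up for illustration, and the echo keeps the script runnable outside a cluster (replace the echo in run() with a real srvctl invocation on a node):

```shell
# Build srvctl commands and echo them instead of executing them.
DB=RACDB                              # assumption: example database name
run() { echo "srvctl $*"; }           # swap echo for the real binary on a node

run status database -d "$DB"
run stop database -d "$DB" -o immediate
run start database -d "$DB"
```

Keeping the command construction in one helper also makes it easy to log every srvctl call a maintenance script performs.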

12.4 Some useful commands

As root user, commands to start/stop the CRS daemons :

To start CRS :

{node1:root}/crs/11.1.0/bin # crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
{node1:root}/crs/11.1.0/bin #
To stop the CRS :
{node1:crs}/crs/11.1.0/bin # su
root's Password:
{node1:root}/ # cd /crs/11.1.0/bin
{node1:root}/crs/11.1.0/bin # crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
{node1:root}/crs/11.1.0/bin #
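After crsctl start crs returns, the stack actually comes up asynchronously, so scripts usually poll crsctl check crs until it reports success. A sketch with the check command parameterized — it defaults to `true` here so the loop is runnable anywhere; on a node you would set CHECK='crsctl check crs':

```shell
# Poll a health-check command until it succeeds or a timeout expires.
CHECK=${CHECK:-true}        # assumption: replace with 'crsctl check crs' on a node
TIMEOUT=${TIMEOUT:-60}      # overall seconds to wait
waited=0
until $CHECK >/dev/null 2>&1; do
    waited=$((waited + 5))
    [ "$waited" -ge "$TIMEOUT" ] && { echo "CRS not up after ${TIMEOUT}s"; exit 1; }
    sleep 5
done
echo "CRS check succeeded after ${waited}s"
```

The same loop works for post-reboot automation before launching anything that depends on the clusterware stack.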

All available crsctl commands :
{node1:root}/crs/11.1.0/bin # crsctl
Usage: crsctl check  crs           - checks the viability of the CRS stack
       crsctl check  cssd          - checks the viability of CSS
       crsctl check  crsd          - checks the viability of CRS
       crsctl check  evmd          - checks the viability of EVM
       crsctl set    css <parameter> <value> - sets a parameter override
       crsctl get    css <parameter>         - gets the value of a CSS parameter
       crsctl unset  css <parameter>         - sets CSS parameter to its default
       crsctl query  css votedisk            - lists the voting disks used by CSS
       crsctl add    css votedisk <path>     - adds a new voting disk
       crsctl delete css votedisk <path>     - removes a voting disk
       crsctl enable  crs          - enables startup for all CRS daemons
       crsctl disable crs          - disables startup for all CRS daemons
       crsctl start crs            - starts all CRS daemons
       crsctl stop  crs            - stops all CRS daemons. Stops CRS resources in case of cluster.
       crsctl start resources      - starts CRS resources
       crsctl stop resources       - stops CRS resources
       crsctl debug statedump evm  - dumps state info for evm objects
       crsctl debug statedump crs  - dumps state info for crs objects
       crsctl debug statedump css  - dumps state info for css objects
       crsctl debug log css [module:level]{,module:level} ... - turns on debugging for CSS
       crsctl debug trace css      - dumps CSS in-memory tracing cache
       crsctl debug log crs [module:level]{,module:level} ... - turns on debugging for CRS
       crsctl debug trace crs      - dumps CRS in-memory tracing cache
       crsctl debug log evm [module:level]{,module:level} ... - turns on debugging for EVM
       crsctl debug trace evm      - dumps EVM in-memory tracing cache
       crsctl debug log res <resname:level>  - turns on debugging for resources
       crsctl query crs softwareversion [<nodename>] - lists the version of CRS software installed
       crsctl query crs activeversion        - lists the CRS software operating version
       crsctl lsmodules css        - lists the CSS modules that can be used for debugging
       crsctl lsmodules crs        - lists the CRS modules that can be used for debugging
       crsctl lsmodules evm        - lists the EVM modules that can be used for debugging

If necessary any of these commands can be run with additional tracing by
adding a "trace" argument at the very front.
Example: crsctl trace check css

{node1:root}/crs/11.1.0/bin #


12.5 Accessing CRS logs

To view CRS logs :

cd /crs/11.1.0/log/<nodename>

In our case, nodename will be node1 for CRS logs on node1 :
cd /crs/11.1.0/log/node1

And node2 for CRS logs on node2 :
cd /crs/11.1.0/log/node2
Contents example of /crs/11.1.0/log/node1 on node1 :

{node1:root}/crs/11.1.0/log/node1 # ls -la
total 256
drwxr-xr-t    8 root     dba            256 Mar 12 16:21 .
drwxr-xr-x    4 oracle   dba            256 Mar 12 16:21 ..
drwxr-x---    2 oracle   dba            256 Mar 12 16:21 admin
-rw-rw-r--    1 root     dba          24441 Apr 17 12:27 alertnode1.log
drwxr-x---    2 oracle   dba          98304 Apr 17 12:27 client
drwxr-x---    2 root     dba            256 Apr 16 13:56 crsd
drwxr-x---    4 oracle   dba            256 Mar 22 21:56 cssd
drwxr-x---    2 oracle   dba            256 Mar 12 16:22 evmd
drwxrwxr-t    5 oracle   dba           4096 Apr 16 12:48 racg
{node1:root}/crs/11.1.0/log/node1 #

Look at Metalink Note :

Subject: Oracle Clusterware consolidated logging in 10gR2 - Doc ID: Note 331168.1

Extract from the note :

- CRS logs are in $ORA_CRS_HOME/log/<hostname>/crsd/
- CSS logs are in $ORA_CRS_HOME/log/<hostname>/cssd/
- EVM logs are in $ORA_CRS_HOME/log/<hostname>/evmd and $ORA_CRS_HOME/evm/log/
- Resource specific logs are in $ORA_CRS_HOME/log/<hostname>/racg and $ORACLE_HOME/log/<hostname>/racg
- SRVM logs are in $ORA_CRS_HOME/log/<hostname>/client and $ORACLE_HOME/log/<hostname>/client
- Cluster network communication logs are in the $ORA_CRS_HOME/log directory
- OPMN logs are in $ORA_CRS_HOME/opmn/logs
- New in 10g Release 2 is an alert<nodename>.log present in $ORA_CRS_HOME/log/<hostname>

!!! Same for 11g !!!

Look at the following Metalink Note to get all logs necessary for support diagnostics :
Subject: CRS 10g R2 Diagnostic Collection Guide - Doc ID: Note 330358.1
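The per-daemon locations listed in the note can be generated for every node in one pass, which helps when collecting logs for support. A minimal sketch (the ORA_CRS_HOME value and the node list are assumptions matching this guide's layout):

```shell
# Print the main Oracle Clusterware log locations for each node.
ORA_CRS_HOME=${ORA_CRS_HOME:-/crs/11.1.0}   # assumption: as in this guide
NODES="node1 node2"
for n in $NODES; do
    for d in crsd cssd evmd racg client; do
        echo "$ORA_CRS_HOME/log/$n/$d"
    done
    echo "$ORA_CRS_HOME/log/$n/alert$n.log"   # the 10gR2/11g per-node alert log
done
```

Piping this list into tar on each node is a quick way to package the directories that Note 330358.1 asks for.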


12.6 Clusterware Basic Testing


Before going further, you should run some simple tests on the clusterware to validate your Oracle Clusterware installation.

Actions to be done :
( 1 ) Node 1 and Node 2 are UP and running : reboot node1.

What should happen :
Before reboot of node1 :

{node2:crs}/crs/11.1.0/bin # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node2:crs}/crs/11.1.0/bin #
While node1 is rebooting :

Check on node2 : the VIP from node1 should appear on node2 while node1 is out of order.

{node2:crs}/crs/11.1.0/bin # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application    ONLINE    OFFLINE
ora.node1.ons  application    ONLINE    OFFLINE
ora.node1.vip  application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node2:crs}/crs/11.1.0/bin #
When node1 is back with CRS up and running, the VIP will come back to its original position on node1.

After reboot of node1 :

{node2:crs}/crs/11.1.0/bin # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node2:crs}/crs/11.1.0/bin #

( 2 ) Node 1 and Node 2 are UP and running : reboot node2.

Check on node1 : the VIP from node2 should appear on node1 while node2 is out of order.
When node2 is back with CRS up and running, the VIP will come back to its original position on node2.

( 3 ) Node 1 and Node 2 are UP and running : reboot node1 and node2.

After reboot, both nodes will come back with CRS up and running, with the VIPs from both nodes in their respective positions : VIP (node1) on node1 and VIP (node2) on node2.
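During these reboot tests it is convenient to count how many resources are currently OFFLINE instead of eyeballing the table. A sketch that parses a captured crs_stat -t style listing; the here-document stands in for the live command output:

```shell
# Count OFFLINE resources in a crs_stat -t style listing.
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
ora.node1.gsd  application    ONLINE    OFFLINE
ora.node1.ons  application    ONLINE    OFFLINE
ora.node1.vip  application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
EOF

# Field 4 is the State column; on a live cluster pipe crs_stat -t
# (minus its header lines) through the same awk.
awk '$4 == "OFFLINE" {n++} END {print n+0}' "$SAMPLE"
```

A result of 0 after the surviving node's check means every resource has relocated or restarted as expected.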


12.7 What Has Been Done ?

At this stage :

- The Oracle Cluster Registry and Voting Disks are created and configured.
- Oracle Cluster Ready Services is installed and started on all nodes.
- The VIP (Virtual IP), GSD and ONS application resources are configured on all nodes.

12.8 VIP and CRS Troubleshooting

If problems occur with the VIP configuration assistant, please use the Metalink notes listed in this chapter.

Metalink Note 296856.1 - Configuring the IBM AIX 5L Operating System for the Oracle 10g VIP
Metalink Note 294336.1 - Changing the check interval for the Oracle 10g VIP
Metalink Note 276434.1 - Modifying the VIP of a Cluster Node

srvctl modify nodeapps -n <node_name> [-o <oracle_home>] [-A <new_vip_address>]

Options description :
  -n <node_name>          Node name.
  -o <oracle_home>        Oracle home for the cluster database.
  -A <new_vip_address>    The node-level VIP address (<name|ip>/netmask[/if1[|if2|...]]).

An example of the 'modify nodeapps' command is as follows :
$ srvctl stop nodeapps -n node1
$ srvctl modify nodeapps -n node1 -A 10.3.25.181/255.255.255.0/en0
$ srvctl start nodeapps -n node1
Note: This command should be run as root.

Metalink Note 298895.1 - Modifying the default gateway address used by the Oracle 10g VIP
Metalink Note 264847.1 - How to Configure Virtual IPs for 10g RAC

How to delete VIP IP aliases on the public network card, if they persist even after the CRS shutdown. Example for our case :

On node1 as root : ifconfig en0 delete 10.3.25.181
On node2 as root : ifconfig en0 delete 10.3.25.182


12.9 How to clean a failed CRS installation

Metalink Note 239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install

On both nodes, as root user on each node :

  $ORA_CRS_HOME/bin/crsctl stop crs     (to stop any remaining CRS daemons)
  rmitab h1                             (removes the Oracle CRS entries from /etc/inittab)
  rmitab h2
  rmitab h3
  rm -Rf /opt/ORCL*
  kill any remaining processes found in the output of : ps -ef | grep crs  and  ps -ef | grep d.bin
  rm -R /etc/oracle/*

You should now remove the CRS installation; there are 2 options :

1/ You want to keep the oraInventory because it is used by other Oracle products which are installed and used : THEN run runInstaller as the oracle user to uninstall the CRS installation. When done, remove the content of the CRS directory on both nodes : rm -Rf /crs/11.1.0/*

OR

2/ You don't care about the oraInventory, and there is no other Oracle product installed on the nodes : THEN remove the full content of $ORACLE_BASE, including the oraInventory directory : rm -Rf /oracle/*
Change owner, group and permissions for /dev/ocr_disk* and /dev/voting_disk* on each node of the cluster :

On node1 :

{node1:root}/ # chown root.oinstall /dev/ocr_disk*
{node1:root}/ # chown crs.oinstall /dev/voting_disk*
{node1:root}/ # chmod 660 /dev/ocr_disk*
{node1:root}/ # chmod 660 /dev/voting_disk*

On node2 :

{node1:root}/ # rsh node2
{node2:root}/ # chown root.oinstall /dev/ocr_disk*
{node2:root}/ # chown crs.oinstall /dev/voting_disk*
{node2:root}/ # chmod 660 /dev/ocr_disk*
{node2:root}/ # chmod 660 /dev/voting_disk*

To erase ALL OCR and Voting disk content, format (zero) the disks from one node :

for i in 1 2
do
  dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300 &
done
300+0 records in.
300+0 records out.
...

for i in 1 2 3
do
  dd if=/dev/zero of=/dev/voting_disk$i bs=1024 count=300 &
done
300+0 records in.
300+0 records out.
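After the dd runs complete, you can verify that the zeroed region really reads back as all zero bytes. A sketch against a scratch file; on a real node you would point TARGET at /dev/ocr_disk1 and the other devices, as root:

```shell
# Zero a scratch file the same way as above, then verify it contains
# no non-zero byte: tr strips NULs, so anything left is non-zero data.
TARGET=$(mktemp)                      # stand-in for /dev/ocr_disk1
dd if=/dev/zero of="$TARGET" bs=1024 count=300 2>/dev/null
nonzero=$(tr -d '\0' < "$TARGET" | wc -c)
if [ "$nonzero" -eq 0 ]; then
    echo "first 300 KB are zeroed"
else
    echo "found $nonzero non-zero bytes" >&2
fi
```

On a raw device, add a count/bs matching what you zeroed (e.g. `dd if=/dev/ocr_disk1 bs=1024 count=300 | tr -d '\0' | wc -c`) so you only read back the formatted region.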


13 INSTALL AUTOMATIC STORAGE MANAGEMENT SOFTWARE

At this stage, Oracle Clusterware is installed and MUST be started on all nodes.

ASM (Oracle Automatic Storage Management) acts as an Oracle cluster file system, and as such it can be kept independent from the database software and upgraded to a later release independently from it. That is the reason why we will install the ASM software in its own ORACLE_HOME directory, that we'll define as ASM_HOME ( /oracle/asm/11.1.0 ).

Now, in order to make ASM storage available to the Oracle RAC database, we need to :

- Install the ASM software in its own ORACLE_HOME
- Create and configure a listener on each node
    - Either through netca (Oracle Network Configuration Assistant)
    - Or manually
- Create and configure an ASM instance on each node
    - Either through DBCA (Oracle Database Configuration Assistant)
    - Or manually
- Prepare LUNs / disks for ASM (done in the previous storage chapter)
- Create and configure an ASM diskgroup
    - Either through DBCA (Oracle Database Configuration Assistant)
    - Or with SQL commands, using SQL*Plus


13.1 Installation

The Oracle ASM installation has to be started from one node only. Once the first node is installed, the Oracle OUI automatically copies the mandatory files to the second node, using the rcp or scp command. This step can take a long time (up to an hour), depending on network speed, without displaying any message. So don't think the OUI is stalled; look at the network traffic before canceling the installation !

You can also create a staging area. The names of the subdirectories are in the format Disk1 to Disk3.
On each node, run the AIX command "/usr/sbin/slibclean" as root to clean all unreferenced libraries from memory :

{node1:root}/ # /usr/sbin/slibclean
{node2:root}/ # /usr/sbin/slibclean

From the first node, as root user, under a VNC client session or other graphical interface, execute :

{node1:root}/ # xhost +
access control disabled, clients can connect from any hosts
{node1:root}/ #

On each node, set the right ownership and permissions on the following directory :

{node1:root}/ # chown crs:oinstall /oracle/asm
{node1:root}/ # chmod 665 /oracle/asm
{node1:root}/ #

Login as the asm (in our case) or oracle user, and follow the procedure hereunder.

Set up and export your DISPLAY, TMP and TEMP variables, with /tmp or another destination having enough free space (about 500 MB on each node) :

{node1:asm}/ # export DISPLAY=node1:0.0

If not set in asm's .profile, do :

{node1:asm}/ # export TMP=/tmp
{node1:asm}/ # export TEMP=/tmp
{node1:asm}/ # export TMPDIR=/tmp

Check that Oracle Clusterware (including VIP, ONS and GSD) is started on each node !!!

As asm user from node1 :

{node1:asm}/oracle/asm # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:asm}/oracle/asm #


{node1:asm}/distrib/SoftwareOracle/rdbms11gr1/aix/database # ls
doc  install  response  runInstaller  stage  welcome.html
{node1:asm}/distrib/SoftwareOracle/rdbms11gr1/aix/database # ./runInstaller
Starting Oracle Universal Installer
Checking Temp space: must be greater than 190 MB. Actual 1950 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3584 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-02-13_03-47-34PM. Please wait
{node1:asm}/distrib/SoftwareOracle/rdbms11gr1/aix/database # Oracle Universal Installer, Version 11.1.0.6.0
Production
Copyright (C) 1999, 2007, Oracle. All rights reserved.
At the OUI Welcome screen, you can check the installed products. Just click Next.

Select the installation type :

You have the option to choose Enterprise, Standard Edition, or Custom to proceed. Choose the Custom option to avoid creating a database by default.

Then click Next ...


Specify File Locations :

Do not change the Source field.
Specify a different ORACLE_HOME name with its own directory for the Oracle software installation. This ORACLE_HOME must be different from the CRS ORACLE_HOME :

    OraAsm11g_home1
    /oracle/asm/11.1.0

Then click Next ...

If you don't see the following screen with node selection, it might be that your CRS is down on one or all nodes. Please check that CRS is up and running on all nodes.

Specify Hardware Cluster Installation Mode :

Select Cluster Installation AND the other nodes onto which the Oracle RDBMS software will be installed. It is not necessary to select the node on which the OUI is currently running.

Then click Next ...

The installer will check some product-specific prerequisites.

Don't worry about the checks at status "Not executed" : these are just warnings, raised because the AIX maintenance level might be higher than 5300, which is the case in our example (ML03).

Then click Next ...


Details of the prerequisite checks done by runInstaller :

Checking operating system requirements ...


Expected result: One of 5200.004,5300.002
Actual Result: 5300.002
Check complete. The overall result of this check is: Passed
=======================================================================
Checking operating system package requirements ...
Checking for bos.adt.base(0.0); found bos.adt.base(5.3.0.51). Passed
Checking for bos.adt.lib(0.0); found bos.adt.lib(5.3.0.50). Passed
Checking for bos.adt.libm(0.0); found bos.adt.libm(5.3.0.40). Passed
Checking for bos.perf.libperfstat(0.0); found bos.perf.libperfstat(5.3.0.50). Passed
Checking for bos.perf.perfstat(0.0); found bos.perf.perfstat(5.3.0.50). Passed
Checking for bos.perf.proctools(0.0); found bos.perf.proctools(5.3.0.50). Passed
Check complete. The overall result of this check is: Passed
=======================================================================
Checking recommended operating system patches
Checking for IY59386(bos.rte.bind_cmds,5.3.0.1); found (bos.rte.bind_cmds,5.3.0.51). Passed
Checking for IY60930(bos.mp,5.3.0.1); found (bos.mp,5.3.0.54). Passed
Checking for IY60930(bos.mp64,5.3.0.1); found (bos.mp64,5.3.0.54). Passed
Checking for IY66513(bos.mp64,5.3.0.20); found (bos.mp64,5.3.0.54). Passed
Checking for IY66513(bos.mp,5.3.0.20); found (bos.mp,5.3.0.54). Passed
Checking for IY70159(bos.mp,5.3.0.22); found (bos.mp,5.3.0.54). Passed
Checking for IY70159(bos.mp64,5.3.0.22); found (bos.mp64,5.3.0.54). Passed
Checking for IY58143(bos.mp64,5.3.0.1); found (bos.mp64,5.3.0.54). Passed
Checking for IY58143(bos.acct,5.3.0.1); found (bos.acct,5.3.0.51). Passed
Checking for IY58143(bos.adt.include,5.3.0.1); found (bos.adt.include,5.3.0.53). Passed
Checking for IY58143(bos.adt.libm,5.3.0.1); found (bos.adt.libm,5.3.0.40). Passed
Checking for IY58143(bos.adt.prof,5.3.0.1); found (bos.adt.prof,5.3.0.53). Passed
Checking for IY58143(bos.alt_disk_install.rte,5.3.0.1); found (bos.alt_disk_install.rte,5.3.0.51). Passed
Checking for IY58143(bos.cifs_fs.rte,5.3.0.1); found (bos.cifs_fs.rte,5.3.0.50). Passed
Checking for IY58143(bos.diag.com,5.3.0.1); found (bos.diag.com,5.3.0.51). Passed
Checking for IY58143(bos.perf.libperfstat,5.3.0.1); found (bos.perf.libperfstat,5.3.0.50). Passed
Checking for IY58143(bos.perf.perfstat,5.3.0.1); found (bos.perf.perfstat,5.3.0.50). Passed
Checking for IY58143(bos.perf.tools,5.3.0.1); found (bos.perf.tools,5.3.0.52). Passed
Checking for IY58143(bos.rte.boot,5.3.0.1); found (bos.rte.boot,5.3.0.51). Passed
Checking for IY58143(bos.rte.archive,5.3.0.1); found (bos.rte.archive,5.3.0.51). Passed
Checking for IY58143(bos.rte.bind_cmds,5.3.0.1); found (bos.rte.bind_cmds,5.3.0.51). Passed
Checking for IY58143(bos.rte.control,5.3.0.1); found (bos.rte.control,5.3.0.50). Passed
Checking for IY58143(bos.rte.filesystem,5.3.0.1); found (bos.rte.filesystem,5.3.0.51). Passed
Checking for IY58143(bos.rte.install,5.3.0.1); found (bos.rte.install,5.3.0.54). Passed
Checking for IY58143(bos.rte.libc,5.3.0.1); found (bos.rte.libc,5.3.0.53). Passed
Checking for IY58143(bos.rte.lvm,5.3.0.1); found (bos.rte.lvm,5.3.0.53). Passed
Checking for IY58143(bos.rte.man,5.3.0.1); found (bos.rte.man,5.3.0.50). Passed
Checking for IY58143(bos.rte.methods,5.3.0.1); found (bos.rte.methods,5.3.0.51). Passed
Checking for IY58143(bos.rte.security,5.3.0.1); found (bos.rte.security,5.3.0.53). Passed
Checking for IY58143(bos.rte.serv_aid,5.3.0.1); found (bos.rte.serv_aid,5.3.0.52). Passed
Check complete. The overall result of this check is: Passed
=======================================================================
Validating ORACLE_BASE location (if set) ...
Check complete. The overall result of this check is: Passed
=======================================================================
Checking for proper system clean-up....
Check complete. The overall result of this check is: Passed
=======================================================================
Checking for Oracle Home incompatibilities ....
Actual Result: NEW_HOME
Check complete. The overall result of this check is: Passed
=======================================================================
Checking Oracle Clusterware version ...
Check complete. The overall result of this check is: Passed
=======================================================================


Available Product Components :

Select the product components of the Oracle 11g ASM software that you want to install.

Then click Next ...

Privileged Operating System Groups :

Verify the UNIX primary group name of the user which controls the installation of the Oracle 11g ASM software (use the unix command "id" to find out), and set the Privileged Operating System Groups to the values found.

In our example, this must be :
- dba for the Database Administrator (OSDBA) group
- oper for the Database Operator (OSOPER) group
- asm for the administrator (SYSASM) group (primary group of the unix asm user)

Then click Next ...

Create Database :

Choose "Install database Software Only" rather than "Configure Automatic Storage Management (ASM)" : we want to install only the ASM software in its own ORACLE_HOME at this stage.

Then click Next ...


Summary :

The Summary screen will be presented. Check the Cluster Nodes and Remote Nodes lists. The OUI will install the Oracle 11g ASM software onto the local node, and then copy it to the other selected nodes.

Then click Install ...

Install :

The Oracle Universal Installer will proceed with the installation on the first node, then automatically copy the code onto the other selected nodes.

It may take time to pass over 50%; don't be afraid, the installation is progressing, running a test script on the remote nodes. Just wait for the next screen ...

At this stage, you could hit a message about "Failed Attach Home" !!!

If a similar screen message appears, just run the specified command on the specified node as the asm user. Do execute the script as below (check and adapt the script to your message). When done, click on OK :

From node2 :
{node2:root}/oracle/asm/11.1.0 # su asm
{node2:asm}/oracle/asm -> /oracle/asm/11.1.0/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/oracle/asm/11.1.0 ORACLE_HOME_NAME=OraAsm11g_home1 CLUSTER_NODES=node1,node2 INVENTORY_LOCATION=/oracle/oraInventory LOCAL_NODE=node2
Starting Oracle Universal Installer
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'AttachHome' was successful.
{node2:asm}/oracle/asm/11.1.0/bin #


Execute Configuration Scripts :

/oracle/asm/11.1.0/root.sh is to be executed on each node as root user, on node1, then on node2.

When done, come back to this screen and click on OK !!!


On node1 :
{node1:root}/oracle/asm/11.1.0 # ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= asm
ORACLE_HOME= /oracle/asm/11.1.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
{node1:root}/oracle/asm/11.1.0 #

On node2 :
{node2:root}/oracle/asm/11.1.0 # ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= asm
ORACLE_HOME= /oracle/asm/11.1.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
{node2:root}/oracle/asm/11.1.0 #

End of Installation :

Click on Installed Products to check with the oraInventory that ASM is installed on each node.


Just click on Close, then Exit ...


13.2 Update the ASM unix user .profile


To be done on each node for asm (in our case) or oracle unix user.
vi $HOME/.profile file in asms home directory. Add the entries in bold blue color
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
export PATH
if [ -s "$MAIL" ]           # This is at Shell startup. In normal
then echo "$MAILMSG"        # operation, the Shell checks
fi                          # periodically.
ENV=$HOME/.kshrc
export ENV
#The following line is added by License Use Management installation
export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin
export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORACLE_CRS_HOME=$CRS_HOME
export ORACLE_HOME=$ORACLE_BASE/asm/11.1.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib:$ORACLE_HOME/lib32:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_SID=+ASM1
if [ -t 0 ]; then
stty intr ^C
fi
On node2, ORACLE_SID will be set as below :
...
export ORACLE_SID=+ASM2

Disconnect the asm user session, and reconnect to load the modified $HOME/.profile.
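Once reconnected, the environment can be sanity-checked before going further. This is only a sketch: the example values below are the ones used on node1 in this cookbook; substitute your own if your layout differs.

```shell
# Example values from the .profile above (node1); substitute your own.
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/asm/11.1.0
export CRS_HOME=/crs/11.1.0
export ORACLE_SID=+ASM1

# Verify every variable the later steps rely on is set and non-empty.
for v in ORACLE_BASE ORACLE_HOME CRS_HOME ORACLE_SID; do
  eval "val=\$$v"
  if [ -n "$val" ]; then
    echo "OK: $v=$val"
  else
    echo "MISSING: $v"
  fi
done
```

On node2 the same check applies with ORACLE_SID=+ASM2.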


13.3 Checking Oracle Clusterware

At this stage, Oracle ASM software is installed in its own ORACLE_HOME directory, which we'll refer to as ASM_HOME (/oracle/asm/11.1.0).
Now, we need to :
- Create and configure a Listener on each node
- Create and configure an ASM instance on each node
- Create and configure an ASM Diskgroup


13.4 Configure default node listeners

In order to configure ASM instances, and then create ASM Disk Groups, we need to configure and start a default
listener on each node.
The listener on each node must be :
- Configured with ORACLE_HOME set to /oracle/asm/11.1.0
- Listening on the static IP and the VIP
- Using the TCP and IPC protocols

Starting the listener configuration !!!


{node1:asm}/oracle/asm/11.1.0/bin # id
uid=502(asm) gid=500(oinstall)
groups=1(staff),501(crs),502(dba),503(oper),504(asm),505(asmdba)
{node1:asm}/oracle/asm/11.1.0/bin # pwd
/oracle/asm/11.1.0/bin
{node1:asm}/oracle/asm/11.1.0/bin # ./netca


Oracle Net Configuration Assistant, Configuration :
- Select Cluster Configuration as the type of Oracle Net Services configuration.
- Click Next

Oracle Net Configuration Assistant, Real Application Clusters, Active Nodes :
- Select All nodes to configure.
- Click Next

Oracle Net Configuration Assistant, Welcome :
- Select Listener Configuration
- Click Next


Oracle Net Configuration Assistant, Listener Configuration, Listener :
- Select Add
- Click Next

Oracle Net Configuration Assistant, Listener Name :
- Keep the default name LISTENER.
- Click Next

Oracle Net Configuration Assistant, Listener Configuration, Select Protocols :
- Select the TCP and IPC protocols
- Click Next


Oracle Net Configuration Assistant, Listener Configuration, TCP/IP Protocol :
- Use the standard port number 1521, or another port for your project.
- Port 1521 means automatic registration of ASM or database instances with the listeners.
- Click Next

Using another port means that configuring LOCAL and REMOTE listeners will be MANDATORY.
Later, if you need to configure more than one listener, for example one for each database in the cluster, you should :
- Have a listener for ASM on each node :
  o Keep a default listener in the $ASM_HOME (/oracle/asm/11.1.0).
  o Use a port different from 1521 to avoid automatic registration of all the resources on this listener.
  o Configure local and remote listeners for the ASM instances.
- Have listener(s) for the RDBMS on each node :
  o Set up your extra listeners out of the $ASM_HOME, in their own ORACLE_HOME.
  o Use a port different from 1521 and different from the one used by the ASM listeners.
  o Configure local and remote listeners for each database instance.

Extra listeners should be configured from their own $ORACLE_HOME directory, and not from the
$ASM_HOME directory. So creation of extra listeners will have to be done when the RDBMS is installed ...
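As a sketch of the layout described above, a dedicated non-default-port listener entry might look like the following. The listener name LISTENER_ASM_NODE1 and port 1525 are hypothetical examples, not values used elsewhere in this cookbook; the demo writes to /tmp, while a real entry lives in $ORACLE_HOME/network/admin.

```shell
# Write a hypothetical extra-listener entry (demo target is /tmp; the
# real file lives in $ORACLE_HOME/network/admin on each node).
cat > /tmp/listener_asm_example.ora <<'EOF'
LISTENER_ASM_NODE1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1525)(IP = FIRST))
    )
  )
EOF
# Because the port is not 1521, instances will NOT auto-register with it:
# the LOCAL_LISTENER parameter must point at this address explicitly.
grep "PORT = 1525" /tmp/listener_asm_example.ora
```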

Oracle Net Configuration Assistant, Listener Configuration, IPC Protocol :
- Enter an IPC key value, EXTPROC for example.
- Click Next


Oracle Net Configuration Assistant, Listener Configuration, More Listeners ? :
- Select No
- Click Next

THEN, on the telnet session, we can see :

Oracle Net Services Configuration:
Configuring Listener:LISTENER
node1...
node2...
Listener configuration complete.

Oracle Net Configuration Assistant, Listener Configuration, Listener Configuration Done :
- Listener configuration complete !
- Click Next

Oracle Net Configuration Assistant, Welcome :
- Click on Finish to exit !!!

The following message will appear on exit :
Oracle Net Services configuration successful. The exit code is 0


Checking if the listeners are registered in Oracle Clusterware :
- 1 listener on each node must be registered !!!

To stop the default node listener on a specified node (node1 for example) :

{node1:asm}/oracle/asm/11.1.0/bin # srvctl stop listener -n node1
{node1:asm}/oracle/asm/11.1.0/bin # crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
ora....E1.lsnr application OFFLINE   OFFLINE
ora.node1.gsd  application ONLINE    ONLINE    node1
ora.node1.ons  application ONLINE    ONLINE    node1
ora.node1.vip  application ONLINE    ONLINE    node1
ora....E2.lsnr application ONLINE    ONLINE    node2
ora.node2.gsd  application ONLINE    ONLINE    node2
ora.node2.ons  application ONLINE    ONLINE    node2
ora.node2.vip  application ONLINE    ONLINE    node2
{node1:asm}/oracle/asm/11.1.0/bin #

To start the default node listener on a specified node (node1 for example) :

{node1:asm}/oracle/asm/11.1.0/bin # srvctl start listener -n node1
{node1:asm}/oracle/asm/11.1.0/bin # crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
ora....E1.lsnr application ONLINE    ONLINE    node1
ora.node1.gsd  application ONLINE    ONLINE    node1
ora.node1.ons  application ONLINE    ONLINE    node1
ora.node1.vip  application ONLINE    ONLINE    node1
ora....E2.lsnr application ONLINE    ONLINE    node2
ora.node2.gsd  application ONLINE    ONLINE    node2
ora.node2.ons  application ONLINE    ONLINE    node2
ora.node2.vip  application ONLINE    ONLINE    node2
{node1:asm}/oracle/asm/11.1.0/bin #
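The crs_stat -t output can also be checked non-interactively. The awk filter below is a small helper sketch (not part of the Oracle tooling) that flags any resource whose State column is not ONLINE; it is shown here against a captured sample, but in practice you would pipe `crs_stat -t` into it.

```shell
# Flag any Clusterware resource not ONLINE (State is the 4th column).
# The here-document mimics crs_stat -t output; on a real cluster use:
#   crs_stat -t | awk 'NR>2 { if ($4 != "ONLINE") ... }'
cat <<'EOF' | awk 'NR>2 { if ($4 != "ONLINE") { print "NOT ONLINE:", $1; bad=1 } } END { exit bad }'
Name           Type        Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application ONLINE    ONLINE    node1
ora.node1.ons  application ONLINE    ONLINE    node1
ora.node1.vip  application ONLINE    ONLINE    node1
EOF
rc=$?
echo "check exit code: $rc"
```

An exit code of 0 means every resource in the listing was ONLINE.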

[email protected]

43 of 393

To get information about the listener resource on one node, from the Oracle Clusterware Registry (node1 for example) :
{node1:asm}/oracle/asm/11.1.0/bin # crs_stat |grep lsnr


NAME=ora.node1.LISTENER_NODE1.lsnr
NAME=ora.node2.LISTENER_NODE2.lsnr
{node1:asm}/oracle/asm/11.1.0/bin #
{node1:asm}/oracle/asm/11.1.0/bin # crs_stat -p ora.node1.LISTENER_NODE1.lsnr
NAME=ora.node1.LISTENER_NODE1.lsnr
TYPE=application
ACTION_SCRIPT=/oracle/asm/11.1.0/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=600
DESCRIPTION=CRS application for listener on node
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=node1
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=ora.node1.vip
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=600
STOP_TIMEOUT=600
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
{node1:asm}/oracle/asm/11.1.0/bin #
{node1:asm}/oracle/asm/11.1.0/bin # crs_stat ora.node1.LISTENER_NODE1.lsnr
NAME=ora.node1.LISTENER_NODE1.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on node1
{node1:asm}/oracle/asm/11.1.0/bin #
{node1:asm}/oracle/asm/11.1.0/bin # crs_stat -t ora.node1.LISTENER_NODE1.lsnr
Name           Type        Target    State     Host
------------------------------------------------------------
ora....E1.lsnr application ONLINE    ONLINE    node1
{node1:asm}/oracle/asm/11.1.0/bin #
{node1:asm}/oracle/asm/11.1.0/bin # crs_stat -ls ora.node1.LISTENER_NODE1.lsnr
Name           Owner   Primary PrivGrp   Permission
-----------------------------------------------------------------
ora....E1.lsnr asm     oinstall          rwxrwxr--
{node1:asm}/oracle/asm/11.1.0/bin #


{node1:asm}/oracle/asm/11.1.0/bin # srvctl config listener -n node1


node1 LISTENER_NODE1
{node1:asm}/oracle/asm/11.1.0/bin #

Check that the entry is OK, showing LISTENER_NODE1, with the VIP and the static IP from node1 :

{node1:asm}/oracle/asm/11.1.0/network/admin # cat listener.ora
# listener.ora.node1 Network Configuration File:
# /oracle/asm/11.1.0/network/admin/listener.ora.node1
# Generated by Oracle configuration tools.
LISTENER_NODE1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.25.81)(PORT = 1521)(IP = FIRST))
    )
  )
{node1:asm}/oracle/asm/11.1.0/network/admin #

Check that the entry is OK, showing LISTENER_NODE2, with the VIP and the static IP from node2.
It may happen that on remote nodes the listener.ora file does not get properly configured, and shows the same content as the one from node1. Check on the extra nodes, if any :

{node2:asm}/oracle/asm/11.1.0/network/admin # cat listener.ora
# listener.ora.node2 Network Configuration File:
# /oracle/asm/11.1.0/network/admin/listener.ora.node2
# Generated by Oracle configuration tools.
LISTENER_NODE2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.25.82)(PORT = 1521)(IP = FIRST))
    )
  )
{node2:asm}/oracle/asm/11.1.0/network/admin #
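The per-node check described above can be scripted. The snippet below demonstrates the idea against a mock file (the real one is $ORACLE_HOME/network/admin/listener.ora on each node): each node's listener.ora must reference that node's own VIP.

```shell
# Mock of node2's listener.ora, for demonstration purposes only.
mkdir -p /tmp/lsnr_demo
cat > /tmp/lsnr_demo/listener.ora <<'EOF'
LISTENER_NODE2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.25.82)(PORT = 1521)(IP = FIRST))
    )
  )
EOF
NODE=node2   # on a real system: NODE=$(hostname)
if grep -q "HOST = ${NODE}-vip" /tmp/lsnr_demo/listener.ora; then
  echo "${NODE}: listener.ora references its own VIP"
else
  echo "${NODE}: WRONG VIP in listener.ora -- regenerate it with netca"
fi
```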

To get a status from the listener :


export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm/11.1.0/bin # ./lsnrctl status listener
LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 13-FEB-2008 21:52:26
Copyright (c) 1991, 2007, Oracle.

All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_NODE1
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date                13-FEB-2008 21:39:08
Uptime                    0 days 0 hr. 13 min. 20 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml


Listening Endpoints Summary...


(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
The listener supports no services
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/bin #


To get a status of the services managed by the listener :

{node1:asm}/oracle/asm/11.1.0/bin # ./lsnrctl services listener


LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 Production on 13-FEB-2008 21:53:49
Copyright (c) 1991, 2007, Oracle.

All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The listener supports no services
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/bin #

To get help on listener commands :

{node1:asm}/oracle/asm/11.1.0/bin # ./lsnrctl help


LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on
13-FEB-2008 21:55:05
Copyright (c) 1991, 2007, Oracle.

All rights reserved.

The following operations are available
An asterisk (*) denotes a modifier or extended command:

start            stop             status
services         version          reload
save_config      trace            spawn
change_password  quit             exit
set*             show*
{node1:asm}/oracle/asm/11.1.0/bin #

To get the version release of the listener software :


{node1:asm}/oracle/asm/11.1.0/bin # ./lsnrctl version
LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 13-FEB-2008
21:58:10
Copyright (c) 1991, 2007, Oracle.

All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
TNS for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Unix Domain Socket IPC NT Protocol Adaptor for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Oracle Bequeath NT Protocol Adapter for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/bin #


13.5 Create ASM Instances


Create, configure and start the ASM instances

And create the ASM diskgroups.


13.5.1 Thru DBCA

As asm unix user, from one node :


{node1:asm}/oracle/asm # export DISPLAY=node1:0.0
{node1:asm}/oracle/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm # cd $ORACLE_HOME/bin
{node1:asm}/oracle/asm/11.1.0/bin # dbca &

Database Configuration Assistant : Welcome
- Select Real Application Clusters Database
- THEN click Next for next screen

Database Configuration Assistant : Operations
- Select Configure Automatic Storage Management
- THEN click Next for next screen

Database Configuration Assistant : Node Selection
- Click on Select All (node1 and node2 in our case).
- THEN click Next for next screen


Database Configuration Assistant : Create ASM Instance
On this screen, you can :
- Modify/set the ASM instance parameters.
- Enter the ASM instance password for the SYS user (the user administering the ASM instances).
- Choose the type of ASM instance parameter file, and specify the file name or disk location.
More details on the next pages.


About ASM Parameters :

ASM_DISKGROUPS *
Description: This value is the list of the Disk Group names to be mounted by the ASM at startup or when ALTER DISKGROUP ALL MOUNT
command is used.
ASM_DISKSTRING *
Description: A comma separated list of paths used by the ASM to limit the set of disks considered for discovery when a new disk is added to a
Disk Group. The disk string should match the path of the disk, not the directory containing the disk. For example: /dev/rdsk/*.
ASM_POWER_LIMIT *
Description: This value is the maximum power on the ASM instance for disk rebalancing.
Range of Values: 1 to 11
Default Value: 1
CLUSTER_DATABASE
Description : Set CLUSTER_DATABASE to TRUE to enable Real Application Clusters option.
Range of Values: TRUE | FALSE
Default Value : FALSE
DB_UNIQUE_NAME
DIAGNOSTIC_DEST
Description: Replaces USER_DUMP_DEST, CORE_DUMP_DEST and BACKGROUND_DUMP_DEST.
Specifies the pathname of the directory where the server writes debugging trace files on behalf of a user process,
where trace files are written for the background processes (LGWR, DBWn, and so on) during Oracle operations, and
where the database alert file, which logs significant events and messages, is located. It also defines the core dump
location (for UNIX).


INSTANCE_TYPE
Description : Specifies whether the instance is a database (RDBMS) instance or an ASM instance; it must be set to ASM for ASM instances.
LARGE_POOL_SIZE
Description : Specifies the size of the large pool allocation heap, which is used by Shared Server for session memory, parallel execution for
message buffers, and RMAN backup and recovery for disk I/O buffers.
Range of Values: 600K (minimum); >= 20000M (maximum is operating system specific).
Default Value : 0, unless parallel execution or DBWR_IO_SLAVES are configured
LOCAL_LISTENER
Description : An Oracle Net address list which identifies database instances on the same machine as the Oracle Net listeners. Each instance
and dispatcher registers with the listener to enable client connections. This parameter overrides the MTS_LISTENER_ADDRESS and
MTS_MULTIPLE_LISTENERS parameters that were obsoleted as of 8.1.
Range of Values: A valid Oracle Net address list.
Default Value : (ADDRESS_LIST=(Address=(Protocol=TCP)(Host=localhost)(Port=1521)) (Address=(Protocol=IPC)(Key=DBname)))
SHARED_POOL_SIZE
Description : Specifies the size of the shared pool in bytes. The shared pool contains objects such as shared cursors, stored procedures,
control structures, and Parallel Execution message buffers. Larger values can improve performance in multi-user systems.
Range of Values: 300 Kbytes - operating system dependent.
Default Value : If 64 bit, 64MB, else 16MB
SPFILE

Description: Specifies the name of the current server parameter file in use.
Range of Values: static parameter
Default Value: The SPFILE parameter can be defined in a client side PFILE to indicate the name of the server parameter file to use. When the
default server parameter file is used by the server, the value of SPFILE will be internally set by the server.

INSTANCE_NUMBER
Description : A Cluster Database parameter that assigns a unique number for mapping the instance to one free list group owned by a
database object created with storage parameter FREELIST GROUPS. Use this value in the INSTANCE clause of the ALTER TABLE ...
ALLOCATE EXTENT statement to dynamically allocate extents to this instance.
Range of Values: 1 to MAX_INSTANCES (specified at database creation).
Default Value : Lowest available number (depends on instance startup order and on the INSTANCE_NUMBER values assigned to other
instances)

Please read the following Metalink note:
Subject: SGA sizing for ASM instances and databases that use ASM, Doc ID: Note:282777.1


Choose the type of parameter file that you would like to use for the new ASM instances :

OPTION 1 :
Create an initialization parameter file (IFILE), and specify the filename.

OPTION 2 :
Select Create server parameter file (SPFILE), and replace {ORACLE_HOME}/dbs/spfile+ASM.ora with /dev/ASMspf_disk.

THEN Click on Next


Database Configuration Assistant :
Click on OK; DBCA will create an ASM instance on each node.

Database Configuration Assistant :
DBCA is creating the ASM instances.

When done, check the content of the init+ASM<n>.ora file on each node.
You should also find a +ASM directory in the $ORACLE_BASE.

{node1:asm}/oracle/asm/11.1.0/dbs # cat init+ASM1.ora


SPFILE='/dev/ASMspf_disk'
{node1:asm}/oracle/asm/11.1.0/dbs #
{node2:asm}/oracle/asm/11.1.0/dbs # cat init+ASM2.ora
SPFILE='/dev/ASMspf_disk'
{node2:asm}/oracle/asm/11.1.0/dbs #
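These two one-line pointer files can also be created or repaired by hand. The sketch below writes them to a scratch directory for illustration; on the real systems the files belong in $ORACLE_HOME/dbs on each node, and /dev/ASMspf_disk is the raw device used for the SPFILE in this cookbook.

```shell
# Create the one-line init+ASM<n>.ora pointer files (demo directory).
DBS=/tmp/demo_dbs            # real location: $ORACLE_HOME/dbs
mkdir -p "$DBS"
for n in 1 2; do
  echo "SPFILE='/dev/ASMspf_disk'" > "$DBS/init+ASM$n.ora"
done
# Verify both files point at the shared SPFILE device.
grep -h "SPFILE" "$DBS"/init+ASM?.ora
```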

Database Configuration
Assistant :
ASM instances are created,
Click on Create New to create
an ASM Disk Group.


- If you don't see any candidate disks, click on Change Disk Discovery Path and change it to the right path.
- If you still don't see any candidate disks, check the disks preparation.


About ASM EXTERNAL Redundancy
Redundancy is handled by an external mechanism (the storage subsystem), not by ASM.

Thru SQLPlus on node1, it will be the following syntax :

su - asm
export ORACLE_SID=+ASM1
export ORACLE_HOME=/oracle/asm/11.1.0
sqlplus /nolog
connect / as sysdba

Then execute the following SQL :

CREATE DISKGROUP DATA_DISKGROUP
EXTERNAL REDUNDANCY
DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3';


About ASM NORMAL Redundancy without Failure groups
NORMAL redundancy stands for 2 copies of each extent (1 original, and 1 mirror).
Redundancy is handled by ASM. Extents are mirrored between the disks, ensuring that the original and the copy are not on the same disk.
If one copy is lost, ASM recreates it, making sure that 2 copies are always available on different disks.
When no failure groups are specified, each disk forms its own failure group.

Thru SQLPlus on node1, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
NORMAL REDUNDANCY
DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3';

[email protected]

57 of 393

About ASM HIGH Redundancy without Failure groups
HIGH redundancy stands for 3 copies of each extent (1 original, and 2 mirrors).
Redundancy is handled by ASM. Extents are mirrored between the disks, ensuring that the original and the copies are not on the same disk.
If one copy is lost, ASM recreates it, making sure that 3 copies are always available on different disks.

Thru SQLPlus on node1, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
HIGH REDUNDANCY
DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3';

[email protected]

58 of 393

About ASM NORMAL Redundancy with 2 Failure groups
NORMAL redundancy stands for 2 copies of each extent (1 original, and 1 mirror).
Redundancy is handled by ASM. Extents are mirrored between the disks, ensuring that the original and the copy are not on the same disk.
If one copy is lost, ASM recreates it, making sure that 2 copies are always available on different disks.
Failure groups ensure that the original and the copy are placed in different groups within the diskgroup; if the 2 failure groups are made of disks from 2 different storage subsystems, the original and the copy will be on different subsystems.

Thru SQLPlus on node1, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
NORMAL REDUNDANCY
FAILGROUP GROUP1 DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3'
FAILGROUP GROUP2 DISK
'/dev/ASM_disk4',
'/dev/ASM_disk5',
'/dev/ASM_disk6';
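A quick back-of-envelope check of usable capacity for such a layout: with NORMAL redundancy every extent exists twice, so usable space is roughly half the raw space (slightly less once ASM metadata is accounted for). The 4096 MB disk size below matches the disks used later in this chapter; the arithmetic is an estimate, not an exact ASM figure.

```shell
# 6 disks of 4096 MB each, NORMAL redundancy (2 copies of every extent).
DISKS=6
SIZE_MB=4096
RAW_MB=$((DISKS * SIZE_MB))
USABLE_MB=$((RAW_MB / 2))   # halved by 2-way mirroring
echo "raw=${RAW_MB} MB, usable (approx)=${USABLE_MB} MB"
```

With HIGH redundancy the divisor becomes 3 instead of 2.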

[email protected]

59 of 393

About ASM HIGH Redundancy with 3 Failure groups
HIGH redundancy stands for 3 copies of each extent (1 original, and 2 mirrors).
Redundancy is handled by ASM. Extents are mirrored between the disks, ensuring that the original and the copies are not on the same disk.
If one copy is lost, ASM recreates it, making sure that 3 copies are always available on different disks.
Failure groups ensure that the original and the copies are placed in different groups within the diskgroup; if the 3 failure groups are made of disks from 3 different storage subsystems, each copy will be on a different subsystem.

Thru SQLPlus on node1, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
HIGH REDUNDANCY
FAILGROUP GROUP1 DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2'
FAILGROUP GROUP2 DISK
'/dev/ASM_disk3',
'/dev/ASM_disk4'
FAILGROUP GROUP3 DISK
'/dev/ASM_disk5',
'/dev/ASM_disk6';

[email protected]

60 of 393

About Setting or NOT an ASM Name to disks

Syntax with DISK naming :
In this case, we force the name of each ASM disk to ease administration.

Thru SQLPlus, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
NORMAL REDUNDANCY
DISK
'/dev/ASM_disk1' NAME ASM_disk1,
'/dev/ASM_disk2' NAME ASM_disk2,
'/dev/ASM_disk3' NAME ASM_disk3;

If a disk is dropped, and the same disk is to be re-added to the same or a new Disk Group, the ASM disk name has to be different from the previous one !!!

Syntax with automatic DISK naming :
The ASM instance automatically generates a unique ASM disk name for each disk. If a disk is dropped and added again to the same or a new diskgroup, a new ASM disk name will be generated !!!!

Thru SQLPlus, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
NORMAL REDUNDANCY
FAILGROUP GROUP1 DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3';

[email protected]

61 of 393

For cookbook purposes, we will create an ASM DISKGROUP called DATA_DG1 with 2 Failure Groups :
- GROUP1 with 2 disks :
  o /dev/ASM_disk1
  o /dev/ASM_disk2
- GROUP2 with 2 disks :
  o /dev/ASM_disk4
  o /dev/ASM_disk5

Click on OK, then the ASM disk group creation will start.

Thru SQLPlus, it will be the following syntax :

CREATE DISKGROUP DATA_DG1
NORMAL REDUNDANCY
FAILGROUP GROUP1 DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2'
FAILGROUP GROUP2 DISK
'/dev/ASM_disk4',
'/dev/ASM_disk5';
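To run the same creation manually rather than through DBCA, the statement can be staged in a script and fed to SQL*Plus. The sketch below only writes and displays the script; actually running it requires the ASM environment on node1 (ORACLE_SID=+ASM1) and a `sqlplus / as sysdba` session, which is not executed here.

```shell
# Stage the diskgroup-creation SQL (same statement DBCA issues).
cat > /tmp/create_data_dg1.sql <<'EOF'
CREATE DISKGROUP DATA_DG1
NORMAL REDUNDANCY
FAILGROUP GROUP1 DISK
  '/dev/ASM_disk1',
  '/dev/ASM_disk2'
FAILGROUP GROUP2 DISK
  '/dev/ASM_disk4',
  '/dev/ASM_disk5';
EOF
# On node1, as the asm user, one would then run (not executed here):
#   sqlplus / as sysdba @/tmp/create_data_dg1.sql
grep -c FAILGROUP /tmp/create_data_dg1.sql
```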

ASM alert log history with command executed by DBCA :


Thu Feb 14 11:53:42 2008
SQL> CREATE DISKGROUP DATA_DG1 Normal REDUNDANCY FAILGROUP GROUP2 DISK '/dev/ASM_disk4' SIZE 4096M ,
'/dev/ASM_disk5' SIZE 4096M FAILGROUP GROUP1 DISK '/dev/ASM_disk1' SIZE 4096M ,
'/dev/ASM_disk2' SIZE 4096M
NOTE: Assigning number (1,0) to disk (/dev/ASM_disk4)
NOTE: Assigning number (1,1) to disk (/dev/ASM_disk5)
NOTE: Assigning number (1,2) to disk (/dev/ASM_disk1)
NOTE: Assigning number (1,3) to disk (/dev/ASM_disk2)
NOTE: initializing header on grp 1 disk DATA_DG1_0000
NOTE: initializing header on grp 1 disk DATA_DG1_0001
NOTE: initializing header on grp 1 disk DATA_DG1_0002
NOTE: initializing header on grp 1 disk DATA_DG1_0003
NOTE: initiating PST update: grp = 1
kfdp_update(): 1
Thu Feb 14 11:53:42 2008
kfdp_updateBg(): 1
NOTE: group DATA_DG1: initial PST location: disk 0000 (PST copy 0)
NOTE: group DATA_DG1: initial PST location: disk 0002 (PST copy 1)
NOTE: PST update grp = 1 completed successfully
NOTE: cache registered group DATA_DG1 number=1 incarn=0x966b9ace
NOTE: cache opening disk 0 of grp 1: DATA_DG1_0000 path:/dev/ASM_disk4
NOTE: cache opening disk 1 of grp 1: DATA_DG1_0001 path:/dev/ASM_disk5
NOTE: cache opening disk 2 of grp 1: DATA_DG1_0002 path:/dev/ASM_disk1
NOTE: cache opening disk 3 of grp 1: DATA_DG1_0003 path:/dev/ASM_disk2
Thu Feb 14 11:53:42 2008


* allocate domain 1, invalid = TRUE


kjbdomatt send to node 1
Thu Feb 14 11:53:42 2008
NOTE: attached to recovery domain 1
NOTE: cache creating group 1/0x966B9ACE (DATA_DG1)
NOTE: cache mounting group 1/0x966B9ACE (DATA_DG1) succeeded
NOTE: allocating F1X0 on grp 1 disk DATA_DG1_0000
NOTE: allocating F1X0 on grp 1 disk DATA_DG1_0002
NOTE: diskgroup must now be re-mounted prior to first use
NOTE: cache dismounting group 1/0x966B9ACE (DATA_DG1)
NOTE: lgwr not being msg'd to dismount
kjbdomdet send to node 1
detach from dom 1, sending detach message to node 1
freeing rdom 1
NOTE: detached from domain 1
NOTE: cache dismounted group 1/0x966B9ACE (DATA_DG1)
kfdp_dismount(): 2
kfdp_dismountBg(): 2
kfdp_dismount(): 3
kfdp_dismountBg(): 3
NOTE: De-assigning number (1,0) from disk (/dev/ASM_disk4)
NOTE: De-assigning number (1,1) from disk (/dev/ASM_disk5)
NOTE: De-assigning number (1,2) from disk (/dev/ASM_disk1)
NOTE: De-assigning number (1,3) from disk (/dev/ASM_disk2)
SUCCESS: diskgroup DATA_DG1 was created
NOTE: cache registered group DATA_DG1 number=1 incarn=0xba2b9ad0
NOTE: Assigning number (1,3) to disk (/dev/ASM_disk2)
NOTE: Assigning number (1,0) to disk (/dev/ASM_disk4)
NOTE: Assigning number (1,1) to disk (/dev/ASM_disk5)
NOTE: Assigning number (1,2) to disk (/dev/ASM_disk1)
NOTE: start heartbeating (grp 1)
Thu Feb 14 11:53:52 2008
kfdp_query(): 6
Thu Feb 14 11:53:52 2008
kfdp_queryBg(): 6
NOTE: cache opening disk 0 of grp 1: DATA_DG1_0000 path:/dev/ASM_disk4
NOTE: F1X0 found on disk 0 fcn 0.0
NOTE: cache opening disk 1 of grp 1: DATA_DG1_0001 path:/dev/ASM_disk5
NOTE: cache opening disk 2 of grp 1: DATA_DG1_0002 path:/dev/ASM_disk1
NOTE: F1X0 found on disk 2 fcn 0.0
NOTE: cache opening disk 3 of grp 1: DATA_DG1_0003 path:/dev/ASM_disk2
NOTE: cache mounting (first) group 1/0xBA2B9AD0 (DATA_DG1)
* allocate domain 1, invalid = TRUE
kjbdomatt send to node 1
NOTE: attached to recovery domain 1
NOTE: cache recovered group 1 to fcn 0.0
Thu Feb 14 11:53:53 2008
NOTE: opening chunk 1 at fcn 0.0 ABA
NOTE: seq=2 blk=0
NOTE: cache mounting group 1/0xBA2B9AD0 (DATA_DG1) succeeded
Thu Feb 14 11:53:53 2008
kfdp_query(): 7
kfdp_queryBg(): 7
NOTE: Instance updated compatible.asm to 10.1.0.0.0 for grp 1
SUCCESS: diskgroup DATA_DG1 was mounted
SUCCESS: CREATE DISKGROUP DATA_DG1 Normal REDUNDANCY FAILGROUP GROUP2 DISK '/dev/ASM_disk4' SIZE 4096M ,
'/dev/ASM_disk5' SIZE 4096M FAILGROUP GROUP1 DISK '/dev/ASM_disk1' SIZE 4096M ,
'/dev/ASM_disk2' SIZE 4096M
ALTER SYSTEM SET asm_diskgroups='DATA_DG1' SCOPE=BOTH SID='+ASM1';
ALTER SYSTEM SET asm_diskgroups='DATA_DG1' SCOPE=BOTH SID='+ASM2';
NOTE: enlarging ACD for group 1/0xba2b9ad0 (DATA_DG1)
Thu Feb 14 11:54:05 2008
SUCCESS: ACD enlarged for group 1/0xba2b9ad0 (DATA_DG1)
Thu Feb 14 12:25:39 2008


We now have an ASM disk group called DATA_DG1, with 2 failure groups :
- GROUP1 with :
  o /dev/ASM_disk1
  o /dev/ASM_disk2
- GROUP2 with :
  o /dev/ASM_disk4
  o /dev/ASM_disk5

Database Configuration Assistant : ASM Disk Groups
The ASM disk group is then displayed with information such as :
- Disk Group Name
- Size (MB), total capacity
- Free (MB), available space
- Redundancy type
- Mount status

From the menu, it's possible to :
- Create a new ASM disk group
- Add disk(s) to an existing ASM disk group
- Manage ASM templates
- Mount a selected ASM disk group
- Mount all ASM disk groups

From this screen, if you want to create a new ASM diskgroup, click on Create New, and ONLY candidate disks will be displayed :
/dev/ASM_disk1, /dev/ASM_disk2, /dev/ASM_disk4 and /dev/ASM_disk5 are no longer Candidate !!!

If you select Show All, all disks will be shown :

- /dev/ASM_disk1 and /dev/ASM_disk2 as Member (members of an ASM DiskGroup, within failure group GROUP1) :
  ASM disk name DATA_DG1_0002 has been assigned by the ASM instance to device /dev/ASM_disk1
  ASM disk name DATA_DG1_0003 has been assigned by the ASM instance to device /dev/ASM_disk2

- /dev/ASM_disk4 and /dev/ASM_disk5 as Member (members of an ASM DiskGroup, within failure group GROUP2) :
  ASM disk name DATA_DG1_0000 has been assigned by the ASM instance to device /dev/ASM_disk4
  ASM disk name DATA_DG1_0001 has been assigned by the ASM instance to device /dev/ASM_disk5

- /dev/ASM_disk3, /dev/ASM_disk6 and /dev/ASM_disk7 as Candidate


From this screen, if you want to add disks to an ASM DiskGroup, select the DiskGroup DATA_DG1 and click on Add Disks.

- ONLY Candidate disks will be displayed, and the ASM REDUNDANCY type can not be modified !!!
- Add in GROUP1 the disk :
  o /dev/ASM_disk3
- Add in GROUP2 the disk :
  o /dev/ASM_disk6

Thru SQLPlus, it will be the following syntax :

ALTER DISKGROUP DATA_DG1
ADD
FAILGROUP GROUP1 DISK '/dev/ASM_disk3'
FAILGROUP GROUP2 DISK '/dev/ASM_disk6';

ASM alert log history with command executed by DBCA :


SQL> ALTER DISKGROUP ALTER DISKGROUP DATA_DG1 ADD FAILGROUP GROUP1 DISK '/dev/ASM_disk3' SIZE 4096M FAILGROUP
GROUP2 DISK '/dev/ASM_disk6' SIZE 4096M;
NOTE: Assigning number (1,4) to disk (/dev/ASM_disk6)
NOTE: Assigning number (1,5) to disk (/dev/ASM_disk3)
NOTE: requesting all-instance membership refresh for group=1
NOTE: initializing header on grp 1 disk DATA_DG1_0004
NOTE: initializing header on grp 1 disk DATA_DG1_0005
NOTE: cache opening disk 4 of grp 1: DATA_DG1_0004 path:/dev/ASM_disk6
NOTE: cache opening disk 5 of grp 1: DATA_DG1_0005 path:/dev/ASM_disk3
NOTE: requesting all-instance disk validation for group=1
Thu Feb 14 12:25:42 2008
NOTE: disk validation pending for group 1/0xba2b9ad0 (DATA_DG1)
SUCCESS: validated disks for 1/0xba2b9ad0 (DATA_DG1)
NOTE: initiating PST update: grp = 1
kfdp_update(): 8
Thu Feb 14 12:25:45 2008
kfdp_updateBg(): 8
NOTE: PST update grp = 1 completed successfully
NOTE: membership refresh pending for group 1/0xba2b9ad0 (DATA_DG1)
kfdp_query(): 9


kfdp_queryBg(): 9
kfdp_query(): 10
kfdp_queryBg(): 10
SUCCESS: refreshed membership for 1/0xba2b9ad0 (DATA_DG1)
SUCCESS: ALTER DISKGROUP ALTER DISKGROUP DATA_DG1 ADD FAILGROUP GROUP1 DISK '/dev/ASM_disk3' SIZE 4096M FAILGROUP
GROUP2 DISK '/dev/ASM_disk6' SIZE 4096M;
NOTE: starting rebalance of group 1/0xba2b9ad0 (DATA_DG1) at power 1
Starting background process ARB0
Thu Feb 14 12:25:49 2008
ARB0 started with pid=20, OS id=606380
NOTE: assigning ARB0 to group 1/0xba2b9ad0 (DATA_DG1)
NOTE: F1X0 copy 3 relocating from 65534:4294967294 to 4:2
Thu Feb 14 12:26:13 2008
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0xba2b9ad0 (DATA_DG1)
Thu Feb 14 12:26:16 2008
NOTE: requesting all-instance membership refresh for group=1
NOTE: initiating PST update: grp = 1
kfdp_update(): 11
Thu Feb 14 12:26:19 2008
kfdp_updateBg(): 11
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp = 1
kfdp_update(): 12
kfdp_updateBg(): 12
NOTE: PST update grp = 1 completed successfully
NOTE: membership refresh pending for group 1/0xba2b9ad0 (DATA_DG1)
kfdp_query(): 13
kfdp_queryBg(): 13
SUCCESS: refreshed membership for 1/0xba2b9ad0 (DATA_DG1)
Thu Feb 14 12:44:31 2008

Adding new disks

We now have the ASM disk group DATA_DG1
with 2 failure groups:

GROUP1 with:
   o /dev/ASM_disk1
   o /dev/ASM_disk2
   o /dev/ASM_disk3

GROUP2 with:
   o /dev/ASM_disk4
   o /dev/ASM_disk5
   o /dev/ASM_disk6
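To double-check the failure-group layout from SQL*Plus, the assignments can be queried from the standard V$ASM_DISK and V$ASM_DISKGROUP views (a query sketch, not part of the DBCA flow above):

```sql
-- List every disk of DATA_DG1 with its failure group, path and size
SELECT d.failgroup, d.name, d.path, d.total_mb
FROM   v$asm_disk d
JOIN   v$asm_diskgroup g ON g.group_number = d.group_number
WHERE  g.name = 'DATA_DG1'
ORDER  BY d.failgroup, d.name;
```

Each failure group should list three disks, matching the layout above.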


From this screen, click on the template-management button to:

   define/manage the type of striping and the
   level of redundancy for each type of object
   to be stored on ASM disk groups.

   add a new type of object with its own
   attributes.

From this screen, click on the mount button to mount the selected ASM disk group,
or click on the mount-all button to mount all ASM disk groups.

Database Configuration Assistant:
click on Finish, then Yes to exit.


13.5.2 Manual Steps

As asm unix user, from one node :


{node1:root}/ # su - asm
{node1:asm}/oracle/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm # cd $ORACLE_HOME/dbs
{node1:asm}/oracle/asm/11.1.0/dbs #
{node1:asm}/oracle/asm/11.1.0/dbs # vi init+ASM1.ora
Add the line and save :
SPFILE='/dev/ASMspf_disk'
{node1:asm}/oracle/asm/11.1.0/dbs # rcp init+ASM1.ora node2:/oracle/asm/11.1.0/dbs/init+ASM2.ora

On node1, do create a pfile initASM.ora in /oracle/asm/11.1.0/dbs
with the specified values:

+ASM1.__oracle_base='/oracle'#ORACLE_BASE set from environment
+ASM2.__oracle_base='/oracle'#ORACLE_BASE set from environment
+ASM1.asm_diskgroups=''
+ASM2.asm_diskgroups=''
+ASM1.asm_diskstring='/dev/ASM_disk*'
+ASM2.asm_diskstring='/dev/ASM_disk*'
*.cluster_database=true
*.diagnostic_dest='/oracle'
+ASM2.instance_number=2
+ASM1.instance_number=1
*.instance_type='asm'
*.large_pool_size=12M
*.processes=100

On each node, node1 then node2, as asm user, do create the following directories:

On node1
{node1:asm}/oracle/asm # cd ..
{node1:asm}/oracle # mkdir admin
{node1:asm}/oracle # mkdir admin/+ASM
{node1:asm}/oracle # mkdir admin/+ASM/hdump
{node1:asm}/oracle # mkdir admin/+ASM/pfile

On node2
{node2:asm}/oracle/asm # cd ..
{node2:asm}/oracle # mkdir admin
{node2:asm}/oracle # mkdir admin/+ASM
{node2:asm}/oracle # mkdir admin/+ASM/hdump
{node2:asm}/oracle # mkdir admin/+ASM/pfile


On node1, let's create the ASM instance

On node1
{node1:asm}/oracle/asm # cd 11.1.0/dbs
{node1:asm}/oracle/asm/11.1.0/dbs # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm/11.1.0/dbs # export ORACLE_SID=+ASM1
{node1:asm}/oracle/asm/11.1.0/dbs # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 18 22:31:09 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

SQL> connect / as sysasm
Connected to an idle instance.
SQL>
SQL> startup nomount pfile=/oracle/asm/11.1.0/initASM.ora ;
ASM instance started

Total System Global Area  125829120 bytes
Fixed Size                  1301456 bytes
Variable Size             124527664 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
SQL>

SQL> create spfile='/dev/ASMspf_disk' from pfile='/oracle/asm/11.1.0/initASM.ora';

File created.
SQL>
SQL> shutdown
ASM instance shutdown
SQL>
SQL> startup nomount;
ASM instance started

Total System Global Area  125829120 bytes
Fixed Size                  1301456 bytes
Variable Size             124527664 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
SQL>
SQL> set linesize 1000
SQL> select INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, VERSION, STATUS from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME            VERSION           STATUS
--------------- ---------------- -------------------- ----------------- ------------
              1 +ASM1            node1                11.1.0.6.0        STARTED

SQL>


Then on node2, let's start the ASM instance

On node2
{node2:asm}/oracle/asm # cd 11.1.0/dbs
{node2:asm}/oracle/asm/11.1.0/dbs # export ORACLE_HOME=/oracle/asm/11.1.0
{node2:asm}/oracle/asm/11.1.0/dbs # export ORACLE_SID=+ASM2
{node2:asm}/oracle/asm/11.1.0/dbs # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 18 22:31:09 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

SQL> connect / as sysasm
Connected to an idle instance.
SQL>
SQL> startup nomount
ASM instance started

Total System Global Area  125829120 bytes
Fixed Size                  1301456 bytes
Variable Size             124527664 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
SQL>
SQL> set linesize 1000
SQL> select INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, VERSION, STATUS from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME            VERSION           STATUS
--------------- ---------------- -------------------- ----------------- ------------
              2 +ASM2            node2                11.1.0.6.0        STARTED

SQL>

Do query the global (cluster-wide) view, known as gv$instance:

SQL> select INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, VERSION, STATUS from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME            VERSION           STATUS
--------------- ---------------- -------------------- ----------------- ------------
              2 +ASM2            node2                11.1.0.6.0        STARTED
              1 +ASM1            node1                11.1.0.6.0        STARTED

SQL>


On node1, then node2

Do create an Oracle password file in /oracle/asm/11.1.0/dbs

{node1:asm}/oracle/asm/11.1.0/dbs # orapwd
Usage: orapwd file=<fname> password=<password> entries=<users> force=<y/n>
ignorecase=<y/n> nosysdba=<y/n>
  where
    file - name of password file (required),
    password - password for SYS (optional),
    entries - maximum number of distinct DBA (required),
    force - whether to overwrite existing file (optional),
    ignorecase - passwords are case-insensitive (optional),
    nosysdba - whether to shut out the SYSDBA logon (optional Database Vault only).
  There must be no spaces around the equal-to (=) character.
{node1:asm}/oracle/asm/11.1.0/dbs #

On node1
{node1:asm}/oracle/asm/11.1.0/dbs # orapwd file=/oracle/asm/11.1.0/dbs/orapw+ASM1 password=oracle entries=20
{node1:asm}/oracle/asm/11.1.0/dbs #
{node1:asm}/oracle/asm/11.1.0/dbs # ls -la orapw*
-rw-r-----   1 asm   oinstall   1536 Feb 14 10:53 orapw+ASM1
{node1:asm}/oracle/asm/11.1.0/dbs #

On node2
{node2:asm}/oracle/asm/11.1.0/dbs # orapwd file=/oracle/asm/11.1.0/dbs/orapw+ASM2 password=oracle entries=20
{node2:asm}/oracle/asm/11.1.0/dbs #
{node2:asm}/oracle/asm/11.1.0/dbs # ls -la orapw*
-rw-r-----   1 asm   oinstall   1536 Feb 14 10:53 orapw+ASM2
{node2:asm}/oracle/asm/11.1.0/dbs #

Checking Oracle Clusterware:

{node1:asm}/oracle/asm # crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
ora....E1.lsnr application ONLINE    ONLINE    node1
ora.node1.gsd  application ONLINE    ONLINE    node1
ora.node1.ons  application ONLINE    ONLINE    node1
ora.node1.vip  application ONLINE    ONLINE    node1
ora....E2.lsnr application ONLINE    ONLINE    node2
ora.node2.gsd  application ONLINE    ONLINE    node2
ora.node2.ons  application ONLINE    ONLINE    node2
ora.node2.vip  application ONLINE    ONLINE    node2
{node1:asm}/oracle/asm #

We need to register the ASM instances in the Oracle Cluster Registry (OCR).

Registering ASM instances within Oracle Clusterware:

For node1
{node1:asm}/oracle/asm # srvctl add asm -n node1 -o /oracle/asm/11.1.0
{node1:asm}/oracle/asm #
For node2
{node1:asm}/oracle/asm # srvctl add asm -n node2 -o /oracle/asm/11.1.0
{node1:asm}/oracle/asm #

Checking Oracle Clusterware after ASM resources registration:

{node1:asm}/oracle/asm # crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
ora....SM1.asm application ONLINE    ONLINE    node1
ora....E1.lsnr application ONLINE    ONLINE    node1
ora.node1.gsd  application ONLINE    ONLINE    node1
ora.node1.ons  application ONLINE    ONLINE    node1
ora.node1.vip  application ONLINE    ONLINE    node1
ora....SM2.asm application ONLINE    ONLINE    node2
ora....E2.lsnr application ONLINE    ONLINE    node2
ora.node2.gsd  application ONLINE    ONLINE    node2
ora.node2.ons  application ONLINE    ONLINE    node2
ora.node2.vip  application ONLINE    ONLINE    node2
{node1:asm}/oracle/asm #

Start and stop the ASM instance resources:

Stop ASM on node1, and node2
{node1:asm}/oracle/asm # srvctl stop asm -n node1
{node1:asm}/oracle/asm # srvctl stop asm -n node2
Start ASM on node1, and node2
{node1:asm}/oracle/asm # srvctl start asm -n node1
{node1:asm}/oracle/asm # srvctl start asm -n node2
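After stopping and starting resources like this, it is easy to miss one that failed to come back. The cookbook checks this by eye with crs_stat -t; a hypothetical helper (not part of the cookbook) can scan that output for anything not fully ONLINE. The sample text below stands in for live crs_stat output:

```shell
# Hypothetical check: report every resource whose Target or State is not ONLINE.
# crs_sample mimics `crs_stat -t` output; a real script would run crs_stat -t.
crs_sample='ora.node1.gsd  application  ONLINE  ONLINE   node1
ora.node1.ons  application  ONLINE  OFFLINE  node1
ora.node1.vip  application  ONLINE  ONLINE   node1'

not_online=$(printf '%s\n' "$crs_sample" \
  | awk '$3 != "ONLINE" || $4 != "ONLINE" { print $1 }')
echo "Resources not fully ONLINE: ${not_online:-none}"
```

On the sample above this flags ora.node1.ons, whose State is OFFLINE while its Target is ONLINE.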

Add an entry in /etc/oratab on each node:

On node1, add +ASM1:/oracle/asm/11.1.0:N :
{node1:asm}/oracle/asm # cat /etc/oratab
+ASM1:/oracle/asm/11.1.0:N
{node1:asm}/oracle/asm #
On node2, add +ASM2:/oracle/asm/11.1.0:N :
{node2:asm}/oracle/asm # cat /etc/oratab
+ASM2:/oracle/asm/11.1.0:N
{node2:asm}/oracle/asm #
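Each oratab line is a colon-separated SID:ORACLE_HOME:autostart-flag triple, which is what tools like oraenv parse. A hypothetical helper (not from the cookbook) showing how to pull the SID/home pairs out of such entries; the sample mirrors the entries added above, while a real script would read /etc/oratab:

```shell
# Hypothetical helper: list SID and ORACLE_HOME pairs from oratab-style entries.
oratab_sample='+ASM1:/oracle/asm/11.1.0:N
# comments and blank lines are ignored
JSC1DB1:/oracle/db/11.1.0:N'

entries=$(printf '%s\n' "$oratab_sample" \
  | awk -F: '/^[^#]/ && NF >= 2 { print "SID=" $1, "HOME=" $2 }')
printf '%s\n' "$entries"
```

The JSC1DB1 entry is a hypothetical database line, shown only to illustrate a mixed oratab.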


13.6 Configure ASM local and remote listeners


Create and Add entry for local and remote listeners in $ASM_HOME/network/admin/tnsnames.ora
(/oracle/asm/11.1.0/network/admin/tnsnames.ora)
On node1

{node1:asm}/oracle/asm/11.1.0/network/admin # cat tnsnames.ora


# tnsnames.ora Network Configuration File:
/oracle/products/asm/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
LISTENERS_+ASM =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
LISTENER_+ASM1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
)
LISTENER_+ASM2 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)

{node1:asm}/oracle/asm #
On node2

{node2:asm}/oracle/asm/11.1.0/network/admin # cat tnsnames.ora


# tnsnames.ora Network Configuration File:
/oracle/products/asm/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
LISTENERS_+ASM =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
LISTENER_+ASM1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
)
LISTENER_+ASM2 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)

{node2:asm}/oracle/asm #


Set LOCAL_LISTENER, REMOTE_LISTENER and SERVICES at ASM instance level

For node1, and node2, from node1:

{node1:asm}/oracle # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle # export ORACLE_SID=+ASM1
{node1:asm}/oracle # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 18 23:50:24 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

SQL> connect / as sysasm
Connected.
SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      +ASM

SQL> show parameter listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string
remote_listener                      string

SQL>

Let's set local_listener and remote_listener:

remote_listener on node1 and node2 MUST BE THE SAME, and the ENTRIES MUST BE
PRESENT in the tnsnames.ora of each node.
local_listener on node1 and node2 are different, and the ENTRIES MUST BE PRESENT
in the tnsnames.ora of each node.
local_listener on node1 and node2 are not the ones defined in the listener.ora files of
each node.

SQL> ALTER SYSTEM SET remote_listener='LISTENERS_+ASM' SCOPE=BOTH SID='*';
System altered.
SQL> ALTER SYSTEM SET local_listener='LISTENER_+ASM1' SCOPE=BOTH SID='+ASM1';
System altered.
SQL> ALTER SYSTEM SET local_listener='LISTENER_+ASM2' SCOPE=BOTH SID='+ASM2';
System altered.
SQL> show parameters listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      LISTENER_+ASM1
remote_listener                      string      LISTENERS_+ASM

SQL>

Check for node2, from node2:

{node2:asm}/oracle # export ORACLE_HOME=/oracle/asm/11.1.0
{node2:asm}/oracle # export ORACLE_SID=+ASM2
{node2:asm}/oracle # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 18 23:50:24 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

SQL> connect / as sysasm
Connected.
SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      +ASM

SQL> show parameters listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      LISTENER_+ASM2
remote_listener                      string      LISTENERS_+ASM

SQL>

Checking the listener status on each node, we will see 2 instances registered, +ASM1 and +ASM2

{node1:asm}/oracle/asm/11.1.0/network/admin # lsnrctl status

LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 19-FEB-2008 00:01:57
Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_NODE1
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date                13-FEB-2008 22:04:55
Uptime                    5 days 1 hr. 57 min. 2 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
  Instance "+ASM1", status READY, has 2 handler(s) for this service...
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
  Instance "+ASM1", status READY, has 2 handler(s) for this service...
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/network/admin #


Checking the listener services on each node, we will see 2 instances registered, +ASM1 and +ASM2
{node1:asm}/oracle/asm/11.1.0/network/admin # lsnrctl services
LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 19-FEB-2008
00:02:47
Copyright (c) 1991, 2007, Oracle.

All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/network/admin #


For remote connection to ASM instances :


Add entry in $ASM_HOME/network/admin/tnsnames.ora
(/oracle/asm/11.1.0/network/admin/tnsnames.ora)
On node1, and
node2

{node1:asm}/oracle/asm/11.1.0/network/admin # cat tnsnames.ora


# tnsnames.ora Network Configuration File:
/oracle/products/asm/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
LISTENERS_+ASM =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
LISTENER_+ASM1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
)
LISTENER_+ASM2 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
ASM =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = +ASM)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)

{node1:asm}/oracle/asm #
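Because the ASM alias above load-balances over both VIPs, it is worth double-checking which HOST values a tnsnames alias will actually try. A hypothetical helper (not from the cookbook) that extracts them; the sample stands in for the alias above, while a real script would read $ORACLE_HOME/network/admin/tnsnames.ora:

```shell
# Hypothetical helper: list the HOST values appearing in tnsnames.ora-style
# ADDRESS entries, to verify the VIP names a connection will attempt.
tns_sample='(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))'

hosts=$(printf '%s\n' "$tns_sample" | sed -n 's/.*HOST = \([^)]*\)).*/\1/p')
printf '%s\n' "$hosts"
```

Both VIP names should come back; a missing one usually means a typo in the alias.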


13.7 Check Oracle Clusterware, and manage ASM resources

Checking if ASM instances are registered in Oracle Clusterware:
1 ASM instance on each node must be registered !!!

To stop the ASM instance on a specified node, node1 for example:

{node1:asm}/oracle/asm # srvctl stop asm -n node1

{node1:asm}/oracle/asm # crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
ora....SM1.asm application OFFLINE   OFFLINE
ora....E1.lsnr application ONLINE    ONLINE    node1
ora.node1.gsd  application ONLINE    ONLINE    node1
ora.node1.ons  application ONLINE    ONLINE    node1
ora.node1.vip  application ONLINE    ONLINE    node1
ora....SM2.asm application ONLINE    ONLINE    node2
ora....E2.lsnr application ONLINE    ONLINE    node2
ora.node2.gsd  application ONLINE    ONLINE    node2
ora.node2.ons  application ONLINE    ONLINE    node2
ora.node2.vip  application ONLINE    ONLINE    node2
{node1:asm}/oracle/asm #

To start the ASM instance on a specified node, node1 for example:

{node1:asm}/oracle/asm # srvctl start asm -n node1

{node1:asm}/oracle/asm # crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
ora....SM1.asm application ONLINE    ONLINE    node1
ora....E1.lsnr application ONLINE    ONLINE    node1
ora.node1.gsd  application ONLINE    ONLINE    node1
ora.node1.ons  application ONLINE    ONLINE    node1
ora.node1.vip  application ONLINE    ONLINE    node1
ora....SM2.asm application ONLINE    ONLINE    node2
ora....E2.lsnr application ONLINE    ONLINE    node2
ora.node2.gsd  application ONLINE    ONLINE    node2
ora.node2.ons  application ONLINE    ONLINE    node2
ora.node2.vip  application ONLINE    ONLINE    node2
{node1:asm}/oracle/asm #

To check status and config of ASM cluster resources on node1 and node2 in the cluster:

{node1:asm}/oracle/asm # srvctl status asm -n node1
ASM instance +ASM1 is running on node node1.
{node1:asm}/oracle/asm # srvctl status asm -n node2
ASM instance +ASM2 is running on node node2.
{node1:asm}/oracle/asm # srvctl config asm -n node1
+ASM1 /oracle/asm/11.1.0
{node1:asm}/oracle/asm # srvctl config asm -n node2
+ASM2 /oracle/asm/11.1.0
{node1:asm}/oracle/asm #

To list all attributes of the resource ASM on node1:

{node1:asm}/oracle/asm/11.1.0/network/admin # crs_stat -p ora.node1.ASM1.asm
NAME=ora.node1.ASM1.asm
TYPE=application
ACTION_SCRIPT=/oracle/asm/11.1.0/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=300
DESCRIPTION=CRS application for ASM instance
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=node1
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=900
STOP_TIMEOUT=180
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysasm
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=mount
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
{node1:asm}/oracle/asm/11.1.0/network/admin #

11gRAC/ASM/AIX

[email protected]

81 of 393

To see resources read/write permissions and ownership within the clusterware:

{node1:asm}/oracle/asm/11.1.0 # crs_stat -ls
Name           Owner    Primary PrivGrp  Permission
-----------------------------------------------------------------
ora....SM1.asm asm      oinstall         rwxrwxr--
ora....E1.lsnr asm      oinstall         rwxrwxr--
ora.node1.gsd  crs      oinstall         rwxr-xr--
ora.node1.ons  crs      oinstall         rwxr-xr--
ora.node1.vip  root     oinstall         rwxr-xr--
ora....SM2.asm asm      oinstall         rwxrwxr--
ora....E2.lsnr asm      oinstall         rwxrwxr--
ora.node2.gsd  crs      oinstall         rwxr-xr--
ora.node2.ons  crs      oinstall         rwxr-xr--
ora.node2.vip  root     oinstall         rwxr-xr--
{node1:asm}/oracle/asm/11.1.0/network/admin #

Check ASM daemons, for example on node1:

{node1:root}/ # su - asm
{node1:asm}/oracle/asm # ps -ef|grep ASM
  asm 163874      1  0   Feb 14      - 10:07 asm_lmon_+ASM1
  asm 180340      1  0   Feb 14      -  0:18 asm_gmon_+ASM1
  asm 221216      1  0   Feb 14      -  2:19 asm_rbal_+ASM1
  asm 229550      1  0   Feb 14      -  0:12 asm_lgwr_+ASM1
  asm 237684      1  0   Feb 14      - 11:14 asm_lmd0_+ASM1
  asm 364670      1  0   Feb 14      -  0:08 asm_mman_+ASM1
  asm 417982      1  0   Feb 14      - 17:15 asm_dia0_+ASM1
  asm 434260      1  0   Feb 14      -  0:36 /oracle/asm/11.1.0/bin/racgimon daemon ora.node1.ASM1.asm
  asm 458862      1  0   Feb 14      -  0:15 asm_ckpt_+ASM1
  asm 463014      1  0   Feb 14      -  0:36 asm_lck0_+ASM1
  asm 540876      1  0   Feb 14      -  0:07 asm_psp0_+ASM1
  asm 544906      1  0   Feb 14      -  1:24 asm_pmon_+ASM1
  asm 577626      1  0   Feb 14      -  0:26 asm_ping_+ASM1
  asm 598146      1  0   Feb 14      -  2:15 asm_diag_+ASM1
  asm 606318      1  0   Feb 14      -  0:10 asm_dbw0_+ASM1
  asm 630896      1  0   Feb 14      - 12:30 asm_lms0_+ASM1
  asm 635060      1  0   Feb 14      -  0:07 asm_smon_+ASM1
  asm 639072      1  0   Feb 14      -  2:27 asm_vktm_+ASM1
  asm 913652 954608  0 09:52:02 pts/5  0:00 grep ASM
{node1:asm}/oracle/asm #
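Rather than reading that list by eye, the background-process count can be extracted from the ps output. A hypothetical check (not from the cookbook); the short sample stands in for the full listing above, where a real script would run `ps -ef`:

```shell
# Hypothetical check: count the asm_*_+ASM1 background processes in
# `ps -ef` style output (the trailing `grep ASM` line does not match).
ps_sample='asm 163874      1  0   Feb 14  - 10:07 asm_lmon_+ASM1
asm 544906      1  0   Feb 14  -  1:24 asm_pmon_+ASM1
asm 913652 954608  0 09:52:02 pts/5  0:00 grep ASM'

count=$(printf '%s\n' "$ps_sample" | grep -c 'asm_[a-z0-9]*_+ASM1')
echo "ASM background processes found: $count"
```

On the full listing above the same pattern would count all eighteen asm_*_+ASM1 daemons.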


13.8 About ASM instances Tuning

You should increase the ASM instance processes parameter on each node,
moving from the default value of 100 to 200, or 400, depending on your I/O activities.
Too low a processes value may generate read/write I/O access errors from the database instances to the ASM disk
groups.

{node1:asm}/oracle # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle # export ORACLE_SID=+ASM1
{node1:asm}/oracle # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 18 23:50:24 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

SQL> connect / as sysasm
Connected
SQL> show parameter processes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
processes                            integer     100

SQL>

The processes parameter can not be changed on the fly (dynamically).
You must modify the parameter on both ASM instances, permanently in the ASM
spfile, then stop and restart the ASM instances to activate the change:

SQL> alter system set processes=200 scope=spfile SID='*';
System altered.
SQL> show parameter processes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
processes                            integer     100

SQL> exit

Let's stop and restart the ASM instances through clusterware commands:

{node1:asm}/oracle/asm # srvctl stop asm -n node1
{node1:asm}/oracle/asm # srvctl stop asm -n node2
{node1:asm}/oracle/asm # crs_stat -t | grep asm
ora....SM1.asm application OFFLINE   OFFLINE
ora....SM2.asm application OFFLINE   OFFLINE
{node1:asm}/oracle/asm #
{node1:asm}/oracle/asm # srvctl start asm -n node1
{node1:asm}/oracle/asm # srvctl start asm -n node2
{node1:asm}/oracle/asm # crs_stat -t | grep asm
ora....SM1.asm application ONLINE    ONLINE    node1
ora....SM2.asm application ONLINE    ONLINE    node2
{node1:asm}/oracle/asm #

Then under SQL*Plus:

SQL> show parameter processes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
processes                            integer     200

SQL>

Please read the following Metalink note:
Subject: SGA sizing for ASM instances and databases that use ASM - Doc ID: Note:282777.1


13.9 About ASM CMD (Command Utility)


Access to document :

Commanding ASM on http://www.oracle.com/oramag


http://www.oracle.com/technology/oramag/oracle/06-mar/o26asm.html
Access, transfer, and administer ASM files without SQL commands.
By Arup Nanda

Add Storage, Not Projects on http://www.oracle.com/oramag


http://www.oracle.com/technology/oramag/oracle/04-may/o34tech_management.html
Automatic Storage Management lets DBAs do their jobs and keep their weekends.
By Jonathan Gennick

To access the content of the ASM Disk Groups:

From node1:
{node1:asm}/oracle/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm # export ORACLE_SID=+ASM1
{node1:asm}/oracle/asm # asmcmd -a sysasm -p
ASMCMD [+] > help
        asmcmd [-v] [-a <sysasm|sysdba>] [-p] [command]

        The environment variables ORACLE_HOME and ORACLE_SID determine the
        instance to which the program connects, and ASMCMD establishes a
        bequeath connection to it, in the same manner as a SQLPLUS / AS
        SYSDBA. The user must be a member of the SYSDBA group.

        Specifying the -v option prints the asmcmd version number and
        exits immediately.

        Specify the -a option to choose the type of connection. There are
        only two possibilities: connecting as "sysasm" or as "sysdba".
        The default value if this option is unspecified is "sysasm".

        Specifying the -p option allows the current directory to be displayed
        in the command prompt, like so:

        ASMCMD [+DATAFILE/ORCL/CONTROLFILE] >

        [command] specifies one of the following commands, along with its
        parameters.

        Type "help [command]" to get help on a specific ASMCMD command.

        commands:
        --------
        help
        cd
        cp
        du
        find
        ls
        lsct
        lsdg
        mkalias
        mkdir
        pwd
        rm
        rmalias
        md_backup  (New with 11G)
        md_restore (New with 11G)
        lsdsk      (New with 11G)
        remap      (New with 11G)
ASMCMD [+] >

Subject: ASMCMD - ASM command line utility - Doc ID: Note:332180.1


Using ASMCMD ls command :

ASMCMD [+] > help ls
ls [-lsdrtLacgH] [name]
List [name] or its contents alphabetically if [name] refers to a
directory. [name] can contain the wildcard "*" and is the current
directory if unspecified. Directory names in the display list
have the "/" suffix to clarify their identity. The first two optional
flags specify how much information is displayed for each file, in the
following manner:
        (no flag)  V$ASM_ALIAS.NAME

        -l         V$ASM_ALIAS.NAME, V$ASM_ALIAS.SYSTEM_CREATED;
                   V$ASM_FILE.TYPE, V$ASM_FILE.REDUNDANCY,
                   V$ASM_FILE.STRIPED, V$ASM_FILE.MODIFICATION_DATE

        -s         V$ASM_ALIAS.NAME;
                   V$ASM_FILE.BLOCK_SIZE, V$ASM_FILE.BLOCKS,
                   V$ASM_FILE.BYTES, V$ASM_FILE.SPACE

        If the user specifies both flags, then the command shows a union of
        their respective columns, with duplicates removed.
        If an entry in the list is a user-defined alias or a directory,
        then -l displays only the V$ASM_ALIAS columns, and -s shows only
        the alias name and its size, which is zero because it is negligible.
Moreover, the displayed name contains a suffix that is in the form of
an arrow pointing to the absolute path of the system-created filename
it references:
t_db1.f => +diskgroupName/DBName/DATAFILE/SYSTEM.256.1
See the -L option below for an exception to this rule.
For disk group information, this command queries V$ASM_DISKGROUP_STAT
by default, which can be modified by the -c and -g flags.
The remaining flags have the following meanings:
        -d      If an argument is a directory, list only its name
                (not its contents).

        -r      Reverse the sorting order.

        -t      Sort by time stamp (latest first) instead of by name.

        -L      If an argument is a user alias, display information on
                the file it references.

        -a      If an argument is a system-created filename, show the
                location of its user-defined alias, if any.

        -c      For disk group information, select from V$ASM_DISKGROUP,
                or GV$ASM_DISKGROUP, if the -g flag is also active.
                If the ASM software version is 10gR1, then the effects
                of this flag are always active, whether or not it is set.

        -g      For disk group information, select from
                GV$ASM_DISKGROUP_STAT, or GV$ASM_DISKGROUP, if the -c
                flag is also active; GV$ASM_DISKGROUP.INST_ID is
                included in the output.

        -H      Suppress the column header information, so that
                scripting is easier.

Note that "ls +" would return information on all diskgroups, including
whether they are mounted.

11gRAC/ASM/AIX

[email protected]

86 of 393

Not all possible file columns or disk group columns are included.
To view the complete set of columns for a file or a disk group,
query the V$ASM_FILE and V$ASM_DISKGROUP views.
ASMCMD [+] >

Using ASMCMD lsct command :

ASMCMD [+] > help lsct
        lsct [-cgH] [group]

        List all clients and their information from V$ASM_CLIENT. If group is
        specified, then return only information on that group.

        -g      Select from GV$ASM_CLIENT.

        -H      Suppress the column headers from the output.
ASMCMD [+] >

Using ASMCMD du command :

ASMCMD [+] > help du
        du [-H] [dir]

        Display total space used for files located recursively under [dir],
        similar to du -s under UNIX; default is the current directory. Two
        values are returned, both in units of megabytes. The first value does
        not take into account mirroring of the diskgroup while the second does.
        For instance, if a file occupies 100 MB of space, then it actually
        takes up 200 MB of space on a normal redundancy diskgroup and 300 MB
        of space on a high redundancy diskgroup.

        [dir] can also contain wildcards.

        The -H flag suppresses the column headers from the output.
ASMCMD [+] >
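The mirroring arithmetic the du help describes can be sketched in a couple of lines; this is just an illustration of the rule, not a cookbook step:

```shell
# Sketch of the arithmetic `du` reports: a file's mirrored footprint is its
# size times the number of copies kept (2 for normal, 3 for high redundancy).
file_mb=100
normal_mb=$((file_mb * 2))
high_mb=$((file_mb * 3))
echo "file: ${file_mb} MB, normal redundancy: ${normal_mb} MB, high: ${high_mb} MB"
```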

Using ASMCMD lsdg command :

ASMCMD [+] > help lsdg
lsdg [-cgH] [group]
List all diskgroups and their information from V$ASM_DISKGROUP. If
[group] is specified, then return only information on that group. The
command also informs the user if a rebalance is currently under way
for a diskgroup.
This command queries V$ASM_DISKGROUP_STAT by default, which can be
modified by the -c and -g flags.
        -c      Select from V$ASM_DISKGROUP, or GV$ASM_DISKGROUP,
                if the -g flag is also active. If the ASM software
                version is 10gR1, then the effects of this flag are
                always active, whether or not it is set.

        -g      Select from GV$ASM_DISKGROUP_STAT, or GV$ASM_DISKGROUP,
                if the -c flag is also active;
                GV$ASM_DISKGROUP.INST_ID is included in the output.

        -H      Suppress the column headers from the output.

        Not all possible disk group columns are included. To view the
        complete set of columns for a disk group, query the V$ASM_DISKGROUP
        view.
ASMCMD [+] >


Running ASMCMD commands (lsdg, du) :

From node1:

{node1:oracle}/oracle -> export ORACLE_HOME=/oracle/asm/11.1.0
{node1:oracle}/oracle -> export ORACLE_SID=+ASM1
{node1:oracle}/oracle -> asmcmd -a sysasm -p
ASMCMD [+] >
ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
MOUNTED  NORMAL  N         512   4096  1048576     24576    24382             4096           10143              0  DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > lsdg -g
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
      1  MOUNTED  NORMAL  N         512   4096  1048576     24576    24382             4096           10143              0  DATA_DG1/
      2  MOUNTED  NORMAL  N         512   4096  1048576     24576    24382             4096           10143              0  DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > du
Used_MB  Mirror_used_MB
   2467            4952
ASMCMD [+] >

Running ASMCMD commands (ls, du, lsct) :

From node1

{node1:oracle}/oracle -> export ORACLE_HOME=/oracle/asm/11.1.0
{node1:oracle}/oracle -> export ORACLE_SID=+ASM1
{node1:oracle}/oracle -> asmcmd -a sysasm -p
ASMCMD [+] >
ASMCMD [+] > ls
DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > ls -la
State    Type    Rebal  Name
MOUNTED  NORMAL  N      DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > ls -s
Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
   512   4096  1048576     24576    24382             4096           10143              0  DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > ls -g
Inst_ID  Name
      1  DATA_DG1/
      2  DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > du
No database created at this stage, so no result
ASMCMD [+] >
ASMCMD [+] > lsct
No database created at this stage, so no result
ASMCMD [+] >

Once a database is created :

ASMCMD [+] > lsct
DB_Name  Status     Software_Version  Compatible_version  Instance_Name  Disk_Group
JSC1DB   CONNECTED  11.1.0.6.0        11.1.0.0.0          JSC1DB1        DATA_DG1
ASMCMD [+] > lsct -g
Instance_ID  DB_Name  Status     Software_Version  Compatible_version  Instance_Name  Disk_Group
          1  JSC1DB   CONNECTED  11.1.0.6.0        11.1.0.0.0          JSC1DB1        DATA_DG1
          2  JSC1DB   CONNECTED  11.1.0.6.0        11.1.0.0.0          JSC1DB2        DATA_DG1
ASMCMD [+] >

13.10 Useful Metalink notes

Subject: How to clean up ASM installation (RAC and Non-RAC) Doc ID: Note:311350.1
Subject: Backing Up an ASM Instance Doc ID: Note:333257.1
Subject: How to Reconfigure ASM Disk Group? Doc ID: Note:331661.1
Subject: Assigning a Physical Volume ID (PVID) To An Existing ASM Disk Corrupts the ASM Disk Header Doc ID: Note:353761.1
Subject: How To Reclaim ASM Disk Space? Doc ID: Note:351866.1
Subject: Recreating ASM Instances and Diskgroups Doc ID: Note:268481.1
Subject: How To Connect To ASM Instance Remotely Doc ID: Note:340277.1

13.11 What has been done ?

At this stage :

The Oracle Cluster Registry and Voting Disk are created and configured
Oracle Cluster Ready Services is installed and started on all nodes
The VIP (Virtual IP), GSD and ONS application resources are configured on all
nodes
ASM Home is installed
Default node listeners for ASM are created
ASM instances are created and started
ASM disk group is created
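A quick way to verify the state above without eyeballing the whole report is to parse `crs_stat -t` style output and flag anything that is not ONLINE. This is a sketch; the sample report below is illustrative, not captured from a real cluster:

```shell
#!/bin/sh
# Print the name of every clusterware resource whose State column
# is not ONLINE; exit non-zero when at least one such resource exists.
check_online() {
  awk 'NR > 2 && NF >= 4 && $4 != "ONLINE" { print $1; bad = 1 }
       END { exit bad }'
}

# Illustrative crs_stat -t style report with one failed resource:
sample='Name           Type         Target   State    Host
------------------------------------------------------
ora.node1.gsd  application  ONLINE   ONLINE   node1
ora.node1.vip  application  ONLINE   OFFLINE  node1'

if printf '%s\n' "$sample" | check_online; then
  echo "all resources ONLINE"
else
  echo "some resources are not ONLINE"
fi
```

With the sample above it prints the offending resource name (ora.node1.vip) and reports that some resources are not ONLINE; on a healthy cluster it prints nothing but the success message.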


14 INSTALLING ORACLE 11G R1 SOFTWARE


This cookbook deals with 11g RAC database(s), but if you also have a 10g database to run in cluster mode, you
can use 11g Oracle Clusterware and 11g ASM while keeping the 10g database(s).
Each RAC database software release will have its own ORACLE_HOME !!!
For all new RAC projects, we advise using Oracle 11g Clusterware and Oracle 11g ASM.

For 11g RAC database(s), all new features from 11g ASM will be available for use.
For 10g RAC database(s), not all 11g ASM new features are available for use; you have to use different ASM
diskgroups for 10g and 11g RAC database(s), and set the proper ASM diskgroup compatible attribute.


At this stage :

Oracle Clusterware is installed in $ORA_CRS_HOME (/crs/11.1.0)
Oracle ASM software is installed in $ASM_HOME (/oracle/asm/11.1.0)
The default listener for the ASM instances is configured within the $ASM_HOME
ASM instances are up and running
At least one ASM disk group is created and mounted on each node

In our case, we have a disk group called DATA_DG1 in which to create an Oracle
cluster database.

BUT for now, we need to install the Oracle 11g Database Software in its own $ORACLE_HOME
(/oracle/rdbms/11.1.0) using the rdbms unix user.


14.1 11g R1 RDBMS Installation


The Oracle RAC option installation only has to be started from one node. Once the first node is installed,
Oracle OUI automatically starts copying the mandatory files to the second node, using the rcp command. This step
can take a long time (up to an hour), depending on the network speed, without any message. So don't think the OUI is
stalled; look at the network traffic before canceling the installation !
You can also create a staging area. The name of the subdirectories is in the format Disk1 to Disk3.
On each node :

Run the AIX command "/usr/sbin/slibclean" as "root" to clean all unreferenced
libraries from memory !!!

{node1:root}/ # /usr/sbin/slibclean
{node2:root}/ # /usr/sbin/slibclean

From the first node, as root user, under a VNC client session or other graphical interface, execute :

{node1:root}/ # xhost +
access control disabled, clients can connect from any hosts
{node1:root}/ #

On each node, set the right ownership and permissions on the following directories :

{node1:root}/ # chown rdbms:oinstall /oracle/rdbms
{node1:root}/ # chmod 665 /oracle/rdbms
{node1:root}/ #

Log in as the rdbms (in our case) or oracle user and follow the procedure hereunder.

Set up and export your DISPLAY, TMP and TEMP variables, with /tmp or another
destination having enough free space, about 500 MB on each node.

{node1:rdbms}/ # export DISPLAY=node1:0.0

If not set in the rdbms .profile, do :

{node1:rdbms}/ # export TMP=/tmp
{node1:rdbms}/ # export TEMP=/tmp
{node1:rdbms}/ # export TMPDIR=/tmp

Check that Oracle Clusterware (including the VIP, ONS, GSD, listener and ASM resources) is started on
each node !!! ASM instances and listeners are not mandatory for the RDBMS installation.
As rdbms user from node1 :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name            Type         Target    State     Host
------------------------------------------------------------
ora....SM1.asm  application  ONLINE    ONLINE    node1
ora....E1.lsnr  application  ONLINE    ONLINE    node1
ora.node1.gsd   application  ONLINE    ONLINE    node1
ora.node1.ons   application  ONLINE    ONLINE    node1
ora.node1.vip   application  ONLINE    ONLINE    node1
ora....SM2.asm  application  ONLINE    ONLINE    node2
ora....E2.lsnr  application  ONLINE    ONLINE    node2
ora.node2.gsd   application  ONLINE    ONLINE    node2
ora.node2.ons   application  ONLINE    ONLINE    node2
ora.node2.vip   application  ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/distrib/SoftwareOracle/rdbms11gr1/aix/database # ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 190 MB. Actual 1933 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3584 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-02-14_02-41-49PM. Please wait ...
{node1:rdbms}/distrib/SoftwareOracle/rdbms11gr1/aix/database # Oracle Universal Installer, Version 11.1.0.6.0 Production
Copyright (C) 1999, 2007, Oracle. All rights reserved.

At the OUI Welcome screen

Just click Next ...

Select the installation type :


You have the option to choose Enterprise,
Standard Edition, or Custom to proceed.
Choose the Custom option to avoid
creating a database by default.

Then click Next ...


Specify File Locations :

Do not change the Source field.
Specify a different ORACLE_HOME name
with its own directory for the Oracle
software installation.

This ORACLE_HOME must be different
from the CRS and ASM ORACLE_HOMEs.

OraDb11g_home1
/oracle/rdbms/11.1.0

Then click Next ...

If you don't see the following screen
with node selection, it might be that
your CRS is down on one or all nodes.
Please check that CRS is up and
running on all nodes.

Specify Hardware Cluster Installation
Mode :
Select Cluster Installation
AND the other nodes onto which the
Oracle RDBMS software will be installed.
It is not necessary to select the node on
which the OUI is currently running.

Then click Next ...

The installer will check some product-specific prerequisites.

Don't worry about the lines with checks
at status Not executed; these are just
warnings because the AIX maintenance level
might be higher than 5300, which is the
case in our example (ML03).

Then click Next ...


Details of the
prerequisite
checks done
by
runInstaller

Checking operating system requirements ...
Expected result: One of 5300.05,6100.00
Actual Result: 5300.07
Check complete. The overall result of this check is: Passed
========================================================
Checking operating system package requirements ...
Checking for bos.adt.base(0.0); found bos.adt.base(5.3.7.0).  Passed
Checking for bos.adt.lib(0.0); found bos.adt.lib(5.3.0.60).  Passed
Checking for bos.adt.libm(0.0); found bos.adt.libm(5.3.7.0).  Passed
Checking for bos.perf.libperfstat(0.0); found bos.perf.libperfstat(5.3.7.0).  Passed
Checking for bos.perf.perfstat(0.0); found bos.perf.perfstat(5.3.7.0).  Passed
Checking for bos.perf.proctools(0.0); found bos.perf.proctools(5.3.7.0).  Passed
Checking for rsct.basic.rte(0.0); found rsct.basic.rte(2.4.8.0).  Passed
Checking for rsct.compat.clients.rte(0.0); found rsct.compat.clients.rte(2.4.8.0).  Passed
Checking for bos.mp64(5.3.0.56); found bos.mp64(5.3.7.1).  Passed
Checking for bos.rte.libc(5.3.0.55); found bos.rte.libc(5.3.7.1).  Passed
Checking for xlC.aix50.rte(8.0.0.7); found xlC.aix50.rte(9.0.0.1).  Passed
Checking for xlC.rte(8.0.0.7); found xlC.rte(9.0.0.1).  Passed
Check complete. The overall result of this check is: Passed
========================================================
Checking recommended operating system patches
Checking for IY89080(bos.rte.aio,5.3.0.51); found (bos.rte.aio,5.3.7.0).  Passed
Checking for IY92037(bos.rte.aio,5.3.0.52); found (bos.rte.aio,5.3.7.0).  Passed
Checking for IY94343(bos.rte.lvm,5.3.0.55); found (bos.rte.lvm,5.3.7.0).  Passed
Check complete. The overall result of this check is: Passed
========================================================
Checking kernel parameters
Check complete. The overall result of this check is: Not executed <<<<
OUI-18001: The operating system 'AIX Version 5300.07' is not supported.
Recommendation: Perform operating system specific instructions to update the kernel parameters.
========================================================
Checking physical memory requirements ...
Expected result: 922MB
Actual Result: 2048MB
Check complete. The overall result of this check is: Passed
========================================================
Checking available swap space requirements ...
Expected result: 3072MB
Actual Result: 3548MB
Check complete. The overall result of this check is: Passed
========================================================
Validating ORACLE_BASE location (if set) ...
Check complete. The overall result of this check is: Passed
========================================================
Checking maximum command line length argument, ncargs ...
Check complete. The overall result of this check is: Passed
========================================================
Checking for proper system clean-up ...
Check complete. The overall result of this check is: Passed
========================================================
Checking Oracle Clusterware version ...
Check complete. The overall result of this check is: Passed
========================================================
Checking uid/gid ...
Check complete. The overall result of this check is: Passed
========================================================


Available Product Components :

Select the product components of Oracle
Database 11g that you want to install.
INFO : Compared to a 10g RAC R1 installation,
there is no Real Application Clusters option to
select.
Then click Next ...

Privileged Operating System Groups :

Verify the UNIX primary group name of the user
which controls the installation of the Oracle 11g
database software. (Use the unix command id to
find out.)
Then specify the Privileged Operating System
Groups with the values found.
In our example, this must be :
dba for the Database Administrator (OSDBA)
group
oper for the Database Operator (OSOPER)
group
asm for the administrator (SYSASM) group
(primary group of the unix asm user), to be set for
both entries.
Then click Next ...

Create Database :
Choose Install database Software only; we
don't want to create a database at this stage.
Then click Next ...


Summary :
The Summary screen will be presented.
Confirm that the RAC database software and
other selected options will be installed.
Check the Cluster Nodes and Remote Nodes lists.
The OUI will install the Oracle 11g software onto
the local node, and then copy this information to
the other selected nodes.
Then click Install ...

Install :
The Oracle Universal Installer will proceed with the
installation on the first node, then will automatically
copy the code to the other selected nodes.
It may take time to pass over the 50% mark;
don't be afraid, the installation is progressing,
running a test script on the remote nodes.
Just wait for the next screen ...

At this stage, you could hit a message about Failed Attached Home !!!

If a similar screen message appears, just run the specified command on the specified node as the rdbms user.
Do execute the script as below (check and adapt the script to your message). When done, click on OK :
From node2 :
{node2:root}/oracle/rdbms/11.1.0 # su rdbms
{node2:rdbms}/oracle/rdbms/11.1.0 -> /oracle/rdbms/11.1.0/oui/bin/runInstaller -attachHome
-noClusterEnabled ORACLE_HOME=/oracle/rdbms/11.1.0 ORACLE_HOME_NAME=OraDb11g_home1
CLUSTER_NODES=node1,node2 INVENTORY_LOCATION=/oracle/oraInventory LOCAL_NODE=node2
Starting Oracle Universal Installer
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
AttachHome was successful.
{node2:rdbms}/oracle/rdbms/11.1.0/bin #

Execute Configuration Scripts will pop-up :

As root, execute root.sh on each
node.
In our case, this script is
located in /oracle/rdbms/11.1.0.

Then click OK ...

{node1:root}/ # id
uid=0(root) gid=0(system) groups=2(bin),3(sys),7(security),8(cron),10(audit),11(lp)
{node1:root}/ # /oracle/rdbms/11.1.0/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= rdbms
ORACLE_HOME= /oracle/rdbms/11.1.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
{node1:root}/ #
--------------------------------------------------------------{node2:root}/ # id
uid=0(root) gid=0(system) groups=2(bin),3(sys),7(security),8(cron),10(audit),11(lp)
{node2:root}/ # /oracle/rdbms/11.1.0/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= rdbms
ORACLE_HOME= /oracle/rdbms/11.1.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

Finished product-specific root actions.
{node2:root}/ #

Coming back to the previous screen,
just click OK.

End of Installation :
This screen will automatically appear.
Check that the installation is successful and write
down the URL list of the J2EE applications that
have been deployed (isqlplus, ...).

Then click Exit ...


14.2 Symbolic links creation for listener.ora, tnsnames.ora and sqlnet.ora


In order to create the symbolic links, make sure that the files listener.ora, tnsnames.ora and
sqlnet.ora do exist in /oracle/asm/11.1.0/network/admin on both nodes.

As rdbms user on node1 :

{node1:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/listener.ora /oracle/rdbms/11.1.0/network/admin/listener.ora
{node1:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/tnsnames.ora /oracle/rdbms/11.1.0/network/admin/tnsnames.ora
{node1:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/sqlnet.ora /oracle/rdbms/11.1.0/network/admin/sqlnet.ora
{node1:rdbms}/ # ls -la /oracle/rdbms/11.1.0/network/admin/*.ora
lrwxrwxrwx  1 oracle  dba  47 Apr 23 10:19 listener.ora -> /oracle/asm/11.1.0/network/admin/listener.ora
lrwxrwxrwx  1 oracle  dba  47 Apr 23 10:19 tnsnames.ora -> /oracle/asm/11.1.0/network/admin/tnsnames.ora
lrwxrwxrwx  1 oracle  dba  47 Apr 23 10:19 sqlnet.ora -> /oracle/asm/11.1.0/network/admin/sqlnet.ora
{node1:rdbms}/ #

Doing this avoids dealing with many listener.ora, tnsnames.ora and sqlnet.ora files, as setting the TNS_ADMIN
variable will not be enough: some Oracle assistant tools do not take this variable into consideration.

As rdbms user on node2 :

{node2:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/listener.ora /oracle/rdbms/11.1.0/network/admin/listener.ora
{node2:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/tnsnames.ora /oracle/rdbms/11.1.0/network/admin/tnsnames.ora
{node2:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/sqlnet.ora /oracle/rdbms/11.1.0/network/admin/sqlnet.ora
{node2:rdbms}/ # ls -la /oracle/rdbms/11.1.0/network/admin/*.ora
lrwxrwxrwx  1 oracle  dba  47 Apr 23 10:19 listener.ora -> /oracle/asm/11.1.0/network/admin/listener.ora
lrwxrwxrwx  1 oracle  dba  47 Apr 23 10:19 tnsnames.ora -> /oracle/asm/11.1.0/network/admin/tnsnames.ora
lrwxrwxrwx  1 oracle  dba  47 Apr 23 10:19 sqlnet.ora -> /oracle/asm/11.1.0/network/admin/sqlnet.ora
{node2:rdbms}/ #
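The six ln -s commands above can be wrapped in a small helper and run once per node. This is a sketch; the paths are the ones used in this cookbook's example layout:

```shell
#!/bin/sh
# Link the three Net configuration files from the ASM home into the
# RDBMS home so both homes share a single set of files.
link_net_files() {
  # $1 = source admin dir (ASM home), $2 = target admin dir (RDBMS home)
  for f in listener.ora tnsnames.ora sqlnet.ora; do
    if [ -f "$1/$f" ]; then
      ln -sf "$1/$f" "$2/$f"
    else
      echo "warning: $1/$f does not exist, skipping" >&2
    fi
  done
}

# On each node, with the paths used in this cookbook:
link_net_files /oracle/asm/11.1.0/network/admin /oracle/rdbms/11.1.0/network/admin
```

The -f flag makes the call idempotent: re-running it simply replaces an existing link instead of failing.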


14.3 Update the RDBMS unix user .profile


To be done on each node for the rdbms (in our case) or oracle unix user.
vi the $HOME/.profile file in rdbms's home directory. Add the following entries :
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
export PATH
if [ -s "$MAIL" ]
then echo "$MAILMSG"
fi

# This is at Shell startup. In normal


# operation, the Shell checks
# periodically.

ENV=$HOME/.kshrc
export ENV
#The following line is added by License Use Management installation
export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin
export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORA_CRS_HOME=$CRS_HOME
export ASM_HOME=$ORACLE_BASE/asm/11.1.0
export ORA_ASM_HOME=$ASM_HOME
export ORACLE_HOME=$ORACLE_BASE/rdbms/11.1.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib:$ORACLE_HOME/lib32:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
export TNS_ADMIN=$ASM_HOME/network/admin
export ORACLE_SID=
if [ -t 0 ]; then
stty intr ^C
fi
Do disconnect from the rdbms user, and reconnect to load the modified $HOME/.profile
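After reconnecting, a small check can confirm that the variables this cookbook relies on are actually set. A sketch; the variable list matches the .profile above:

```shell
#!/bin/sh
# Report any of the cookbook's environment variables that are unset
# or empty; return non-zero if something is missing.
check_env() {
  missing=0
  for v in ORACLE_BASE ORACLE_HOME CRS_HOME ASM_HOME TNS_ADMIN; do
    eval val=\"\$$v\"
    if [ -z "$val" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return $missing
}

check_env && echo "environment looks complete" || echo "fix your .profile first"
```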


15 DATABASE CREATION ON ASM

In our case, we have a disk group called DATA_DG1 in which to create an Oracle
cluster database.

Change permissions to allow the rdbms user to write on directories owned by the asm user.

MANDATORY

{node2:root}/ # rsh node1 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node2 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node1 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node2 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node1 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node2 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node1 chmod -R g+w /oracle/diag
{node2:root}/ # rsh node2 chmod -R g+w /oracle/diag
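Rather than typing the eight commands by hand, they can be generated from the node and directory lists. A sketch using our example names; review the output, then pipe it to sh to actually run it:

```shell
#!/bin/sh
# Emit one "rsh <node> chmod -R g+w <dir>" line per node/directory pair.
gen_chmod_cmds() {
  for dir in /oracle/asm/11.1.0/network /oracle/cfgtoollogs /oracle/admin /oracle/diag; do
    for node in node1 node2; do
      echo "rsh $node chmod -R g+w $dir"
    done
  done
}

gen_chmod_cmds            # review the generated commands first
# gen_chmod_cmds | sh     # then execute them as root
```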

15.1 Through Oracle DBCA

Connect as the rdbms unix user from the first node, and set up your DISPLAY.

From node1 :

{node1:rdbms}/ # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/ # export ORACLE_SID=

Execute dbca to launch the database configuration assistant :

{node1:rdbms}/ # cd $ORACLE_HOME/bin
{node1:rdbms}/oracle/rdbms/11.1.0/bin # ./dbca

Check that all resources from Oracle Clusterware are started on their home node.

From node1 or node2 :

{node2:rdbms}/oracle/rdbms # crs_stat -t
Name            Type         Target    State     Host
------------------------------------------------------------
ora....SM1.asm  application  ONLINE    ONLINE    node1
ora....E1.lsnr  application  ONLINE    ONLINE    node1
ora.node1.gsd   application  ONLINE    ONLINE    node1
ora.node1.ons   application  ONLINE    ONLINE    node1
ora.node1.vip   application  ONLINE    ONLINE    node1
ora....SM2.asm  application  ONLINE    ONLINE    node2
ora....E2.lsnr  application  ONLINE    ONLINE    node2
ora.node2.gsd   application  ONLINE    ONLINE    node2
ora.node2.ons   application  ONLINE    ONLINE    node2
ora.node2.vip   application  ONLINE    ONLINE    node2
{node2:rdbms}/oracle/rdbms #

DBCA Welcome Screen :


Select the Oracle Real Application Cluster
Database option.
Then click Next ...


Operations :
Select the Create a Database option.
Then click Next ...

Node Selection :

Make sure to select all RAC nodes.


Then click Next ...

Database Templates :
Select General Purpose
Or Custom Database if you want to generate
the creation scripts.
Then click Next ...


Database Identification :
Specify the Global Database Name
The SID Prefix will be automatically
updated. (by default it is the Global Database
Name)
For our example : JSC1DB
Then click Next ...

Management Options :
Check Configure the database with
Enterprise Manager if you want to use
Database Control (local administration),
or don't check it if you plan to administer the
database using Grid Control (global
network administration).
You can also set alerts and backup.

Then click Next ...


Database Credentials :
Specify same password for all
administrator users,
or specify individual password for each user.

Then click Next ...


Storage Options :

Choose Automatic Storage
Management (ASM).
Then click Next ...

You'll be prompted to enter the ASM sys
password :

Then click OK ...

Create Disk Group :

Now that the ASM disk group is created,
select the disk group to be used !!!
DATA_DG1 in our example.
Then click Ok ...

Database File Locations :


Select Use Oracle-Managed Files
AND Select DiskGroup to use for the
Database Files.

Then click Next ...


Recovery Configuration :
Select the disk group to be used for the
Flash Recovery Area.
In our case :
+DATA_DG1
and a size of 4096

Enable Archiving, and click on Edit
Archive Mode Parameters to specify
another ASM disk group as a different
archive log destination.

You can also check the File Location
Variables.

Click Ok then Next ...


Database Content :
Select the options needed

Then click Next ...

Initialization Parameters :
Select the parameters needed
You can click on All Initialization
Parameters to view or modify

You can also select the folders

Sizing
Character Sets
Connection Mode

to validate/modify/specify settings at your


convenience.
Then click Next ...

About the Oracle cluster database instance parameters, we'll have to look at the following :

Instance    Name                               Value
*           asm_preferred_read_failure_groups  ONLY usable if you did create an ASM diskgroup in normal or
                                               high redundancy with a minimum of 2 failure groups.
                                               Value will be set to the name of the preferred failure group.

If you want to set a different location than the default chosen ASM disk group for the control files, you
should modify the value of the parameter control_files :

*           control_files                      (+DATA_DG1/{DB_NAME}/control01.ctl,
                                               +DATA_DG1/{DB_NAME}/control02.ctl,
                                               +DATA_DG1/{DB_NAME}/control03.ctl)
*           db_create_file_dest                +DATA_DG1
                                               (By default, all files from the database will be created on the
                                               specified ASM diskgroup)
*           db_recovery_file_dest              +DATA_DG1
                                               (Default location for backups and archive logs)
*           remote_listener                    LISTENERS_JSC1DB
JSC1DB1     undo_tablespace                    UNDOTBS1
JSC1DB2     undo_tablespace                    UNDOTBS2
JSC1DB1     local_listener                     LISTENER_JSC1DB1
JSC1DB2     local_listener                     LISTENER_JSC1DB2
*           log_archive_dest_1                 'LOCATION=+DATA_DG1/'
JSC1DB1     instance_number                    1
JSC1DB2     instance_number                    2
JSC1DB1     thread                             1
JSC1DB2     thread                             2
*           db_name                            JSC1DB
*           db_recovery_file_dest_size         4294967296
*           diagnostic_dest                    (ORACLE_BASE)
                                               (Default location for the database trace and log files; it
                                               replaces the USER, BACKGROUND and DUMP destinations)

LISTENERS_JSC1DB could be automatically added by DBCA in the tnsnames.ora.
If not, in $ORACLE_HOME/network/admin, do edit the tnsnames.ora file on each
node, and add the following lines :

LISTENERS_JSC1DB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

LISTENER_JSC1DB1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
  )

LISTENER_JSC1DB2 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )
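The per-instance aliases follow a fixed pattern, so they can be generated from the database name and node list instead of being hand-edited on every node. A sketch; JSC1DB and the -vip hostnames are our example names:

```shell
#!/bin/sh
# Print one LISTENER_<DB><n> tnsnames.ora alias per node, in the same
# form as the entries above; append the output to tnsnames.ora.
gen_listener_aliases() {
  db=$1; shift
  i=1
  for node in "$@"; do
    printf '%s =\n  (ADDRESS_LIST =\n    (ADDRESS = (PROTOCOL = TCP)(HOST = %s-vip)(PORT = 1521))\n  )\n\n' \
      "LISTENER_${db}${i}" "$node"
    i=$((i + 1))
  done
}

gen_listener_aliases JSC1DB node1 node2
```

For our two-node example this emits the LISTENER_JSC1DB1 and LISTENER_JSC1DB2 entries shown above.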


Security Settings :
Keep the enhanced 11g
default security settings (the default),
or select the options needed.
Then click Next ...

Automatic Maintenance Tasks :


By default, Enable Automatic
Maintenance Tasks.
Then click Next ...


Database Storage :
Check the datafile organization.
Add one extra log member on each thread,
either now with the assistant,
or later using SQL commands such as :

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 SIZE 51200K;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6 SIZE 51200K;

Then click Next ...
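The two ALTER DATABASE statements differ only in thread and group number, so they can be generated rather than typed. A sketch; the group numbering and size follow the example above, and the output is meant to be run in SQL*Plus as sysdba:

```shell
#!/bin/sh
# Print one ALTER DATABASE ADD LOGFILE statement per redo thread,
# numbering the groups consecutively from a starting group number.
gen_add_logfile() {
  threads=$1 first_group=$2 size=$3
  t=1
  while [ "$t" -le "$threads" ]; do
    echo "ALTER DATABASE ADD LOGFILE THREAD $t GROUP $((first_group + t - 1)) SIZE $size;"
    t=$((t + 1))
  done
}

gen_add_logfile 2 5 51200K
```

For a two-node cluster this reproduces exactly the two statements shown above; for a larger cluster, raise the thread count.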

REMINDER

Change permissions to allow the rdbms user to write on directories owned by the asm user.

{node2:root}/ # rsh node1 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node2 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node1 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node2 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node1 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node2 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node1 chmod -R g+w /oracle/diag
{node2:root}/ # rsh node2 chmod -R g+w /oracle/diag

Creation Options :
Select the options needed

Create Database

Generate Database Creation Scripts


Then click Finish ...

Summary :
Check the description
Save the HTML summary file if needed
Then click Ok ...

Database Creation script generation


When finished :

Click Ok ...


Database creation in progress :

Just wait while processing ...
Check /oracle/cfgtoollogs/dbca/JSC1DB
for the logs in case of a failure to create the
database.

Password Management :
Enter password management if you
need to change passwords, and unlock
user accounts that are locked by
default (for security purposes).

Then click Exit ...


Execute crsstat on one node as rdbms user :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                    Target   State
-----------                    ------   -----
ora.JSC1DB.JSC1DB1.inst        OFFLINE  OFFLINE
ora.JSC1DB.JSC1DB2.inst        OFFLINE  OFFLINE
ora.JSC1DB.db                  OFFLINE  OFFLINE
ora.node1.ASM1.asm             ONLINE   ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr  ONLINE   ONLINE on node1
ora.node1.gsd                  ONLINE   ONLINE on node1
ora.node1.ons                  ONLINE   ONLINE on node1
ora.node1.vip                  ONLINE   ONLINE on node1
ora.node2.ASM2.asm             ONLINE   ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr  ONLINE   ONLINE on node2
ora.node2.gsd                  ONLINE   ONLINE on node2
ora.node2.ons                  ONLINE   ONLINE on node2
ora.node2.vip                  ONLINE   ONLINE on node2
{node1:rdbms}/oracle/rdbms #

After starting the cluster database, execute crsstat again on one node as rdbms user :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                    Target  State
-----------                    ------  -----
ora.JSC1DB.JSC1DB1.inst        ONLINE  ONLINE on node1
ora.JSC1DB.JSC1DB2.inst        ONLINE  ONLINE on node2
ora.JSC1DB.db                  ONLINE  ONLINE on node1
ora.node1.ASM1.asm             ONLINE  ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr  ONLINE  ONLINE on node1
ora.node1.gsd                  ONLINE  ONLINE on node1
ora.node1.ons                  ONLINE  ONLINE on node1
ora.node1.vip                  ONLINE  ONLINE on node1
ora.node2.ASM2.asm             ONLINE  ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr  ONLINE  ONLINE on node2
ora.node2.gsd                  ONLINE  ONLINE on node2
ora.node2.ons                  ONLINE  ONLINE on node2
ora.node2.vip                  ONLINE  ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat JSC1DB
HA Resource                    Target  State
-----------                    ------  -----
ora.JSC1DB.JSC1DB1.inst        ONLINE  ONLINE on node1
ora.JSC1DB.JSC1DB2.inst        ONLINE  ONLINE on node2
ora.JSC1DB.db                  ONLINE  ONLINE on node1
{node1:rdbms}/oracle/rdbms #

15.2 Manual Database Creation


Here are the steps to be followed to create a Real Application Clusters database manually.
Let's imagine we want to create an 11g cluster database called JSC2DB on our 2-node RAC cluster, with instances
JSC2DB1 on node1 and JSC2DB2 on node2.

1. Make the necessary sub directories on each node :

On node1 :

{node1:root}/ # su rdbms
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/scripts
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/adump
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/hdump
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/dpdump
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/pfile
{node1:rdbms}/home/rdbms # mkdir /oracle/diag/rdbms/jsc2db
{node1:rdbms}/home/rdbms # mkdir /oracle/diag/rdbms/jsc2db/JSC2DB1

On node2 :

{node2:root}/ # su rdbms
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/scripts
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/adump
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/hdump
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/dpdump
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/pfile
{node2:rdbms}/home/rdbms # mkdir /oracle/diag/rdbms/jsc2db
{node2:rdbms}/home/rdbms # mkdir /oracle/diag/rdbms/jsc2db/JSC2DB2
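The mkdir sequence above can be collapsed into one helper using mkdir -p, which also creates missing parent directories. A sketch; directory names follow the JSC2DB layout above:

```shell
#!/bin/sh
# Create the admin subdirectory tree for a new database in one pass.
make_admin_dirs() {
  base=$1 db=$2
  for d in scripts adump hdump dpdump pfile; do
    mkdir -p "$base/admin/$db/$d"
  done
}

# Usage on each node (with ORACLE_BASE=/oracle in our example):
# make_admin_dirs "$ORACLE_BASE" JSC2DB
```

The diag directories are still created separately, since their instance-specific leaf differs per node.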
2. Create any ASM diskgroup(s) if needed :

From node1 :

{node1:root}/ # su asm
{node1:asm}/home/asm # export ORACLE_SID=+ASM1
{node1:asm}/home/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Mar 28 10:28:54 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

SQL> connect / as sysasm
Connected.
SQL> set linesize 1000
SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA_DG1                       MOUNTED

SQL> select HEADER_STATUS,PATH from v$asm_disk order by PATH;

HEADER_STATU PATH
------------ --------------------
MEMBER       /dev/ASM_disk1
MEMBER       /dev/ASM_disk2
MEMBER       /dev/ASM_disk3
MEMBER       /dev/ASM_disk4
MEMBER       /dev/ASM_disk5
MEMBER       /dev/ASM_disk6
CANDIDATE    /dev/ASM_disk7
CANDIDATE    /dev/ASM_disk8
CANDIDATE    /dev/ASM_disk9
CANDIDATE    /dev/ASM_disk10
CANDIDATE    /dev/ASM_disk11
CANDIDATE    /dev/ASM_disk12

12 rows selected.

SQL>
You can use the disks whose disk header status is CANDIDATE or FORMER.
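To extract just those usable disks from the query output, filter on the header status column. A sketch over the two-column text output shown above:

```shell
#!/bin/sh
# Print device paths whose ASM header status is CANDIDATE or FORMER,
# i.e. disks that can be used in a new disk group.
usable_disks() {
  awk '$1 == "CANDIDATE" || $1 == "FORMER" { print $2 }'
}

# Illustrative v$asm_disk output:
printf 'MEMBER /dev/ASM_disk1\nCANDIDATE /dev/ASM_disk7\n' | usable_disks
```

With the illustrative input it prints only /dev/ASM_disk7; feed it the real captured sqlplus output to list every free disk.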

SQL> CREATE DISKGROUP DATA_DG2 NORMAL


FAILGROUP GROUP1 DISK
'/dev/ASM_disk7',
'/dev/ASM_disk8',
FAILGROUP GROUP2 DISK
'/dev/ASM_disk9',
'/dev/ASM_disk10';

REDUNDANCY

Diskgroup created.
SQL> select HEADER_STATUS,PATH from v$asm_disk order by PATH;
HEADER_STATU PATH
------------ --------------------
MEMBER       /dev/ASM_disk1
MEMBER       /dev/ASM_disk2
MEMBER       /dev/ASM_disk3
MEMBER       /dev/ASM_disk4
MEMBER       /dev/ASM_disk5
MEMBER       /dev/ASM_disk6
MEMBER       /dev/ASM_disk7
MEMBER       /dev/ASM_disk8
MEMBER       /dev/ASM_disk9
MEMBER       /dev/ASM_disk10
MEMBER       /dev/ASM_disk11
CANDIDATE    /dev/ASM_disk12

12 rows selected.
SQL>
SQL> select INST_ID, NAME, STATE, OFFLINE_DISKS from gv$asm_diskgroup;
   INST_ID NAME                           STATE       OFFLINE_DISKS
---------- ------------------------------ ----------- -------------
         2 DATA_DG1                       MOUNTED                 0
         2 DATA_DG2                       DISMOUNTED              0
         1 DATA_DG1                       MOUNTED                 0
         1 DATA_DG2                       MOUNTED                 0

6 rows selected.

SQL>
We need to mount DATA_DG2 on the second node !!!
On node2 :
{node2:root}/oracle/diag # su - asm
{node2:asm}/oracle/asm # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Mar 28 13:41:23 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

SQL> connect / as sysasm
Connected.
SQL>
SQL> alter diskgroup DATA_DG2 mount;
Diskgroup altered.
SQL> select INST_ID, NAME, STATE, OFFLINE_DISKS from gv$asm_diskgroup;
   INST_ID NAME                           STATE       OFFLINE_DISKS
---------- ------------------------------ ----------- -------------
         1 DATA_DG1                       MOUNTED                 0
         1 DATA_DG2                       MOUNTED                 0
         2 DATA_DG1                       MOUNTED                 0
         2 DATA_DG2                       MOUNTED                 0

6 rows selected.
SQL>

DATA_DG2 is now mounted on all nodes !!!


Then with asmcmd command, we will see :
{node1:asm}/oracle/asm # asmcmd -p
ASMCMD [+] > ls
DATA_DG1/
DATA_DG2/
ASMCMD [+] >

3. Create the necessary sub-directories for the JSC2DB database within the chosen ASM diskgroup :

From node1 :
{node1:root}/ # su - asm
{node1:asm}/home/asm # export ORACLE_SID=+ASM1
{node1:asm}/home/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/home/asm # asmcmd -p
ASMCMD [+] > ls
DATA_DG1/
ASMCMD [+] > mkdir


4. Make an init<SID>.ora in your $ORACLE_HOME/dbs directory.

Cluster-wide parameters for database "JSC2DB" :

##############################################################################
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
##############################################################################
###########################################
# Archive
###########################################
log_archive_dest_1='LOCATION=+DATA_DG2/'
log_archive_format=%t_%s_%r.dbf
###########################################
# Cache and I/O
###########################################
db_block_size=8192
###########################################
# Cluster Database
###########################################
cluster_database_instances=2
#remote_listener=LISTENERS_JSC2DB
###########################################
# Cursors and Library Cache
###########################################
open_cursors=300
###########################################
# Database Identification
###########################################
db_domain=""
db_name=JSC2DB
###########################################
# File Configuration
###########################################
db_create_file_dest=+DATA_DG2
db_recovery_file_dest=+DATA_DG2
db_recovery_file_dest_size=4294967296
###########################################
# Miscellaneous
###########################################
asm_preferred_read_failure_groups=""
compatible=11.1.0.0.0
diagnostic_dest=/oracle
memory_target=262144000
###########################################
# Processes and Sessions
###########################################
processes=150
###########################################
# Security and Auditing
###########################################
audit_file_dest=/oracle/admin/JSC2DB/adump
audit_trail=db
remote_login_passwordfile=exclusive


###########################################
# Shared Server
###########################################
dispatchers="(PROTOCOL=TCP) (SERVICE=JSC2DBXDB)"
###########################################
# Cluster Database
###########################################
db_unique_name=JSC2DB
control_files=/oracle/admin/JSC2DB/scripts/tempControl.ctl
JSC2DB1.instance_number=1
JSC2DB2.instance_number=2
JSC2DB2.thread=2
JSC2DB1.thread=1
JSC2DB1.undo_tablespace=UNDOTBS1
JSC2DB2.undo_tablespace=UNDOTBS2
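As a quick sanity check (an aside, not part of the original procedure), the two byte-valued parameters above convert to round figures:

```shell
# memory_target and db_recovery_file_dest_size from the init.ora above,
# expressed in MiB / GiB.
echo "memory_target: $((262144000 / 1024 / 1024)) MiB"
echo "db_recovery_file_dest_size: $((4294967296 / 1024 / 1024 / 1024)) GiB"
```

So the instance uses a 250 MiB memory_target and a 4 GiB flash recovery area.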
5. Run the following sqlplus command to connect to the database :

From node1 :

{node1:root}/ # su - rdbms
{node1:rdbms}/home/rdbms #
{node1:rdbms}/home/rdbms # export ORACLE_SID=JSC2DB1
{node1:rdbms}/home/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/home/rdbms # sqlplus /nolog
SQL> connect / as sysdba

6. Start up the database in NOMOUNT mode :

From node1 :
SQL> startup nomount pfile="/oracle/admin/JSC2DB/scripts/init.ora";

7. Create the Database (ASM diskgroup(s) must be created and mounted on all nodes) :

From node1 :
SQL> CREATE DATABASE "JSC2DB"
MAXINSTANCES 32
MAXLOGHISTORY 1
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE SIZE 300M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE SIZE 120M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 20M AUTOEXTEND ON NEXT 640K
MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K
MAXSIZE UNLIMITED
CHARACTER SET WE8ISO8859P15
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 SIZE 51200K, GROUP 2 SIZE 51200K;


8. Create a Temporary Tablespace :

From node1 :

SQL> CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
     '+DATA_DG2' SIZE 40M REUSE;

9. Create a 2nd Undo Tablespace :

From node1 :

SQL> CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE
     '+DATA_DG2' SIZE 200M REUSE
     AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;

10. Run the necessary scripts to build views, synonyms, etc. :

From node1 :

The primary scripts that you must run are :

@/oracle/rdbms/11.1.0/rdbms/admin/catalog.sql
-- creates the views of data dictionary tables and the dynamic performance views
@/oracle/rdbms/11.1.0/rdbms/admin/catproc.sql
-- establishes the usage of PL/SQL functionality and creates many of the Oracle-supplied PL/SQL packages
@/oracle/rdbms/11.1.0/rdbms/admin/catclust.sql
-- creates the cluster views

Run any other catalog scripts that you require.

11. Edit init<SID>.ora and set appropriate values for the 2nd instance on the 2nd node :

instance_name=JSC2DB2
instance_number=2
local_listener=LISTENER_JSC2DB2
thread=2
undo_tablespace=UNDOTBS2
12. From the first instance, run the following commands :

From node1 :
SQL> connect SYS/oracle as SYSDBA
SQL> shutdown immediate;
SQL> startup mount pfile="/oracle/admin/JSC2DB/scripts/init.ora";
SQL> alter database archivelog;
SQL> alter database open;
SQL> select group# from v$log where group# =3;
SQL> select group# from v$log where group# =4;
SQL> select group# from v$log where group# =6;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 SIZE 51200K,
     GROUP 4 SIZE 51200K,
     GROUP 6 SIZE 51200K;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;
SQL> create spfile='+DATA_DG2/JSC2DB/spfileJSC2DB.ora' FROM
     pfile='/oracle/admin/JSC2DB/scripts/init.ora';
SQL> shutdown immediate;

From node1 :
{node1:rdbms}/home/rdbms # echo "SPFILE='+DATA_DG2/JSC2DB/spfileJSC2DB.ora'" > /oracle/rdbms/11.1.0/dbs/initJSC2DB1.ora

From node2 :
{node2:rdbms}/home/rdbms # echo "SPFILE='+DATA_DG2/JSC2DB/spfileJSC2DB.ora'" > /oracle/rdbms/11.1.0/dbs/initJSC2DB2.ora
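The two echo commands above can also be expressed as one loop. A sketch (the target directory /tmp/dbs_demo is an assumption so the loop can be tried outside the cluster; on the real nodes it would be $ORACLE_HOME/dbs, and each node only needs its own file):

```shell
# Write the one-line pointer init files initJSC2DB1.ora / initJSC2DB2.ora,
# each referencing the shared spfile stored in ASM.
DBS_DIR=/tmp/dbs_demo
mkdir -p "$DBS_DIR"
for i in 1 2; do
  echo "SPFILE='+DATA_DG2/JSC2DB/spfileJSC2DB.ora'" > "$DBS_DIR/initJSC2DB$i.ora"
done
cat "$DBS_DIR/initJSC2DB1.ora"
```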
13. Create the Oracle instance password files :

From node1 :
{node1:rdbms}/oracle/rdbms/11.1.0/bin # orapwd file=/oracle/rdbms/11.1.0/dbs/orapwJSC2DB1 password=oracle force=y

From node2 :
{node2:rdbms}/oracle/rdbms/11.1.0/bin # orapwd file=/oracle/rdbms/11.1.0/dbs/orapwJSC2DB2 password=oracle force=y
14. Start the second instance (assuming that your cluster configuration is up and running).

15. Configure listener.ora / sqlnet.ora / tnsnames.ora.

Use netca and/or netmgr to check the configuration of the listener and to configure Oracle Net services (by default the Net service name may be equal to the global database name; see the instance parameter service_names).

16. Configure Oracle Enterprise Manager (dbconsole or grid control agent).

Then start the Grid agent :
$ emctl start agent

17. Check /etc/oratab.

The file should contain a reference to the database name, not to the instance name.
The last field should always be N in a RAC environment, to prevent two instances of the same name from being started.
{node1:rdbms}/oracle/rdbms # cat /etc/oratab

+ASM1:/oracle/asm/11.1.0:N
JSC1DB:/oracle/rdbms/11.1.0:N
JSC2DB:/oracle/rdbms/11.1.0:N
{node1:rdbms}/oracle/rdbms #
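The restart flag can also be checked mechanically. A sketch (sample oratab content is inlined in /tmp/oratab.sample, an assumed path, so the check runs anywhere; point it at /etc/oratab on a real node):

```shell
# Warn about any non-comment oratab entry whose third field is not "N";
# on RAC, the clusterware (not the dbstart script) must start the instances.
cat > /tmp/oratab.sample <<'EOF'
+ASM1:/oracle/asm/11.1.0:N
JSC1DB:/oracle/rdbms/11.1.0:N
JSC2DB:/oracle/rdbms/11.1.0:N
EOF
awk -F: '/^[^#]/ && NF >= 3 && $3 != "N" { print "WARN: " $1 " restart flag is " $3 }' /tmp/oratab.sample
echo "oratab check done"
```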


18. Register the database resources with the cluster srvctl command :

From node1 :
{node1:rdbms}/home/rdbms # srvctl add database -d JSC2DB -o /oracle/rdbms/11.1.0
{node1:rdbms}/home/rdbms # srvctl add instance -d JSC2DB -i JSC2DB1 -n node1
{node1:rdbms}/home/rdbms # srvctl add instance -d JSC2DB -i JSC2DB2 -n node2
{node1:rdbms}/home/rdbms #
{node1:rdbms}/oracle/rdbms # crsstat JSC2DB
HA Resource                  Target   State
-----------                  ------   -----
ora.JSC2DB.JSC2DB1.inst      ONLINE   ONLINE on node1
ora.JSC2DB.JSC2DB2.inst      ONLINE   ONLINE on node2
ora.JSC2DB.db                ONLINE   ONLINE on node1
{node1:rdbms}/oracle/rdbms #
15.3 Post Operations

15.3.1 Update the RDBMS unix user .profile

To be done on each node for the rdbms (in our case) or oracle unix user.
vi the $HOME/.profile file in the rdbms user's home directory and add the entries shown below :
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
export PATH
if [ -s "$MAIL" ]            # This is at Shell startup. In normal
then echo "$MAILMSG"         # operation, the Shell checks
fi                           # periodically.
ENV=$HOME/.kshrc
export ENV
#The following line is added by License Use Management installation
export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin
export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORA_CRS_HOME=$CRS_HOME
export ASM_HOME=$ORACLE_BASE/asm/11.1.0
export ORA_ASM_HOME=$ASM_HOME
export ORACLE_HOME=$ORACLE_BASE/rdbms/11.1.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib:$ORACLE_HOME/lib32:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
export TNS_ADMIN=$ASM_HOME/network/admin
export ORACLE_SID=JSC1DB1
if [ -t 0 ]; then
stty intr ^C
fi
Disconnect from the rdbms user, and reconnect to load the modified $HOME/.profile.

THEN on node2 :
..
export ORACLE_SID=JSC1DB2
..

Disconnect from the rdbms user, and reconnect to load the modified $HOME/.profile.


15.3.2 Administer and Check


To start / stop the database, stopping all instances :

{node2:rdbms}/oracle/rdbms # srvctl stop database -d JSC1DB
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db
ora.JSC1DB.db  application    OFFLINE   OFFLINE
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst
ora....B1.inst application    OFFLINE   OFFLINE
ora....B2.inst application    OFFLINE   OFFLINE
{node1:rdbms}/oracle/rdbms #

{node2:rdbms}/oracle/rdbms # srvctl start database -d JSC1DB
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db
ora.JSC1DB.db  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #

To start / stop a selected database instance :

{node2:rdbms}/oracle/rdbms # srvctl stop instance -d JSC1DB -i JSC1DB1
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db
ora.JSC1DB.db  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst
ora....B1.inst application    OFFLINE   OFFLINE
ora....B2.inst application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #

{node2:rdbms}/oracle/rdbms # srvctl stop instance -d JSC1DB -i JSC1DB2
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst
ora....B1.inst application    OFFLINE   OFFLINE
ora....B2.inst application    OFFLINE   OFFLINE
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db
ora.JSC1DB.db  application    OFFLINE   OFFLINE

{node2:rdbms}/oracle/rdbms # srvctl start instance -d JSC1DB -i JSC1DB1
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    OFFLINE   OFFLINE
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db
ora.JSC1DB.db  application    OFFLINE   OFFLINE

{node2:rdbms}/oracle/rdbms # srvctl start instance -d JSC1DB -i JSC1DB2
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db
ora.JSC1DB.db  application    ONLINE    ONLINE    node2
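Note that crsstat is not a tool shipped with Oracle: it is assumed here to be the widely-circulated wrapper script that reformats crs_stat output into the "HA Resource / Target / State" columns used in this guide. A minimal sketch of the idea, run against inlined sample crs_stat output so it can execute anywhere:

```shell
# Reformat the "NAME=/TARGET=/STATE=" blocks that crs_stat prints
# into one row per resource.
crs_stat_sample() {
cat <<'EOF'
NAME=ora.JSC1DB.db
TYPE=application
TARGET=ONLINE
STATE=ONLINE on node1
EOF
}
crs_stat_sample | awk -F= '
  $1 == "NAME"   { name = $2 }
  $1 == "TARGET" { target = $2 }
  $1 == "STATE"  { printf "%-28s %-8s %s\n", name, target, $2 }'
```

On a real node, replace crs_stat_sample with $CRS_HOME/bin/crs_stat.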


Getting the profile of the ora.JSC1DB.JSC1DB1.inst resource within the clusterware :

{node1:rdbms}/oracle/rdbms # crs_stat -p ora.JSC1DB.JSC1DB1.inst
NAME=ora.JSC1DB.JSC1DB1.inst
TYPE=application
ACTION_SCRIPT=/oracle/rdbms/11.1.0/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=300
DESCRIPTION=CRS application for Instance
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=node1
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=ora.node1.ASM1.asm
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=900
STOP_TIMEOUT=300
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
{node1:rdbms}/oracle/rdbms #

Getting the profile of the ora.JSC1DB.db resource within the clusterware :

{node1:rdbms}/oracle/rdbms # crs_stat -p ora.JSC1DB.db
NAME=ora.JSC1DB.db
TYPE=application
ACTION_SCRIPT=/crs/11.1.0/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=600
DESCRIPTION=CRS application for the Database
FAILOVER_DELAY=0
FAILURE_INTERVAL=60
FAILURE_THRESHOLD=1
HOSTING_MEMBERS=
OPTIONAL_RESOURCES=
PLACEMENT=balanced
REQUIRED_RESOURCES=
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=600
START_TIMEOUT=600
STOP_TIMEOUT=600
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
{node1:rdbms}/oracle/rdbms #


Check the JSC1DB spfile or init file content. For example, from node1 :


{node1:rdbms}/oracle/rdbms # cat 11.1.0/dbs/initJSC1DB1.ora
SPFILE='+DATA_DG1/JSC1DB/spfileJSC1DB.ora'
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Wed Feb 20 19:33:47 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

SQL> connect / as sysdba
Connected.
SQL> create pfile='/oracle/rdbms/JSC1DB_pfile.ora' from
spfile='+DATA_DG1/JSC1DB/spfileJSC1DB.ora';
File created.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit
Production
With the Partitioning, Real Application Clusters and Real Application Testing options
{node1:rdbms}/oracle/rdbms # ls
11.1.0
JSC1DB
JSC1DB_pfile.ora
{node1:rdbms}/oracle/rdbms # cat JSC1DB_pfile.ora
JSC1DB1.__db_cache_size=8388608
JSC1DB2.__db_cache_size=16777216
JSC1DB1.__java_pool_size=12582912
JSC1DB2.__java_pool_size=4194304
JSC1DB1.__large_pool_size=4194304
JSC1DB2.__large_pool_size=4194304
JSC1DB1.__oracle_base='/oracle'#ORACLE_BASE set from environment
JSC1DB2.__oracle_base='/oracle'#ORACLE_BASE set from environment
JSC1DB1.__pga_aggregate_target=83886080
JSC1DB2.__pga_aggregate_target=83886080
JSC1DB1.__sga_target=180355072
JSC1DB2.__sga_target=180355072
JSC1DB1.__shared_io_pool_size=0
JSC1DB2.__shared_io_pool_size=0
JSC1DB1.__shared_pool_size=146800640
JSC1DB2.__shared_pool_size=146800640
JSC1DB1.__streams_pool_size=0
JSC1DB2.__streams_pool_size=0
*.audit_file_dest='/oracle/admin/JSC1DB/adump'
*.audit_trail='db'
*.cluster_database_instances=2
*.cluster_database=true
*.compatible='11.1.0.0.0'
*.control_files='+DATA_DG1/jsc1db/controlfile/current.261.647193795','+DATA_DG1/jsc1db/controlfile/current.260.647193795'
*.db_block_size=8192
*.db_create_file_dest='+DATA_DG1'
*.db_domain=''
*.db_name='JSC1DB'
*.db_recovery_file_dest='+DATA_DG1'
*.db_recovery_file_dest_size=4294967296
*.diagnostic_dest='/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=JSC1DBXDB)'
JSC1DB1.instance_number=1
JSC1DB2.instance_number=2
*.log_archive_dest_1='LOCATION=+DATA_DG1/'
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=262144000
*.open_cursors=300
*.processes=150
*.remote_listener='LISTENERS_JSC1DB'
*.remote_login_passwordfile='exclusive'
JSC1DB2.thread=2


JSC1DB1.thread=1
JSC1DB1.undo_tablespace='UNDOTBS1'
JSC1DB2.undo_tablespace='UNDOTBS2'

{node1:rdbms}/oracle/rdbms #


Checking listener status on each node :

{node1:rdbms}/oracle/rdbms # lsnrctl status
LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 20-FEB-2008 21:15:01
Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_NODE1
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date                19-FEB-2008 13:59:26
Uptime                    1 days 7 hr. 15 min. 36 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "JSC1DB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DBXDB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DB_XPT" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
The command completed successfully
{node1:rdbms}/oracle/rdbms #

Check also :
{node1:rdbms}/oracle/rdbms # lsnrctl services
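When checking many services, the READY instances can be extracted from a saved lsnrctl status capture. A sketch, not part of the original procedure (sample lines are inlined in /tmp/lsnr.sample, an assumed path):

```shell
# List the instance names reported with status READY by the listener.
cat > /tmp/lsnr.sample <<'EOF'
Service "JSC1DB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
EOF
grep 'status READY' /tmp/lsnr.sample | sed 's/.*Instance "\([^"]*\)".*/\1/'
```

On a real node, pipe `lsnrctl status` straight into the same grep/sed pair.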


15.4 Setting Database Local and Remote Listeners

For the Oracle cluster database instances, we'll have to look at these parameters :

Instance   Name              Value
--------   ---------------   ----------------
*          remote_listener   LISTENERS_JSC1DB
JSC1DB1    local_listener    LISTENER_JSC1DB1
JSC1DB2    local_listener    LISTENER_JSC1DB2

remote_listener from node1 and node2 MUST BE THE SAME, and ENTRIES MUST BE PRESENT in the tnsnames.ora on each node.
local_listener from node1 and node2 are different, and ENTRIES MUST BE PRESENT in the tnsnames.ora on each node.
The local_listener entries from node1 and node2 are not the ones defined in the listener.ora files on each node.

At database level, you should see :

From node1 :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 15:19:04 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
SQL> connect / as sysdba
Connected.
SQL> show parameter listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string
remote_listener                      string      LISTENERS_JSC1DB
SQL>

From node2 :

{node2:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node2:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB2
{node2:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 15:19:04 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
SQL> connect / as sysdba
Connected.
SQL> show parameter listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string
remote_listener                      string      LISTENERS_JSC1DB
SQL>

ONLY the remote_listener parameter is set; the local_listener parameter for each instance is not mandatory as we're using port 1521 (automatic registration) for the default node listeners. But if you are not using port 1521, it is mandatory to set them to get proper load balancing and failover of user connections.


$ORACLE_HOME/network/admin/tnsnames.ora (which will be in /oracle/asm/11.1.0/network/admin)

MANDATORY !!! On all nodes, for the JSC1DB database, you should have or add the following lines :

LISTENERS_JSC1DB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )
LISTENER_JSC1DB1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
  )
LISTENER_JSC1DB2 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

If remote_listener and local_listener are not set, empty, or incorrectly set, you should issue the following commands :

From node1 :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 15:19:04 2008
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
SQL> connect / as sysdba
Connected.
SQL> ALTER SYSTEM SET local_listener='LISTENER_JSC1DB1' SCOPE=BOTH SID='JSC1DB1';
System altered.
SQL> ALTER SYSTEM SET local_listener='LISTENER_JSC1DB2' SCOPE=BOTH SID='JSC1DB2';
System altered.
SQL>

If needed to set remote_listener, the command will be as follows :

SQL> ALTER SYSTEM SET remote_listener='LISTENERS_JSC1DB' SCOPE=BOTH SID='*';
System altered.
From node1 :

SQL> show parameter listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      LISTENER_JSC1DB1
remote_listener                      string      LISTENERS_JSC1DB
SQL>

From node2 :

SQL> show parameter listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      LISTENER_JSC1DB2
remote_listener                      string      LISTENERS_JSC1DB
SQL>


15.5 Creating Oracle Services

With 11g, it is no longer possible to create services thru the DBCA tool; the option has been removed.
The 2 options are :

- thru the srvctl tool from Oracle Clusterware
- thru Oracle Grid Control, or DB Console

15.5.1 Creation thru srvctl command

NEVER create a service with the same name as the database !!!

Subject: Cannot manage service with srvctl, when it's created with same name as Database: PRKO-2120
Doc ID: Note:362645.1
Details of the command to add Oracle services thru srvctl :

srvctl add service -d <name> -s <service_name> -r <preferred_list> [-a <available_list>] [-P <TAF_policy>] [-u]

-d   Database name
-s   Service name
-r   for services, list of preferred instances; this list cannot include available instances
-a   for services, list of available instances; this list cannot include preferred instances
-P   for services, TAF preconnect policy (NONE, BASIC, PRECONNECT)
-u   updates the preferred or available list for the service to support the specified instance.
     Only one instance may be specified with the -u switch. Instances that already support the
     service should not be included.
Examples for a 4-node RAC cluster, with a cluster database named ORA and 4 instances named ORA1, ORA2, ORA3 and ORA4.

Add a STD_BATCH service to an existing database with preferred instances (-r) and available instances (-a).
Use basic failover to the available instances.
srvctl add service -d ORA -s STD_BATCH -r ORA1,ORA2 -a ORA3,ORA4

Add a STD_BATCH service to an existing database with preferred instances in list one and
available instances in list two. Use preconnect at the available instances.
srvctl add service -d ORA -s STD_BATCH -r ORA1,ORA2 -a ORA3,ORA4 -P PRECONNECT


In our case, we want to :

Add an OLTP service to the existing JSC1DB database,
with JSC1DB1 and JSC1DB2 as preferred instances (-r),
using basic failover to the available instances.
As rdbms user, from one node :
{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s OLTP -r JSC1DB1,JSC1DB2

Add a BATCH service to the existing JSC1DB database,
with JSC1DB2 as preferred instance (-r)
and JSC1DB1 as available instance (-a),
using basic failover to the available instances.
As rdbms user, from one node :
{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s BATCH -r JSC1DB2 -a JSC1DB1

Add a DISCO service to the existing JSC1DB database,
with JSC1DB1 as preferred instance (-r)
and JSC1DB2 as available instance (-a),
using basic failover to the available instances.
As rdbms user, from one node :
{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s DISCO -r JSC1DB1 -a JSC1DB2
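When several services have to be declared, the commands above can be generated from a small table and reviewed before touching the cluster. A dry-run sketch (it only prints the commands; the service/instance table is the one used in this chapter):

```shell
# Print (not execute) one "srvctl add service" command per service;
# a third column, when present, becomes the -a available-instance list.
DB=JSC1DB
while read svc pref avail; do
  cmd="srvctl add service -d $DB -s $svc -r $pref"
  [ -n "$avail" ] && cmd="$cmd -a $avail"
  echo "$cmd"
done <<'EOF'
OLTP JSC1DB1,JSC1DB2
BATCH JSC1DB2 JSC1DB1
DISCO JSC1DB1 JSC1DB2
EOF
```

Once the printed commands look right, paste them back into the shell (or replace `echo "$cmd"` with `eval "$cmd"`).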

Query the cluster before creating any service :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
ora.JSC1DB.db  application    ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #

Create and manage the OLTP service :

Let's see it in detail. As rdbms user, from one node :
{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1

To create the service OLTP :

Add an OLTP service to the existing JSC1DB database,
with JSC1DB1 and JSC1DB2 as preferred instances (-r),
using basic failover to the available instances.
{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s OLTP -r JSC1DB1,JSC1DB2
{node1:rdbms}/oracle/rdbms # srvctl config service -d JSC1DB -s OLTP
OLTP PREF: JSC1DB1 JSC1DB2 AVAIL:
{node1:rdbms}/oracle/rdbms #

To query after OLTP service creation :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
ora....DB1.srv application    OFFLINE   OFFLINE
ora....DB2.srv application    OFFLINE   OFFLINE
ora....OLTP.cs application    OFFLINE   OFFLINE
ora.JSC1DB.db  application    ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #

To query and get the full names of the OLTP service resources (entries added at Oracle Clusterware level by the creation of the Oracle service) :

{node1:rdbms}/oracle/rdbms # crs_stat |grep OLTP
NAME=ora.JSC1DB.OLTP.JSC1DB1.srv
NAME=ora.JSC1DB.OLTP.JSC1DB2.srv
NAME=ora.JSC1DB.OLTP.cs
{node1:rdbms}/oracle/rdbms #
{node1:rdbms}/oracle/rdbms # crsstat OLTP
HA Resource                    Target    State
-----------                    ------    -----
ora.JSC1DB.OLTP.JSC1DB1.srv    OFFLINE   OFFLINE
ora.JSC1DB.OLTP.JSC1DB2.srv    OFFLINE   OFFLINE
ora.JSC1DB.OLTP.cs             OFFLINE   OFFLINE
{node1:rdbms}/oracle/rdbms #

To start the service OLTP :

{node1:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s OLTP
{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s OLTP
Service OLTP is running on instance(s) JSC1DB1, JSC1DB2
{node1:rdbms}/oracle/rdbms #

To query after OLTP service startup :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
ora....DB1.srv application    ONLINE    ONLINE    node1
ora....DB2.srv application    ONLINE    ONLINE    node2
ora....OLTP.cs application    ONLINE    ONLINE    node1
ora.JSC1DB.db  application    ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #

To stop the OLTP service :

{node1:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s OLTP

To query after stopping the OLTP service :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
ora....DB1.srv application    OFFLINE   OFFLINE
ora....DB2.srv application    OFFLINE   OFFLINE
ora....OLTP.cs application    OFFLINE   OFFLINE
ora.JSC1DB.db  application    ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #

THEN YOU MUST add the following lines in the tnsnames.ora file on all nodes.
The srvctl tool does not add them when creating the service.
Looking at the description, we can see :

- All VIPs declared
- Load balancing configured
- Failover configured for SESSION and SELECT

OLTP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = OLTP)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

A good method is to modify the tnsnames.ora on one node, and remote copy the modified tnsnames.ora file to all other nodes, making sure that all nodes have the same content.
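The copy step can be scripted. A dry-run sketch that only prints the transfer commands (the node list and the use of rcp are assumptions; scp works the same way):

```shell
# Show the commands that would push the edited tnsnames.ora to the
# other nodes; remove the echo to actually run them.
TNS_ADMIN=/oracle/asm/11.1.0/network/admin
for n in node2; do
  echo "rcp $TNS_ADMIN/tnsnames.ora $n:$TNS_ADMIN/tnsnames.ora"
done
```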

Test connection using the OLTP connect string from node 2 :

{node2:rdbms}/oracle/rdbms # sqlplus 'sys/password@oltp as sysdba'
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 08:48:38 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters and Real Application Testing options
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
JSC1DB1

SQL>

Test connection using the OLTP connect string from node 1 :

{node1:rdbms}/oracle/rdbms # sqlplus 'sys/password@oltp as sysdba'
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
JSC1DB2

SQL>


Let's stop the OLTP service to see if we can still connect through SQL*Plus.

From node2:

{node2:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s OLTP
{node2:rdbms}/oracle/rdbms # crsstat |grep OLTP
ora.JSC1DB.OLTP.JSC1DB1.srv    OFFLINE    OFFLINE
ora.JSC1DB.OLTP.JSC1DB2.srv    OFFLINE    OFFLINE
ora.JSC1DB.OLTP.cs             OFFLINE    OFFLINE
{node2:rdbms}/oracle/rdbms #
{node2:rdbms}/oracle/rdbms # sqlplus 'sys/password@oltp as sysdba'
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 09:10:18 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus
{node2:rdbms}/oracle/rdbms #
From node1 :
{node1:rdbms}/oracle/rdbms # sqlplus 'sys/password@oltp as sysdba'
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 09:09:35 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus
{node1:rdbms}/oracle/rdbms #
Conclusion: if the OLTP service is stopped, no new connections are allowed on any database
instance, even though the node listeners are still up and running !!!
Restarting the OLTP service will enable connections again:
{node2:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s OLTP
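Scripts that connect through a service can apply the conclusion above by checking the service state before attempting a connection. The sketch below is our own illustration, not part of the cookbook: `check_service_online` parses crsstat-style output, and a sample of that output is inlined so the function can be shown without a live cluster (in real use you would feed it the output of `crsstat`).

```shell
#!/bin/sh
# Return 0 if the given service's composite (.cs) resource is ONLINE
# in crsstat-style output passed as the first argument.
check_service_online() {
  crs_output="$1"; service="$2"
  echo "$crs_output" | grep "\.${service}\.cs" | grep -q 'ONLINE on'
}

# Inlined sample standing in for `crsstat` output.
sample='ora.JSC1DB.OLTP.cs   ONLINE   ONLINE on node1
ora.JSC1DB.BATCH.cs  OFFLINE  OFFLINE'

if check_service_online "$sample" OLTP; then
  echo "OLTP is up - safe to connect"
fi
check_service_online "$sample" BATCH || echo "BATCH is down - skip connection"
```

This avoids burning the three failed SQL*Plus attempts shown above when the target service is known to be offline.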


Create and manage the BATCH service:

Let's see it in detail. As the rdbms user, from one node:

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1

To create the service BATCH:

Add a BATCH service to the existing JSC1DB database
with JSC1DB2 as the preferred instance (-r)
and JSC1DB1 as the available instance (-a),
using basic failover to the available instance.

{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s BATCH -r JSC1DB2 -a JSC1DB1

{node1:rdbms}/oracle/rdbms # srvctl config service -d JSC1DB -s BATCH
BATCH PREF: JSC1DB2 AVAIL: JSC1DB1
{node1:rdbms}/oracle/rdbms #

To query after BATCH service creation:

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type        Target  State   Host
------------------------------------------------------------
ora....DB2.srv application OFFLINE OFFLINE
ora....ATCH.cs application OFFLINE OFFLINE
ora....B1.inst application ONLINE  ONLINE  node1
ora....B2.inst application ONLINE  ONLINE  node2
ora....DB1.srv application ONLINE  ONLINE  node1
ora....DB2.srv application ONLINE  ONLINE  node2
ora....OLTP.cs application ONLINE  ONLINE  node1
ora.JSC1DB.db  application ONLINE  ONLINE  node2
ora....SM1.asm application ONLINE  ONLINE  node1
ora....E1.lsnr application ONLINE  ONLINE  node1
ora.node1.gsd  application ONLINE  ONLINE  node1
ora.node1.ons  application ONLINE  ONLINE  node1
ora.node1.vip  application ONLINE  ONLINE  node1
ora....SM2.asm application ONLINE  ONLINE  node2
ora....E2.lsnr application ONLINE  ONLINE  node2
ora.node2.gsd  application ONLINE  ONLINE  node2
ora.node2.ons  application ONLINE  ONLINE  node2
ora.node2.vip  application ONLINE  ONLINE  node2
{node1:rdbms}/oracle/rdbms #

To query and get the full names of the BATCH service resources
(entries added at Oracle Clusterware level by the creation of the Oracle service):

{node1:rdbms}/oracle/rdbms # crs_stat |grep BATCH
NAME=ora.JSC1DB.BATCH.JSC1DB2.srv
NAME=ora.JSC1DB.BATCH.cs
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    OFFLINE  OFFLINE
ora.JSC1DB.BATCH.cs             OFFLINE  OFFLINE
{node1:rdbms}/oracle/rdbms #

To start the service BATCH:

{node1:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s BATCH

To query after BATCH service startup:

{node1:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node2
ora.JSC1DB.BATCH.cs             ONLINE   ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s BATCH
Service BATCH is running on instance(s) JSC1DB2
{node1:rdbms}/oracle/rdbms #

To stop the BATCH service:

{node1:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s BATCH

To query after stopping the BATCH service:

{node1:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    OFFLINE  OFFLINE
ora.JSC1DB.BATCH.cs             OFFLINE  OFFLINE
{node1:rdbms}/oracle/rdbms #
THEN YOU MUST add the following lines to the tnsnames.ora file on all nodes.
The srvctl tool does not add this entry when creating the service.

Looking at the description, we can see:
- All VIPs declared
- Load balancing configured
- Failover configured for SELECT with the BASIC method

BATCH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = BATCH)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

A good method is to modify the tnsnames.ora on one node, then remote-copy the
modified tnsnames.ora file to all other nodes, making sure that all nodes have the
same content.


BATCH Service Testing

{node1:rdbms}/oracle/rdbms # tnsping BATCH
TNS Ping Utility for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 23-FEB-2008 21:15:14
Copyright (c) 1997, 2007, Oracle. All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = BATCH)
(FAILOVER_MODE = (TYPE = SELECT) (METHOD = BASIC) (RETRIES = 180) (DELAY = 5))))
OK (10 msec)
{node1:rdbms}/oracle/rdbms #
Test connection using the BATCH connect string from node 2.
The BATCH service MUST BE started !!!

{node2:rdbms}/oracle/rdbms # sqlplus 'sys/password@batch as sysdba'
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 08:48:38 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters and Real Application Testing options
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
JSC1DB2

We're connected on instance JSC1DB2.

SQL>

Test connection using the BATCH connect string from node 1:

{node1:rdbms}/oracle/rdbms # sqlplus 'sys/password@batch as sysdba'
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
JSC1DB2

We're connected on instance JSC1DB2.

SQL>

As you can see, connections only go to instance JSC1DB2 (node2), as this is the preferred
instance for the BATCH service. No connections will go to instance JSC1DB1 (node1) unless
node2 fails; then BATCH will use the available instance, as set in the configuration.


Let's stop the BATCH service to see if we can still connect through SQL*Plus.

From node2:

{node2:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s BATCH
{node2:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    OFFLINE  OFFLINE
ora.JSC1DB.BATCH.cs             OFFLINE  OFFLINE
{node2:rdbms}/oracle/rdbms #
{node2:rdbms}/oracle/rdbms # sqlplus 'sys/password@batch as sysdba'


SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 09:10:18 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus
{node2:rdbms}/oracle/rdbms #
From node1 :
{node1:rdbms}/oracle/rdbms # sqlplus 'sys/password@batch as sysdba'
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 09:09:35 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus
{node1:rdbms}/oracle/rdbms #
Conclusion: if the BATCH service is stopped, no new connections are allowed on any database
instance, even though the node listeners are still up and running !!!
Restarting the BATCH service will enable connections again:
{node2:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s BATCH


Let's crash node2, which hosts the BATCH service's preferred instance JSC1DB2, to see what
happens to an existing connection.

The BATCH service MUST BE started !!!

From node2:

{node2:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node2
ora.JSC1DB.BATCH.cs             ONLINE   ONLINE on node2
{node2:rdbms}/oracle/rdbms #
Let's connect to the service from node1:

{node1:rdbms}/oracle/rdbms/11.1.0/network/admin # sqlplus 'sys/password@batch as sysdba'
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 11:04:47 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters and Real Application Testing options
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
JSC1DB2

We're connected on instance JSC1DB2.

SQL>

From node2 :
{node2:root}/ # reboot
Rebooting . . .

From node1 (same unix and SQL*Plus session as before):

SQL> /

INSTANCE_NAME
----------------
JSC1DB1

We're still connected, but now on instance JSC1DB1.

SQL>
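The instance name proves the session survived the crash; the failover can also be confirmed from the database side, because v$session exposes the TAF columns FAILOVER_TYPE, FAILOVER_METHOD, and FAILED_OVER. A small query sketch (the username filter is just an example; filter on whatever user your test session uses):

```sql
-- Run as a DBA on the surviving instance after the node crash.
-- FAILED_OVER shows YES for sessions that TAF migrated from the failed node.
SELECT username, failover_type, failover_method, failed_over
FROM   v$session
WHERE  username = 'SYS';
```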


From node1 (new unix session):

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node1
ora.JSC1DB.BATCH.cs             ONLINE   ONLINE on node1
ora.JSC1DB.JSC1DB1.inst         ONLINE   ONLINE on node1
ora.JSC1DB.JSC1DB2.inst         ONLINE   OFFLINE
ora.JSC1DB.OLTP.JSC1DB1.srv     ONLINE   ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv     ONLINE   OFFLINE
ora.JSC1DB.OLTP.cs              ONLINE   ONLINE on node1
ora.JSC1DB.db                   ONLINE   ONLINE on node1
ora.node1.ASM1.asm              ONLINE   ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr   ONLINE   ONLINE on node1
ora.node1.gsd                   ONLINE   ONLINE on node1
ora.node1.ons                   ONLINE   ONLINE on node1
ora.node1.vip                   ONLINE   ONLINE on node1
ora.node2.ASM2.asm              ONLINE   OFFLINE
ora.node2.LISTENER_NODE2.lsnr   ONLINE   OFFLINE
ora.node2.gsd                   ONLINE   OFFLINE
ora.node2.ons                   ONLINE   OFFLINE
ora.node2.vip                   ONLINE   ONLINE on node1
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat vip
HA Resource                     Target   State
-----------                     ------   -----
ora.node1.vip                   ONLINE   ONLINE on node1
ora.node2.vip                   ONLINE   ONLINE on node1
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node1
ora.JSC1DB.BATCH.cs             ONLINE   ONLINE on node1
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # srvctl config service -d JSC1DB -s BATCH
BATCH PREF: JSC1DB2 AVAIL: JSC1DB1
{node1:rdbms}/oracle/rdbms #
{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s BATCH
Service BATCH is running on instance(s) JSC1DB1
{node1:rdbms}/oracle/rdbms #


When node2 is back ONLINE, Oracle Clusterware will start: the VIP from node2, which had
failed over to node1, will get back to its home node (node2); the ASM instance 2 and the
database instance will restart, as will the other resources linked to node2.

{node2:rdbms}/oracle/rdbms # crsstat
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node1
ora.JSC1DB.BATCH.cs             ONLINE   ONLINE on node1
ora.JSC1DB.JSC1DB1.inst         ONLINE   ONLINE on node1
ora.JSC1DB.JSC1DB2.inst         ONLINE   ONLINE on node2
ora.JSC1DB.OLTP.JSC1DB1.srv     ONLINE   ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv     ONLINE   ONLINE on node2
ora.JSC1DB.OLTP.cs              ONLINE   ONLINE on node1
ora.JSC1DB.db                   ONLINE   ONLINE on node1
ora.node1.ASM1.asm              ONLINE   ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr   ONLINE   ONLINE on node1
ora.node1.gsd                   ONLINE   ONLINE on node1
ora.node1.ons                   ONLINE   ONLINE on node1
ora.node1.vip                   ONLINE   ONLINE on node1
ora.node2.ASM2.asm              ONLINE   ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr   ONLINE   ONLINE on node2
ora.node2.gsd                   ONLINE   ONLINE on node2
ora.node2.ons                   ONLINE   ONLINE on node2
ora.node2.vip                   ONLINE   ONLINE on node2
{node2:rdbms}/oracle/rdbms #

BUT the service status will not change unless we decide to switch back to JSC1DB2 as the
preferred instance for the BATCH service.

To relocate the service to its preferred instance JSC1DB2:

{node2:rdbms}/oracle/rdbms # srvctl relocate service -d JSC1DB -s BATCH -i JSC1DB1 -t JSC1DB2 -f
{node2:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s BATCH
Service BATCH is running on instance(s) JSC1DB2
{node2:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node2
ora.JSC1DB.BATCH.cs             ONLINE   ONLINE on node1
{node2:rdbms}/oracle/rdbms # srvctl config service -d JSC1DB -s BATCH
BATCH PREF: JSC1DB2 AVAIL: JSC1DB1
{node2:rdbms}/oracle/rdbms #
From node1 (same unix and SQL*Plus session as before):

SQL> /

INSTANCE_NAME
----------------
JSC1DB2

SQL>

We switched back to the preferred instance.
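After every unplanned failover, the same relocate incantation is repeated, so it can be wrapped in a small helper. The sketch below is our own illustration: `relocate_back` is a hypothetical name, and it only prints the srvctl command (a dry run) instead of executing it, so it can be shown without a live cluster.

```shell
#!/bin/sh
# Build the srvctl command that moves a service from its current instance
# back to the preferred one. Echoes the command instead of executing it.
relocate_back() {
  db="$1"; svc="$2"; from="$3"; to="$4"
  echo "srvctl relocate service -d $db -s $svc -i $from -t $to -f"
}

relocate_back JSC1DB BATCH JSC1DB1 JSC1DB2
```

In real use you would pipe the printed command to the shell (or drop the echo) once you trust the arguments.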


Create and manage the DISCO service:

Let's see it in detail. As the rdbms user, from one node:

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1

To create the service DISCO:

Add a DISCO service to the existing JSC1DB database
with JSC1DB1 as the preferred instance (-r)
and JSC1DB2 as the available instance (-a),
using basic failover to the available instance.

{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s DISCO -r JSC1DB1 -a JSC1DB2

To query after DISCO service creation:

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node2
ora.JSC1DB.BATCH.cs             ONLINE   ONLINE on node1
ora.JSC1DB.DISCO.JSC1DB1.srv    OFFLINE  OFFLINE
ora.JSC1DB.DISCO.cs             OFFLINE  OFFLINE
ora.JSC1DB.JSC1DB1.inst         ONLINE   ONLINE on node1
ora.JSC1DB.JSC1DB2.inst         ONLINE   ONLINE on node2
ora.JSC1DB.OLTP.JSC1DB1.srv     ONLINE   ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv     ONLINE   ONLINE on node2
ora.JSC1DB.OLTP.cs              ONLINE   ONLINE on node1
ora.JSC1DB.db                   ONLINE   ONLINE on node1
ora.node1.ASM1.asm              ONLINE   ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr   ONLINE   ONLINE on node1
ora.node1.gsd                   ONLINE   ONLINE on node1
ora.node1.ons                   ONLINE   ONLINE on node1
ora.node1.vip                   ONLINE   ONLINE on node1
ora.node2.ASM2.asm              ONLINE   ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr   ONLINE   ONLINE on node2
ora.node2.gsd                   ONLINE   ONLINE on node2
ora.node2.ons                   ONLINE   ONLINE on node2
ora.node2.vip                   ONLINE   ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat DISCO
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.DISCO.JSC1DB1.srv    OFFLINE  OFFLINE
ora.JSC1DB.DISCO.cs             OFFLINE  OFFLINE
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # srvctl config service -d JSC1DB -s DISCO
DISCO PREF: JSC1DB1 AVAIL: JSC1DB2
{node1:rdbms}/oracle/rdbms #
{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s DISCO
Service DISCO is not running.
{node1:rdbms}/oracle/rdbms #


To start the service DISCO:

{node1:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s DISCO

To query after DISCO service startup:

{node1:rdbms}/oracle/rdbms # crsstat DISCO
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.DISCO.JSC1DB1.srv    ONLINE   ONLINE on node1
ora.JSC1DB.DISCO.cs             ONLINE   ONLINE on node1
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s DISCO
Service DISCO is running on instance(s) JSC1DB1
{node1:rdbms}/oracle/rdbms #

To stop the DISCO service:

{node1:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s DISCO

To query after stopping the DISCO service:

{node1:rdbms}/oracle/rdbms # crsstat DISCO
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.DISCO.JSC1DB1.srv    OFFLINE  OFFLINE
ora.JSC1DB.DISCO.cs             OFFLINE  OFFLINE
{node1:rdbms}/oracle/rdbms #
THEN YOU MUST add the following lines to the tnsnames.ora file on all nodes.
The srvctl tool does not add this entry when creating the service.

Looking at the description, we can see:
- All VIPs declared
- Load balancing configured
- Failover configured for SELECT with the BASIC method

DISCO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DISCO)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

A good method is to modify the tnsnames.ora on one node, then remote-copy the
modified tnsnames.ora file to all other nodes, making sure that all nodes have the
same content.


We now have 3 services for the JSC1DB cluster database:

- OLTP running on preferred database instances JSC1DB1 on node1 and JSC1DB2 on node2.
- BATCH running on preferred database instance JSC1DB2 on node2.
- DISCO running on preferred database instance JSC1DB1 on node1.

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node2
ora.JSC1DB.BATCH.cs             ONLINE   ONLINE on node1
ora.JSC1DB.DISCO.JSC1DB1.srv    ONLINE   ONLINE on node1
ora.JSC1DB.DISCO.cs             ONLINE   ONLINE on node1
ora.JSC1DB.JSC1DB1.inst         ONLINE   ONLINE on node1
ora.JSC1DB.JSC1DB2.inst         ONLINE   ONLINE on node2
ora.JSC1DB.OLTP.JSC1DB1.srv     ONLINE   ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv     ONLINE   ONLINE on node2
ora.JSC1DB.OLTP.cs              ONLINE   ONLINE on node1
ora.JSC1DB.db                   ONLINE   ONLINE on node1
ora.node1.ASM1.asm              ONLINE   ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr   ONLINE   ONLINE on node1
ora.node1.gsd                   ONLINE   ONLINE on node1
ora.node1.ons                   ONLINE   ONLINE on node1
ora.node1.vip                   ONLINE   ONLINE on node1
ora.node2.ASM2.asm              ONLINE   ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr   ONLINE   ONLINE on node2
ora.node2.gsd                   ONLINE   ONLINE on node2
ora.node2.ons                   ONLINE   ONLINE on node2
ora.node2.vip                   ONLINE   ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat srv
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node2
ora.JSC1DB.DISCO.JSC1DB1.srv    ONLINE   ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB1.srv     ONLINE   ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv     ONLINE   ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # for service in OLTP BATCH DISCO
> do
>   srvctl config service -d JSC1DB -s $service
>   srvctl status service -d JSC1DB -s $service
>   echo ''
> done
OLTP PREF: JSC1DB1 JSC1DB2 AVAIL:
Service OLTP is running on instance(s) JSC1DB1, JSC1DB2

BATCH PREF: JSC1DB2 AVAIL: JSC1DB1
Service BATCH is running on instance(s) JSC1DB2

DISCO PREF: JSC1DB1 AVAIL: JSC1DB2
Service DISCO is running on instance(s) JSC1DB1

{node1:rdbms}/oracle/rdbms #


At database level, you should check the database parameter service_names, ensuring that the
services created are listed.

From node1:

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 21:18:12 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> show parameter service

NAME            TYPE     VALUE
--------------- -------- ------------------------------
service_names   string   OLTP, JSC1DB, DISCO

SQL>

On node1, we'll see OLTP and DISCO, as they are preferred on JSC1DB1,
BUT not BATCH, as it is only available (not preferred) on JSC1DB1.
JSC1DB MUST also be declared in the service_names list.

From node2:

{node2:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node2:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB2
{node2:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 21:20:40 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> show parameter service

NAME            TYPE     VALUE
--------------- -------- ------------------------------
service_names   string   OLTP, JSC1DB, BATCH

SQL>

On node2, we'll see OLTP and BATCH, as they are preferred on JSC1DB2,
BUT not DISCO, as it is only available (not preferred) on JSC1DB2.
JSC1DB MUST also be declared in the service_names list.


If service_names is not set, is empty, or is not correctly set, you should issue the
following commands.

From node1:

{node1:oracle}/oracle -> export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:oracle}/oracle -> export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 21:18:12 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> ALTER SYSTEM SET service_names='JSC1DB','OLTP','DISCO' SCOPE=BOTH SID='JSC1DB1';
System altered.
SQL> ALTER SYSTEM SET service_names='JSC1DB','OLTP','BATCH' SCOPE=BOTH SID='JSC1DB2';
System altered.
SQL>
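When each instance needs its own service_names value, generating the ALTER SYSTEM statements from a small table avoids typos in the repeated SQL. A sketch (the "sid:svc1,svc2" input format and the function name are our own assumptions):

```shell
#!/bin/sh
# Emit one ALTER SYSTEM ... SID='<sid>' statement per "sid:svc1,svc2,..." input line.
gen_service_names_sql() {
  while IFS=: read -r sid svcs; do
    # Turn the comma-separated list into Oracle's quoted form: 'A','B','C'
    quoted=$(echo "$svcs" | sed "s/,/','/g")
    echo "ALTER SYSTEM SET service_names='${quoted}' SCOPE=BOTH SID='${sid}';"
  done
}

gen_service_names_sql <<EOF
JSC1DB1:JSC1DB,OLTP,DISCO
JSC1DB2:JSC1DB,OLTP,BATCH
EOF
```

The printed statements can then be pasted into (or spooled through) a single SQL*Plus session, as shown above.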
Check that the services are registered with the command lsnrctl status. On node1 with
listener_node1:

{node1:rdbms}/oracle/rdbms # lsnrctl status
LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 23-FEB-2008 21:45:43
Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_NODE1
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date                22-FEB-2008 07:57:23
Uptime                    1 days 13 hr. 48 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
  Instance "+ASM1", status READY, has 2 handler(s) for this service...
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
  Instance "+ASM1", status READY, has 2 handler(s) for this service...
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "BATCH" has 1 instance(s).
  Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "DISCO" has 1 instance(s).
  Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Service "JSC1DB" has 2 instance(s).
  Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
  Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DBXDB" has 2 instance(s).
  Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
  Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DB_XPT" has 2 instance(s).
  Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
  Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "OLTP" has 2 instance(s).
  Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
  Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
The command completed successfully
{node1:rdbms}/oracle/rdbms #
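To check service registration across nodes without eyeballing the whole listing, the "Services Summary" lines can be filtered with awk. A sketch, shown against an inlined sample of the output above so it runs standalone (in real use you would pipe `lsnrctl status` into the function):

```shell
#!/bin/sh
# Print the service names from lsnrctl status "Services Summary" lines,
# i.e. lines of the form: Service "NAME" has N instance(s).
list_services() {
  awk -F'"' '/^Service "/ {print $2}'
}

# Inlined sample standing in for `lsnrctl status` output.
sample='Services Summary...
Service "BATCH" has 1 instance(s).
Service "DISCO" has 1 instance(s).
Service "OLTP" has 2 instance(s).'

echo "$sample" | list_services
```

Comparing this list between node1 and node2 quickly shows whether a service failed to register with one of the listeners.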


Service "BATCH" has 1 instance(s).
  Instance "JSC1DB2", status READY, has 1 handler(s) for this service...

This is normal, as the BATCH service is:
- available on the JSC1DB1 instance, node1
- preferred on the JSC1DB2 instance, node2

Service "DISCO" has 1 instance(s).
  Instance "JSC1DB1", status READY, has 2 handler(s) for this service...

This is normal, as the DISCO service is:
- preferred on the JSC1DB1 instance, node1
- available on the JSC1DB2 instance, node2

Service "OLTP" has 2 instance(s).
  Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
  Instance "JSC1DB2", status READY, has 1 handler(s) for this service...

This is normal, as the OLTP service is:
- preferred on the JSC1DB1 instance, node1
- preferred on the JSC1DB2 instance, node2

Check that the services are registered with the command lsnrctl status. On node2 with
listener_node2:

{node2:rdbms}/oracle/rdbms # lsnrctl status
LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 23-FEB-2008 21:55:48
Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_NODE2
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date                22-FEB-2008 11:11:32
Uptime                    1 days 10 hr. 44 min. 17 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/node2/listener_node2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.182)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.82)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Instance "+ASM2", status READY, has 2 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Instance "+ASM2", status READY, has 2 handler(s) for this service...
Service "BATCH" has 1 instance(s).
Instance "JSC1DB2", status READY, has 2 handler(s) for this service...
Service "DISCO" has 1 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Service "JSC1DB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 2 handler(s) for this service...
Service "JSC1DBXDB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DB_XPT" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 2 handler(s) for this service...
Service "OLTP" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...


Instance "JSC1DB2", status READY, has 2 handler(s) for this service...


The command completed successfully
{node2:rdbms}/oracle/rdbms #


Let's check what happens at the database level when node2 fails.

Before the failure of one node, crsstat BATCH will show:

{node1:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node2
ora.JSC1DB.BATCH.cs             ONLINE   ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 22:14:50 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> show parameter service

NAME            TYPE     VALUE
--------------- -------- ------------------------------
service_names   string   OLTP, JSC1DB, DISCO

SQL>

If node2 fails, or reboots for any reason: what should we see on node1?

After the failure: on node1, as BATCH was preferred on JSC1DB2 and available on JSC1DB1,
THEN after the node2 failure, the BATCH service is switched to the available instance JSC1DB1.

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 22:14:50 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> show parameter service

NAME            TYPE     VALUE
--------------- -------- ------------------------------
service_names   string   OLTP, JSC1DB, DISCO, BATCH

SQL>


After the failure of node2, crs_stat -t will show:

- The VIP from node2 switches to node1.
- ONS, GSD, the listener, the ASM2 instance, and the JSC1DB2 instance are switched to OFFLINE state.
- The OLTP service is switched to OFFLINE state on node2, but is still in ONLINE state on node1.
- The BATCH service is switched to ONLINE state from node2 to node1.
- THEN the BATCH service is still available through node1.

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                     Target   State
-----------                     ------   -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE   ONLINE on node1
ora.JSC1DB.BATCH.cs             ONLINE   ONLINE on node1
ora.JSC1DB.JSC1DB1.inst         ONLINE   ONLINE on node1
ora.JSC1DB.JSC1DB2.inst         ONLINE   OFFLINE
ora.JSC1DB.OLTP.JSC1DB1.srv     ONLINE   ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv     ONLINE   OFFLINE
ora.JSC1DB.OLTP.cs              ONLINE   ONLINE on node1
ora.JSC1DB.db                   ONLINE   ONLINE on node1
ora.node1.ASM1.asm              ONLINE   ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr   ONLINE   ONLINE on node1
ora.node1.gsd                   ONLINE   ONLINE on node1
ora.node1.ons                   ONLINE   ONLINE on node1
ora.node1.vip                   ONLINE   ONLINE on node1
ora.node2.ASM2.asm              ONLINE   OFFLINE
ora.node2.LISTENER_NODE2.lsnr   ONLINE   OFFLINE
ora.node2.gsd                   ONLINE   OFFLINE
ora.node2.ons                   ONLINE   OFFLINE
ora.node2.vip                   ONLINE   ONLINE on node1
{node1:rdbms}/oracle/rdbms #

15.6 Transaction Application Failover

$ORACLE_HOME/network/admin/listener.ora, which will be in /oracle/asm/11.1.0/network/admin

- The listener from node1 is listening on node1-vip (10.3.25.181) and on node1 (10.3.25.81).
- The listener from node2 is listening on node2-vip (10.3.25.182) and on node2 (10.3.25.82).
- Users will connect through node1-vip or node2-vip.
- If load balancing is configured in the client connection string, user connections will be
  dispatched to both nodes depending on the workload.
- DBAs can connect through node1 or node2 for administration purposes.
- The IPC entry is necessary.
Checking content of listener.ora on node1 :

{node1:rdbms}/oracle/rdbms/11.1.0/network/admin # cat listener.ora
# listener.ora.node1 Network Configuration File:
# /oracle/rdbms/11.1.0/network/admin/listener.ora.node1
# Generated by Oracle configuration tools.
LISTENER_NODE1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.25.81)(PORT = 1521)(IP = FIRST))
    )
  )
{node1:rdbms}/oracle/rdbms/11.1.0/network/admin #
Checking content of listener.ora on node2 :

{node2:rdbms}/oracle/rdbms/11.1.0/network/admin # cat listener.ora
# listener.ora.node2 Network Configuration File:
# /oracle/rdbms/11.1.0/network/admin/listener.ora.node2
# Generated by Oracle configuration tools.
LISTENER_NODE2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.25.82)(PORT = 1521)(IP = FIRST))
    )
  )
{node2:rdbms}/oracle/rdbms/11.1.0/network/admin #
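As a quick sanity check that the client-facing TCP addresses really do use VIPs, the HOST fields can be extracted from a Net configuration file with sed. This is a hypothetical sketch, not an Oracle tool: the sample file, the /tmp path, and the "-vip" hostname suffix convention are assumptions taken from this cookbook's naming.

```shell
# Hypothetical check: flag any TCP HOST entry that does not look like a VIP.
# The sample file and the "-vip" suffix convention are assumptions.
TNSFILE=/tmp/tnsnames.sample
cat > "$TNSFILE" <<'EOF'
JSC1DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )
EOF

# Extract the HOST values, then count those not ending in "-vip".
BAD_HOSTS=$(sed -n 's/.*(HOST = \([^)]*\)).*/\1/p' "$TNSFILE" | grep -cv -e '-vip$')
if [ "$BAD_HOSTS" -eq 0 ]; then
  echo "OK: all client addresses use VIPs"
else
  echo "WARNING: $BAD_HOSTS address(es) do not use a VIP"
fi
```

Run against a real tnsnames.ora, a non-zero count points at entries that bypass the VIPs and would hang on node failure instead of failing over.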


How to query the LISTENER resources :

{node2:rdbms}/oracle/rdbms # crsstat LISTENER
HA Resource                                   Target     State
-----------                                   ------     -----
ora.node1.LISTENER_NODE1.lsnr                 ONLINE     ONLINE on node1
ora.node2.LISTENER_NODE2.lsnr                 ONLINE     ONLINE on node2
{node2:rdbms}/oracle/rdbms #
How to obtain the LISTENER resources configuration :

{node2:rdbms}/oracle/rdbms # srvctl config listener -n node1
node1 LISTENER_NODE1
{node2:rdbms}/oracle/rdbms #
{node2:rdbms}/oracle/rdbms # srvctl config listener -n node2
node2 LISTENER_NODE2
{node2:rdbms}/oracle/rdbms #

If the listener is not the default one, or is the default one but without the default naming
LISTENER_NODENAME, for example a listener named LISTENER_JSC :
{node2:rdbms}/oracle/rdbms # srvctl config listener -n node2 -l LISTENER_JSC
node2 LISTENER_JSC
{node2:rdbms}/oracle/rdbms #

How to STOP a LISTENER resource :
{node2:rdbms}/oracle/rdbms # srvctl stop listener -n node1
If the listener is not the default one, for example a listener named LISTENER_JSC :
{node2:rdbms}/oracle/rdbms # srvctl stop listener -n node1 -l LISTENER_JSC

How to START a LISTENER resource :
{node2:rdbms}/oracle/rdbms # srvctl start listener -n node1
If the listener is not the default one, for example a listener named LISTENER_JSC :
{node2:rdbms}/oracle/rdbms # srvctl start listener -n node1 -l LISTENER_JSC


Let's have a look at the tnsnames.ora file from one node.

ALL client connections MUST use VIPs !!!

$ORACLE_HOME/network/admin/tnsnames.ora
which will be in /oracle/rdbms/11.1.0/network/admin
{node1:rdbms}/oracle/rdbms/11.1.0/network/admin # cat tnsnames.ora
# tnsnames.ora Network Configuration File:
# /oracle/rdbms/11.1.0/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

REMOTE_LISTENER entry for the ASM instances +ASM1 and +ASM2 (added by netca) :

LISTENERS_+ASM =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

LOCAL_LISTENER entry for ASM instance +ASM2 (added manually) :

LISTENER_+ASM2 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

LOCAL_LISTENER entry for ASM instance +ASM1 (added manually) :

LISTENER_+ASM1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
  )

Connection string for the ASM instances, with Load Balancing and Failover (Session and Select).
This entry could be used by the DBA to connect remotely to the ASM instances (added manually) :

ASM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = +ASM)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )


REMOTE_LISTENER entry for the database instances JSC1DB1 and JSC1DB2 (added by netca/dbca) :

LISTENERS_JSC1DB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

LOCAL_LISTENER entry for database instance JSC1DB2 (added manually) :

LISTENER_JSC1DB2 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

LOCAL_LISTENER entry for database instance JSC1DB1 (added manually) :

LISTENER_JSC1DB1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
  )

This connection string will only allow connecting to database instance JSC1DB2.
NO Failover - NO Load Balancing (added by netca/dbca) :

JSC1DB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC1DB)
      (INSTANCE_NAME = JSC1DB2)
    )
  )

This connection string will only allow connecting to database instance JSC1DB1.
NO Failover - NO Load Balancing (added by netca/dbca) :

JSC1DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC1DB)
      (INSTANCE_NAME = JSC1DB1)
    )
  )

This connection string will allow connecting to database instances JSC1DB1 and JSC1DB2.
Load Balancing - but NO Failover (added by netca) :

JSC1DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC1DB)
    )
  )

Without lines such as :
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
Load Balancing will be achieved, BUT NO Failover will be ensured !!!
So the entry for JSC1DB should be modified as below :

This connection string will allow connecting to database instances JSC1DB1 and JSC1DB2.
Failover (Session and Select) - Load Balancing (modified manually) :

JSC1DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC1DB)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

Connection string for the OLTP database cluster service, with Load Balancing and Failover
(Session and Select). Preferred, Available and Not Used instances will be managed through
Oracle Clusterware (added manually) :

OLTP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = OLTP)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

Connection string for the BATCH database cluster service, with Load Balancing and Failover
(Session and Select). Preferred, Available and Not Used instances will be managed through
Oracle Clusterware (added manually) :

BATCH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = BATCH)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

Connection string for the DISCO database cluster service, with Load Balancing and Failover
(Session and Select). Preferred, Available and Not Used instances will be managed through
Oracle Clusterware (added manually) :

DISCO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DISCO)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
{node1:rdbms}/oracle/rdbms/11.1.0/network/admin #

IMPORTANT : what should be on the client side, in the local tnsnames.ora file ?

For load balancing and failover across all nodes without going through Oracle Clusterware services,
add the JSC1DB entry to the local client tnsnames.ora. The name of the connection string JSC1DB
could be replaced by another name, as long as the description, including the SERVICE_NAME, is not
changed !!!

JSC1DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC1DB)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

For load balancing and failover across all nodes going through Oracle Clusterware services, add the
OLTP entry to the local client tnsnames.ora. The name of the connection string OLTP could be
replaced by another name, as long as the description, including the SERVICE_NAME, is not changed !!!

OLTP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = OLTP)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

For example, changing the name of the connection string OLTP to TEST will look like :

TEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = OLTP)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
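Since these TAF entries differ only in their alias and SERVICE_NAME, they can be stamped out from a small shell template. A minimal sketch, assuming the node1-vip/node2-vip names and port 1521 used throughout this cookbook; make_taf_entry is a hypothetical helper, not an Oracle utility:

```shell
# Hypothetical helper: emit a TAF-enabled tnsnames.ora entry for a given
# alias and service name (VIP hostnames and port are assumptions).
make_taf_entry() {
  ALIAS=$1; SERVICE=$2
  printf '%s =\n  (DESCRIPTION =\n' "$ALIAS"
  printf '    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))\n'
  printf '    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))\n'
  printf '    (LOAD_BALANCE = yes)\n    (CONNECT_DATA =\n'
  printf '      (SERVER = DEDICATED)\n      (SERVICE_NAME = %s)\n' "$SERVICE"
  printf '      (FAILOVER_MODE =\n        (TYPE = SELECT)\n        (METHOD = BASIC)\n'
  printf '        (RETRIES = 180)\n        (DELAY = 5)\n      )\n    )\n  )\n'
}

# Same rename as the TEST example: the alias changes, the SERVICE_NAME stays OLTP.
ENTRY=$(make_taf_entry TEST OLTP)
echo "$ENTRY"
```

Appending the output of make_taf_entry for each service to the client tnsnames.ora avoids copy/paste drift between the OLTP, BATCH and DISCO entries.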

For instance 2 as preferred instance with failover to available instance 1 through Oracle Clusterware
services, add the BATCH entry to the local client tnsnames.ora.

For instance 1 as preferred instance with failover to available instance 2 through Oracle Clusterware
services, add the DISCO entry to the local client tnsnames.ora.

IMPORTANT : on the client side, make sure IP/hostname resolution for the VIPs is handled, either by
DNS or by the hosts file, even if you replaced the VIP hostname by its IP address. Otherwise you may
end up with intermittent connection problems.
15.7 About DBCONSOLE

15.7.1 Checking DBCONSOLE

Since 10gRAC R2, and with 11gRAC as well, dbconsole is only configured on the first node at database creation.
When using DBCONSOLE, each database will have its own dbconsole !!!

Look at the following Metalink notes :
Subject: How to manage DB Control 10.2 for RAC Database with emca Doc ID: Note:395162.1
Subject: Troubleshooting Database Control Startup Issues Doc ID: Note:549079.1
Subject: Troubleshooting Database Control Startup Issues Doc ID: Note:549079.1
How to start/stop the dbconsole agent :

When using DBCONSOLE, the Oracle agent starts and stops automatically with the dbconsole.
If needed, it is still possible to start/stop the agent using the emctl tool as follows :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl start agent
OR
{node1:rdbms}/oracle/rdbms # emctl stop agent

Check status of the dbconsole agent for node1 :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl status agent
Oracle Enterprise Manager 10g Database Control Release 10.2.0.3.0
Copyright (c) 1996, 2006 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version     : 10.1.0.5.1
OMS Version       : 10.1.0.5.0
Protocol Version  : 10.1.0.2.0
Agent Home        : /oracle/rdbms/11.1.0/node1_JSC1DB1
Agent binaries    : /oracle/rdbms/11.1.0
Agent Process ID  : 1507562
Parent Process ID : 1437786
Agent URL         : http://node1:3938/emd/main
Started at        : 2007-04-23 10:44:38
Started by user   : oracle
Last Reload       : 2007-04-23 10:44:38
Last successful upload                       : 2007-04-23 17:32:23
Total Megabytes of XML files uploaded so far : 7.64
Number of XML files pending upload           : 0
Size of XML files pending upload(MB)         : 0.00
Available disk space on upload filesystem    : 37.78%
---------------------------------------------------------------
Agent is Running and Ready
{node1:rdbms}/oracle/rdbms #

Check the status of the EM agent for node1 and node2.

Commands to start DBCONSOLE :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl start dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.3.0
Copyright (c) 1996, 2006 Oracle Corporation. All rights reserved.
http://node1:1158/em/console/aboutApplication
Starting Oracle Enterprise Manager 10g Database Control
................. started.
------------------------------------------------------------------
Logs are generated in directory /oracle/rdbms/11.1.0/node1_JSC1DB1/sysman/log
{node1:rdbms}/oracle/rdbms #

Commands to stop DBCONSOLE :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl stop dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.3.0
Copyright (c) 1996, 2006 Oracle Corporation. All rights reserved.
http://node1:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 10g Database Control ...
... Stopped.
{node1:rdbms}/oracle/rdbms #

Check status of dbconsole :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl status dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.3.0
Copyright (c) 1996, 2006 Oracle Corporation. All rights reserved.
http://node1:1158/em/console/aboutApplication
Oracle Enterprise Manager 10g is running.
------------------------------------------------------------------
Logs are generated in directory /oracle/rdbms/11.1.0/node1_JSC1DB1/sysman/log
{node1:rdbms}/oracle/rdbms #

Update your local hosts file with entries for node1 and node2, then access DBCONSOLE through
http://node1:1158/em and connect with the Oracle database SYS user as SYSDBA.

Just click on "I agree" to accept the Oracle Database 10g Licensing Information.

Look at the following Metalink notes to discover dbconsole, or get help to solve any issues :
Subject: How to configure dbconsole to display information from the ASM DISKGROUPS Doc ID: Note:329581.1
Subject: Em Dbconsole 10.1.0.X Does Not Discover Rac Database As A Cluster Doc ID: Note:334546.1
Subject: DBConsole Shows Everything Down Doc ID: Note:332865.1
Subject: How To Config Dbconsole (10.1.x or 10.2) EMCA With Another Hostname Doc ID: Note:336017.1
Subject: How To Change The DB Console Language To English Doc ID: Note:370178.1
Subject: Dbconsole Fails After A Physical Move Of Machine To New Domain/Hostname Doc ID: Note:401943.1
Subject: How to Troubleshoot Failed Login Attempts to DB Control Doc ID: Note:404820.1
Subject: How to manage DB Control 10.2 for RAC Database with emca Doc ID: Note:395162.1
Subject: How To Drop, Create And Recreate DB Control In A 10g Database Doc ID: Note:278100.1
Subject: Overview Of The EMCA Commands Available for DB Control 10.2 Installations Doc ID: Note:330130.1
Subject: How to change the password of the 10g database user dbsnmp Doc ID: Note:259387.1
Subject: EM 10.2 DBConsole Displaying Wrong Old Information Doc ID: Note:336179.1
Subject: How To Configure A Listener Target From Grid Control 10g Doc ID: Note:427422.1
For each RAC node, test whether dbconsole is working or not !!!

As oracle user on node 1 :
export DISPLAY=??????
export ORACLE_SID=JSC1DB1
Check if DBConsole is running : emctl status dbconsole

IF dbconsole is not running
THEN
  1/ execute : emctl start dbconsole
  2/ access dbconsole using an Internet browser : http://node1:1158/em
     using sys as user, connected as sysdba, with its password
  IF dbconsole started and is reachable with http://node1:1158/em
  THEN dbconsole is OK on node 1
  ELSE see the Metalink notes for DBConsole troubleshooting !!!
ELSE
  1/ access dbconsole using an Internet browser : http://node1:1158/em
     using sys as user, connected as sysdba, with its password
  IF dbconsole is reachable with http://node1:1158/em
  THEN dbconsole is OK on node 1
  ELSE see the Metalink notes for DBConsole troubleshooting !!!
END

Troubleshooting tip : the dbconsole may not start after doing an install through Oracle Enterprise
Manager (OEM) and selecting a database.

The solution is to :
- Edit the file : ${ORACLE_HOME}/<hostname>_${ORACLE_SID}/sysman/config/emd.properties
- Locate the entry where the EMD_URL is set. This entry should have the format :
  EMD_URL=http://<hostname>:%EM_SERVLET_PORT%/emd/main
- If you see the string %EM_SERVLET_PORT% in the entry, then replace that complete string with an
  unused port number that is not defined in the "/etc/services" file. If this string is missing and
  no port number is in its place, then insert an unused port number that is not defined in the
  "/etc/services" file between the "http://<hostname>:" and the "/emd/main" strings.
- Use the command "emctl start dbconsole" to start the dbconsole after making this change.

For example :
EMD_URL=http://myhostname.us.oracle.com:5505/emd/main
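The EMD_URL fix above can be scripted with sed. This sketch works on a throwaway copy of the file: the path /tmp/emd.properties and port 5505 are examples only; on a real system you would edit ${ORACLE_HOME}/<hostname>_${ORACLE_SID}/sysman/config/emd.properties and pick a free port that is absent from /etc/services.

```shell
# Sketch of the EMD_URL fix on a throwaway sample file (path and port are
# illustrative; substitute the real emd.properties and a verified-free port).
PROPS=/tmp/emd.properties
cat > "$PROPS" <<'EOF'
agentTZRegion=Europe/Paris
EMD_URL=http://node1:%EM_SERVLET_PORT%/emd/main
EOF

PORT=5505
# Replace the %EM_SERVLET_PORT% placeholder with the chosen port number.
sed "s/%EM_SERVLET_PORT%/$PORT/" "$PROPS" > "$PROPS.new" && mv "$PROPS.new" "$PROPS"
grep EMD_URL "$PROPS"
```

After the edit, "emctl start dbconsole" would pick up the corrected URL.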

15.7.2 Moving from dbconsole to Grid Control

You must either use DBCONSOLE or GRID CONTROL, not both at the same time !!!

To move from locally managed (DBConsole) to centrally managed (Grid Control) :

- You must have an existing Grid Control, or install a new Grid Control on a separate LPAR or server.
  Grid Control is available on AIX5L, but could be installed on any supported operating system.
- You must install the Oracle Grid agent on each RAC node AIX LPAR, with the same unix user as the
  Oracle Clusterware owner, and in the same oraInventory.
- THEN you must follow this Metalink note :
  Subject: How to change a 10.2.0.x Database from Locally Managed to Centrally Managed Doc ID: Note:400476.1


16 ASM ADVANCED TOOLS

16.1 ftp and http access

Subject: How to configure XDB for using ftp and http protocols with ASM Doc ID: Note:357714.1
Subject: /Sys/Asm path is not visible in XDB Repository Doc ID: Note:368454.1
Subject: How to Deinstall and Reinstall XML Database (XDB) Doc ID: Note:243554.1


17 SOME USEFUL COMMANDS

Commands to start/stop the database and the database instances, from any node :

For the database :
{node1:rdbms}/oracle/rdbms # srvctl start database -d ASMDB          to start the database
{node1:rdbms}/oracle/rdbms # srvctl stop database -d ASMDB           to stop the database

For instance 1 :
{node1:rdbms}/oracle/rdbms # srvctl start instance -d ASMDB -i ASMDB1     to start the database instance
{node1:rdbms}/oracle/rdbms # srvctl stop instance -d ASMDB -i ASMDB1      to stop the database instance

For instance 2 :
{node2:rdbms}/oracle/rdbms # srvctl start instance -d ASMDB -i ASMDB2     to start the database instance
{node2:rdbms}/oracle/rdbms # srvctl stop instance -d ASMDB -i ASMDB2      to stop the database instance

To access an ASM instance with sqlplus :

From node1 :
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=+ASM1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
connect / as sysdba
show sga

From node2 :
{node2:rdbms}/oracle/rdbms # export ORACLE_SID=+ASM2
{node2:rdbms}/oracle/rdbms # sqlplus /nolog
connect / as sysdba
show sga

To access a database instance stored in ASM with sqlplus :

From node1 :
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=ASMDB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
connect / as sysdba
show sga
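A note on `set` versus `export` when preparing the sqlplus environment: in ksh/sh, `set VAR=value` does not create an environment variable at all (it only sets a positional parameter), so a child process such as sqlplus would never see ORACLE_SID. A small demonstration:

```shell
# Show that "set" does not export a variable to child processes, but "export" does.
unset ORACLE_SID

set ORACLE_SID=+ASM1            # wrong: this only becomes positional parameter $1
AFTER_SET=$(sh -c 'echo "${ORACLE_SID:-unset}"')
echo "after set:    ORACLE_SID is $AFTER_SET"

export ORACLE_SID=+ASM1         # right: visible to child processes such as sqlplus
AFTER_EXPORT=$(sh -c 'echo "${ORACLE_SID:-unset}"')
echo "after export: ORACLE_SID is $AFTER_EXPORT"
```

This is why the sqlplus examples in this chapter use export before invoking sqlplus.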


17.1 Oracle Cluster Registry content check and backup

As oracle user, execute ocrcheck :

{node1:crs}/crs # ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     306972
         Used space (kbytes)      :       6540
         Available space (kbytes) :     300432
         ID                       : 1928316120
         Device/File Name         : /dev/ocr_disk1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/ocr_disk2
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded
{node1:crs}/crs #

As root user, export the Oracle Cluster Registry content (you must not edit/modify the exported
file) :

{node1:crs}/crs/11.1.0/bin # su
root's Password:
{node1:root}/crs/11.1.0/bin # ocrconfig -export /oracle/ocr_export4.dmp -s online
{node1:root}/crs/11.1.0/bin # ls -la /oracle/*.dmp
-rw-r--r--   1 root     system       106420 Jan 30 18:30 /oracle/ocr_export.dmp
{node1:root}/crs/11.1.0/bin #
View the OCR automatic periodic backups managed by Oracle Clusterware :

{node1:crs}/crs/11.1.0 # ocrconfig -showbackup
node1  2008/03/25 09:24:19  /crs/11.1.0/cdata/crs_cluster/backup00.ocr
node1  2008/03/25 05:24:18  /crs/11.1.0/cdata/crs_cluster/backup01.ocr
node1  2008/03/25 01:24:18  /crs/11.1.0/cdata/crs_cluster/backup02.ocr
node1  2008/03/23 13:24:15  /crs/11.1.0/cdata/crs_cluster/day.ocr
node2  2008/03/14 06:45:44  /crs/11.1.0/cdata/crs_cluster/week.ocr
node2  2008/03/16 17:13:39  /crs/11.1.0/cdata/crs_cluster/backup_20080316_171339.ocr
node1  2008/02/24 08:09:21  /crs/11.1.0/cdata/crs_cluster/backup_20080224_080921.ocr
node1  2008/02/24 08:08:48  /crs/11.1.0/cdata/crs_cluster/backup_20080224_080848.ocr
{node1:crs}/crs/11.1.0 #
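To copy the most recent automatic backup off-node, the newest line of the showbackup output can be picked by its timestamp. A sketch against canned output; the sort/awk field positions assume the four-column node/date/time/path layout shown above, and against a live cluster you would pipe `ocrconfig -showbackup` in instead of the here-document:

```shell
# Pick the most recent OCR backup from showbackup-style output (canned sample).
# Fields: 1=node, 2=date (yyyy/mm/dd), 3=time, 4=backup file path.
LATEST=$(sort -r -k2,3 <<'EOF' | head -1 | awk '{print $4}'
node1 2008/03/25 09:24:19 /crs/11.1.0/cdata/crs_cluster/backup00.ocr
node1 2008/03/25 05:24:18 /crs/11.1.0/cdata/crs_cluster/backup01.ocr
node1 2008/03/25 01:24:18 /crs/11.1.0/cdata/crs_cluster/backup02.ocr
node1 2008/03/23 13:24:15 /crs/11.1.0/cdata/crs_cluster/day.ocr
node2 2008/03/14 06:45:44 /crs/11.1.0/cdata/crs_cluster/week.ocr
EOF
)
echo "Most recent OCR backup: $LATEST"
```

The yyyy/mm/dd date format sorts correctly as plain text, which is what makes the reverse sort on fields 2-3 sufficient here.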


18 APPENDIX A : ORACLE / IBM TECHNICAL DOCUMENTS

Technical documents on Oracle Real Application Clusters :
http://www.oracle.com/technology/products/database/clustering/index.html

Oracle Metalink notes on Oracle Real Application Clusters :
Note:282036.1 Minimum Software Versions and Patches Required to Support Oracle Products on IBM System p
Note:302806.1 IBM General Parallel File System (GPFS) and Oracle RAC on AIX 5L and IBM eServer System p
Note:341507.1 Oracle Products on Linux on IBM POWER
Note:404474.1 Status of Certification of Oracle Clusterware with HACMP 5.3 & 5.4
JSC Cookbooks (available at http://www.oracleracsig.org) :
9i RAC Release 2 / AIX / HACMP / GPFS
10g RAC Release 1 / AIX / GPFS / ASM / Concurrent Raw Devices on IBM SAN Storage
10g RAC Release 2 / AIX / GPFS / ASM on IBM SAN Storage
IBM Tech documents (http://www.ibm.com) :
Implementing 10gRAC with ASM on AIX5L
Oracle 10g RAC on AIX with Veritas Storage Foundation for Oracle RAC
Oracle 9i RAC on AIX with VERITAS SFRAC
Oracle's licensing in a multi-core environment
Oracle Architecture and Tuning on AIX
Diagnosing Oracle Database Performance on AIX Using IBM NMON and Oracle Statspack Reports
Observations Using SMT on an Oracle 9i OLTP Workload
Oracle 9i & 10g on IBM AIX5L: Tips & Considerations
Impact of Advanced Power Virtualization Features on Performance Characteristics of a Java / C++ / Oracle
based Application
Performance Impact of Upgrading a Data Warehouse Application from AIX 5.2 to AIX 5.3 with SMT
Simultaneous Multi-threading (SMT) on IBM POWER5: Performance improvement on commercial OLTP workload
using Oracle database version 10g


19 APPENDIX B : ORACLE TECHNICAL NOTES

This appendix provides some useful notes coming from Oracle support. These notes can be found in Metalink.

19.1 CRS and 10g Real Application Clusters

Doc ID:             Note:259301.1
Subject:            CRS and 10g Real Application Clusters
Type:               BULLETIN
Status:             PUBLISHED
Content Type:       TEXT/XHTML
Creation Date:      05-DEC-2003
Last Revision Date: 15-MAR-2005
PURPOSE
-------
This document is to provide additional information on CRS (Cluster Ready Services)
in 10g Real Application Clusters.

SCOPE & APPLICATION
-------------------
This document is intended for RAC Database Administrators and Oracle support
engineers.
CRS and 10g REAL APPLICATION CLUSTERS
-------------------------------------
CRS (Cluster Ready Services) is a new feature for 10g Real Application Clusters
that provides a standard cluster interface on all platforms and performs
new high availability operations not available in previous versions.

CRS KEY FACTS
-------------
Prior to installing CRS and 10g RAC, there are some key points to remember about
CRS and 10g RAC:
- CRS is REQUIRED to be installed and running prior to installing 10g RAC.
- CRS can either run on top of the vendor clusterware (such as Sun Cluster,
HP Serviceguard, IBM HACMP, TruCluster, Veritas Cluster, Fujitsu Primecluster,
etc...) or can run without the vendor clusterware. The vendor clusterware
was required in 9i RAC but is optional in 10g RAC.
- The CRS HOME and ORACLE_HOME must be installed in DIFFERENT locations.
- Shared Location(s) or devices for the Voting File and OCR (Oracle
Configuration Repository) file must be available PRIOR to installing CRS. The
voting file should be at least 20MB and the OCR file should be at least 100MB.
- CRS and RAC require that the following network interfaces be configured prior
to installing CRS or RAC:
- Public Interface
- Private Interface
- Virtual (Public) Interface
For more information on this, see Note 264847.1.


- The root.sh script at the end of the CRS installation starts the CRS stack.
If your CRS stack does not start, see Note 240001.1.
- Only one set of CRS daemons can be running per RAC node.
- On Unix, the CRS stack is run from entries in /etc/inittab with "respawn".
- If there is a network split (nodes lose communication with each other), one
  or more nodes may reboot automatically to prevent data corruption.
- The supported method to start CRS is booting the machine. MANUAL STARTUP OF
THE CRS STACK IS NOT SUPPORTED UNTIL 10.1.0.4 OR HIGHER.
- The supported method to stop is shutdown the machine or use "init.crs stop".
- Killing CRS daemons is not supported unless you are removing the CRS
installation via Note 239998.1 because flag files can become mismatched.
- For maintenance, go to single user mode at the OS.
Once the stack is started, you should be able to see all of the daemon processes
with a ps -ef command:

[rac1]/u01/home/beta> ps -ef | grep crs
  oracle  1363   999  0 11:23:21  ?  0:00 /u01/crs_home/bin/evmlogger.bin -o /u01
  oracle   999     1  0 11:21:39  ?  0:01 /u01/crs_home/bin/evmd.bin
    root  1003     1  0 11:21:39  ?  0:01 /u01/crs_home/bin/crsd.bin
  oracle  1002     1  0 11:21:39  ?  0:01 /u01/crs_home/bin/ocssd.bin

CRS DAEMON FUNCTIONALITY
------------------------
Here is a short description of each of the CRS daemon processes:
CRSD:
- Engine for HA operation
- Manages 'application resources'
- Starts, stops, and fails 'application resources' over
- Spawns separate 'actions' to start/stop/check application resources
- Maintains configuration profiles in the OCR (Oracle Configuration Repository)
- Stores current known state in the OCR.
- Runs as root
- Is restarted automatically on failure
OCSSD:
- OCSSD is part of RAC and Single Instance with ASM
- Provides access to node membership
- Provides group services
- Provides basic cluster locking
- Integrates with existing vendor clusterware, when present
- Can also run without integration to vendor clusterware
- Runs as Oracle.
- Failure exit causes machine reboot.
--- This is a feature to prevent data corruption in event of a split brain.
EVMD:
- Generates events when things happen
- Spawns a permanent child evmlogger
- Evmlogger, on demand, spawns children
- Scans callout directory and invokes callouts.


- Runs as Oracle.
- Restarted automatically on failure


CRS LOG DIRECTORIES
-------------------
When troubleshooting CRS problems, it is important to review the directories
under the CRS Home.

$ORA_CRS_HOME/crs/log - This directory includes traces for CRS resources that are
joining, leaving, restarting, and relocating as identified by CRS.

$ORA_CRS_HOME/crs/init - Any core dumps for the crsd.bin daemon should be written
here. Note 1812.1 could be used to debug these.

$ORA_CRS_HOME/css/log - The css logs indicate all actions such as
reconfigurations, missed checkins, connects, and disconnects from the client
CSS listener. In some cases the logger logs messages with the category of
(auth.crit) for the reboots done by oracle. This could be used for checking the
exact time when the reboot occurred.

$ORA_CRS_HOME/css/init - Core dumps from the ocssd primarily and the pid for the
css daemon whose death is treated as fatal are located here. If there are
abnormal restarts for css then the core files will have the formats of
core.<pid>. Note 1812.1 could be used to debug these.

$ORA_CRS_HOME/evm/log - Log files for the evm and evmlogger daemons. Not used
as often for debugging as the CRS and CSS directories.

$ORA_CRS_HOME/evm/init - Pid and lock files for EVM. Core files for EVM should
also be written here. Note 1812.1 could be used to debug these.

$ORA_CRS_HOME/srvm/log - Log files for OCR.


STATUS FOR CRS RESOURCES
------------------------
After installing RAC and running the VIPCA (Virtual IP Configuration Assistant)
launched with the RAC root.sh, you should be able to see all of your CRS
resources with crs_stat. Example:
cd $ORA_CRS_HOME/bin
./crs_stat
NAME=ora.rac1.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE
NAME=ora.rac1.oem
TYPE=application
TARGET=ONLINE
STATE=ONLINE
NAME=ora.rac1.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE
NAME=ora.rac1.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE
NAME=ora.rac2.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE
NAME=ora.rac2.oem
TYPE=application
TARGET=ONLINE
STATE=ONLINE
NAME=ora.rac2.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE
NAME=ora.rac2.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE


There is also a script available to view CRS resources in a format that is
easier to read. Just create a shell script with:

--------------------------- Begin Shell Script ---------------------------
#!/usr/bin/ksh
#
# Sample 10g CRS resource status query script
#
# Description:
#    - Returns formatted version of crs_stat -t, in tabular
#      format, with the complete rsc names and filtering keywords
#    - The argument, $RSC_KEY, is optional and if passed to the script, will
#      limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
#    - $ORA_CRS_HOME should be set in your environment
RSC_KEY=$1
QSTAT=-u
AWK=/usr/xpg4/bin/awk    # if not available use /usr/bin/awk

# Table header:
echo ""
$AWK \
  'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
          printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'
# Table body:
$ORA_CRS_HOME/bin/crs_stat $QSTAT | $AWK \
  'BEGIN { FS="="; state = 0; }
  $1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
  state == 0 {next;}
  $1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
  $1~/STATE/ && state == 2 {appstate = $2; state=3;}
  state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'
---------------------------- End Shell Script -----------------------------

Example output:
[opcbsol1]/u01/home/usupport> ./crsstat
HA Resource                                   Target     State
-----------                                   ------     -----
ora.V10SN.V10SN1.inst                         ONLINE     ONLINE on opcbsol1
ora.V10SN.V10SN2.inst                         ONLINE     ONLINE on opcbsol2
ora.V10SN.db                                  ONLINE     ONLINE on opcbsol2
ora.opcbsol1.ASM1.asm                         ONLINE     ONLINE on opcbsol1
ora.opcbsol1.LISTENER_OPCBSOL1.lsnr           ONLINE     ONLINE on opcbsol1
ora.opcbsol1.gsd                              ONLINE     ONLINE on opcbsol1
ora.opcbsol1.ons                              ONLINE     ONLINE on opcbsol1
ora.opcbsol1.vip                              ONLINE     ONLINE on opcbsol1
ora.opcbsol2.ASM2.asm                         ONLINE     ONLINE on opcbsol2
ora.opcbsol2.LISTENER_OPCBSOL2.lsnr           ONLINE     ONLINE on opcbsol2
ora.opcbsol2.gsd                              ONLINE     ONLINE on opcbsol2
ora.opcbsol2.ons                              ONLINE     ONLINE on opcbsol2
ora.opcbsol2.vip                              ONLINE     ONLINE on opcbsol2

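The awk body of this script can be tried without a cluster by feeding it canned `crs_stat -u`-style output. A sketch, using plain awk instead of /usr/xpg4/bin/awk and dropping the optional RSC_KEY filter for simplicity:

```shell
# Demo of the crsstat awk state machine against canned crs_stat-style output.
cat > /tmp/crs_stat.sample <<'EOF'
NAME=ora.rac1.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.rac2.vip
TYPE=application
TARGET=ONLINE
STATE=OFFLINE
EOF

# Same four-rule state machine as the script body: collect NAME, then TARGET,
# then STATE, and print one formatted row per resource.
RESULT=$(awk 'BEGIN { FS="="; state = 0; }
$1~/NAME/ {appname = $2; state=1};
state == 0 {next;}
$1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
$1~/STATE/ && state == 2 {appstate = $2; state=3;}
state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}' \
/tmp/crs_stat.sample)
echo "$RESULT"
```

This makes it easy to verify the formatting logic before pointing the script at $ORA_CRS_HOME/bin/crs_stat on a live cluster.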

CRS RESOURCE ADMINISTRATION
---------------------------
You can use srvctl to manage these resources. Below are syntax and examples.

---------------------------------------------------------------------------
CRS RESOURCE STATUS

srvctl status database -d <database-name> [-f] [-v] [-S <level>]
srvctl status instance -d <database-name> -i <instance-name> >[,<instance-name-list>]
[-f] [-v] [-S <level>]
srvctl status service -d <database-name> -s <service-name>[,<service-name-list>]
[-f] [-v] [-S <level>]
srvctl status nodeapps [-n <node-name>]
srvctl status asm -n <node_name>

EXAMPLES:
Status of the database, all instances and all services.
srvctl status database -d ORACLE -v
Status of named instances with their current services.
srvctl status instance -d ORACLE -i RAC01,RAC02 -v
Status of a named service.
srvctl status service -d ORACLE -s ERP -v
Status of the node applications on a named node.
srvctl status nodeapps -n myclust-4
START CRS RESOURCES
srvctl start database -d <database-name> [-o < start-options>]
[-c <connect-string> | -q]
srvctl start instance -d <database-name> -i <instance-name>
[,<instance-name-list>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start service -d <database-name> [-s <service-name>[,<service-name-list>]]
[-i <instance-name>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start nodeapps -n <node-name>
srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]

EXAMPLES:
Start the database with all enabled instances.
srvctl start database -d ORACLE
Start named instances.
srvctl start instance -d ORACLE -i RAC03,RAC04
Start named services. Dependent instances are started as needed.
srvctl start service -d ORACLE -s CRM
Start a service at the named instance.
srvctl start service -d ORACLE -s CRM -i RAC04
Start node applications.
srvctl start nodeapps -n myclust-4


STOP CRS RESOURCES


srvctl stop database -d <database-name> [-o <stop-options>]
[-c <connect-string> | -q]
srvctl stop instance -d <database-name> -i <instance-name> [,<instance-name-list>]
[-o <stop-options>][-c <connect-string> | -q]
srvctl stop service -d <database-name> [-s <service-name>[,<service-name-list>]]
[-i <instance-name>][-c <connect-string> | -q] [-f]
srvctl stop nodeapps -n <node-name>
srvctl stop asm -n <node_name> [-i <asm_inst_name>] [-o <stop_options>]

EXAMPLES:
Stop the database, all instances and all services.
srvctl stop database -d ORACLE
Stop named instances, first relocating all existing services.
srvctl stop instance -d ORACLE -i RAC03,RAC04
Stop the service.
srvctl stop service -d ORACLE -s CRM
Stop the service at the named instances.
srvctl stop service -d ORACLE -s CRM -i RAC04
Stop node applications. Note that instances and services also stop.
srvctl stop nodeapps -n myclust-4
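The stop commands above are typically chained when taking one node down for maintenance: services first, then the instance, then the node applications. A hedged sketch of that sequence (ORACLE, CRM, RAC04 and myclust-4 are the hypothetical names from the examples; with DRYRUN=1, the default here, the commands are only printed, not executed):

```shell
# Orderly shutdown of one node for maintenance: services first, then the
# instance, then the node applications (VIP, ONS, GSD, listener).
# DRYRUN=1 (default) only echoes the srvctl commands.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}
run srvctl stop service  -d ORACLE -s CRM -i RAC04   # stop services on the instance
run srvctl stop instance -d ORACLE -i RAC04          # stop the local instance
run srvctl stop nodeapps -n myclust-4                # stop the node applications
```

Set DRYRUN=0 to actually execute the srvctl calls.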

ADD CRS RESOURCES

srvctl add database -d <name> -o <oracle_home> [-m <domain_name>] [-p <spfile>]
    [-A <name|ip>/netmask] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}]
    [-s <start_options>] [-n <db_name>]
srvctl add instance -d <name> -i <inst_name> -n <node_name>
srvctl add service -d <name> -s <service_name> -r <preferred_list>
    [-a <available_list>] [-P <TAF_policy>] [-u]
srvctl add nodeapps -n <node_name> -o <oracle_home>
    [-A <name|ip>/netmask[/if1[|if2|...]]]
srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home>

OPTIONS:
-A    vip range, node, and database, address specification. The format of
      address string is:
      [<logical host name>]/<VIP address>/<net mask>[/<host interface1[ |
      host interface2 |..]>] [,] [<logical host name>]/<VIP address>/<net mask>
      [/<host interface1[ | host interface2 |..]>]
-a    for services, list of available instances, this list cannot include
      preferred instances
-m    domain name with the format us.mydomain.com
-n    node name that will support one or more instances
-o    $ORACLE_HOME to locate Oracle binaries
-P    for services, TAF preconnect policy - NONE, PRECONNECT
-r    for services, list of preferred instances, this list cannot include
      available instances.
-s    spfile name
-u    updates the preferred or available list for the service to support the
      specified instance. Only one instance may be specified with the -u
      switch. Instances that already support the service should not be
      included.

EXAMPLES:
Add a new node:
srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A 139.184.201.1/255.255.255.0/hme0
Add a new database.
srvctl add database -d ORACLE -o $ORACLE_HOME
Add named instances to an existing database.
srvctl add instance -d ORACLE -i RAC01 -n myclust-1
srvctl add instance -d ORACLE -i RAC02 -n myclust-2
srvctl add instance -d ORACLE -i RAC03 -n myclust-3
Add a service to an existing database with preferred instances (-r) and
available instances (-a). Use basic failover to the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04
Add a service to an existing database with preferred instances in list one and
available instances in list two. Use preconnect at the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04 -P PRECONNECT

REMOVE CRS RESOURCES

srvctl remove database -d <database-name>
srvctl remove instance -d <database-name> [-i <instance-name>]
srvctl remove service -d <database-name> -s <service-name> [-i <instance-name>]
srvctl remove nodeapps -n <node-name>
EXAMPLES:
Remove the applications for a database.
srvctl remove database -d ORACLE
Remove the applications for named instances of an existing database.
srvctl remove instance -d ORACLE -i RAC03
srvctl remove instance -d ORACLE -i RAC04
Remove the service.
srvctl remove service -d ORACLE -s STD_BATCH
Remove the service from the instances.
srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
Remove all node applications from a node.
srvctl remove nodeapps -n myclust-4
MODIFY CRS RESOURCES
srvctl modify database -d <name> [-n <db_name] [-o <ohome>] [-m <domain>]
[-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}]
[-s <start_options>]
srvctl modify instance -d <database-name> -i <instance-name> -n <node-name>
srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
srvctl modify service -d <database-name> -s <service_name> -i <instance-name>
-t <instance-name> [-f]
srvctl modify service -d <database-name> -s <service_name> -i <instance-name>
-r [-f]
srvctl modify nodeapps -n <node-name> [-A <address-description> ] [-x]


OPTIONS:
-i <instance-name> -t <instance-name> the instance name (-i) is replaced by the
instance name (-t)
-i <instance-name> -r the named instance is modified to be a preferred instance
-A address-list for VIP application, at node level
-s <asm_inst_name> add or remove ASM dependency

EXAMPLES:
Modify an instance to execute on another node.
srvctl modify instance -d ORACLE -i RAC04 -n myclust-4
Modify a service to execute on another node.
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC01 -t RAC02
Modify an instance to be a preferred instance for a service.
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC02 -r
RELOCATE SERVICES
srvctl relocate service -d <database-name> -s <service-name> -i <old-instance-name> -t <new-instance-name> [-f]

EXAMPLES:
Relocate a service from one instance to another
srvctl relocate service -d ORACLE -s CRM -i RAC04 -t RAC01
ENABLE CRS RESOURCES (The resource may be up or down to use this function)
srvctl enable database -d <database-name>
srvctl enable instance -d <database-name> -i <instance-name> [,<instance-name-list>]
srvctl enable service -d <database-name> -s <service-name>[,<service-name-list>]
[-i <instance-name>]

EXAMPLES:
Enable the database.
srvctl enable database -d ORACLE
Enable the named instances.
srvctl enable instance -d ORACLE -i RAC01,RAC02
Enable the service.
srvctl enable service -d ORACLE -s ERP,CRM
Enable the service at the named instance.
srvctl enable service -d ORACLE -s CRM -i RAC03
DISABLE CRS RESOURCES (The resource must be down to use this function)
srvctl disable database -d <database-name>
srvctl disable instance -d <database-name> -i <instance-name> [,<instance-name-list>]
srvctl disable service -d <database-name> -s <service-name>[,<service-name-list>]
[-i <instance-name>]


EXAMPLES:
Disable the database globally.
srvctl disable database -d ORACLE
Disable the named instances.
srvctl disable instance -d ORACLE -i RAC01,RAC02
Disable the service globally.
srvctl disable service -d ORACLE -s ERP,CRM
Disable the service at the named instance.
srvctl disable service -d ORACLE -s CRM -i RAC03,RAC04

For more information on this see the Oracle10g Real Application Clusters
Administrator's Guide - Appendix B

RELATED DOCUMENTS
Oracle10g Real Application Clusters Installation and Configuration Guide
Oracle10g Real Application Clusters Administrator's Guide
19.2 About RAC ...

282036.1 - Minimum software versions and patches required to Support Oracle Products on ...
283743.1 - Pre-Install checks for 10g RDBMS on AIX
220970.1 - RAC: Frequently Asked Questions
183408.1 - Raw Devices and Cluster Filesystems With Real Application Clusters
293750.1 - 10g Installation on Aix 5.3, Failed with Checking operating system version mu...

19.3 About CRS ...

263897.1 - 10G: How to Stop the Cluster Ready Services (CRS)


295871.1 - How to verify if CRS install is Valid
265769.1 - 10g RAC: Troubleshooting CRS Reboots
259301.1 - CRS and 10g Real Application Clusters
268937.1 - Repairing or Restoring an Inconsistent OCR in RAC
293819.1 - Placement of voting and OCR disk files in 10gRAC
239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install
272332.1 - CRS 10g Diagnostic Collection Guide
279793.1 - How to Restore a Lost Voting Disk in 10g
239989.1 - 10g RAC: Stopping Reboot Loops When CRS Problems Occur
298073.1 - HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
298069.1 - HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
284949.1 - CRS Home Is Only Partially Copied to Remote Node
285046.1 - How to recreate ONS,GSD,VIP deleted from ocr by crs_unregister


19.4 About VIP ...

296856.1 - Configuring the IBM AIX 5L Operating System for the Oracle 10g VIP
294336.1 - Changing the check interval for the Oracle 10g VIP
276434.1 - Modifying the VIP of a Cluster Node
298895.1 - Modifying the default gateway address used by the Oracle 10g VIP
264847.1 - How to Configure Virtual IPs for 10g RAC

19.5 About manual database creation ...

240052.1 - 10g Manual Database Creation in Oracle (Single Instance and RAC)

19.6 About Grid Control ...

284707.1 - Enterprise Manager Grid Control 10.1.0.3.0 Release Notes


277420.1 - EM 10G Grid Control Preinstall Steps for AIX 5.2

19.7 About TAF ...

271297.1 - Troubleshooting TAF Issues in 10g RAC

19.8 About Adding/Removing Node ...

269320.1 - Removing a Node from a 10g RAC Cluster


270512.1 - Adding a Node to a 10g RAC Cluster

19.9 About ASM ...

243245.1 - 10G New Storage Features and Enhancements


268481.1 - Re-creating ASM Instances and Diskgroups
282777.1 - SGA sizing for ASM instances and databases that use ASM
274738.1 - Creating an ASM-enabled Database
249992.1 - New Feature on ASM (Automatic Storage Manager).
252219.1 - Steps To Migrate Database From Non-ASM to ASM And Vice-Versa
293234.1 - How To Move Archive Files from ASM
270066.1 - Manage ASM instance-creating diskgroup,adding/dropping/resizing disks.


300472.1 - How To Delete Archive Log Files Out Of +Asm?


265633.1 - ASM Technical Best Practices
           http://metalink.oracle.com/metalink/plsql/docs/ASM.pdf
           (for the full article, download Automatic Storage Management, 154K/pdf)
294869.1 - Oracle ASM and Multi-Pathing Technologies

19.10 Metalink note to use in case of problem with CRS ...

263897.1 - 10G: How to Stop the Cluster Ready Services (CRS)


295871.1 - How to verify if CRS install is Valid
265769.1 - 10g RAC: Troubleshooting CRS Reboots
259301.1 - CRS and 10g Real Application Clusters
268937.1 - Repairing or Restoring an Inconsistent OCR in RAC
293819.1 - Placement of voting and OCR disk files in 10gRAC
239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install
272332.1 - CRS 10g Diagnostic Collection Guide
239989.1 - 10g RAC: Stopping Reboot Loops When CRS Problems Occur
298073.1 - HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
298069.1 - HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
284949.1 - CRS Home Is Only Partially Copied to Remote Node


20 APPENDIX C : USEFUL COMMANDS

crsctl : to administer the clusterware

ocrconfig : to backup, export, import, repair ... the OCR (Oracle Cluster Registry) contents

ocrdump : to dump the Oracle Cluster Registry contents

srvctl : to administer Oracle clusterware resources

crs_stat -help : to query the state of Oracle clusterware resources

cluvfy : to pre/post check installation stages


crsctl : To administer the clusterware.


{node1:crs}/crs # crsctl
Usage: crsctl check crs - checks the viability of the Oracle Clusterware
crsctl check cssd
- checks the viability of Cluster Synchronization Services
crsctl check crsd
- checks the viability of Cluster Ready Services
crsctl check evmd
- checks the viability of Event Manager
crsctl check cluster [-node <nodename>] - checks the viability of CSS across nodes
crsctl set css <parameter> <value> - sets a parameter override
crsctl get css <parameter> - gets the value of a Cluster Synchronization Services parameter
crsctl unset css <parameter> - sets the Cluster Synchronization Services parameter to its default
crsctl query css votedisk - lists the voting disks used by Cluster Synchronization Services
crsctl add css votedisk <path> - adds a new voting disk
crsctl delete css votedisk <path> - removes a voting disk
crsctl enable crs - enables startup for all Oracle Clusterware daemons
crsctl disable crs - disables startup for all Oracle Clusterware daemons
crsctl start crs [-wait] - starts all Oracle Clusterware daemons
crsctl stop crs [-wait] - stops all Oracle Clusterware daemons. Stops Oracle Clusterware managed resources in case of cluster.
crsctl start resources - starts Oracle Clusterware managed resources
crsctl stop resources - stops Oracle Clusterware managed resources
crsctl debug statedump css - dumps state info for Cluster Synchronization Services objects
crsctl debug statedump crs - dumps state info for Cluster Ready Services objects
crsctl debug statedump evm - dumps state info for Event Manager objects
crsctl debug log css [module:level] {,module:level} ... - turns on debugging for Cluster Synchronization Services
crsctl debug log crs [module:level] {,module:level} ... - turns on debugging for Cluster Ready Services
crsctl debug log evm [module:level] {,module:level} ... - turns on debugging for Event Manager
crsctl debug log res [resname:level] ... - turns on debugging for Event Manager
crsctl debug trace css [module:level] {,module:level} ... - turns on debugging for Cluster Synchronization Services
crsctl debug trace crs [module:level] {,module:level} ... - turns on debugging for Cluster Ready Services
crsctl debug trace evm [module:level] {,module:level} ... - turns on debugging for Event Manager
crsctl query crs softwareversion [<nodename>] - lists the version of Oracle Clusterware software installed
crsctl query crs activeversion - lists the Oracle Clusterware operating version
crsctl lsmodules css - lists the Cluster Synchronization Services modules that can be used for debugging
crsctl lsmodules crs - lists the Cluster Ready Services modules that can be used for debugging
crsctl lsmodules evm - lists the Event Manager modules that can be used for debugging
If necessary any of these commands can be run with additional tracing by adding a 'trace'
argument at the very front. Example: crsctl trace check css
{node1:crs}/crs #
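A quick clusterware health sweep built from the check and query subcommands above; a sketch only, using the same dry-run guard as elsewhere in this guide so it can be previewed without a running cluster (DRYRUN=1, the default, only prints the commands):

```shell
# Minimal clusterware health sweep. DRYRUN=1 (default) only lists the
# crsctl commands that would be run.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}
run crsctl check crs                  # overall viability of Oracle Clusterware
run crsctl query css votedisk         # voting disks in use
run crsctl query crs activeversion    # operating clusterware version
```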


ocrconfig : To backup, export, import, repair ... OCR (Oracle Cluster Registry) contents.

{node1:crs}/crs # ocrconfig
Name:
   ocrconfig - Configuration tool for Oracle Cluster Registry.
Synopsis:
   ocrconfig [option]
   option:
      -export <filename> [-s online]          - Export cluster registry contents to a file
      -import <filename>                      - Import cluster registry contents from a file
      -upgrade [<user> [<group>]]             - Upgrade cluster registry from previous version
      -downgrade [-version <version string>]  - Downgrade cluster registry to the specified version
      -backuploc <dirname>                    - Configure periodic backup location
      -showbackup [auto|manual]               - Show backup information
      -manualbackup                           - Perform OCR backup
      -restore <filename>                     - Restore from physical backup
      -replace ocr|ocrmirror [<filename>]     - Add/replace/remove a OCR device/file
      -overwrite                              - Overwrite OCR configuration on disk
      -repair ocr|ocrmirror <filename>        - Repair local OCR configuration
      -help                                   - Print out this help information
Note:
   A log file will be created in
   $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
   you have file creation privileges in the above directory before
   running this tool.
{node1:crs}/crs #
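Before changing the cluster configuration it is common to combine two of these options: -manualbackup for a physical copy and -export for a logical one. A hedged sketch (the /backup path is a placeholder; DRYRUN=1, the default, only prints the commands):

```shell
# Take both a physical and a logical OCR backup before cluster changes.
# DRYRUN=1 (default) only prints the ocrconfig commands.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}
stamp=$(date +%Y%m%d)                                     # datestamp for the export file
run ocrconfig -manualbackup                               # physical backup to the backup location
run ocrconfig -export /backup/ocr_${stamp}.exp -s online  # logical export, cluster online
run ocrconfig -showbackup                                 # verify the backup was recorded
```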

ocrdump : To dump the content of the OCR.


{node1:crs}/crs # ocrdump -help
Name:
ocrdump - Dump contents of Oracle Cluster Registry to a file.
Synopsis:
ocrdump [<filename>|-stdout] [-backupfile <backupfilename>] [-keyname <keyname>]
[-xml] [-noheader]
Description:
Default filename is OCRDUMPFILE. Examples are:
prompt> ocrdump
writes cluster registry contents to OCRDUMPFILE in the current directory
prompt> ocrdump MYFILE
writes cluster registry contents to MYFILE in the current directory
prompt> ocrdump -stdout -keyname SYSTEM
writes the subtree of SYSTEM in the cluster registry to stdout
prompt> ocrdump -stdout -xml
writes cluster registry contents to stdout in xml format
Notes:
   The header information will be retrieved based on best effort basis.
   A log file will be created in
   $ORACLE_HOME/log/<hostname>/client/ocrdump_<pid>.log. Make sure
   you have file creation privileges in the above directory before
   running this tool.
{node1:crs}/crs #


srvctl : To administer the clusterware resources.


{node1:crs}/crs # srvctl
Usage: srvctl <command> <object> [<options>]
command: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|instance|service|nodeapps|asm|listener
For detailed help on each command and object and its options use:
srvctl <command> <object> -h
{node1:crs}/crs #

srvctl -h : To obtain all possible commands.


{node1:crs}/crs # srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d <name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-A <name|ip>/netmask] [-r {PRIMARY |
PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl add instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl add service -d <name> -s <service_name> -r "<preferred_list>" [-a "<available_list>"] [-P <TAF_policy>]
Usage: srvctl add service -d <name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
Usage: srvctl add nodeapps -n <node_name> -A <name|ip>/netmask[/if1[|if2|...]]
Usage: srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home> [-p <spfile>]
Usage: srvctl add listener -n <node_name> -o <oracle_home> [-l <listener_name>]
Usage: srvctl config database
Usage: srvctl config database -d <name> [-a] [-t]
Usage: srvctl config service -d <name> [-s <service_name>] [-a] [-S <level>]
Usage: srvctl config nodeapps -n <node_name> [-a] [-g] [-s] [-l] [-h]
Usage: srvctl config asm -n <node_name>
Usage: srvctl config listener -n <node_name>

Usage: srvctl disable database -d <name>
Usage: srvctl disable instance -d <name> -i "<inst_name_list>"
Usage: srvctl disable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl disable asm -n <node_name> [-i <inst_name>]

Usage: srvctl enable database -d <name>
Usage: srvctl enable instance -d <name> -i "<inst_name_list>"
Usage: srvctl enable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl enable asm -n <node_name> [-i <inst_name>]

Usage: srvctl getenv database -d <name> [-t "<name_list>"]
Usage: srvctl getenv instance -d <name> -i <inst_name> [-t "<name_list>"]
Usage: srvctl getenv service -d <name> -s <service_name> [-t "<name_list>"]
Usage: srvctl getenv nodeapps -n <node_name> [-t "<name_list>"]

Usage: srvctl modify database -d <name> [-n <db_name] [-o <ohome>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY |
LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl modify instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
Usage: srvctl modify service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <name> -s <service_name> -n -i <preferred_list> [-a <available_list>] [-f]
Usage: srvctl modify asm -n <node_name> -i <asm_inst_name> [-o <oracle_home>] [-p <spfile>]
Usage: srvctl relocate service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl remove database -d <name> [-f]
Usage: srvctl remove instance -d <name> -i <inst_name> [-f]
Usage: srvctl remove service -d <name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl remove nodeapps -n "<node_name_list>" [-f]
Usage: srvctl remove asm -n <node_name> [-i <asm_inst_name>] [-f]
Usage: srvctl remove listener -n <node_name> [-l <listener_name>]

Usage: srvctl setenv database -d <name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl setenv instance -d <name> [-i <inst_name>] {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl setenv service -d <name> [-s <service_name>] {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl setenv nodeapps -n <node_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}

Usage: srvctl start database -d <name> [-o <start_options>]
Usage: srvctl start instance -d <name> -i "<inst_name_list>" [-o <start_options>]
Usage: srvctl start service -d <name> [-s "<service_name_list>" [-i <inst_name>]] [-o <start_options>]
Usage: srvctl start nodeapps -n <node_name>
Usage: srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]
Usage: srvctl start listener -n <node_name> [-l <lsnr_name_list>]

Usage: srvctl status database -d <name> [-f] [-v] [-S <level>]
Usage: srvctl status instance -d <name> -i "<inst_name_list>" [-f] [-v] [-S <level>]
Usage: srvctl status service -d <name> [-s "<service_name_list>"] [-f] [-v] [-S <level>]
Usage: srvctl status nodeapps -n <node_name>
Usage: srvctl status asm -n <node_name>

Usage: srvctl stop database -d <name> [-o <stop_options>]
Usage: srvctl stop instance -d <name> -i "<inst_name_list>" [-o <stop_options>]
Usage: srvctl stop service -d <name> [-s "<service_name_list>" [-i <inst_name>]] [-f]
Usage: srvctl stop nodeapps -n <node_name> [-r]
Usage: srvctl stop asm -n <node_name> [-i <asm_inst_name>] [-o <stop_options>]
Usage: srvctl stop listener -n <node_name> [-l <lsnr_name_list>]

Usage: srvctl unsetenv database -d <name> -t "<name_list>"
Usage: srvctl unsetenv instance -d <name> [-i <inst_name>] -t "<name_list>"
Usage: srvctl unsetenv service -d <name> [-s <service_name>] -t "<name_list>"
Usage: srvctl unsetenv nodeapps -n <node_name> -t "<name_list>"
{node1:crs}/crs #


crs_stat -help : To query the state of Oracle Clusterware resources.


{node1:crs}/crs # crs_stat -help
Usage: crs_stat [resource_name [...]] [-v] [-l] [-q] [-c cluster_member]
crs_stat [resource_name [...]] -t [-v] [-q] [-c cluster_member]
crs_stat -p [resource_name [...]] [-q]
crs_stat [-a] application -g
crs_stat [-a] application -r [-c cluster_member]
crs_stat -f [resource_name [...]] [-q] [-c cluster_member]
crs_stat -ls [resource_name [...]] [-q]
{node1:crs}/crs #

cluvfy : To pre/post check installation stages.


{node1:crs}/crs # cluvfy
USAGE:
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
cluvfy comp { -list | -help }
cluvfy comp <component-name> <component-specific options> [-verbose]
{node1:crs}/crs #

cluvfy stage -list

{node1:crs}/crs # cluvfy stage -list
USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

Valid stage options and stage names are:
      -post hwos    : post-check for hardware and operating system
      -pre  cfs     : pre-check for CFS setup
      -post cfs     : post-check for CFS setup
      -pre  crsinst : pre-check for CRS installation
      -post crsinst : post-check for CRS installation
      -pre  dbinst  : pre-check for database installation
      -pre  dbcfg   : pre-check for database configuration
{node1:crs}/crs #

11gRAC/ASM/AIX

[email protected]

196 of 393

cluvfy stage -help


{node1:crs}/crs # cluvfy stage -help
USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

SYNTAX (for Stages):
cluvfy stage -post hwos -n <node_list> [ -s <storageID_list> ] [-verbose]
cluvfy stage -pre cfs -n <node_list> -s <storageID_list> [-verbose]
cluvfy stage -post cfs -n <node_list> -f <file_system> [-verbose]
cluvfy stage -pre crsinst -n <node_list>
      [-r { 10gR1 | 10gR2 | 11gR1 } ]
      [ -c <ocr_location> ] [ -q <voting_disk> ]
      [ -osdba <osdba_group> ]
      [ -orainv <orainventory_group> ] [-verbose]
cluvfy stage -post crsinst -n <node_list> [-verbose]
cluvfy stage -pre dbinst -n <node_list>
      [-r { 10gR1 | 10gR2 | 11gR1 } ]
      [ -osdba <osdba_group> ] [-verbose]
cluvfy stage -pre dbcfg -n <node_list> -d <oracle_home> [-verbose]
{node1:crs}/crs #
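cluvfy expects its node list as a single comma-separated -n argument. A small sketch of building that argument and the resulting pre-CRS-install check command (the node names and the idea of reading them from a hosts file are placeholders):

```shell
# Build the comma-separated node list cluvfy expects, then print the
# pre-CRS-install check command that would be run.
nodes=""
for n in node1 node2 node3; do        # e.g. read these from a hosts file instead
    nodes="${nodes:+$nodes,}$n"       # append with a comma once non-empty
done
echo "cluvfy stage -pre crsinst -n $nodes -r 11gR1 -verbose"
```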

cluvfy comp -list

{node1:crs}/crs # cluvfy comp -list
USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

Valid components are:
      nodereach : checks reachability between nodes
      nodecon   : checks node connectivity
      cfs       : checks CFS integrity
      ssa       : checks shared storage accessibility
      space     : checks space availability
      sys       : checks minimum system requirements
      clu       : checks cluster integrity
      clumgr    : checks cluster manager integrity
      ocr       : checks OCR integrity
      crs       : checks CRS integrity
      nodeapp   : checks node applications existence
      admprv    : checks administrative privileges
      peer      : compares properties with peers
{node1:crs}/crs #

11gRAC/ASM/AIX

[email protected]

197 of 393

cluvfy comp -help

{node1:crs}/crs # cluvfy comp -help
USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

SYNTAX (for Components):
cluvfy comp nodereach -n <node_list> [ -srcnode <node> ] [-verbose]
cluvfy comp nodecon -n <node_list> [ -i <interface_list> ] [-verbose]
cluvfy comp cfs [ -n <node_list> ] -f <file_system> [-verbose]
cluvfy comp ssa [ -n <node_list> ] [ -s <storageID_list> ] [-verbose]
cluvfy comp space [ -n <node_list> ] -l <storage_location>
      -z <disk_space> {B|K|M|G} [-verbose]
cluvfy comp sys [ -n <node_list> ] -p { crs | database } [-r { 10gR1 | 10gR2 | 11gR1 } ]
      [ -osdba <osdba_group> ] [ -orainv <orainventory_group> ] [-verbose]
cluvfy comp clu [ -n <node_list> ] [-verbose]
cluvfy comp clumgr [ -n <node_list> ] [-verbose]
cluvfy comp ocr [ -n <node_list> ] [-verbose]
cluvfy comp crs [ -n <node_list> ] [-verbose]
cluvfy comp nodeapp [ -n <node_list> ] [-verbose]
cluvfy comp admprv [ -n <node_list> ] [-verbose]
      -o user_equiv [-sshonly]
      -o crs_inst [-orainv <orainventory_group> ]
      -o db_inst [-osdba <osdba_group> ]
      -o db_config -d <oracle_home>
cluvfy comp peer [ -refnode <node> ] -n <node_list>
      [-r { 10gR1 | 10gR2 | 11gR1 } ]
      [ -orainv <orainventory_group> ] [ -osdba <osdba_group> ] [-verbose]
{node1:crs}/crs #


21 APPENDIX D : EMPTY TABLES TO USE FOR INSTALLATION


21.1 Network document to ease your installation
(Network layout, Hosts naming, etc ...)

Each node MUST have the same network interface layout and usage. Identify your PUBLIC and PRIVATE network interfaces, and fill in the table on the next page.

Public, Private, and Virtual Host Name layout

For each node participating in the cluster, fill in the hostname and associated IP for :

Node     Network      Host Type   Defined hostname       Assigned IP      Observation
         Interface
------   ---------    ---------   --------------------   --------------   -------------------------------------------
Node 1   en____       Public      ____________________   ______________   RAC Public node name (Public Network)
                      Virtual     ____________________   ______________   RAC VIP node name (Public Network)
         en____       Private     ____________________   ______________   RAC Interconnect node name (Private Network)

Node 2   en____       Public      ____________________   ______________   RAC Public node name (Public Network)
                      Virtual     ____________________   ______________   RAC VIP node name (Public Network)
         en____       Private     ____________________   ______________   RAC Interconnect node name (Private Network)

Node 3   en____       Public      ____________________   ______________   RAC Public node name (Public Network)
                      Virtual     ____________________   ______________   RAC VIP node name (Public Network)
         en____       Private     ____________________   ______________   RAC Interconnect node name (Private Network)

Node 4   en____       Public      ____________________   ______________   RAC Public node name (Public Network)
                      Virtual     ____________________   ______________   RAC VIP node name (Public Network)
         en____       Private     ____________________   ______________   RAC Interconnect node name (Private Network)

Node 5   en____       Public      ____________________   ______________   RAC Public node name (Public Network)
                      Virtual     ____________________   ______________   RAC VIP node name (Public Network)
         en____       Private     ____________________   ______________   RAC Interconnect node name (Private Network)

21.2 Steps Checkout

              ------------------------------- Done ? -------------------------------
Action        Node 1   Node 2   Node 3   Node 4   Node 5   Node 6   Node ..
-----------   ------   ------   ------   ------   ------   ------   -------

21.3 Disks document to ease disks preparation in your implementation
(OCR, Voting, ASM spfile and ASM disks)

                                                Node 1             Node 2             Node 3             Node 4
                     LUNs                              Major Minor        Major Minor        Major Minor        Major Minor
Disks                Number  Device Name        hdisk  Num.  Num.  hdisk  Num.  Num.  hdisk  Num.  Num.  hdisk  Num.  Num.
-------------------  ------  -----------------  -----  ----- ----- -----  ----- ----- -----  ----- ----- -----  ----- -----
Disk example         L0      /dev/example_disk  Hdisk1 30    2     Hdisk0 30    2     Hdisk1 30    2     Hdisk1 30    2
Disk for OCR 1
Disk for OCR 2
Disk for Voting 1
Disk for Voting 2
Disk for Voting 3
Disk for ASM spfile
Disk 1 for ASM
Disk 2 for ASM
Disk 3 for ASM
Disk 4 for ASM
Disk 5 for ASM
Disk 6 for ASM
Disk 7 for ASM
Disk 8 for ASM
Disk 9 for ASM
Disk 10 for ASM

22 DOCUMENTS, BOOKS TO LOOK AT ..


IBM DS4000 and Oracle Database for AIX.
Abstract: The IBM System Storage DS4000 is very well suited for Oracle Databases. Learn how to best use the DS4000 in an AIX environment by understanding
applicable data layout techniques and other important considerations.
Document Author : Jeff Mucher (Advanced Technical Support)
Document ID : PRS3044
Product(s) covered : AIX 5L; DS4000; DS4700; DS4800; GPFS; Oracle; Oracle RAC; Oracle Database
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3044

Oracle Automatic Storage Management:
Under-the-Hood & Practical Deployment Guide
McGraw Hill
By: Nitin Vengurlekar, Murali Vallath, Rich Long

Oracle Database 10g High Availability
with RAC, Flashback and Data Guard
McGraw Hill
By: Matthew Hart, Scott Jesse