COOKBOOK

Installation Guide

Oracle 11g RAC Release 1
with Oracle Automatic Storage Management (ASM)
on IBM System p and i Running AIX 5L
with SAN Storage

Version 1.0
April 2008

ORACLE / IBM Joint Solutions Center
IBM - Thierry Plumeau
Oracle/IBM Joint Solutions Center
Oracle - Frederic Michiara
Oracle/IBM Joint Solutions Center
Document history:

Version 1.0 - January 2008 - Creation
              April 2008   - Review / Update
Who: Frederic Michiara, Thierry Plumeau, Didier Wojciechowski, Paul Bramy, Alain Roy
Validated by: Paul Bramy
Contributors :
o
Special thanks to the participants of the 11gRAC workshop in Switzerland (Geneva) who used our last
draft cookbook release, helped us to discover some typos, and validated the content of the cookbook.
Contact :
Table of contents (extract):

11.1 Required local disks (Oracle Clusterware, ASM and RAC software) ............ 103
11.2 Oracle Clusterware Disks (OCR and Voting Disks) ............................ 105
  11.2.1 Required LUNs .......................................................... 106
  11.2.2 How to identify if a LUN is used or not ................................ 107
  11.2.3 Register LUNs at AIX level ............................................. 110
  11.2.4 Identify LUNs and corresponding hdisk on each node ..................... 111
  11.2.5 Removing reserve lock policy on hdisks from each node .................. 114
  11.2.6 Identify Major and Minor number of hdisk on each node .................. 117
  11.2.7 Create Unique Virtual Device to access same LUN from each node ......... 119
  11.2.8 Set Ownership / Permissions on Virtual Devices ......................... 120
Origin / Reference:

- B28252: http://www.oracle.com/pls/db111/portal.portal_db?selected=11&frame=#aix_installation_guides
- B28252: http://www.oracle.com/pls/db111/homepage?remark=tahiti
- B32075: http://www.oracle.com/pls/db111/portal.portal_db?selected=11&frame=#aix_installation_guides
- A96697-01: http://download-east.oracle.com/docs/cd/B10501_01/em.920/a96697/preface.htm
- Oracle Metalink Note 169706.1: http://metalink.oracle.com/
- Oracle Metalink Note 370915.1: http://metalink.oracle.com/
- Oracle Metalink Note 294869.1 (Oracle ASM and Multi-Pathing Technologies): http://metalink.oracle.com/
- http://tahiti.oracle.com
Your comments are important to us, and we thank those who sent us their feedback about the previous
release and about how this document helped them in their implementation. We want our technical papers to
be as helpful as possible.

Please send us your comments about this document to the Oracle/IBM Joint Solutions Center:
+33 (0)4 67 34 67 49
And:

DBA Essentials, at http://www.oracle.com/pls/db111/homepage?remark=tahiti - manage all aspects of your Oracle databases with the Enterprise Manager GUI:
- 2 Day DBA (HTML / PDF)
- 2 Day + Performance Tuning Guide (HTML / PDF)
- 2 Day + Real Application Clusters Guide (HTML / PDF)
(Diagram: Oracle Clusterware network heartbeat and disk heartbeat.)
Protecting a Single Instance Database and a third-party application tier through Oracle Clusterware

(Figure sequence: a listener and an HR application tier protected by Oracle Clusterware.)

- APPS VIP (10.3.25.200) will switch to a preferred node (if configured; in our case to the third node. If the third node is not available, APPS VIP will switch to one available node).
- When node 1 comes back to normal operation, the node VIP (10.3.25.81) will switch back automatically to its home node, meaning the first node. BUT the APPS VIP will NOT switch back!
- To switch back the APPS VIP and its dependent resources to the first node, the administrator will have to relocate the APPS VIP to the first node through clusterware commands.

Sending commands on cluster events through Oracle Clusterware
Oracle Clusterware
Oracle Clusterware is portable cluster software that allows clustering of single servers so that they cooperate as a
single system. Oracle Clusterware also provides the required infrastructure for Oracle Real Application Clusters (RAC).
In addition, Oracle Clusterware enables the protection of any Oracle application, or any other kind of application, within a
cluster. In any case, Oracle Clusterware is the intelligence in those systems that ensures the required cooperation
between the cluster nodes.
Oracle Clusterware
Oracle Clusterware Whitepaper (PDF)
Oracle Clusterware Technical Articles
Using standard NFS to support a third voting disk on an Extended Distance
cluster configuration on Linux, AIX, HP-UX, or Solaris (PDF) February 2008
Oracle Homes in an Oracle Real Application Clusters Environment (PDF) February 2008
Using Oracle Clusterware to protect any kind of application
Using Oracle Clusterware to Protect 3rd Party Applications (PDF) February 2008
Using Oracle Clusterware to Protect Oracle Application Server (PDF) November 2005
Using Oracle Clusterware to Protect an Oracle Database 10g with
Oracle Enterprise Manager Grid Control Integration (PDF) February 2008
Using Oracle Clusterware to Protect A Single Instance Oracle Database 11g (PDF) February 2008
Oracle applications protected by Oracle Clusterware
Siebel CRM Applications protected by Oracle Clusterware January 2008
Pre-configured agents for Oracle Clusterware
Providing High Availability for SAP Resources (PDF) March 2007
Striping: ASM spreads data evenly across all disks in a disk group to optimize performance and utilization.
This even distribution of database files eliminates the need for regular monitoring and I/O performance tuning.

Mirroring: ASM can increase data availability by optionally mirroring any file. ASM mirrors at the file level,
unlike operating system mirroring, which mirrors at the disk level. Mirroring means keeping redundant copies,
or mirrored copies, of each extent of the file, to help avoid data loss caused by disk failures. The mirrored copy
of each file extent is always kept on a different disk from the original copy. If a disk fails, ASM can continue to
access affected files by accessing mirrored copies on the surviving disks in the disk group.

Online storage reconfiguration and dynamic rebalancing: ASM permits you to add or remove disks from
your disk storage system while the database is operating. When you add a disk to a disk group, ASM
automatically redistributes the data so that it is evenly spread across all disks in the disk group, including the
new disk. The process of redistributing data so that it is also spread across the newly added disks is known as
rebalancing. It is done in the background and with minimal impact to database performance.

Managed file creation and deletion: ASM further reduces administration tasks by enabling files stored in
ASM disk groups to be managed by Oracle Database. ASM automatically assigns file names when files are
created, and automatically deletes files when they are no longer needed by the database.
ASM is implemented as a special kind of Oracle instance, with its own System Global Area and background processes.
The ASM instance is tightly integrated with the database instance. Every server running one or more database
instances that use ASM for storage has an ASM instance. In an Oracle RAC environment, there is one ASM instance
for each node, and the ASM instances communicate with each other on a peer-to-peer basis. Only one ASM instance is
required for each node regardless of the number of database instances on the node.
Oracle recommends that you use ASM for your database file storage, instead of raw devices or the operating system
file system. However, databases can have a mixture of ASM files and non-ASM files.
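As a quick illustration, once an ASM instance is running on a node, you can connect to it and list its disk groups. This is a minimal sketch; the instance name +ASM1 and the user setup are assumptions matching the layout used later in this cookbook:

{node1:asm}/oracle/asm # export ORACLE_SID=+ASM1
{node1:asm}/oracle/asm # sqlplus / as sysasm
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;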
See Also:

DBA Essentials, at http://www.oracle.com/pls/db111/homepage?remark=tahiti - manage all aspects of your Oracle databases with the Enterprise Manager GUI:
- 2 Day DBA (HTML / PDF)
- 2 Day + Performance Tuning Guide (HTML / PDF)
- 2 Day + Real Application Clusters Guide (HTML / PDF)
IBM VIOS Virtual SCSI for ASM data storage and associated raw hdisk based Voting and
OCR (NEW April 2008).
IBM VIOS Virtual LAN for all public network and private network interconnect for supported
data storage options (NEW April 2008).
7 INFRASTRUCTURE REQUIREMENTS
- Scale in within each node, adding resources when possible, as your business grows
- Scale out when scaling in is no longer possible, responding to your business growth
- Etc...

Oracle Real Application Clusters protects the high availability of your databases, but all hardware components of the cluster must be protected as well, such as:
- SAN storage attachments from the nodes to the SAN, including SAN switches and HBAs
- Cluster network interconnect for the Oracle Clusterware network heartbeat and the Oracle RAC cache fusion mechanism (Private Network), including network interfaces
- Storage
- Etc...
7.1
General Requirements
This chapter lists the general requirements to consider for an Oracle RAC implementation on IBM System p.
Topics covered are:
7.1.1 About Servers and processors
7.1.2 About RAC on IBM System p
Oracle Real Application Clusters can be implemented on LPARs/DLPARs from separate physical IBM System p
servers, for production or testing, to achieve:
- High Availability (protecting against the loss of a physical server)
- Scalability (adding cores and memory, or adding a new server)
- Database workload affinity across servers

OR it can be implemented between LPARs/DLPARs from a same physical IBM System p server:
- for production, if high availability is not the focus, but mostly about separating the database workload,
- or for testing or development purposes.
7.1.3 About Network
Protect the private network with an EtherChannel implementation. The following diagram shows the implementation with 2 servers:

- Network cards for the public network must have the same name on each participating node in the RAC cluster (for example en0 on all nodes).
- Network cards for the interconnect network (private) must have the same name on each participating node in the RAC cluster (for example en1 on all nodes).
- 1 virtual IP per node must be reserved, and not used on the network prior to the Oracle Clusterware installation. Don't set an IP alias at AIX level; Oracle Clusterware will take charge of it.

With AIX EtherChannel implemented with 2 network switches, we'll cover the loss of the interconnect network cards (for example en2 and en3) and of the corresponding interconnect network switch (see the sketch below).
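A minimal sketch of the EtherChannel setup on AIX, assuming ent1 and ent2 are the two physical interconnect adapters (the smit menu labels may vary slightly by AIX level):

{node1:root}/ # smitty etherchannel
    # select "Add An EtherChannel / Link Aggregation"
    # select ent1 and ent2 as the EtherChannel adapters
{node1:root}/ # lsdev -Cc adapter | grep -i ether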
7.1.4 About SAN Storage
When implementing RAC, you must be careful about the SAN storage you use. The SAN storage must be
capable, through its drivers, of concurrent read/write access from any member of the RAC cluster, which
means that the reserve_policy attribute of the discovered disks (hdisk, hdiskpower, dlmfdrv, etc.) must be
settable to the no_reserve or no_lock values.
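For example, on an MPIO-managed hdisk, a minimal sketch to check and change the attribute (hdisk2 is only an illustration; the attribute name can differ by driver, e.g. reserve_lock for EMC PowerPath):

{node1:root}/ # lsattr -El hdisk2 | grep reserve
{node1:root}/ # chdev -l hdisk2 -a reserve_policy=no_reserve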
For other storage vendors such as EMC, HDS, HP and so on, you'll have to check that the selected storage is compatible
with and supported for RAC. Oracle does not support or certify storage; the storage vendors will say whether they
support it or not, and with which multi-pathing tool (IBM SDDPCM, EMC PowerPath, HDLM, etc.).
Some documents to look at for Oracle on IBM storage :
7.1.5 Proposed infrastructure with 2 servers

7.1.6 What do we protect?
Oracle RAC protects against node failures; the infrastructure should cover a maximum of failure cases.

For the storage: all components of the infrastructure are protected for high availability except the storage itself,
if the full storage is lost. The only way to extend high availability to the storage level is to introduce a second
storage unit and implement RAC as a stretched or extended cluster.
7.1.7 About IBM Advanced Power Virtualization and RAC
Extract from Certify : RAC for Unix On IBM AIX based Systems (RAC only) (27/02/2008, check for last update)
https://metalink.oracle.com/metalink/plsql/f?p=140:1:12020374767153922780:::::
Oracle products are certified for AIX5L on all servers that IBM supports with AIX5L. This includes IBM System i
and System p models that use POWER5 and POWER6 processors. Please visit IBM's website at this URL for
more information on AIX5L support for System i details.
IBM power based systems that support AIX5L includes machines branded as RS6000, pSeries, iSeries, System
p and System i.
The minimum AIX levels for POWER 6 based models are AIX5L 5.2 TL10 and AIX5L 5.3 TL06
AIX5L certifications include AIX5L versions 5.2 and 5.3. 5.1 was desupported on 16-JUN-2006.
o AIX 64-bit kernel is required for 10gR2 RAC. AIX 32-bit kernel and AIX 64-bit kernel are supported with 9.2 and 10g.
Extract from RAC Technologies Matrix for UNIX Platforms (11/04/2008, check for last update)
http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_unix_new.html
Technology Category: Server/Processor Architecture
Platform: IBM AIX
Technology: IBM System p, i, BladeCenter JS20, JS21
Notes: -
7.1.7.1 Network and VIO Server
Implementing EtherChannel links through VIO servers for the public and private networks between nodes.
Example between 2 LPARs: one for a RAC node, and one for an APPS node.
Implementing EtherChannel and VLAN (Virtual LAN) links through VIO servers for the public and private networks
between 2 RAC nodes:
7.1.7.2 Storage and VIO Server
The hdisks hosting the following components (rootvg, etc.) are accessed through the VIO servers,
and the OCR, Voting and ASM disks are accessed through HBAs directly attached to the RAC LPARs.
7.2 Cookbook infrastructure

For our infrastructure, we used a cluster composed of three partitions (IBM LPARs) on an IBM System p 570
running AIX 5L. BUT in the real world, to achieve true high availability, it's necessary to have at least two IBM
System p / i servers, as shown below (diagram: interconnect network and public network between the servers).

7.2.1 IBM System p servers

http://www-03.ibm.com/servers/eserver/pseries/hardware/highend/
http://www-03.ibm.com/systems/p/

THEN you'll need 1 AIX5L LPAR on each server for a real RAC implementation, with the necessary memory and
POWER6 CPUs assigned to each LPAR.
{node1:root}/ # prtconf
System Model: IBM,9117-MMA
Machine Serial Number: 651A260
Processor Type: PowerPC_POWER6
Number Of Processors: 1
Processor Clock Speed: 3504 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 1 JSC-11g-node1
Memory Size: 3072 MB
Good Memory Size: 3072 MB
Platform Firmware level: EM310_048
Firmware Version: IBM,EM310_048
Console Login: enable
Auto Restart: true
Full Core: false
Network Information
Host Name: node1
IP Address: 10.3.25.81
Sub Netmask: 255.255.255.0
Gateway: 10.3.25.254
Name Server:
Domain Name:
Paging Space Information
Total Paging Space: 512MB
Percent Used: 7%
Volume Groups Information
==============================================================================
rootvg:
PV_NAME           PV STATE       TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active         273         152         44..00..00..53..55
==============================================================================
INSTALLED RESOURCE LIST
The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
* = Diagnostic support not available.
Model Architecture: chrp
Model Implementation: Multiple Processor, PCI bus
+ sys0                                                           System Object
+ sysplanar0                                                     System Planar
* vio0                                                           Virtual I/O Bus
* vscsi0    U9117.570.65D7D3E-V1-C5-T1                           Virtual SCSI Client Adapter
* hdisk0    U9117.570.65D7D3E-V1-C5-T1-L810000000000             Virtual SCSI Disk Drive
* vsa0      U9117.570.65D7D3E-V1-C0                              LPAR Virtual Serial Adapter
* vty0      U9117.570.65D7D3E-V1-C0-L0                           Asynchronous Terminal
* ent2      U9117.570.65D7D3E-V1-C4-T1                           Virtual I/O Ethernet Adapter (l-lan)
* ent1      U9117.570.65D7D3E-V1-C3-T1                           Virtual I/O Ethernet Adapter (l-lan)
* ent0      U9117.570.65D7D3E-V1-C2-T1                           Virtual I/O Ethernet Adapter (l-lan)
* pci1      U7879.001.DQD17GX-P1                                 PCI Bus
* pci6      U7879.001.DQD17GX-P1                                 PCI Bus
+ fcs0      U7879.001.DQD17GX-P1-C2-T1                           FC Adapter
* fcnet0    U7879.001.DQD17GX-P1-C2-T1                           Fibre Channel Network Protocol Device
* fscsi0    U7879.001.DQD17GX-P1-C2-T1                           FC SCSI I/O Controller Protocol Device
* dac0      U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31         1722-600 (600) Disk Array Controller
* dac1      U7879.001.DQD17GX-P1-C2-T1-W200900A0B812AB31         1722-600 (600) Disk Array Controller
* dac2      U7879.001.DQD17GX-P1-C2-T1-W200700A0B80FD42F         1722-600 (600) Disk Array Controller
* dac3      U7879.001.DQD17GX-P1-C2-T1-W200600A0B80FD42F         1722-600 (600) Disk Array Controller
+ L2cache0                                                       L2 Cache
+ mem0                                                           Memory
+ proc0                                                          Processor
+ hdisk1    U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L0               1722-600 (600) Disk Array Device
+ hdisk2    U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L1000000000000   1722-600 (600) Disk Array Device
+ hdisk3    U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L2000000000000   1722-600 (600) Disk Array Device
{node2:root}/ # prtconf
System Model: IBM,9117-MMA
Machine Serial Number: 651A260
Processor Type: PowerPC_POWER6
Number Of Processors: 1
Processor Clock Speed: 3504 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 2 JSC-11g-node2
Memory Size: 3072 MB
Good Memory Size: 3072 MB
Platform Firmware level: EM310_048
Firmware Version: IBM,EM310_048
Console Login: enable
Auto Restart: true
Full Core: false
Network Information
Host Name: node2
IP Address: 10.3.25.82
Sub Netmask: 255.255.255.0
Gateway: 10.3.25.254
Name Server:
Domain Name:
Paging Space Information
Total Paging Space: 512MB
Percent Used: 2%
Volume Groups Information
==============================================================================
rootvg:
PV_NAME           PV STATE       TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk1            active         273         149         54..20..00..20..55
==============================================================================
INSTALLED RESOURCE LIST
The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
* = Diagnostic support not available.
Model Architecture: chrp
Model Implementation: Multiple Processor, PCI bus
+ sys0                                                           System Object
+ sysplanar0                                                     System Planar
* pci9      U7879.001.DQD17GX-P1                                 PCI Bus
* pci10     U7879.001.DQD17GX-P1                                 PCI Bus
+ fcs0      U7879.001.DQD17GX-P1-C5-T1                           FC Adapter
* fcnet0    U7879.001.DQD17GX-P1-C5-T1                           Fibre Channel Network Protocol Device
* fscsi0    U7879.001.DQD17GX-P1-C5-T1                           FC SCSI I/O Controller Protocol Device
* dac0      U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31         1722-600 (600) Disk Array Controller
* dac1      U7879.001.DQD17GX-P1-C5-T1-W200900A0B812AB31         1722-600 (600) Disk Array Controller
* dac2      U7879.001.DQD17GX-P1-C5-T1-W200700A0B80FD42F         1722-600 (600) Disk Array Controller
* dac3      U7879.001.DQD17GX-P1-C5-T1-W200600A0B80FD42F         1722-600 (600) Disk Array Controller
* vio0                                                           Virtual I/O Bus
* vscsi0    U9117.570.65D7D3E-V2-C5-T1                           Virtual SCSI Client Adapter
* hdisk1    U9117.570.65D7D3E-V2-C5-T1-L810000000000             Virtual SCSI Disk Drive
* vsa0      U9117.570.65D7D3E-V2-C0                              LPAR Virtual Serial Adapter
* vty0      U9117.570.65D7D3E-V2-C0-L0                           Asynchronous Terminal
* ent2      U9117.570.65D7D3E-V2-C4-T1                           Virtual I/O Ethernet Adapter (l-lan)
* ent1      U9117.570.65D7D3E-V2-C3-T1                           Virtual I/O Ethernet Adapter (l-lan)
* ent0      U9117.570.65D7D3E-V2-C2-T1                           Virtual I/O Ethernet Adapter (l-lan)
+ L2cache0                                                       L2 Cache
+ mem0                                                           Memory
+ proc0                                                          Processor
+ hdisk0    U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L0               1722-600 (600) Disk Array Device
+ hdisk2    U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L1000000000000   1722-600 (600) Disk Array Device
+ hdisk3    U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L2000000000000   1722-600 (600) Disk Array Device
+ hdisk4    U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L3000000000000   1722-600 (600) Disk Array Device
{node1:root}/home # lparstat -i
Node Name                            : node1
Partition Name                       : JSC-11g-node1
Partition Number                     : 1
Type                                 : Shared-SMT
Mode                                 : Uncapped
Entitled Capacity                    : 0.50
Partition Group-ID                   : 32769
Shared Pool ID                       : 0
Online Virtual CPUs                  : 1
Maximum Virtual CPUs                 : 1
Minimum Virtual CPUs                 : 1
Online Memory                        : 3072 MB
Maximum Memory                       : 6144 MB
Minimum Memory                       : 1024 MB
Variable Capacity Weight             : 128
Minimum Capacity                     : 0.10
Maximum Capacity                     : 1.00
Capacity Increment                   : 0.01
Maximum Physical CPUs in system      : 16
Active Physical CPUs in system       : 16
Active CPUs in Pool                  : 16
Unallocated Capacity                 : 0.00
Physical CPU Percentage              : 50.00%
Unallocated Weight                   : 0
{node1:root}/home #

{node2:root}/home # lparstat -i
Node Name                            : node2
Partition Name                       : JSC-11g-node2
Partition Number                     : 2
Type                                 : Shared-SMT
Mode                                 : Uncapped
Entitled Capacity                    : 0.50
Partition Group-ID                   : 32770
Shared Pool ID                       : 0
Online Virtual CPUs                  : 1
Maximum Virtual CPUs                 : 1
Minimum Virtual CPUs                 : 1
Online Memory                        : 3072 MB
Maximum Memory                       : 6144 MB
Minimum Memory                       : 1024 MB
Variable Capacity Weight             : 128
Minimum Capacity                     : 0.10
Maximum Capacity                     : 1.00
Capacity Increment                   : 0.01
Maximum Physical CPUs in system      : 16
Active Physical CPUs in system       : 16
Active CPUs in Pool                  : 16
Unallocated Capacity                 : 0.00
Physical CPU Percentage              : 50.00%
Unallocated Weight                   : 0
{node2:root}/home #
7.2.2 Operating System
The operating system must be installed the same way on each LPAR, with the same maintenance level and the same APAR and fileset levels.

Check the PREPARING THE SYSTEM chapter for the operating system requirements on AIX5L.

The IBM AIX clustering layer, the HACMP filesets, MUST NOT be installed if
you've chosen an implementation without HACMP. If this layer is implemented for
another purpose, the disk resources necessary to install and run CRS will have to be part of an
HACMP volume group resource.

If you have previously installed HACMP, you must remove:
- rsct.hacmp.rte
- rsct.compat.basic.hacmp.rte
- rsct.compat.clients.hacmp.rte

If you did run a first installation of the Oracle Clusterware (CRS) with HACMP installed,
check if the /opt/ORCLcluster directory exists, and if so, remove it on all nodes.

THEN REBOOT ALL NODES ...
7.2.3 Multipathing and ASM
Please check Metalink note "Oracle ASM and Multi-Pathing Technologies", Doc ID: Note 294869.1.

Note that Oracle Corporation does not certify ASM against multipathing utilities. The MP utilities listed below are
known working solutions. As more testing is done, additional MP utilities will be listed here; thus, this is a living
document.

Multi-pathing allows SAN access failover, and load balancing across SAN Fibre Channel attachments.
OS Platform   Multi-pathing tool                           ASM Device Usage                          Notes
AIX           IBM SDDPCM                                   Use /dev/rhdiskx device
AIX           IBM RDAC (Redundant Disk Array Controller)   Use /dev/rhdiskx device
AIX           Hitachi Dynamic Link Manager (HDLM)          Use /dev/rdsk/cxtydz that's generated
                                                           by HDLM, or /dev/dlmfdrvx
7.2.4 IBM storage and multipathing
With IBM, please refer to IBM to confirm which IBM storage is supported with RAC, if it is not specified in our document.

IBM TotalStorage products for IBM System p:
- IBM DS4000, DS6000 and DS8000 series are supported with 10gRAC.
- IBM Storage DS300 and DS400 are not, and will not be, supported with 10gRAC.
- As of today, March 27, 2007, IBM Storage DS3200 and DS3400 are not yet supported with 10gRAC.

IBM System Storage and TotalStorage products:
http://www-03.ibm.com/servers/storage/product/products_pseries.html

There are 2 cases when using IBM storage: IBM MPIO, or IBM RDAC (Redundant Disk Array Controller) for IBM
TotalStorage DS4000. The RDAC driver is supported with the IBM TotalStorage DS4000 series only, and the former
FAStT. You MUST use one or the other, depending on the storage used.
case 1: LUNs provided by the IBM storage with IBM MPIO installed as the multi-pathing driver.
Disks (LUNs) will be seen as hdisk at AIX level using the lspv command.

On node 1
{node1:root}/ # lspv
hdisk0          00ced22cf79098ff          rootvg          active
hdisk1          none                      None
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
case 2: LUNs provided by the IBM DS4000 storage with IBM RDAC installed as the multi-pathing driver.
Disks (LUNs) will be seen as hdisk at AIX level using the lspv command.

On node 1
{node1:root}/ # lspv
hdisk0          00ced22cf79098ff          rootvg          active
hdisk1          none                      None
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
7.2.4.1 IBM MPIO (Multi Path I/O) Setup Procedure
Filesets to install:
- devices.sddpcm.53.2.1.0.7.bff
- devices.sddpcm.53.rte
- devices.fcp.disk.ibm.mpio.rte

On node 1 and node 2, install the filesets:

smitty install
  -> Install and Update Software
  -> Install Software

* INPUT device / directory for software     [/mydir_with_my_filesets]
  SOFTWARE to install                       []  (press F4, then select devices.fcp.disk.ibm.mpio)

Install Software
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                         [Entry Fields]
* INPUT device / directory for software                  .
* SOFTWARE to install                                    [devices.fcp.disk.ibm.>  +
  PREVIEW only? (install operation will NOT occur)       no   +
  COMMIT software updates?                               yes  +
  SAVE replaced files?                                   no   +
  AUTOMATICALLY install requisite software?              yes  +
  EXTEND file systems if space needed?                   yes  +
  OVERWRITE same or newer versions?                      no   +
  VERIFY install and check file sizes?                   no   +
  Include corresponding LANGUAGE filesets?               yes  +
  DETAILED output?                                       no   +
  Process multiple volumes?                              yes  +
  ACCEPT new license agreements?                         no   +
  Preview new LICENSE agreements?                        no   +
Installation Summary
--------------------
Name                         Level    Part   Event   Result
-------------------------------------------------------------------------------
devices.fcp.disk.ibm.mpio.r  1.0.0.0  USR    APPLY   SUCCESS
Install devices.sddpcm.53
Select: devices.sddpcm.53  ALL

Install Software
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                         [Entry Fields]
* INPUT device / directory for software                  .
* SOFTWARE to install                                    [devices.sddpcm.53 >  +
  PREVIEW only? (install operation will NOT occur)       no   +
  COMMIT software updates?                               yes  +
  SAVE replaced files?                                   no   +
  AUTOMATICALLY install requisite software?              yes  +
  EXTEND file systems if space needed?                   yes  +
  OVERWRITE same or newer versions?                      no   +
  VERIFY install and check file sizes?                   no   +
  Include corresponding LANGUAGE filesets?               yes  +
  DETAILED output?                                       no   +
  Process multiple volumes?                              yes  +
  ACCEPT new license agreements?                         no   +
  Preview new LICENSE agreements?                        no   +
Installation Summary
--------------------
Name                     Level    Part   Event    Result
-------------------------------------------------------------------------------
devices.sddpcm.53.rte    2.1.0.0  USR    APPLY    SUCCESS
devices.sddpcm.53.rte    2.1.0.0  ROOT   APPLY    SUCCESS
devices.sddpcm.53.rte    2.1.0.7  USR    APPLY    SUCCESS
devices.sddpcm.53.rte    2.1.0.7  ROOT   APPLY    SUCCESS
devices.sddpcm.53.rte    2.1.0.7  USR    COMMIT   SUCCESS
devices.sddpcm.53.rte    2.1.0.7  ROOT   COMMIT   SUCCESS
Commands to know the AIX WWPN and to list the MPIO disks:
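A minimal sketch to display the WWPN of an HBA, assuming adapter fcs0:

{node1:root}/ # lscfg -vl fcs0 | grep "Network Address"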
{node1:root}/ # lsdev -Cc disk
hdisk2    Available 0A-08-02    IBM MPIO FC 2107
hdisk3    Available 0A-08-02    IBM MPIO FC 2107
hdisk4    Available 0A-08-02    IBM MPIO FC 2107
hdisk5    Available 0A-08-02    IBM MPIO FC 2107
hdisk6    Available 0A-08-02    IBM MPIO FC 2107
hdisk7    Available 0A-08-02    IBM MPIO FC 2107
hdisk8    Available 0A-08-02    IBM MPIO FC 2107
hdisk9    Available 0A-08-02    IBM MPIO FC 2107
hdisk10   Available 0A-08-02    IBM MPIO FC 2107
hdisk11   Available 0A-08-02    IBM MPIO FC 2107
{node1:root}/ # pcmpath query device

DEV#:   2  DEVICE NAME: hdisk2  TYPE: 2107900  ALGORITHM: Load Balance
SERIAL: 75271812000
==========================================================================
Path#      Adapter/Path Name      State     Mode      Select     Errors
   0          fscsi0/path0        CLOSE     NORMAL         0          0
   1          fscsi1/path1        CLOSE     NORMAL         0          0

DEV#:   3  DEVICE NAME: hdisk3  TYPE: 2107900  ALGORITHM: Load Balance
SERIAL: 75271812001
==========================================================================
Path#      Adapter/Path Name      State     Mode      Select     Errors
   0          fscsi0/path0        CLOSE     NORMAL         0          0
   1          fscsi1/path1        CLOSE     NORMAL         0          0

DEV#:   4  DEVICE NAME: hdisk4  TYPE: 2107900  ALGORITHM: Load Balance
SERIAL: 75271812002
==========================================================================
Path#      Adapter/Path Name      State     Mode      Select     Errors
   0          fscsi0/path0        CLOSE     NORMAL         0          0
   1          fscsi1/path1        CLOSE     NORMAL         0          0
7.2.4.2 IBM AIX RDAC (fcp.array filesets) Setup Procedure
This ONLY applies to the use of the DS4000 storage series, NOT to DS6000, DS8000 and ES800.

RDAC is installed by default on AIX5L.

Each node must have 2 HBA cards for multi-pathing. With ONLY 1 HBA per node, it will work, but the path to the SAN
will not be protected. THEREFORE, in production, 2 HBAs per node must be used.

All AIX hosts in your storage subsystem must have the RDAC multipath driver installed.

In a single-server environment, AIX allows load sharing (also called load balancing). You can set the load balancing
parameter to yes. In case of a heavy workload on one path, the driver will move LUNs to the controller with less
workload and, if the workload reduces, back to the preferred controller. A problem that can occur is disk thrashing:
the driver moves the LUN back and forth from one controller to the other. As a result, the controller is more occupied
by moving disks around than by servicing I/O. The recommendation is NOT to load balance on an AIX system; the
performance increase is minimal (or performance could actually get worse).

RDAC (fcp.array filesets) for AIX supports round-robin load-balancing.

Setting the attributes of the RDAC driver for AIX: the AIX RDAC driver files are not included on the DS4000
installation CD. Either install them from the AIX Operating Systems CD, if the correct version is included, or
download them from the following Web site: http://techsupport.services.ibm.com/server/fixes
or http://www-304.ibm.com/jct01004c/systems/support/
Commands to check that the necessary filesets are present for RDAC:

{node1:root}/ # lslpp -L devices.common.IBM.fc.rte
  Fileset                     Level    State  Type  Description (Uninstaller)
  ----------------------------------------------------------------------------
  devices.common.IBM.fc.rte   5.3.0.50   C      F   Common IBM FC Software

State codes:
 A -- Applied.
 B -- Broken.
 C -- Committed.
 E -- EFIX Locked.
 O -- Obsolete.  (partially migrated to newer version)
 ? -- Inconsistent State...Run lppchk -v.

Type codes:
 F -- Installp Fileset
 P -- Product
 C -- Component
 T -- Feature
 R -- RPM Package
{node1:root}/ #
Commands to check the RDAC configuration and the HBA path to each hdisk:

On node1:
{node1:root}/ # fget_config -v -A
---dar0---
Disk      DAC    LUN   Logical Drive
hdisk1    dac0     0   G8_spfile
hdisk2    dac3     1   G8_OCR1
hdisk3    dac0     2   G8_OCR2
hdisk4    dac3     3   G8_Vote1
hdisk5    dac0     4   G8_Vote2
hdisk6    dac3     5   G8_Vote3
hdisk7    dac0     6   G8_Data1
hdisk8    dac3     7   G8_Data2
hdisk9    dac0     8   G8_Data3
hdisk10   dac3     9   G8_Data4
hdisk11   dac0    10   G8_Data5
hdisk12   dac3    11   G8_Data6
hdisk13   dac0    12   G8_tie
{node1:root}/ #

On node2:
{node2:root}/ # fget_config -v -A
---dar0---
Disk      DAC    LUN   Logical Drive
hdisk0    dac0     0   G8_spfile
hdisk1    dac3     1   G8_OCR1
hdisk2    dac0     2   G8_OCR2
hdisk3    dac3     3   G8_Vote1
hdisk4    dac0     4   G8_Vote2
hdisk5    dac3     5   G8_Vote3
hdisk6    dac0     6   G8_Data1
hdisk7    dac3     7   G8_Data2
hdisk8    dac0     8   G8_Data3
hdisk9    dac3     9   G8_Data4
hdisk10   dac0    10   G8_Data5
hdisk12   dac3    11   G8_Data6
hdisk13   dac0    12   G8_tie
{node2:root}/ #
Commands to check the RDAC configuration and HBA path to hdisk for one specific dar:

{node1:root}/ # fget_config -l dar0
dac0 ACTIVE dac3 ACTIVE
hdisk1    dac0
hdisk2    dac3
hdisk3    dac0
hdisk4    dac3
hdisk5    dac0
hdisk6    dac3
hdisk7    dac0
hdisk8    dac3
hdisk9    dac0
hdisk10   dac3
hdisk11   dac0
hdisk12   dac3
hdisk13   dac0
{node1:root}/ #
Commands to see the HBA fibre channel statistics:

{node1:root}/ # fcstat fcs0
...
Receive Statistics
------------------
Frames: 96207
Words:  12497408

LIP Count: 0
NOS Count: 0
Error Frames: 0
Dumped Frames: 0
Link Failure Count: 269
Loss of Sync Count: 469
Loss of Signal: 466
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 51
Invalid CRC Count: 0

IP over FC Adapter Driver Information
  No DMA Resource Count: 0
  No Adapter Elements Count: 0

FC SCSI Adapter Driver Information
  No DMA Resource Count: 0
  No Adapter Elements Count: 0
  No Command Resource Count: 0

IP over FC Traffic Statistics
  Input Requests:   0
  Output Requests:  0
  Control Requests: 0
  Input Bytes:  0
  Output Bytes: 0

FC SCSI Traffic Statistics
  Input Requests:   24721
  Output Requests:  8204
  Control Requests: 252
  Input Bytes:  46814436
  Output Bytes: 4207616
{node1:root}/ #
7.2.5 EMC storage and multipathing
With EMC, please refer to EMC to see which EMC storage is supported with RAC.
There are 2 cases when using EMC storage:
case 1: LUNs provided by the EMC storage with IBM MPIO installed as the multi-pathing driver.
Disks (LUNs) will be seen as hdisk at AIX level using the lspv command.

On node 1
{node1:root}/ # lspv
hdisk0          00ced22cf79098ff          rootvg          active
hdisk1          none                      None
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
case 2: LUNs provided by the EMC storage with EMC PowerPath installed as the multi-pathing driver.
Disks (LUNs) will be seen as hdiskpower at AIX level using the lspv command.

On node 1
{node1:root}/ # lspv
hdiskpower0     00ced22cf79098ff          rootvg          active
hdiskpower1     none                      None
hdiskpower2     none                      None
hdiskpower3     none                      None
hdiskpower4     none                      None
7.2.5.1 EMC PowerPath Setup Procedure
See PowerPath for AIX version 4.3 Installation & Administration Guide, P/N 300-001-683 for details
On node 1, and node 2
1. Install EMC ODM drivers and necessary filesets
5.2.0.1 from ftp://ftp.emc.com/pub/elab/aix/ODM_DEFINITIONS/EMC.AIX.5.2.0.1.tar.Z
install using smit install
2. remove any existing devices attached to the EMC
{node1:root}/ # rmdev -dl hdiskX
3. run /usr/lpp/EMC/Symmetrix/bin/emc_cfgmgr to detect devices
4. Install PowerPath version 4.3.0 minimum using smit install
5. register PowerPath
{node1:root}/ # emcpreg -install
6. initialize PowerPath devices
{node1:root}/ # powermt config
7. verify that all PowerPath devices are named consistently across all cluster nodes
{node1:root}/ # /usr/lpp/EMC/Symmetrix/bin/inq.aix64 | grep hdiskpower
compare results. Consistent naming is not required for ASM devices, but LUNs used
for the OCR and VOTE functions must have the same device names on all rac systems.
Identify two small luns to be used for OCR and voting
if the hdiskpowerX names for the OCR and VOTE devices are different, create a new
device for each of these functions as follows:
{node1:root}/ # mknod /dev/ocr c <major # of OCR LUN> <minor # of OCR LUN>
{node1:root}/ # mknod /dev/vote c <major # of VOTE LUN> <minor # of VOTE LUN>
Major and minor numbers can be seen using the command ls -al /dev/hdiskpower*
8. On all hdiskpower devices to be used by Oracle for ASM, voting, or the OCR, the reserve_lock
attribute must be set to "no"
{node1:root}/ # chdev -l hdiskpowerX -a reserve_lock=no
9. Verify the attribute is set
{node1:root}/ # lsattr -El hdiskpowerX
10. Set permissions on all hdiskpower drives to be used for ASM, voting, or the OCR as follows :
{node1:root}/ # chown oracle:dba /dev/rhdiskpowerX
{node1:root}/ # chmod 660 /dev/rhdiskpowerX
The Oracle Installer will change these permissions and ownership as necessary during the CRS install
process.
7.2.6 HITACHI storage and multipathing
With Hitachi, please refer to Hitachi to see which HDS storage is supported with RAC.
There are 3 cases when using Hitachi HDS storage :
case 1: LUNs provided by the HDS storage with IBM MPIO installed as the multi-pathing driver.
*** NOT SUPPORTED with all HDS storage, check with Hitachi ***
Disks (LUNs) will be seen as hdisk at AIX level using the lspv command.

On node 1
{node1:root}/ # lspv
hdisk0          00ced22cf79098ff          rootvg          active
hdisk1          none                      None
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
case 2: LUNs provided by the HDS storage with HDLM installed as the multi-pathing driver, used as raw devices.
*** NOT SUPPORTED ***
See Release Notes, 10g Release 2 (10.2) for AIX 5L Based Systems (64-Bit), B19074-04, April 2006.
Case 3: LUNs provided by the HDS storage with HDLM as the multi-pathing driver.
Disks will be seen as dlmfdrv at AIX level using the lspv command, and will be part of HDLM VGs (volume groups).

On node 1, with a single volume group:

{node1:root}/ # lspv
dlmfdrv0   00ced22cf79098ff   rootvg            active
dlmfdrv1   none               vg_asm            active
dlmfdrv2   none               vg_asm            active
dlmfdrv3   none               vg_asm            active
dlmfdrv4   none               vg_asm            active
dlmfdrv5   none               vg_asm            active

or, with one volume group per disk:

{node1:root}/ # lspv
dlmfdrv0   00ced22cf79098ff   rootvg            active
dlmfdrv1   none               vg_ocr_disk1      active
dlmfdrv2   none               vg_voting_disk1   active
dlmfdrv3   none               vg_asm_disk1      active
dlmfdrv4   none               vg_asm_disk2      active
dlmfdrv5   none               vg_asm_disk3      active

Single volume group variant - on node 1, create the logical volumes in vg_asm, set their ownership and
permissions, then vary the volume group off:

mklv -y lv_ocr_disk1 -T O -w n -s n -r n vg_asm 2
mklv -y lv_voting_disk1 -T O -w n -s n -r n vg_asm 2
mklv -y lv_asm_disk1 -T O -w n -s n -r n vg_asm 2
mklv -y lv_asm_disk2 -T O -w n -s n -r n vg_asm 2
mklv -y lv_asm_disk3 -T O -w n -s n -r n vg_asm 2

chown oracle:dba /dev/rlv_ocr_disk1 /dev/rlv_voting_disk1 /dev/rlv_asm_disk1 /dev/rlv_asm_disk2 /dev/rlv_asm_disk3
chmod 660 /dev/rlv_ocr_disk1 /dev/rlv_voting_disk1 /dev/rlv_asm_disk1 /dev/rlv_asm_disk2 /dev/rlv_asm_disk3

dlmvaryoffvg vg_asm

On node 2:
6) Identify which dlmfdrv corresponds to dlmfdrv1 from node1:
   dlmfdrv1 on node1 = dlmfdrv0 on node2
7) Import the volume group on node2. This will copy the vg/lv configuration that was made on node1:
   dlmimportvg -V 101 -y vg_asm dlmfdrv0
   dlmvaryonvg vg_asm
8) Ensure the vg will not get varyon'd at boot:
   dlmchvg -a n vg_asm

On node 1:
9) Enable the volume group on node1:
   dlmvaryonvg vg_asm

One-volume-group-per-disk variant - on node 1, create the volume groups (big VGs, 256 MB partitions, major
number 101) and logical volumes, and set ownership and permissions the same way:

mkvg -y vg_ocr1 -B -s 256 -V 101 dlmfdrv1
mkvg -y vg_vot1 -B -s 256 -V 101 dlmfdrv2
mkvg -y vg_asm1 -B -s 256 -V 101 dlmfdrv3
mkvg -y vg_asm2 -B -s 256 -V 101 dlmfdrv4
mkvg -y vg_asm3 -B -s 256 -V 101 dlmfdrv5

mklv -y lv_ocr_disk1 -T O -w n -s n -r n vg_ocr1 2
mklv -y lv_voting_disk1 -T O -w n -s n -r n vg_vot1 2
mklv -y lv_asm_disk1 -T O -w n -s n -r n vg_asm1 2
mklv -y lv_asm_disk2 -T O -w n -s n -r n vg_asm2 2
mklv -y lv_asm_disk3 -T O -w n -s n -r n vg_asm3 2

chown oracle:dba /dev/rlv_ocr_disk1 /dev/rlv_voting_disk1 /dev/rlv_asm_disk1 /dev/rlv_asm_disk2 /dev/rlv_asm_disk3
chmod 660 /dev/rlv_ocr_disk1 /dev/rlv_voting_disk1 /dev/rlv_asm_disk1 /dev/rlv_asm_disk2 /dev/rlv_asm_disk3

On node 2:
6) Identify which dlmfdrv on node2 corresponds to each dlmfdrv on node1:
   dlmfdrv1 on node1 = dlmfdrv0 on node2
   dlmfdrv2 on node1 = dlmfdrv1 on node2
   dlmfdrv3 on node1 = dlmfdrv2 on node2
   dlmfdrv4 on node1 = dlmfdrv3 on node2
   dlmfdrv5 on node1 = dlmfdrv4 on node2
7) Import the volume groups on node2:
   dlmimportvg -V 101 -y vg_ocr1 dlmfdrv0
   dlmimportvg -V 101 -y vg_vot1 dlmfdrv1
   dlmimportvg -V 101 -y vg_asm1 dlmfdrv2
   dlmimportvg -V 101 -y vg_asm2 dlmfdrv3
   dlmimportvg -V 101 -y vg_asm3 dlmfdrv4
8) Ensure the vgs will not get varyon'd at boot:
   dlmchvg -a n vg_ocr1
   dlmchvg -a n vg_vot1
   dlmchvg -a n vg_asm1
   dlmchvg -a n vg_asm2
   dlmchvg -a n vg_asm3

On node 1:
9) Enable the volume groups on node1:
   dlmvaryonvg vg_ocr1
   dlmvaryonvg vg_vot1
   dlmvaryonvg vg_asm1
   dlmvaryonvg vg_asm2
   dlmvaryonvg vg_asm3
7.2.7 Others: StorageTek, HP EVA storage and multipathing

For most storage solutions, please contact the providing company for the supported configurations, as concurrent
read/write access from all RAC nodes must be possible to implement a RAC solution. That means the possibility to
set the disk reserve_policy to no_reserve or an equivalent.
Comparing Oracle Real Application Clusters to Failover Clusters for Oracle Database (PDF) December 2006
Workload Management with Oracle Real Application Clusters (FAN, FCF, Load Balancing) (PDF) May 2005
Using Oracle Clusterware to Protect 3rd Party Applications (PDF) May 2005
Using Oracle Clusterware to Protect a Single Instance Oracle Database (PDF) November 2006
Using Oracle Clusterware to protect Oracle Application Server (PDF) November 2005
How to Build End to End Recovery and Workload Balancing for Your Applications 10g Release 1(PDF) Dec
2004
Oracle Database 10g Services (PDF) Nov 2003
Oracle Database 10g Release 2 Best Practices: Optimizing Availability During Unplanned Outages Using
Oracle Clusterware and RAC New!
HOWEVER, if customers still need to have HACMP, Oracle Clusterware can cohabit with HACMP.
Please check the latest certification status on Oracle Metalink (Certify).
Extract from Metalink as of March 3rd, 2008:
Product                 Certified With        Version   Status
AIX (53) 11gR1 64-bit   Oracle Clusterware    11g       Certified
AIX (53) 11gR1 64-bit   HACMP                 5.4.1     Certified
AIX (53) 11gR1 64-bit   HACMP                 5.3       Certified
HACMP V5.3 with PTF5, cluster.es.clvm installed and ifix for APAR IZ01809
RSCT (rsct.basic.rte) version 2.4.7.3 and ifix for APAR IZ01838 This APAR is integrated into V2.4.8.1
9 INSTALLATION STEPS

Prior to installing and using Oracle 11g Real Application Clusters, you must:

1. Prepare the infrastructure
   a. Hardware
   b. Storage
   c. Network
   d. SAN and network connectivity
   e. Operating system
When all that is done, you can proceed with the installation in the following order:

1. Install Oracle 11g Clusterware
   a. Apply the necessary patchset and patches
   b. Check that Oracle Clusterware is working properly

2. Install ...

4. Create database(s)
   a. Configure the database instances' local and remote listeners if required
   b. Change the necessary database instance parameters (processes, etc.)
   c. Create the database(s)' associated cluster services, and configure TAF (Transparent
      Application Failover) as required for your needs
For ALL servers, you MUST apply the Oracle prerequisites; those prerequisites are not
optional, BUT MANDATORY !!!

For some parameters, such as tuning settings, the values specified are the minimum required, and
might need to be increased depending on your needs !!!

PLEASE check the Oracle documentation for the latest updates, and MOSTLY:
- Oracle METALINK Notes
- User equivalences
- Etc...
10.1.1 Define Networks layout: Public, Virtual and Private Hostnames

We need 2 different networks, with associated network interfaces on each node participating in the RAC cluster:
- Public Network, to be used as the users network or a reserved network for application and database servers.
- Private Network, to be used as a reserved network for Oracle Clusterware and RAC.

MANDATORY: Network interfaces must have the same name, same subnet and same usage on all nodes !!!

For each node, we have to define:
Please make a table as follows to have a clear view of your RAC network architecture.

THE PUBLIC NODE NAME MUST BE THE NAME RETURNED BY THE hostname AIX COMMAND.

Public Hostname        | VIP Hostname (Virtual IP) | Private Hostname (RAC Interconnect) | Not Used
en? (Public Network)   | en? (Public Network)      | en? (Private Network)               | en?
Node Name / IP         | Node Name / IP            | Node Name / IP                      | Node Name / IP

Issue the AIX command hostname on each node to identify the default node name:
Public Hostname        | VIP Hostname (Virtual IP) | Private Hostname (RAC Interconnect) | Not Used
en? (Public Network)   | en? (Public Network)      | en? (Private Network)               | en?
node1 / 10.3.25.81     |                           |                                     |
node2 / 10.3.25.82     |                           |                                     |
The Oracle Clusterware VIP addresses and their corresponding node names must not be used on the network prior to
the Oracle Clusterware installation. Don't make any AIX alias on the public network interface; the Clusterware
installation will do it. Just reserve 1 VIP and its hostname per RAC node.

Public Hostname        | VIP Hostname (Virtual IP) | Private Hostname (RAC Interconnect) | Not Used
en? (Public Network)   | en? (Public Network)      | en? (Private Network)               | en?
node1 / 10.3.25.81     | node1-vip / 10.3.25.181   |                                     |
node2 / 10.3.25.82     | node2-vip / 10.3.25.182   |                                     |

The Oracle Clusterware VIPs and corresponding node names can be declared in the DNS, or at minimum in the
local hosts file.
10.1.2 Identify Network Interface cards

As root, issue the AIX command ifconfig -l to list the network interfaces on each node.

Result from node1:
{node1:root}/ # ifconfig -l
en0 en1 en2 lo0
{node1:root}/ #

Result from node2:
{node2:root}/ # ifconfig -l
en0 en1 en2 lo0
{node2:root}/ #
As root, issue the AIX command ifconfig -a to get the necessary details of the network interfaces on each node.
Within our infrastructure for the cookbook, we have the following layout:

Public Hostname (en0)   | VIP Hostname (Virtual IP, en0)  | Private Hostname (RAC Interconnect, en1) | Not Used (en2)
node1 / 10.3.25.81      | node1-vip / 10.3.25.181         | node1-rac / 10.10.25.81                  |
node2 / 10.3.25.82      | node2-vip / 10.3.25.182         | node2-rac / 10.10.25.82                  |
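To see details on a given network interface, a minimal sketch (using en0 as an example):

{node1:root}/ # ifconfig en0
{node1:root}/ # lsattr -El en0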
10.1.3 Update hosts file

Update/check the entries in the hosts file on each node. You should have entries like the following in /etc/hosts
on each node (see the sketch below).
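A sketch of the /etc/hosts entries matching the cookbook layout above:

# Public network
10.3.25.81     node1
10.3.25.82     node2
# Virtual IPs (Oracle Clusterware)
10.3.25.181    node1-vip
10.3.25.182    node2-vip
# Private network (RAC interconnect)
10.10.25.81    node1-rac
10.10.25.82    node2-rac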
10.1.4 Defining the default gateway on the public network interface

Now, if you want to set the default gateway on the public network interface, be careful about the impact it may
have if you already have a default gateway set on a different network interface, and multiple network interfaces
not used for Oracle Clusterware and RAC purposes.

First check whether a default gateway is set, using netstat -r on both nodes:
{node1:root}/ # netstat -r
Routing tables
Destination      Gateway          Flags    Refs     Use    If    Exp   Groups
...  (route entries for lo0, en0, en1 and en2)  ...
{node1:root}/ #
If not set: using route add, set the default gateway on the public network interface (en0 in our case), on all
nodes (see the sketch below).

The value 0 or the default keyword for the Destination parameter means that any packets sent to destinations not
previously defined and not on a directly connected network go through the default gateway. The 10.3.25.254 address
is that of the gateway chosen to be the default.
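A sketch of the corresponding command, run on all nodes:

{node1:root}/ # route add 0 10.3.25.254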
10.1.5 Configure Network Tuning Parameters

Verify that the network tuning parameters shown in the following table are set to the values shown, or to higher
values. The procedure following the table describes how to verify and set the values.

Network Tuning Parameter   Recommended Value
ipqmaxlen                  512
rfc1323                    1
sb_max                     1310720
tcp_recvspace              65536
tcp_sendspace              65536
udp_recvspace              655360
udp_sendspace              65536

Note on udp_recvspace: the recommended value of this parameter is 10 times the value of the udp_sendspace
parameter. The value must be less than the value of the sb_max parameter.

Note on udp_sendspace: this value is suitable for a default database installation. For production databases, the
minimum value for this parameter is 4 KB plus the value of the database DB_BLOCK_SIZE initialization parameter
multiplied by the value of the DB_MULTIBLOCK_READ_COUNT initialization parameter:
(DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB
To check values
{node1:root}/ # for i in ipqmaxlen rfc1323 sb_max tcp_recvspace tcp_sendspace
udp_recvspace udp_sendspace
do
no -a |grep $i
done
ipqmaxlen = 512
rfc1323 = 1
sb_max = 1310720
tcp_recvspace = 65535
tcp_sendspace = 65535
udp_recvspace = 655350
udp_sendspace = 65535
{node1:root}/ #
{node2:root}/ # for i in ipqmaxlen rfc1323 sb_max tcp_recvspace tcp_sendspace
udp_recvspace udp_sendspace
do
no -a |grep $i
done
ipqmaxlen = 512
rfc1323 = 1
sb_max = 1310720
tcp_recvspace = 65535
tcp_sendspace = 65535
udp_recvspace = 655350
udp_sendspace = 65535
{node2:root}/ #
To change the
current values to
required ones, if
necessary,
follow these steps :
1. If you must change the value of any parameter, enter the following command to
determine whether the system is running in compatibility mode:
# /usr/sbin/lsattr -E -l sys0 -a pre520tune
If the system is running in compatibility mode, the output is similar to the
following, showing that the value of the pre520tune attribute is enable:
pre520tune enable Pre-520 tuning compatibility mode True
By default, with AIX5L, compatibility mode is set to false !!!
Change it to true ONLY if necessary !!!
** if you want to enable the compatibility mode, issue the following command :
# chdev -l sys0 -a pre520tune=enable
2. THEN, if the system is running in compatibility mode, enter commands similar to the following to change the
parameter values:

# /usr/sbin/no -o parameter_name=value

For example:
# /usr/sbin/no -o udp_recvspace=655360

and make the values persistent across reboots by adding them to a startup script, for example:

if [ -f /usr/sbin/no ] ; then
    /usr/sbin/no -o udp_sendspace=65536
    /usr/sbin/no -o udp_recvspace=655360
    /usr/sbin/no -o tcp_sendspace=65536
    /usr/sbin/no -o tcp_recvspace=65536
    /usr/sbin/no -o rfc1323=1
    /usr/sbin/no -o sb_max=1310720
    /usr/sbin/no -o ipqmaxlen=512
fi

ELSE (if the system is not running in compatibility mode), set the values so that they persist. For the ipqmaxlen
parameter:

/usr/sbin/no -r -o ipqmaxlen=512

and for the other parameters:

/usr/sbin/no -p -o udp_sendspace=65536
/usr/sbin/no -p -o udp_recvspace=655360
/usr/sbin/no -p -o tcp_sendspace=65536
/usr/sbin/no -p -o tcp_recvspace=65536
/usr/sbin/no -p -o rfc1323=1
/usr/sbin/no -p -o sb_max=1310720
Note: If you modify the ipqmaxlen parameter, you must restart the system.
These commands modify the /etc/tunables/nextboot file, causing the attribute values to persist when the
system restarts.
To check the operating system level:

{node1:root}/ # oslevel -s
5300-07-01-0748

oslevel -s gives the level of the TL and the sub-level of the service pack: 5300-07-01-0748 means AIX5L 5.3 TL7 SP1.

If the operating system version is lower than the minimum required, upgrade your operating system to this level.
AIX 5L maintenance packages are available from the following Web site:
http://www-912.ibm.com/eserver/support/fixes/
10.2.1 Filesets Requirements for 11gRAC R1 / ASM (no HACMP)

AIX filesets required on ALL nodes for an 11g RAC Release 1 implementation with ASM !!!

Check that the required filesets are installed on the system (see the sketch below):
- bos.adt.base
- bos.adt.lib
- bos.adt.libm
- bos.perf.libperfstat
- bos.perf.perfstat
- bos.perf.proctools
- rsct.basic.rte
- rsct.compat.clients.rte
- xlC.aix50.rte 8.0.0.5
- xlC.rte 8.0.0.5

Specific filesets:
- For EMC Symmetrix: EMC.Symmetrix.aix.rte.5.2.0.1 or higher
- For EMC CLARiiON: EMC.CLARiiON.fcp.rte.5.2.0.1 or higher

Depending on the AIX level that you intend to install, verify that the required filesets are installed on the system,
and that xlC.aix50.rte and xlC.rte are at minimum releases 8.0.0.4 and 8.0.0.0.

If a fileset is not installed and committed, then install it. Refer to your operating system or software documentation
for information about installing filesets. If a fileset is only APPLIED, it's not mandatory to commit it.
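A minimal sketch to check the filesets listed above with lslpp:

{node1:root}/ # lslpp -l bos.adt.base bos.adt.lib bos.adt.libm \
                bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools \
                rsct.basic.rte rsct.compat.clients.rte xlC.aix50.rte xlC.rte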
10.2.2 APARs Requirements for 11gRAC R1 / ASM (no HACMP)

AIX patches (APARs) required on ALL nodes for an 11g RAC R1 implementation with ASM !!!
(Note: If the PTF is not downloadable, customers should request an efix through AIX customer support.)

APARs for AIX 5L version 5.3, Maintenance Level 6 or later:
- IY89080
- IY92037
- IY94343
- IZ01060 or efix for IZ01060
- IZ03260 or efix for IZ03260

To ensure that the system meets these requirements, determine whether each APAR is installed (see the sketch
below). If an APAR is not installed, download it from the following Web site and install it:
http://www-912.ibm.com/eserver/support/fixes/
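A minimal sketch to check one of the APARs listed above with instfix:

{node1:root}/ # /usr/sbin/instfix -i -k IY89080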
Internal disk >= 12 GB is required for the Oracle code (CRS_HOME, ASM_HOME, ORACLE_HOME).
This part will be detailed in the chapter "Required local disks (Oracle Clusterware, ASM and RAC software)" !!!

{node1:root}/ # df -k
Filesystem                  1024-blocks      Free  %Used    Iused %Iused  Mounted on
/dev/hd4                         262144    207816    21%    13591    23%  /
/dev/hd2                        4718592   2697520    43%    46201     7%  /usr
/dev/hd9var                      262144    233768    11%      565     2%  /var
/dev/hd3                        1310720   1248576     5%      255     1%  /tmp
/dev/hd1                         262144    261760     1%        5     1%  /home
/proc                                 -         -     -         -     -   /proc
/dev/hd10opt                     524288    283400    46%     5663     9%  /opt
/dev/oraclelv                  15728640  15506524     2%       73     1%  /oracle
/dev/crslv                      5242880   5124692     3%       41     1%  /crs
fas960c2:/vol/VolDistribJSC   276824064  71068384    75%   144273     2%  /distrib
{node1:root}/ #

{node2:root}/home # df -k
Filesystem                  1024-blocks      Free  %Used    Iused %Iused  Mounted on
/dev/hd4                         262144    208532    21%    13564    23%  /
/dev/hd2                        4718592   2697360    43%    46201     7%  /usr
/dev/hd9var                      262144    236060    10%      485     1%  /var
/dev/hd3                        2097152     29636    99%     4812    35%  /tmp
/dev/hd1                         262144    261760     1%        5     1%  /home
/proc                                 -         -     -         -     -   /proc
/dev/hd10opt                     524288    283436    46%     5663     9%  /opt
/dev/oraclelv                  15728640  15540012     2%       64     1%  /oracle
/dev/crslv                      5242880   5119460     3%       43     1%  /crs
fas960c2:/vol/VolDistribJSC   276824064  71068392    75%   144273     2%  /distrib
{node2:root}/home #
{node2:root}/home # lsps -a
Page Space   Physical Volume   Volume Group   Size   %Used   Active   Auto   Type
hd6          hdisk1            rootvg         ...    ...     ...      yes    lv
{node2:root}/home #
Temporary disk space: the Oracle Universal Installer requires up to 400 MB of free space in the /tmp directory.
To check the free temporary space available: df -k /tmp

{node1:root}/ # df -k /tmp
Filesystem       1024-blocks      Free  %Used
/dev/hd3             1310720   1248576     5%
{node1:root}/ #

{node2:root}/home # df -k /tmp
Filesystem       1024-blocks      Free  %Used
/dev/hd3             2097152     29636    99%
{node2:root}/home #
OPTION 2:
- CRS_HOME installation
- ASM_HOME installation
- ORACLE_HOME installation

For the cookbook purpose, and for ease of administration, we'll implement the first option with the crs, asm and
rdbms users.

We'll also create an oracle user to own /oracle ($ORACLE_BASE), which will be shared by asm (/oracle/asm) and
rdbms (/oracle/rdbms). This oracle user (UID=500) will be part of the following groups: oinstall, dba, oper, asm,
asmdba.
On node1:

mkgroup -'A' id='500' adms='root' oinstall
mkgroup -'A' id='501' adms='root' crs
mkgroup -'A' id='502' adms='root' dba
mkgroup -'A' id='503' adms='root' oper
mkgroup -'A' id='504' adms='root' asm
mkgroup -'A' id='505' adms='root' asmdba

mkuser id='500' pgrp='oinstall' groups='crs,dba,oper,asm,asmdba,staff' home='/home/oracle' oracle
mkuser id='501' pgrp='oinstall' groups='crs,dba,oper,staff' home='/home/crs' crs
mkuser id='502' pgrp='oinstall' groups='dba,oper,asm,asmdba,staff' home='/home/asm' asm
mkuser id='503' pgrp='oinstall' groups='dba,oper,staff' home='/home/rdbms' rdbms

THEN run the same mkgroup and mkuser commands on node2, keeping the same UIDs and GIDs on all nodes.
The crs, asm and rdbms users must have oinstall as their primary group and dba as a secondary group.
Verification: check that the file /etc/group contains lines such as the following (the numbers could be different).
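A sketch of what such lines could look like, given the groups and users created above (the membership lists are illustrative):

oinstall:!:500:oracle,crs,asm,rdbms
crs:!:501:oracle,crs
dba:!:502:oracle,crs,asm,rdbms
oper:!:503:oracle,crs,asm,rdbms
asm:!:504:oracle,asm
asmdba:!:505:oracle,asm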
Set a password for the crs, asm and rdbms users on each node (see the sketch below).
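A minimal sketch, as root on each node:

{node1:root}/ # passwd crs
{node1:root}/ # passwd asm
{node1:root}/ # passwd rdbms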
10.5.1 Configure Shell Limits

Verify that the shell limits shown in the following table are set to the values shown. The procedure following the
table describes how to verify and set the values.

Shell Limit (As Shown in smit)   Recommended Value for root user   Recommended Value for crs, asm and rdbms users
Soft FILE size                   -1 (Unlimited)                    -1 (Unlimited)
Soft CPU time                    -1 (Unlimited)                    -1 (Unlimited)
Soft DATA segment                -1 (Unlimited)                    -1 (Unlimited)
Soft STACK size                  -1 (Unlimited)                    -1 (Unlimited)

1. Enter the command: smit chuser
2. In the User NAME field, enter the user name of the Oracle software owner, for example crs, asm and rdbms.
3. Scroll down the list and verify that the value shown for the soft limits listed in the previous table is -1.
   If necessary, edit the existing value.
4. When you have finished making changes, press F10 to exit.
OR, for the root, crs, asm and rdbms users on each node, check through the ulimit command:

{node1:root}/ # ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        4194304
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) unlimited
{node1:root}/ #

{node1:root}/ # su - crs
{node1:crs}/crs/11.1.0 # ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        4194304
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) unlimited
{node1:crs}/crs/11.1.0 #

{node1:root}/ # su - asm
{node1:asm}/oracle/asm/11.1.0 # ulimit -a
(same output as above)

{node1:root}/ # su - rdbms
{node1:rdbms}/oracle/rdbms/11.1.0 # ulimit -a
(same output as above)
Add the following lines to the /etc/security/limits file on each node : Do set unlimited with -1
default:
fsize = -1
core = -1
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
Or set at minimum Oracle specified values as follow :
default:
fsize = -1
core = -1
cpu = -1
data = 512000
rss = 512000
stack = 512000
nofiles = 2000
11gRAC/ASM/AIX
86 of 393
10.5.2 Set crs, asm and rdbms users capabilities

On node1 :

{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE crs
{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE asm
{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE rdbms
{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

On node2 :

{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE crs
{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE asm
{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE rdbms
{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

To check the user capabilities on each node, with the crs user for example :

{node1:root}/crs/11.1.0/bin # lsuser -f crs | grep capabilities
        capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
{node1:root}/crs/11.1.0/bin #

{node2:root}/crs/11.1.0/bin # lsuser -f crs | grep capabilities
        capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
{node2:root}/crs/11.1.0/bin #
10.5.3 Set NCARGS parameter
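As a minimal sketch (assuming the standard AIX sys0 attribute), NCARGS can be checked and raised as follows, as root on each node :

{node1:root}/ # lsattr -El sys0 -a ncargs      # current value, in 4 KB blocks
{node1:root}/ # chdev -l sys0 -a ncargs=128    # raise it (128 is a value commonly used for Oracle installs)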
10.5.4 Configure System Configuration Parameters

Verify that the maximum number of processes allowed per user (maxuproc) is set to 16384 or greater.

Note :
For production systems, this value should be at least 128 plus the sum of the PROCESSES and
PARALLEL_MAX_SERVERS initialization parameters for each database running on the system.

Check the value, then edit and modify it if needed, as shown below.
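A minimal sketch of the check and the change (assuming the standard AIX sys0 attribute), as root on each node :

{node1:root}/ # lsattr -El sys0 -a maxuproc        # check the current value
{node1:root}/ # chdev -l sys0 -a maxuproc=16384    # modify the value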
10.5.5 lru_file_repage setting

Content of /etc/tunables/nextboot, for node1 :
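As a sketch of what the relevant stanza typically looks like (assuming the usual vmo tunable; setting it with "vmo -p -o lru_file_repage=0" also records it in /etc/tunables/nextboot) :

vmo:
        lru_file_repage = "0"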
10.5.6 Asynchronous I/O setting

Asynchronous I/O must be available on each node (in the lsattr output, the trailing "True" column simply indicates user-settable attributes).
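A minimal sketch of checking and enabling AIO on AIX 5L (assuming the standard aio0 pseudo-device), as root on each node :

{node1:root}/ # lsattr -El aio0                          # check attributes (autoconfig, minservers, maxservers, ...)
{node1:root}/ # chdev -l aio0 -a autoconfig=available    # make AIO available at each reboot
{node1:root}/ # mkdev -l aio0                            # bring it online now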
Before installing Oracle Clusterware, the ASM software and the Real Application Clusters software, you must configure
user equivalence for the crs, asm and rdbms users on all cluster nodes (from node1 to node1, node1 to node2,
node2 to node1 and node2 to node2).
There are two types of user equivalence implementation : RSH or SSH.
When SSH is not available, the installer uses the rsh and rcp commands instead of ssh and scp.
You have to choose one or the other, but don't implement both at the same time.
Usually, customers will implement SSH : if SSH is started and used, do configure SSH.
10.6.1 RSH implementation

Set up user equivalence for the oracle and root accounts, to enable the rsh, rcp and rlogin commands.
You should have the entries on each node in : /etc/hosts, /etc/hosts.equiv, and $HOME/.rhosts in the root/oracle home directories.

/etc/hosts.equiv, on each node :

{node1:root}/ # pg /etc/hosts.equiv
....
node1 root
node2 root
node1 crs
node2 crs
....
{node1:root}/ #

$HOME/.rhosts : update/check the entries in the .rhosts file on each node, for the root user :

{node1:root}/ # su root
{node1:root}/ # cd
{node1:root}/ # pg $HOME/.rhosts
node1 root
node2 root
{node1:root}/ #

{node1:root}/ # su - crs
{node1:crs}/crs # pg .rhosts
node1 root
node2 root
node1 crs
node2 crs
{node1:crs}/crs #

{node1:root}/ # su - asm
{node1:asm}/oracle/asm # pg .rhosts
node1 root
node2 root
node1 asm
node2 asm
{node1:asm}/oracle/asm #

{node1:root}/ # su - rdbms
{node1:rdbms}/oracle/rdbms # pg .rhosts
node1 root
node2 root
node1 rdbms
node2 rdbms
{node1:rdbms}/oracle/rdbms #
Note : it is possible, but not advised for security reasons, to put a "+" in the hosts.equiv and .rhosts files.

Test that the user equivalence is correctly set up (node2 is the secondary cluster machine), logged on node1 as root.
Test for the crs, asm and rdbms users !!!
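A minimal sketch of the test for the crs user (repeat for asm and rdbms; /tmp/testfile is just a hypothetical file name, and each command must succeed without any password prompt) :

{node1:root}/ # su - crs
{node1:crs}/home/crs # rsh node2 date
{node1:crs}/home/crs # rcp /tmp/testfile node2:/tmp/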
10.6.2 SSH implementation

Before you install and use Oracle Real Application Clusters, you must configure secure shell (SSH) for the crs, asm
and rdbms users on all cluster nodes. Oracle Universal Installer uses the ssh and scp commands during installation to
run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands
do not prompt for a password.
Note:
This section describes how to configure OpenSSH version 3. If SSH is not available, then
Oracle Universal Installer attempts to use rsh and rcp instead.
To determine if SSH is running, enter the following command:
$ ps -ef | grep sshd
If SSH is running, then the response to this command is process ID numbers. To find out more
about SSH, enter the following command:
$ man ssh
For each user (crs, asm and rdbms), repeat the next steps :
Configuring SSH on Cluster Member Nodes
To configure SSH, you must first create RSA and DSA keys on each cluster node, and then copy the keys from all
cluster node members into an authorized keys file on each node. To do this task, complete the following steps:
Create RSA and DSA keys on each node, completing the following steps on each node :
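A minimal sketch of the key creation (run as the crs, asm or rdbms user being configured; accept the default file locations, and optionally enter a pass phrase) :

$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
$ /usr/bin/ssh-keygen -t dsa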
Add keys to an authorized key file, completing the following steps.
Note : repeat this process for each node in the cluster !!!
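As a sketch of the first steps of this procedure (appending the local public keys generated above into the authorized_keys file, before it is copied around) :

$ cd ~/.ssh
$ cat id_rsa.pub id_dsa.pub >> authorized_keys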
3. Use SCP (Secure Copy) or SFTP (Secure FTP) to copy the authorized_keys file
to the Oracle user .ssh directory on a remote node. The following example is with
SCP, on a node called node2, where the Oracle user path is /home/oracle:
[oracle@node1 .ssh]scp authorized_keys node2:/home/oracle/.ssh/
4. Repeat steps 2 and 3 for each cluster node member. When you have added the keys
from each cluster node member to the authorized_keys file on the last node you
want to have as a cluster node member, use SCP to copy the complete
authorized_keys file back to each cluster node member.
Note:
The Oracle user's ~/.ssh/authorized_keys file on every node must
contain the contents from all of the ~/.ssh/id_rsa.pub and
~/.ssh/id_dsa.pub files that you generated on all cluster nodes.
5. Change the permissions on the Oracle user's /.ssh/authorized_keys file on all
cluster nodes:
$ chmod 600 ~/.ssh/authorized_keys
At this point, if you use ssh to log in to or run a command on another node, you are prompted for the pass
phrase that you specified when you created the DSA key.
1. On the system where you want to run Oracle Universal Installer, log in as the
oracle user.
2. Enter the following commands:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
3. At the prompts, enter the pass phrase for each key that you generated.
If you have configured SSH correctly, then you can now use the ssh or scp
commands without being prompted for a password or a pass phrase.
4. If you are on a remote terminal, and the local node has only one visual (which is
typical), then use the following syntax to set the DISPLAY environment variable:
Bourne, Korn, and Bash shells
$ export DISPLAY=hostname:0
C shell:
$ setenv DISPLAY hostname:0
For example, if you are using the Bash shell, and if your hostname is node1, then enter
the following command:
$ export DISPLAY=node1:0
5. To test the SSH configuration, enter the following commands from the same
terminal session, testing the configuration of each cluster node, where nodename1,
nodename2, and so on, are the names of nodes in the cluster:
$ ssh nodename1 date
$ ssh nodename2 date
These commands should display the date set on each node.
If any node prompts for a password or pass phrase, then verify that the
~/.ssh/authorized_keys file on that node contains the correct public keys.
If you are using a remote client to connect to the local node, and you see a message
similar to "Warning: No xauth data; using fake authentication data for X11 forwarding,"
then this means that your authorized keys file is configured correctly, but your ssh
configuration has X11 forwarding enabled. To correct this, proceed to step 6.
Note:
The first time you use SSH to connect to a node from a particular system,
you may see a message similar to the following:
The authenticity of host 'node1 (140.87.152.153)' can't be established.
RSA key fingerprint is 7z:ez:e7:f6:f4:f2:4f:8f:9z:79:85:62:20:90:92:z9.
Are you sure you want to continue connecting (yes/no)?
Enter yes at the prompt to continue. You should not see this message again
when you connect from this system to that node.
If you see any other messages or text, apart from the date, then the
installation can fail. Make any changes required to ensure that only the date
is displayed when you enter these commands.
You should ensure that any parts of login scripts that generate any output, or
ask any questions, are modified so that they act only when the shell is an
interactive shell.
6. To ensure that X11 forwarding will not cause the installation to fail, create a
user-level SSH client configuration file for the Oracle software owner user, as
follows:
a. Using any text editor, edit or create the ~oracle/.ssh/config file.
b. Make sure that the ForwardX11 attribute is set to no. For example:
Host *
ForwardX11 no
7. You must run Oracle Universal Installer from this session or remember to
repeat steps 2 and 3 before you start Oracle Universal Installer from a
different terminal session.
Preventing Oracle Clusterware Installation Errors Caused by stty Commands
During an Oracle Clusterware installation, Oracle Universal Installer uses SSH (if available) to run commands and copy
files to the other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause
installation errors if they contain stty commands.
To avoid this problem, you must modify these files to suppress all output on STDERR, as in the following examples :
C shell:
test -t 0
if ($status == 0) then
stty intr ^C
endif
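Bourne, Korn and Bash shells (the same guard is used later in this guide for the user profiles) :

if [ -t 0 ]; then
  stty intr ^C
fi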
Time Synchronisation - MANDATORY

To ensure that RAC operates efficiently, you must synchronize the system time on all cluster nodes.
Oracle recommends that you use xntpd for this purpose : xntpd is a complete implementation of the Network Time
Protocol (NTP) version 3 standard, and is more accurate than timed.

# vi /etc/ntp.conf
server ip_address1
server ip_address2
server ip_address3

In our case :
server 10.3.25.101
server 10.3.25.102
server 10.3.25.103
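A minimal sketch of starting the daemon (assuming xntpd is managed by the SRC; the -x option makes the daemon slew the clock rather than step it, which is preferable on RAC nodes) :

{node1:root}/ # startsrc -s xntpd -a "-x"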
To be done on each node :
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
if [ -t 0 ]; then
stty intr ^C
fi
Notes :
On AIX, when using multithreaded applications or LAN-free, especially when running on machines with multiple
CPUs, we strongly recommend setting AIXTHREAD_SCOPE=S in the environment before starting the application,
for better performance and more solid scheduling.
For example :
export AIXTHREAD_SCOPE=S
Setting AIXTHREAD_SCOPE=S means that user threads created with default attributes will be placed into
system-wide contention scope. If a user thread is created with system-wide contention scope, it is bound to a
kernel thread and it is scheduled by the kernel. The underlying kernel thread is not shared with any other user
thread.
Clusterware and Database patchset needed :
Applying a patchset to the database software depends on whether your application has been tested with it; it's
your choice, your decision. Best is to implement the latest patchset for the Oracle Clusterware and ASM software.

Also check for :
- Extra Clusterware patches needed
- Extra ASM patches needed
- Extra Database patches needed
11 PREPARING STORAGE

Storage is to be prepared for :

Local disks on each node to install the Oracle Clusterware, ASM and RAC software - MANDATORY
Shared disk for the Oracle ASM instance parameters (SPFILE) - recommended but not MANDATORY
11.1 Required local disks (Oracle Clusterware, ASM and RAC software)

The Oracle code (Clusterware, ASM and RAC) can be located on an internal disk and propagated to the other machines
of the cluster. The Oracle Universal Installer manages the cluster-wide installation, which is done only once. Regular file
systems are used for the Oracle code.
On each node, create a volume group oraclevg, or use available space from rootvg to create the
Logical Volumes.

On node 1

Create a 6GB file system /crs in the previous volume group (large file enabled) :

{node1:root}/ # crfs -v jfs2 -g'oraclevg' -a size='6G' -m'/crs' -A'yes' -p'rw'
{node1:root}/ # mount /crs
{node1:root}/ # chown -R crs:oinstall /crs

Create a 12GB file system /oracle in the previous volume group (large file enabled) :

{node1:root}/ # crfs -v jfs2 -g'oraclevg' -a size='12G' -m'/oracle' -A'yes' -p'rw'
{node1:root}/ # mount /oracle
{node1:root}/ # chown oracle:oinstall /oracle
{node1:root}/ # mkdir /oracle/asm /oracle/rdbms
{node1:root}/ # chown -R asm:oinstall /oracle/asm
{node1:root}/ # chown -R rdbms:oinstall /oracle/rdbms

THEN repeat the same commands on node 2.
11.2 Oracle Clusterware Disks (OCR and Voting Disks)

The Oracle Cluster Registry (OCR) stores cluster and database configuration information. You must have a shared
raw device containing at least 256 MB of free space that is accessible from all of the nodes in the cluster.
OCR protected by external mechanism.

The Oracle Clusterware voting disk contains cluster membership information and arbitrates cluster ownership
among the nodes of your cluster in the event of network failures. You must have a shared raw device containing at
least 256 MB of free space that is accessible from all of the nodes in the cluster.
Voting disk protected by external mechanism.
11.2.1 Required LUNs

Disks      LUNs ID Number   LUNs Size
OCR 1      L1               300 MB
OCR 2      L2               300 MB
Voting 1   L3               300 MB
Voting 2   L4               300 MB
Voting 3   L5               300 MB
11.2.2 How to identify if a LUN is used or not ?

List of available hdisks on node 1 (rootvg is hdisk0 !!!) :

{node1:root}/ # lspv
hdisk0    00cd7d3e6d2fa8db    rootvg    active
hdisk1    none                None
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
hdisk7    none                Nsd
hdisk8    none                Nsd1
hdisk9    none                Nsd2
hdisk10   none                Nsd3
hdisk11   none                Nsd4
hdisk12   none                None
hdisk13   none                None
hdisk14   none                None
{node1:root}/ #

No PVID is assigned, apart from the rootvg hdisk.
When an hdisk is marked rootvg, the header of the disk might show non-zero content, for example a line such as
|..}>m/..........| among otherwise empty |................| lines.

Use the following command to read the hdisk header (lquerypv is used the same way later in this guide) :
lquerypv -h /dev/rhdisk0

Do the same check on node 2.
When an hdisk is not marked rootvg but None, it's important to check that it's not used at all.

Use the following command to read the hdisk header : lquerypv -h /dev/rhdisk2

If all lines are ONLY full of 0 (|................| everywhere), THEN the hdisk is free to be used.
If it's not the case, check first which LUN is mapped to the hdisk (next pages), and if it's the LUN you
should use, you must then dd (zero) the /dev/rhdisk2 raw disk.

Do the same check on node 2.
BUT CAREFUL !!! Any hdisk not marked None in the lspv output may also show a blank
header (a GPFS-used hdisk for example), so make sure the hdisk really has the None value in the lspv output.

List of all hdisks not marked rootvg or None on node 1 :

{node1:root}/ # lspv
...
hdisk7    none    Nsd
hdisk8    none    Nsd1
hdisk9    none    Nsd2
hdisk10   none    Nsd3
hdisk11   none    Nsd4
...
{node1:root}/ #

Use the following command to read the hdisk header : lquerypv -h /dev/rhdisk7

All lines are ONLY full of 0, BUT the hdisk is NOT free to be used, because it's used by IBM GPFS in our example.
11.2.3 Register LUNs at AIX level

Before registration of the LUNs :

On node 1
{node1:root}/ # lspv
hdisk0    00cd7d3e6d2fa8db    rootvg    active
{node1:root}/ #

On node 2
{node2:root}/ # lspv
hdisk1    00cd7d3e7349e441    rootvg    active
{node2:root}/ #

You need to register and identify the LUNs at AIX level : the LUNs will be mapped to hdisks and registered in the AIX ODM.
As root on each node, update the ODM repository using the following command : "cfgmgr"

After registration of the LUNs in the ODM through the cfgmgr command :

On node 1
{node1:root}/ # lspv
hdisk0    00cd7d3e6d2fa8db    rootvg    active
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
{node1:root}/ #

On node 2
{node2:root}/ # lspv
hdisk1    00cd7d3e7349e441    rootvg    active
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
{node2:root}/ #
11.2.4 Identify LUNs and corresponding hdisk on each node

Disks      LUNs ID Number
OCR 1      L1
OCR 2      L2
Voting 1   L3
Voting 2   L4
Voting 3   L5

Identify the disks available for the OCR and Voting disks, on each node, knowing the LUN numbers.
Knowing the LUN numbers to use, we now need to identify the corresponding hdisks on each node of
the cluster, as detailed in the following table :

Disks      LUNs ID Number   Node 1 corresponding hdisk   Node 2 corresponding hdisk   Node ... corresponding hdisk
OCR 1      L1               ?                            ?                            ?
OCR 2      L2               ?                            ?                            ?
Voting 1   L3               ?                            ?                            ?
Voting 2   L4               ?                            ?                            ?
Voting 3   L5               ?                            ?                            ?

Identify the hdisks by momentarily assigning a PVID to each hdisk not having one.
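A minimal sketch of this momentary-PVID technique (assuming hdisk2; the PVID shown by lspv can then be matched between nodes, and must be cleared afterwards) :

{node1:root}/ # chdev -l hdisk2 -a pv=yes      # assign a PVID
{node1:root}/ # lspv | grep hdisk2             # note the PVID, compare with the other nodes
{node1:root}/ # chdev -l hdisk2 -a pv=clear    # remove the PVID when done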
List of available hdisks on node 1 (rootvg is hdisk0 !!!) :

{node1:root}/ # lspv
hdisk0    00cd7d3e6d2fa8db    rootvg    active
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
{node1:root}/ #

Using the lscfg command, identify the hdisks in the list generated by lspv on node1.
Identify the LUN ID assigned to each hdisk, using the lscfg -vl hdisk? command :

On node 1
{node1:root}/ # for i in 2 3 4 5 6
> do
> lscfg -vl hdisk$i
> done
hdisk2   U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L1000000000000   1722-600 (600) Disk Array Device
hdisk3   U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L2000000000000   1722-600 (600) Disk Array Device
hdisk4   U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L3000000000000   1722-600 (600) Disk Array Device
hdisk5   U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L4000000000000   1722-600 (600) Disk Array Device
hdisk6   U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L5000000000000   1722-600 (600) Disk Array Device
{node1:root}/ #

The LUN ID appears after the "-L" in the location code : L1000000000000 is LUN 1 (L1), L2000000000000 is LUN 2 (L2), and so on.
Disks      LUNs ID Number   Node 1 corresponding hdisk   Node 2 corresponding hdisk   Node ... corresponding hdisk
OCR 1      L1               hdisk2                       ?                            hdisk?
OCR 2      L2               hdisk3                       ?                            hdisk?
Voting 1   L3               hdisk4                       ?                            hdisk?
Voting 2   L4               hdisk5                       ?                            hdisk?
Voting 3   L5               hdisk6                       ?                            hdisk?
On node 2
{node2:root}/ # lspv
hdisk1    00cd7d3e7349e441    rootvg    active
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
{node2:root}/ #

Using the lscfg command, identify the hdisks in the list generated by lspv on node2 : the same five
1722-600 (600) Disk Array Devices are found, with LUN IDs L1 to L5 mapped to hdisk2 to hdisk6.

Disks      LUNs ID Number   Node 1 corresponding hdisk   Node 2 corresponding hdisk   Node ... corresponding hdisk
OCR 1      L1               hdisk2                       hdisk2                       hdisk?
OCR 2      L2               hdisk3                       hdisk3                       hdisk?
Voting 1   L3               hdisk4                       hdisk4                       hdisk?
Voting 2   L4               hdisk5                       hdisk5                       hdisk?
Voting 3   L5               hdisk6                       hdisk6                       hdisk?
11.2.5 Removing reserve lock policy on hdisks from each node

Why is it MANDATORY to change the reserve policy on the disks from each node accessing the same
LUNs ? Because with the default reservation, the first node to open a disk takes a SCSI reserve lock
on it, and the other nodes can no longer access that LUN concurrently.

On IBM storage (ESS, FAStT, DSxxxx) : change the reserve_policy attribute to no_reserve :
chdev -l hdisk? -a reserve_policy=no_reserve

On HDS storage with HDLM driver, and no disks in a Volume Group : change the dlmrsvlevel attribute to no_reserve :
chdev -l dlmfdrv? -a dlmrsvlevel=no_reserve

Change the reserve_policy attribute for each disk dedicated to the OCR and voting disks, on each node of
the cluster. In our case, we have an IBM storage !!!

On node 1
{node1:root}/ # for i in 2 3 4 5 6
> do
> chdev -l hdisk$i -a reserve_policy=no_reserve
> done
hdisk2 changed
hdisk3 changed
hdisk4 changed
hdisk5 changed
hdisk6 changed
{node1:root}/ #

On node 2
{node2:root}/ # for i in 2 3 4 5 6
> do
> chdev -l hdisk$i -a reserve_policy=no_reserve
> done
hdisk2 changed
hdisk3 changed
hdisk4 changed
hdisk5 changed
hdisk6 changed
{node2:root}/ #

To verify, on each hdisk :
{node1:root}/ # lsattr -El hdisk2 | grep reserve_policy
reserve_policy   no_reserve   Reserve Policy   True
{node1:root}/ #
11.2.6 Identify Major and Minor number of hdisk on each node

As described before, disks might have different names from one node to another : for example,
hdisk2 on node1 might be hdisk3 on node2, etc.

Disks      LUNs ID Number   Device Name         Node 1 hdisk   Major Num.   Minor Num.   Node 2 hdisk   Major Num.   Minor Num.
OCR 1      L1               /dev/ocr_disk1      hdisk2         ?            ?            hdisk2         ?            ?
OCR 2      L2               /dev/ocr_disk2      hdisk3         ?            ?            hdisk3         ?            ?
Voting 1   L3               /dev/voting_disk1   hdisk4         ?            ?            hdisk4         ?            ?
Voting 2   L4               /dev/voting_disk2   hdisk5         ?            ?            hdisk5         ?            ?
Voting 3   L5               /dev/voting_disk3   hdisk6         ?            ?            hdisk6         ?            ?

The next steps explain the procedure to identify the Major and Minor numbers necessary to create the
virtual device pointing to the right hdisk on each node.
On node1 :

{node1:root}/ # for i in 2 3 4 5 6
> do
> ls -la /dev/*hdisk$i
> done
brw-------   1 root   system   21,  6 Jan 11 15:58 /dev/hdisk2
crw-------   1 root   system   21,  6 Jan 11 15:58 /dev/rhdisk2
brw-------   1 root   system   21,  7 Jan 11 15:58 /dev/hdisk3
crw-------   1 root   system   21,  7 Jan 11 15:58 /dev/rhdisk3
brw-------   1 root   system   21,  8 Jan 11 15:58 /dev/hdisk4
crw-------   1 root   system   21,  8 Jan 11 15:58 /dev/rhdisk4
brw-------   1 root   system   21,  9 Jan 11 15:58 /dev/hdisk5
crw-------   1 root   system   21,  9 Jan 11 15:58 /dev/rhdisk5
brw-------   1 root   system   21, 10 Jan 14 21:18 /dev/hdisk6
crw-------   1 root   system   21, 10 Jan 14 21:18 /dev/rhdisk6
{node1:root}/ #

Values such as "21, 6" give the major number (21, the type of disk) and the minor number (6, the order number).

Disks      LUNs ID Number   Device Name         Node 1 hdisk   Major Num.   Minor Num.
OCR 1      L1               /dev/ocr_disk1      hdisk2         21           6
OCR 2      L2               /dev/ocr_disk2      hdisk3         21           7
Voting 1   L3               /dev/voting_disk1   hdisk4         21           8
Voting 2   L4               /dev/voting_disk2   hdisk5         21           9
Voting 3   L5               /dev/voting_disk3   hdisk6         21           10
On node2 :

{node2:root}/ # for i in 2 3 4 5 6
> do
> ls -la /dev/*hdisk$i
> done
brw-------   1 root   system   21,  6 Jan 11 16:12 /dev/hdisk2
crw-------   1 root   system   21,  6 Jan 11 16:12 /dev/rhdisk2
brw-------   1 root   system   21,  7 Jan 11 16:12 /dev/hdisk3
crw-------   1 root   system   21,  7 Jan 11 16:12 /dev/rhdisk3
brw-------   1 root   system   21,  8 Jan 11 16:12 /dev/hdisk4
crw-------   1 root   system   21,  8 Jan 11 16:12 /dev/rhdisk4
brw-------   1 root   system   21,  9 Jan 11 16:12 /dev/hdisk5
crw-------   1 root   system   21,  9 Jan 11 16:12 /dev/rhdisk5
brw-------   1 root   system   21, 10 Jan 14 22:20 /dev/hdisk6
crw-------   1 root   system   21, 10 Jan 14 22:20 /dev/rhdisk6
{node2:root}/ #
Disks      LUNs ID Number   Device Name         Node 1 hdisk   Major Num.   Minor Num.   Node 2 hdisk   Major Num.   Minor Num.
OCR 1      L1               /dev/ocr_disk1      hdisk2         21           6            hdisk2         21           6
OCR 2      L2               /dev/ocr_disk2      hdisk3         21           7            hdisk3         21           7
Voting 1   L3               /dev/voting_disk1   hdisk4         21           8            hdisk4         21           8
Voting 2   L4               /dev/voting_disk2   hdisk5         21           9            hdisk5         21           9
Voting 3   L5               /dev/voting_disk3   hdisk6         21           10           hdisk6         21           10

Remember that disks may have different names from one node to another : for example, L1 could correspond
to hdisk2 on node1, and hdisk1 on node2, etc.
11.2.7 Create Unique Virtual Device to access same LUN from each node

THEN, from the table above, for node1 we need to create the virtual devices :

# mknod /dev/ocr_disk1 c 21 6
# mknod /dev/ocr_disk2 c 21 7
# mknod /dev/voting_disk1 c 21 8
# mknod /dev/voting_disk2 c 21 9
# mknod /dev/voting_disk3 c 21 10

From the table, for node2 we also need to create the virtual devices, so that the same names
/dev/ocr_disk1, /dev/ocr_disk2, /dev/voting_disk1, /dev/voting_disk2 and /dev/voting_disk3 exist on each node :

# mknod /dev/ocr_disk1 c 21 6
# mknod /dev/ocr_disk2 c 21 7
# mknod /dev/voting_disk1 c 21 8
# mknod /dev/voting_disk2 c 21 9
# mknod /dev/voting_disk3 c 21 10

By chance, the major and minor numbers are the same on both nodes for the corresponding hdisks, but they
could be different.
11.2.8 Set Ownership / Permissions on Virtual Devices

{node1:root}/ # chown root:oinstall /dev/ocr_disk1
{node1:root}/ # chown root:oinstall /dev/ocr_disk2
{node1:root}/ # chown crs:dba /dev/voting_disk1
{node1:root}/ # chown crs:dba /dev/voting_disk2
{node1:root}/ # chown crs:dba /dev/voting_disk3

Then the same on node2.

{node1:root}/ # chmod 640 /dev/ocr_disk1
{node1:root}/ # chmod 640 /dev/ocr_disk2
{node1:root}/ # chmod 660 /dev/voting_disk1
{node1:root}/ # chmod 660 /dev/voting_disk2
{node1:root}/ # chmod 660 /dev/voting_disk3

Then the same on node2.
"21, 6"
21, 6 Jan 11 15:58 /dev/hdisk2
21, 6 Feb 06 17:43 /dev/ocr_disk1
21, 6 Jan 11 15:58 /dev/rhdisk2
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
6
6
6
7
7
7
8
8
8
9
9
9
9
9
9
Jan
Feb
Jan
Jan
Feb
Jan
Jan
Jan
Feb
Jan
Jan
Feb
Jan
Jan
Feb
11
06
11
11
06
11
11
11
06
11
11
06
11
11
06
15:58
17:43
15:58
15:58
17:43
15:58
15:58
15:58
17:43
15:58
15:58
17:43
15:58
15:58
17:43
/dev/hdisk2
/dev/ocr_disk1
/dev/rhdisk2
/dev/hdisk3
/dev/ocr_disk2
/dev/rhdisk3
/dev/hdisk4
/dev/rhdisk4
/dev/voting_disk1
/dev/hdisk5
/dev/rhdisk5
/dev/voting_disk2
/dev/hdisk5
/dev/rhdisk5
/dev/voting_disk3
Then on node2 :
Checking the
modifications
After Oracle
clusterware
installation,
After Oracle
clusterware
installation,
ownership
and
permissions
of virtual
devices may
change.
"21, 6"
21, 6 Jan 11 15:58 /dev/hdisk2
21, 6 Feb 06 17:43 /dev/ocr_disk1
21, 6 Jan 11 15:58 /dev/rhdisk2
11gRAC/ASM/AIX
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
21,
6
6
6
7
7
7
8
8
8
9
9
9
9
9
9
Jan
Feb
Jan
Jan
Feb
Jan
Jan
Jan
Feb
Jan
Jan
Feb
Jan
Jan
Feb
11
06
11
11
06
11
11
11
06
11
11
06
11
11
06
15:58
17:43
15:58
15:58
17:43
15:58
15:58
15:58
17:43
15:58
15:58
17:43
15:58
15:58
17:43
/dev/hdisk2
/dev/ocr_disk1
/dev/rhdisk2
/dev/hdisk3
/dev/ocr_disk2
/dev/rhdisk3
/dev/hdisk4
/dev/rhdisk4
/dev/voting_disk1
/dev/hdisk5
/dev/rhdisk5
/dev/voting_disk2
/dev/hdisk5
/dev/rhdisk5
/dev/voting_disk3
121 of 393
11.2.9 Formatting the virtual devices (zeroing)
On node 1
{node1:root}/ # for i in 1 2
> do
> dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
{node1:root}/ #
{node1:root}/ # for i in 1 2 3
> do
> dd if=/dev/zero of=/dev/voting_disk$i bs=1024 count=300
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
{node1:root}/ #
Verify concurrent read/write access to the devices by running the dd command from each node at the same time :
on node1
on node2
On node 1
{node1:root}/ # for i in 1 2
> do
> dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300 &
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
On node 2
{node2:root}/ # for i in 1 2
> do
> dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300 &
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
Do the same for the voting disks (/dev/voting_disk1 to /dev/voting_disk3).
11.3.1 Required LUNs

The following shows the LUN mapping for the nodes used in our cluster.
These IDs will help us to identify which hdisk will be used.

Disks            LUNs ID Number        LUNs Size
ASM spfile       L0                    100 MB
Disk 1 for ASM   L6                    4 GB
Disk 2 for ASM   L7                    4 GB
Disk 3 for ASM   L8                    4 GB
Disk 4 for ASM   L9                    4 GB
Disk 5 for ASM   LA (meaning LUN 10)   4 GB
Disk 6 for ASM   LB (meaning LUN 11)   4 GB
Disk 7 for ASM   LC (meaning LUN 12)   4 GB
Disk 8 for ASM   LD (meaning LUN 13)   4 GB

11.3.2 How to identify if a LUN is used or not ?
As for the OCR and Voting LUNs (see 11.2.2), read the hdisk headers on each node with : lquerypv -h /dev/rhdisk?

When an hdisk is marked rootvg, the header shows non-zero content.
When an hdisk is marked None, it's important to check that it's not used at all : if all lines are ONLY full of 0,
the hdisk is free to be used. If it's not the case, check first which LUN is mapped to the hdisk (next pages), and
if it's the LUN you should use, you must then dd (zero) the /dev/rhdisk? raw disk. Do the same check on node 2.
11.3.3 Register LUNs at AIX level

Before registration of the new LUNs for the ASM disks :

On node 1
{node1:root}/ # lspv
hdisk0    00cd7d3e6d2fa8db    rootvg    active
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
{node1:root}/ #

On node 2
{node2:root}/ # lspv
hdisk1    00cd7d3e7349e441    rootvg    active
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
{node2:root}/ #

As root on each node, update the ODM repository using the following command : "cfgmgr"
You need to register and identify the LUNs at AIX level : the LUNs will be mapped to hdisks and registered in the AIX ODM.

After registration of the LUNs in the ODM through the cfgmgr command :

On node 1
{node1:root}/ # lspv
hdisk0    00cd7d3e6d2fa8db    rootvg    active
hdisk1    none                None
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
hdisk7    none                None
hdisk8    none                None
hdisk9    none                None
hdisk10   none                None
hdisk11   none                None
hdisk12   none                None
hdisk13   none                None
hdisk14   none                None
{node1:root}/ #

On node 2
{node2:root}/ # lspv
hdisk0    none                None
hdisk1    00cd7d3e7349e441    rootvg    active
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
hdisk7    none                None
hdisk8    none                None
hdisk9    none                None
hdisk10   none                None
hdisk11   none                None
hdisk12   none                None
hdisk13   none                None
hdisk14   none                None
{node2:root}/ #
11.3.4 Identify LUNs and corresponding hdisk on each node

We know the LUNs to use : L0 (ASM spfile) and L6 to LD (ASM disks). The corresponding hdisks on
node 1, node 2 (and any further node) now have to be identified.

Identify the disks available for the ASM disks and the ASM spfile disk, on each node, knowing the LUN numbers.
Identify the hdisks by momentarily assigning a PVID to each hdisk not having one.
List of available hdisks on node 1 (rootvg is hdisk0 !!!) :

{node1:root}/ # lspv
hdisk0    00cd7d3e6d2fa8db    rootvg    active
hdisk1    none                None
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
hdisk7    none                None
hdisk8    none                None
hdisk9    none                None
hdisk10   none                None
hdisk11   none                None
hdisk12   none                None
hdisk13   none                None
hdisk14   none                None
{node1:root}/ #

Using the lscfg command, identify the hdisks in the list generated by lspv on node1.
Identify the LUN ID assigned to each hdisk, using the lscfg -vl hdisk? command :

On node 1
{node1:root}/ # for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14
> do
> lscfg -vl hdisk$i
> done
hdisk1    U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L0               3552 (500) Disk Array Device
hdisk2    U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L1000000000000   3552 (500) Disk Array Device
hdisk3    U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L2000000000000   3552 (500) Disk Array Device
hdisk4    U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L3000000000000   3552 (500) Disk Array Device
hdisk5    U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L4000000000000   3552 (500) Disk Array Device
hdisk6    U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L5000000000000   3552 (500) Disk Array Device
hdisk7    U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L6000000000000   3552 (500) Disk Array Device
hdisk8    U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L7000000000000   3552 (500) Disk Array Device
hdisk9    U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L8000000000000   3552 (500) Disk Array Device
hdisk10   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L9000000000000   3552 (500) Disk Array Device
hdisk11   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LA000000000000   3552 (500) Disk Array Device
hdisk12   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LB000000000000   3552 (500) Disk Array Device
hdisk13   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LC000000000000   3552 (500) Disk Array Device
hdisk14   U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LD000000000000   3552 (500) Disk Array Device
{node1:root}/ #

THEN we get the following table :

Disks            LUNs Number   Node 1    Node 2   Node ...
ASM spfile       L0            hdisk1    ?        hdisk?
Disk 1 for ASM   L6            hdisk7    ?        hdisk?
Disk 2 for ASM   L7            hdisk8    ?        hdisk?
Disk 3 for ASM   L8            hdisk9    ?        hdisk?
Disk 4 for ASM   L9            hdisk10   ?        hdisk?
Disk 5 for ASM   LA            hdisk11   ?        hdisk?
Disk 6 for ASM   LB            hdisk12   ?        hdisk?
Disk 7 for ASM   LC            hdisk13   ?        hdisk?
Disk 8 for ASM   LD            hdisk14   ?        hdisk?
List of available hdisks on node 2 : rootvg is hdisk1, which is not the same hdisk as on node 1. The same can
happen for every hdisk : on both nodes, the same hdisk name might not be attached to the same LUN.

{node2:root}/ # lspv
hdisk0    none                None
hdisk1    00cd7d3e7349e441    rootvg    active
hdisk2    none                None
hdisk3    none                None
hdisk4    none                None
hdisk5    none                None
hdisk6    none                None
hdisk7    none                None
hdisk8    none                None
hdisk9    none                None
hdisk10   none                None
hdisk11   none                None
hdisk12   none                None
hdisk13   none                None
hdisk14   none                None
{node2:root}/ #

Using the lscfg command, identify the hdisks in the list generated by lspv on node2.

THEN we get the following table :

Disks            LUNs Number   Node 1    Node 2    Node ...
ASM spfile       L0            hdisk1    hdisk0    hdisk?
Disk 1 for ASM   L6            hdisk7    hdisk6    hdisk?
Disk 2 for ASM   L7            hdisk8    hdisk7    hdisk?
Disk 3 for ASM   L8            hdisk9    hdisk8    hdisk?
Disk 4 for ASM   L9            hdisk10   hdisk9    hdisk?
Disk 5 for ASM   LA            hdisk11   hdisk10   hdisk?
Disk 6 for ASM   LB            hdisk12   hdisk12   hdisk?
Disk 7 for ASM   LC            hdisk13   hdisk13   hdisk?
Disk 8 for ASM   LD            hdisk14   hdisk14   hdisk?
11.3.5 Removing reserve lock policy on hdisks from each node

As for the OCR and Voting disks, set up the reserve_policy on the ASM spfile and ASM hdisks, on each node.
Example for one hdisk : issue the command lsattr -E -l hdisk7 to visualize all the attributes of hdisk7.

On IBM storage (ESS, FAStT, DSxxxx) : change the reserve_policy attribute to no_reserve :
chdev -l hdisk? -a reserve_policy=no_reserve

On HDS storage with HDLM driver, and no disks in a Volume Group : change the dlmrsvlevel attribute to no_reserve :
chdev -l dlmfdrv? -a dlmrsvlevel=no_reserve

Change the reserve_policy attribute for each disk dedicated to ASM, on each node of the cluster :

On node 1
{node1:root}/ # for i in 1 7 8 9 10 11 12 13 14
> do
> chdev -l hdisk$i -a reserve_policy=no_reserve
> done
hdisk1 changed
...
{node1:root}/ #

On node 2
{node2:root}/ # for i in 0 6 7 8 9 10 12 13 14
> do
> chdev -l hdisk$i -a reserve_policy=no_reserve
> done
hdisk0 changed
...
{node2:root}/ #

As described before, disks might have different names from one node to another : for example, hdisk7 on node1
corresponds to hdisk6 on node2.
11.3.6 Identify Major and Minor number of hdisk on each node

Disks            LUNs ID   Device Name (seen on node1 (1) and node2 (2))   Node 1 hdisk   Node 2 hdisk
ASM spfile disk  L0        /dev/ASMspf_disk                                hdisk1         hdisk0
Disk 1 for ASM   L6        /dev/rhdisk?                                    hdisk7         hdisk6
Disk 2 for ASM   L7        /dev/rhdisk?                                    hdisk8         hdisk7
Disk 3 for ASM   L8        /dev/rhdisk?                                    hdisk9         hdisk8
Disk 4 for ASM   L9        /dev/rhdisk?                                    hdisk10        hdisk9
Disk 5 for ASM   LA        /dev/rhdisk?                                    hdisk11        hdisk10
Disk 6 for ASM   LB        /dev/rhdisk?                                    hdisk12        hdisk12
Disk 7 for ASM   LC        /dev/rhdisk?                                    hdisk13        hdisk13
Disk 8 for ASM   LD        /dev/rhdisk?                                    hdisk14        hdisk14

We need to get the major and minor number of each hdisk on node1, issuing the command : ls -la /dev/*hdisk?

On node1 :
{node1:root}/ # for i in 1 7 8 9 10 11 12 13 14
> do
> ls -la /dev/*hdisk$i
> done
brw-------   1 root   system   21,  5 Mar 12 12:18 /dev/hdisk1
crw-------   1 root   system   21,  5 Mar 12 12:18 /dev/rhdisk1
brw-------   1 root   system   21, 11 Mar 07 10:31 /dev/hdisk7
crw-------   1 root   system   21, 11 Mar 07 10:31 /dev/rhdisk7
brw-------   1 root   system   21, 12 Mar 07 10:31 /dev/hdisk8
crw-------   1 root   system   21, 12 Mar 07 10:31 /dev/rhdisk8
brw-------   1 root   system   21, 13 Mar 07 10:31 /dev/hdisk9
crw-------   1 root   system   21, 13 Mar 07 10:31 /dev/rhdisk9
brw-------   1 root   system   21, 14 Mar 07 10:31 /dev/hdisk10
crw-------   1 root   system   21, 14 Mar 07 10:31 /dev/rhdisk10
brw-------   1 root   system   21, 15 Mar 07 10:31 /dev/hdisk11
crw-------   1 root   system   21, 15 Mar 07 10:31 /dev/rhdisk11
brw-------   1 root   system   21, 16 Mar 07 10:31 /dev/hdisk12
crw-------   1 root   system   21, 16 Mar 07 10:31 /dev/rhdisk12
brw-------   1 root   system   21, 17 Mar 27 16:13 /dev/hdisk13
crw-------   1 root   system   21, 17 Mar 27 16:13 /dev/rhdisk13
brw-------   1 root   system   21, 18 Mar 27 16:13 /dev/hdisk14
crw-------   1 root   system   21, 18 Mar 27 16:13 /dev/rhdisk14
{node1:root}/ #

To fill in the following table : taking the result of the command for hdisk1, we get 21 as major number and 5 as minor number.

Disks            LUNs ID   Device Name        Node 1 hdisk   Major Num.   Minor Num.   Node 2 hdisk
ASM spfile disk  L0        /dev/ASMspf_disk   hdisk1         21           5            hdisk0
Disk 1 for ASM   L6        /dev/rhdisk?       hdisk7         21           11           hdisk6
Disk 2 for ASM   L7        /dev/rhdisk?       hdisk8         21           12           hdisk7
Disk 3 for ASM   L8        /dev/rhdisk?       hdisk9         21           13           hdisk8
Disk 4 for ASM   L9        /dev/rhdisk?       hdisk10        21           14           hdisk9
Disk 5 for ASM   LA        /dev/rhdisk?       hdisk11        21           15           hdisk10
Disk 6 for ASM   LB        /dev/rhdisk?       hdisk12        21           16           hdisk12
Disk 7 for ASM   LC        /dev/rhdisk?       hdisk13        21           17           hdisk13
Disk 8 for ASM   LD        /dev/rhdisk?       hdisk14        21           18           hdisk14
We need to get the major and minor number of each hdisk on node2, issuing the command : ls -la /dev/*hdisk?

On node2 :
{node2:root}/ # for i in 0 6 7 8 9 10 12 13 14
> do
> ls -la /dev/*hdisk$i
> done
brw-------   1 root   system   21,  5 Mar 12 12:17 /dev/hdisk0
crw-------   1 root   system   21,  5 Mar 12 12:17 /dev/rhdisk0
brw-------   1 root   system   21, 11 Mar 07 10:32 /dev/hdisk6
crw-------   1 root   system   21, 11 Mar 07 10:32 /dev/rhdisk6
brw-------   1 root   system   21, 12 Mar 07 10:32 /dev/hdisk7
crw-------   1 root   system   21, 12 Mar 07 10:32 /dev/rhdisk7
brw-------   1 root   system   21, 13 Mar 07 10:32 /dev/hdisk8
crw-------   1 root   system   21, 13 Mar 07 10:32 /dev/rhdisk8
brw-------   1 root   system   21, 14 Mar 07 10:32 /dev/hdisk9
crw-------   1 root   system   21, 14 Mar 12 13:55 /dev/rhdisk9
brw-------   1 root   system   21, 15 Mar 07 10:32 /dev/hdisk10
crw-------   1 root   system   21, 15 Mar 07 10:32 /dev/rhdisk10
brw-------   1 root   system   21, 16 Mar 07 10:32 /dev/hdisk12
crw-------   1 root   system   21, 16 Mar 07 10:32 /dev/rhdisk12
brw-------   1 root   system   21, 17 Mar 27 16:13 /dev/hdisk13
crw-------   1 root   system   21, 17 Mar 27 16:13 /dev/rhdisk13
brw-------   1 root   system   21, 18 Mar 27 16:13 /dev/hdisk14
crw-------   1 root   system   21, 18 Mar 27 16:13 /dev/rhdisk14
{node2:root}/ #

To fill in the following table : taking the result of the command for hdisk0, we get 21 as major number and 5 as minor number.

Disks            LUNs ID   Device Name        Node 1 hdisk   Major Num.   Minor Num.   Node 2 hdisk   Major Num.   Minor Num.
ASM spfile disk  L0        /dev/ASMspf_disk   hdisk1         21           5            hdisk0         21           5
Disk 1 for ASM   L6        /dev/rhdisk?       hdisk7         21           11           hdisk6         21           11
Disk 2 for ASM   L7        /dev/rhdisk?       hdisk8         21           12           hdisk7         21           12
Disk 3 for ASM   L8        /dev/rhdisk?       hdisk9         21           13           hdisk8         21           13
Disk 4 for ASM   L9        /dev/rhdisk?       hdisk10        21           14           hdisk9         21           14
Disk 5 for ASM   LA        /dev/rhdisk?       hdisk11        21           15           hdisk10        21           15
Disk 6 for ASM   LB        /dev/rhdisk?       hdisk12        21           16           hdisk12        21           16
Disk 7 for ASM   LC        /dev/rhdisk?       hdisk13        21           17           hdisk13        21           17
Disk 8 for ASM   LD        /dev/rhdisk?       hdisk14        21           18           hdisk14        21           18
11.3.7 Create Unique Virtual Device to access same LUN from each node

As disks could have different names from one node to another (for example, L0 corresponds to hdisk1 on node1,
and could have been hdisk0 or another name on node2, etc.) :

For the ASM spfile disk, it is mandatory to create a virtual device if the hdisk name is different on each node. Even
if the hdisk name happens to be the same on 2 nodes, adding an extra node could introduce a different hdisk name
for the same LUN on the new node, so it is best advised to create a virtual device for the ASM spfile disk !!!

THEN from the following table, for the ASM spfile disk :

Disks            LUNs ID   Device Name        Node 1 hdisk   Major Num.   Minor Num.   Node 2 hdisk   Major Num.   Minor Num.
ASM spfile disk  L0        /dev/ASMspf_disk   hdisk1         21           5            hdisk0         21           5
Disk 1 for ASM   L6        /dev/rhdisk?       hdisk7         21           11           hdisk6         21           11
Disk 2 for ASM   L7        /dev/rhdisk?       hdisk8         21           12           hdisk7         21           12
Disk 3 for ASM   L8        /dev/rhdisk?       hdisk9         21           13           hdisk8         21           13
Disk 4 for ASM   L9        /dev/rhdisk?       hdisk10        21           14           hdisk9         21           14
Disk 5 for ASM   LA        /dev/rhdisk?       hdisk11        21           15           hdisk10        21           15
Disk 6 for ASM   LB        /dev/rhdisk?       hdisk12        21           16           hdisk12        21           16
Disk 7 for ASM   LC        /dev/rhdisk?       hdisk13        21           17           hdisk13        21           17
Disk 8 for ASM   LD        /dev/rhdisk?       hdisk14        21           18           hdisk14        21           18
OPTION 1 : using the /dev/rhdisk? devices directly, we don't need to create virtual devices. We'll just have to set the
right user ownership and unix read/write permissions.
For this option, move to the next chapter and follow option 1.

Or

OPTION 2 : creating a virtual device per ASM disk, with the same name on each node :

LUNs ID   Device Name         Node 1 hdisk   Major Num.   Minor Num.   Node 2 hdisk   Major Num.   Minor Num.
L0        /dev/ASMspf_disk    hdisk1         21           5            hdisk0         21           5
L6        /dev/ASM_Disk1      hdisk7         21           11           hdisk6         21           11
L7        /dev/ASM_Disk2      hdisk8         21           12           hdisk7         21           12
L8        /dev/ASM_Disk3      hdisk9         21           13           hdisk8         21           13
L9        /dev/ASM_Disk4      hdisk10        21           14           hdisk9         21           14
LA        /dev/ASM_Disk5      hdisk11        21           15           hdisk10        21           15
LB        /dev/ASM_Disk6      hdisk12        21           16           hdisk12        21           16
LC        /dev/ASM_Disk7      hdisk13        21           17           hdisk13        21           17
LD        /dev/ASM_Disk8      hdisk14        21           18           hdisk14        21           18

By chance, the major and minor numbers are the same on both nodes for the corresponding hdisks, but they could be different.

Create the /dev/ASM_Disk? virtual devices, on node1 :

{node1:root}/ # mknod /dev/ASM_Disk1 c 21 11
{node1:root}/ # mknod /dev/ASM_Disk2 c 21 12
{node1:root}/ # mknod /dev/ASM_Disk3 c 21 13
{node1:root}/ # mknod /dev/ASM_Disk4 c 21 14
.....

And the same on node2, with the minor numbers found on node2 :

{node2:root}/ # mknod /dev/ASM_Disk1 c 21 11
{node2:root}/ # mknod /dev/ASM_Disk2 c 21 12
{node2:root}/ # mknod /dev/ASM_Disk3 c 21 13
{node2:root}/ # mknod /dev/ASM_Disk4 c 21 14
.....
11.3.8 Set Ownership / Permissions on Virtual Devices
OPTION 1 : using the /dev/rhdisk? devices directly.

LUNs ID   Device Name seen on node1 (1) and node2 (2)   Node 1 hdisk   Node 2 hdisk
L0        /dev/ASMspf_disk                              hdisk1         hdisk0
L6        /dev/rhdisk7 on 1  --- /dev/rhdisk6 on 2      hdisk7         hdisk6
L7        /dev/rhdisk8 on 1  --- /dev/rhdisk7 on 2      hdisk8         hdisk7
L8        /dev/rhdisk9 on 1  --- /dev/rhdisk8 on 2      hdisk9         hdisk8
L9        /dev/rhdisk10 on 1 --- /dev/rhdisk9 on 2      hdisk10        hdisk9
LA        /dev/rhdisk11 on 1 --- /dev/rhdisk10 on 2     hdisk11        hdisk10
LB        /dev/rhdisk12 on 1 --- /dev/rhdisk12 on 2     hdisk12        hdisk12
LC        /dev/rhdisk13 on 1 --- /dev/rhdisk13 on 2     hdisk13        hdisk13
LD        /dev/rhdisk14 on 1 --- /dev/rhdisk14 on 2     hdisk14        hdisk14

We just need to set asm:dba ownership on the /dev/rhdisk? devices mapped to the LUNs for ASM,
and to set 660 read/write permissions on these rhdisk? devices.

THEN set the ownership of the devices to asm:dba. On the first node, as root user :

{node1:root}/ # for i in 7 8 9 10 11 12 13 14
> do
> chown asm:dba /dev/rhdisk$i
> done
{node1:root}/ #

On node 2 :
{node2:root}/ # for i in 6 7 8 9 10 12 13 14
> do
> chown asm:dba /dev/rhdisk$i
> done
{node2:root}/ #

THEN set the read/write permissions to 660 :

{node1:root}/ # for i in 7 8 9 10 11 12 13 14
> do
> chmod 660 /dev/rhdisk$i
> done
{node1:root}/ #

{node2:root}/ # for i in 6 7 8 9 10 12 13 14
> do
> chmod 660 /dev/rhdisk$i
> done
{node2:root}/ #
Checking the modifications on node1 (check also on the second node) :

crw-rw----   1 asm   dba   21, 11 Mar 07 10:31 /dev/rhdisk7
crw-rw----   1 asm   dba   21, 12 Mar 07 10:31 /dev/rhdisk8
crw-rw----   1 asm   dba   21, 13 Mar 07 10:31 /dev/rhdisk9
crw-rw----   1 asm   dba   21, 14 Mar 07 10:31 /dev/rhdisk10
crw-rw----   1 asm   dba   21, 15 Mar 07 10:31 /dev/rhdisk11
crw-rw----   1 asm   dba   21, 16 Mar 07 10:31 /dev/rhdisk12
crw-rw----   1 asm   dba   21, 17 Mar 07 10:31 /dev/rhdisk13
OPTION 2 : using the created /dev/ASM_Disk? virtual devices.

LUNs ID   Device Name         Node 1 hdisk   Major Num.   Minor Num.   Node 2 hdisk   Major Num.   Minor Num.
L0        /dev/ASMspf_disk    hdisk1         21           5            hdisk0         21           5
L6        /dev/ASM_Disk1      hdisk7         21           11           hdisk6         21           11
L7        /dev/ASM_Disk2      hdisk8         21           12           hdisk7         21           12
L8        /dev/ASM_Disk3      hdisk9         21           13           hdisk8         21           13
L9        /dev/ASM_Disk4      hdisk10        21           14           hdisk9         21           14
LA        /dev/ASM_Disk5      hdisk11        21           15           hdisk10        21           15
LB        /dev/ASM_Disk6      hdisk12        21           16           hdisk12        21           16
LC        /dev/ASM_Disk7      hdisk13        21           17           hdisk13        21           17

THEN set the ownership of the created virtual devices to asm:dba :

{node1:root}/ # for i in 1 2 3 4 5 6 7 8
> do
> chown asm:dba /dev/ASM_Disk$i
> done
{node1:root}/ #

{node2:root}/ # for i in 1 2 3 4 5 6 7 8
> do
> chown asm:dba /dev/ASM_Disk$i
> done
{node2:root}/ #

THEN set the read/write permissions of the created virtual devices to 660 :

{node1:root}/ # for i in 1 2 3 4 5 6 7 8
> do
> chmod 660 /dev/ASM_Disk$i
> done
{node1:root}/ #

{node2:root}/ # for i in 1 2 3 4 5 6 7 8
> do
> chmod 660 /dev/ASM_Disk$i
> done
{node2:root}/ #

Then check the modifications on each node (ls -la /dev/ASM_Disk*). Remember that after the Oracle
Clusterware installation, ownership and permissions may change.
11.3.9 Formatting the virtual devices (zeroing)

Now we need to format the virtual devices, or rhdisks. In both options, zeroing the rhdisk is sufficient :

On node 1
{node1:root}/ # for i in 7 8 9 10 11 12 13 14
> do
> dd if=/dev/zero of=/dev/rhdisk$i bs=8192 count=25000 &
> done
25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.
...

On node 2
{node2:root}/ # for i in 6 7 8 9 10 12 13 14
> do
> dd if=/dev/zero of=/dev/rhdisk$i bs=8192 count=25000 &
> done
25000+0 records in.
25000+0 records out.
...

Verify concurrent read/write access to the devices by running the dd commands above from each node
(node1 and node2) at the same time.
11.3.10 Removing assigned PVID on hdisk

A PVID MUST NOT BE SET on the hdisks used for ASM, and should not be set for the OCR, voting and ASM spfile
disks !!!!
From the following table, make sure that no PVID is assigned, on each node, to the hdisks mapped to the LUNs :

Disks            LUNs ID   Device Name         Node 1 hdisk   Major Num.   Minor Num.   Node 2 hdisk   Major Num.   Minor Num.
OCR 1            L1        /dev/ocr_disk1      hdisk2         21           6            hdisk2         21           6
OCR 2            L2        /dev/ocr_disk2      hdisk3         21           7            hdisk3         21           7
Voting 1         L3        /dev/voting_disk1   hdisk4         21           8            hdisk4         21           8
Voting 2         L4        /dev/voting_disk2   hdisk5         21           9            hdisk5         21           9
Voting 3         L5        /dev/voting_disk3   hdisk6         21           10           hdisk6         21           10
ASM spfile disk  L0        /dev/asmspf_disk    hdisk1         21           5            hdisk0         21           5
Disk 1 for ASM   L6        /dev/rhdisk?        hdisk7         21           11           hdisk6         21           11
Disk 2 for ASM   L7        /dev/rhdisk?        hdisk8         21           12           hdisk7         21           12
Disk 3 for ASM   L8        /dev/rhdisk?        hdisk9         21           13           hdisk8         21           13
Disk 4 for ASM   L9        /dev/rhdisk?        hdisk10        21           14           hdisk9         21           14
Disk 5 for ASM   LA        /dev/rhdisk?        hdisk11        21           15           hdisk10        21           15
Disk 6 for ASM   LB        /dev/rhdisk?        hdisk12        21           16           hdisk12        21           16
Disk 7 for ASM   LC        /dev/rhdisk?        hdisk13        21           17           hdisk13        21           17
Disk 8 for ASM   LD        /dev/rhdisk?        hdisk14        21           18           hdisk14        21           18

To remove a PVID from an hdisk, we will use the chdev command. The PVID must be removed from the hdisk
on each node.
IMPORTANT !!!!! Don't remove the PVID from hdisks which are not yours !!!!
Check with the lspv command, as root on each node, whether PVIDs are still assigned or not !!!
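A minimal sketch of the PVID removal (assuming hdisk7; run for each concerned hdisk, on each node) :

{node1:root}/ # chdev -l hdisk7 -a pv=clear
hdisk7 changed
{node1:root}/ # lspv | grep hdisk7      # the PVID column must now show "none"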
OCR / Voting disks

!!! In ANY case, DON'T assign a PVID to the OCR / Voting disks once Oracle Clusterware has been installed,
whether in test or in production !!! Assigning a PVID will erase the hdisk header, with the risk of losing its content.

AFTER the CRS installation : all hdisks prepared for the OCR and voting disks have the dba or oinstall group assigned.

How to identify the hdisks used as OCR and voting disks :

For the OCR (here "21, 6", i.e. hdisk2 / /dev/ocr_disk1) :

{node1:crs}/crs/11.1.0/bin -> lquerypv -h /dev/rhdisk2 | grep 'z{|}'
00000010   C2BA0000 00001000 00012BFF 7A7B7C7D  |..........+.z{|}|
{node1:crs}/crs/11.1.0/bin ->

OR

{node1:crs}/crs/11.1.0/bin -> lquerypv -h /dev/rhdisk2 | grep '00820000 FFC00000 00000000 00000000'
00000000   00820000 FFC00000 00000000 00000000  |................|
{node1:crs}/crs/11.1.0/bin ->
8"
8 Mar 07 10:31 /dev/hdisk4
8 Mar 07 10:31 /dev/rhdisk4
8 Apr 05 14:13 /dev/voting_disk1
{node1:crs}/crs/11.1.0/bin ->lquerypv
00000010
A4120000 00000200 00095FFF
{node1:root}/ #
{node1:crs}/crs/11.1.0/bin ->lquerypv
FFC00000 00000000 00000000'
00000000
00220000 FFC00000 00000000
{node1:root}/ #
-h /dev/rhdisk4|grep 'z{|}'
7A7B7C7D |.........._.z{|}|
-h /dev/rhdisk4|grep '00220000
00000000
|."..............|
|."..............|
|.........._.z{|}|
|................|
|................|
|................|
|................|
|................|
|................|
|................|
|................|
|................|
|................|
|................|
|................|
|................|
|................|
OR
11gRAC/ASM/AIX
146 of 393
How to free an hdisk at AIX level, when the hdisk is no longer used for the OCR or a voting disk, or needs to be
reset after a failed CRS installation ? You must reset the hdisk header carrying the OCR or voting disk stamp;
after the reset, the lquerypv header dump shows only zeros (|................| on every line).
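A minimal sketch of the reset, reusing the zeroing technique shown earlier in this guide (assuming the disk to free is rhdisk4; double-check the device first, this is destructive) :

{node1:root}/ # dd if=/dev/zero of=/dev/rhdisk4 bs=8192 count=25000
{node1:root}/ # lquerypv -h /dev/rhdisk4     # must now show only zeros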
11.5.2 ASM disks

!!! In ANY case, DON'T assign a PVID to the ASM disks once Oracle Clusterware has been installed,
whether in test or in production !!! Assigning a PVID will erase the hdisk header, with the risk of losing its content.
AFTER the ASM installation, and the ASM diskgroup creation : all hdisks prepared for ASM are owned by the asm
user and the dba group. On node1, ls -la shows /dev/rhdisk7 to /dev/rhdisk14 (and, if option 2 was used,
/dev/ASM_Disk1 to /dev/ASM_Disk8) with this ownership.

THEN, for example with /dev/ASM_Disk1, we use the major/minor numbers to identify the equivalent rhdisk.
How to identify the hdisks used by ASM : read the header with lquerypv.

{node1:root}/ # lquerypv -h /dev/rhdisk2
00000000   00820101 00000000 80000001 D12A3D5B  |.............=[.|
00000010   00000000 00000000 00000000 00000000  |................|
00000020   4F52434C 4449534B 00000000 00000000  |ORCLDISK........|
00000030   00000000 00000000 00000000 00000000  |................|
00000040   0A100000 00010203 41534D44 425F4752  |........ASMDB_GR|
00000050   4F55505F 30303031 00000000 00000000  |OUP_0001........|
00000060   00000000 00000000 41534D44 425F4752  |........ASMDB_GR|
00000070   4F555000 00000000 00000000 00000000  |OUP.............|
00000080   00000000 00000000 41534D44 425F4752  |........ASMDB_GR|
00000090   4F55505F 30303031 00000000 00000000  |OUP_0001........|
000000A0   00000000 00000000 00000000 00000000  |................|
000000B0   00000000 00000000 00000000 00000000  |................|
000000C0   00000000 00000000 01F5874B ED6CE000  |...........K.l..|
000000D0   01F588CA 150BA800 02001000 00100000  |................|
000000E0   0001BC80 00001400 00000002 00000001  |................|
000000F0   00000002 00000002 00000000 00000000  |................|
{node1:root}/ #

ORCLDISK stands for an Oracle ASM disk; ASMDB_GROUP / ASMDB_GROUP_0001 stand for the ASM disk group
used for the ASMDB database we have created in our example (the one you will create later).

A NON-used ASM disk queried the same way on the hdisk header will return nothing more than lines full of 0.
Another example, with a disk member of the ASMDB_FLASHRECOVERY disk group : the header shows the
|ORCLDISK........| stamp and the strings ASMDB_FLASHRECOVERY_0000 / ASMDB_FLASHRECOVERY in the
same positions as above.

How to free an hdisk at AIX level, when the hdisk is no longer used by ASM, or needs to be reset after a failed
installation ? You must reset the hdisk header carrying the ASM disk stamp (dd zeroing, as for the OCR and
voting disks); after the reset, the header dump shows only zeros.

Assigning a PVID to an hdisk used by ASM would overwrite this disk header in the same way !!!!
Understanding and Using Cluster Verification Utility

References, for the Oracle Clusterware and Oracle Real Application Clusters installation :

Introduction to Installing and Configuring Oracle Clusterware and Oracle Real Application Clusters
http://download-uk.oracle.com/docs/cd/B19306_01/install.102/b14201/intro.htm#i1026198

Oracle Clusterware and Oracle Real Application Clusters Pre-Installation Procedures
http://download-uk.oracle.com/docs/cd/B19306_01/install.102/b14201/part2.htm
12.1.2 Using CVU to Determine if Installation Prerequisites are Complete

First run rootpre.sh as root, on each node :

On node1
{node1:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_08-02-07.10:14:39
Kernel extension /etc/pw-syscall.64bit_kernel is loaded.
Unloading the existing extension: /etc/pw-syscall.64bit_kernel....

On node2
{node2:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_08-02-07.10:15:22
Kernel extension /etc/pw-syscall.64bit_kernel is loaded.
Unloading the existing extension: /etc/pw-syscall.64bit_kernel....
http://download-uk.oracle.com/docs/cd/B19306_01/relnotes.102/b19074/toc.htm
Third Party Clusterware
If your deployment environment does not use HACMP, ignore the HACMP version and patches
errors reported by Cluster Verification Utility (CVU). On AIX 5L version 5.2, the expected patch for
HACMP v5.2 is IY60759. On AIX 5L version 5.3, the expected patches for HACMP v5.2 are
IY60759, IY61034, IY61770, and IY62191.
If your deployment environment does not use GPFS, ignore the GPFS version and patches errors
reported by Cluster Verification Utility (CVU). On AIX 5L version 5.2 and version 5.3, the expected
patches for GPFS 2.3.0.3 are IY63969, IY69911, and IY70276.
11gRAC/ASM/AIX
153 of 393
MAKE SURE you have the unzip tool, or a symbolic link to unzip, in /usr/bin on both nodes.

On node1, execute the runcluvfy.sh script as the oracle user : user equivalence is tested
from node1 to node2, from node1 to node1, from node2 to node1 and from node2 to node2.
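A minimal sketch of the CVU pre-check invocation (standard runcluvfy.sh syntax, assuming our two nodes) :

{node1:oracle}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose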
ERROR: Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.

Just carry on (it will not affect the installation) : this error is due to a CVU issue, as explained below.
At this stage, the node system requirements for 'crs' are checked (memory requirements, etc.) !!!

CVU tests the existence of all prerequisites needed for all possible implementations, such as :
- RAC implementation
- ASM implementation
- GPFS implementation
- HACMP implementation (concurrent raw devices, or cohabitation between ASM or GPFS and HACMP)

xlC MUST be at the required level; each individual check returns "passed" when satisfied.
Check: Operating system patch for "IY61770"
Node Name    Applied                                                                Required    Comment
------------ ---------------------------------------------------------------------- ----------- ---------
node1        IY61770:rsct.basic.rte IY61770:rsct.core.errm IY61770:rsct.core.hostrm IY61770:rsct.core.rmc IY61770:rsct.core.sec IY61770:rsct.core.sensorrm IY61770:rsct.core.utils    IY61770    passed
node2        IY61770:rsct.basic.rte IY61770:rsct.core.errm IY61770:rsct.core.hostrm IY61770:rsct.core.rmc IY61770:rsct.core.sec IY61770:rsct.core.sensorrm IY61770:rsct.core.utils    IY61770    passed
Result: Operating system patch check passed for "IY61770".
Check: Operating system patch for "IY62191"
Node Name    Applied                                            Required    Comment
------------ -------------------------------------------------- ----------- ---------
node1        IY62191:bos.adt.prof IY62191:bos.rte.libpthreads   IY62191     passed
node2        IY62191:bos.adt.prof IY62191:bos.rte.libpthreads   IY62191     passed
Result: Operating system patch check passed for "IY62191".
(Further CVU checks on node1 and node2 all report : passed.)
Check: Package existence for "freeware.zip.rte:2.3"
Node Name                      Status                         Comment
------------------------------ ------------------------------ ----------------
node1                          missing                        failed
node2                          missing                        failed
Result: Package existence check failed for "freeware.zip.rte:2.3".
Check: Package existence for "freeware.gcc.rte:3.3.2.0"
Node Name                      Status                         Comment
------------------------------ ------------------------------ ----------------
node1                          missing                        failed
node2                          missing                        failed
Result: Package existence check failed for "freeware.gcc.rte:3.3.2.0".
Check: Group existence for "dba"
Node Name    Status                   Comment
------------ ------------------------ ------------------------
node1        exists                   passed
node2        exists                   passed
Result: Group existence check passed for "dba".
Check: User existence for "nobody"
Node Name    Status                   Comment
------------ ------------------------ ------------------------
node1        exists                   passed
node2        exists                   passed
Result: User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was unsuccessful on all the nodes.
Don't worry about the "Pre-check for cluster services setup was unsuccessful on all the nodes." message :
this is a normal message, as we do not want all the listed APARs and FILESETS to be installed.
12.2 Installation
This installation has to be done from one node only. Once the first node is installed, the Oracle OUI
automatically starts copying the mandatory files to the other nodes, using the rcp command. This step should not last
long. In any case, don't assume the OUI is stalled; look at the network traffic before canceling the
installation !
As root user on each node, DO create a symbolic link /etc/lsattr pointing to /usr/sbin/lsattr :
ln -s /usr/sbin/lsattr /etc/lsattr
/etc/lsattr is used in the VIP check action.
On each node :
{node1:root}/ # xhost +
access control disabled, clients can connect from any hosts
{node1:root}/ #
Login as crs or oracle user (crs in our case) and follow the procedure hereunder :

{node1:crs}/ # export DISPLAY=node1:1
{node1:crs}/ # export TMP=/tmp
{node1:crs}/ # export TEMP=/tmp
{node1:crs}/ # export TMPDIR=/tmp
Make sure to execute rootpre.sh on each node before you click to the next step (if not done yet with CVU).
{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ls
cluvfy
install
rootpre
rpm
runcluvfy.sh upgrade
doc
response
rootpre.sh runInstaller stage
welcome.html
{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware #
{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ./runInstaller
********************************************************************************
Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle
installation.
Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.
********************************************************************************
Has 'rootpre.sh' been run by root? [y/n] (n)
Product-Specific Prerequisite Checks :
Details of the prerequisite checks done by runInstaller : Passed
Network Interface name   Host Type          Defined Name   Assigned IP    Observation
en0                      Public Hostname    node1          10.3.25.81     (Public Network)
en0                      Virtual Hostname   node1-vip      10.3.25.181    (Public Network)
en1                      Private Hostname   node1-rac      10.10.25.81
en0                      Public Hostname    node2          10.3.25.82     (Public Network)
en0                      Virtual Hostname   node2-vip      10.3.25.182    (Public Network)
en1                      Private Hostname   node2-rac      10.10.25.82
Network Interface name   Host Type          Defined Name   Assigned IP    SUBNET       Observation
en0                      Public Hostname    node1          10.3.25.81     10.3.25.0    (Public Network)
en0                      Virtual Hostname   node1-vip      10.3.25.181    10.3.25.0    (Public Network)
en1                      Private Hostname   node1-rac      10.10.25.81    10.10.25.0
en0                      Public Hostname    node2          10.3.25.82     10.3.25.0    (Public Network)
en0                      Virtual Hostname   node2-vip      10.3.25.182    10.3.25.0    (Public Network)
en1                      Private Hostname   node2-rac      10.10.25.82    10.10.25.0
For each entry (en0, en1, en2), click Edit to specify the Interface Type : which network card corresponds to the public network, and which one corresponds to the private network (RAC Interconnect).
At this stage, you must have already configured the shared storage. At least :

1 OCR Disk and 1 Voting Disk :
/dev/ocr_disk1
/dev/voting_disk1

OR

2 OCR Disks and 3 Voting Disks :
/dev/ocr_disk1
/dev/ocr_disk2
/dev/voting_disk1
/dev/voting_disk2
/dev/voting_disk3

In our example, we will implement 2 Oracle Cluster Registry Disks, and 3 Voting Disks.
ALL virtual devices must be reachable by all nodes participating in the RAC cluster, in concurrent mode.
Specify the OCR location(s) : either one device only (/dev/ocr_disk1), OR two devices (/dev/ocr_disk1 and /dev/ocr_disk2), as in our example.
Specify the Voting Disk location(s) : either one device only (/dev/voting_disk1), OR three devices (/dev/voting_disk1, /dev/voting_disk2 and /dev/voting_disk3), as in our example.
Summary :
Check the Cluster Nodes and Remote Nodes lists.
The OUI will install the Oracle CRS software onto the local node, and then copy it to the other selected nodes.

Install :
The Oracle Universal Installer will perform the installation on the first node, then automatically copy the code to the other selected nodes.
Execute Configuration Scripts :
KEEP THIS WINDOW OPEN,
AND do execute the scripts in the following order, waiting for each one to succeed before running the next one !!!

AS root :
1. On node1, execute orainstRoot.sh
2. On node2, execute orainstRoot.sh
3. On node1, execute root.sh
4. On node2, execute root.sh
orainstRoot.sh :
Execute orainstRoot.sh on all nodes.
The file is located in $ORACLE_BASE/oraInventory (oraInventory home) on each node, /oracle/oraInventory in our case.
{node1:root}/oracle/oraInventory # ./orainstRoot.sh
Changing permissions of /oracle/oraInventory to 770.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete
{node1:root}/oracle/oraInventory #
{node2:root}/oracle/oraInventory # ./orainstRoot.sh
Changing permissions of /oracle/oraInventory to 770.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete
{node2:root}/oracle/oraInventory #
Before running the root.sh script on each node, please check the following :
Check if your public network card (en0 in our case) is a standard network adapter, or a virtual ethernet network adapter.
Issue the following command as root :
{node1:root} / -> entstat -d en0
Receive Statistics:
------------------Packets: 0
Bytes: 0
Interrupts: 0
Receive Errors: 0
Packets Dropped: 0
Bad Packets: 0
Broadcast Packets: 0
Multicast Packets: 0
CRC Errors: 0
DMA Overrun: 0
Alignment Errors: 0
No Resource Errors: 0
Receive Collision Errors: 0
Packet Too Short Errors: 0
Packet Too Long Errors: 0
Packets Discarded by Adapter: 0
Receiver Start Count: 0
General Statistics:
------------------No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 2000
Driver Flags: Up Broadcast Simplex
Limbo 64BitSupport ChecksumOffload
PrivateSegment LargeSend DataRateSet
2-Port Gigabit Ethernet-SX PCI-X Adapter (14108802) Specific Statistics:
-------------------------------------------------------------------Link Status : Up
Media Speed Selected: Auto negotiation
Media Speed Running: Unknown
PCI Mode: PCI-X (100-133)
PCI Bus Width: 64-bit
Latency Timer: 144
Cache Line Size: 128
Jumbo Frames: Disabled
TCP Segmentation Offload: Enabled
TCP Segmentation Offload Packets Transmitted: 0
TCP Segmentation Offload Packet Errors: 0
Transmit and Receive Flow Control Status: Disabled
Transmit and Receive Flow Control Threshold (High): 45056
Transmit and Receive Flow Control Threshold (Low): 24576
Transmit and Receive Storage Allocation (TX/RX): 16/48
{node1:root}/ # entstat -d en0 | grep -iE ".*link.*status.*:.*up.*"
Link Status : Up
{node1:root}/ #
OR
Transmit Statistics:
-------------------Packets: 136156
Bytes: 19505561
Interrupts: 0
Transmit Errors: 0
Packets Dropped: 0
Receive Statistics:
------------------Packets: 319492
Bytes: 142069339
Interrupts: 285222
Receive Errors: 0
Packets Dropped: 0
Bad Packets: 0
General Statistics:
------------------No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 20000
Driver Flags: Up Broadcast Running
Simplex 64BitSupport DataRateSet
Virtual I/O Ethernet Adapter (l-lan) Specific Statistics:
--------------------------------------------------------RQ Length: 4481
No Copy Buffers: 0
Trunk Adapter: False
Filter MCast Mode: False
Filters: 255 Enabled: 1 Queued: 0 Overflow: 0
LAN State: Operational
Buffers     Reg    Alloc   Min    Max    MaxA   LowReg
tiny        512    512     512    2048   512    509
small       512    512     512    2048   553    502
medium      128    128     128    256    128    128
large       24     24      24     64     24     24
huge        24     24      24     64     24     24
With 10gRAC this was known as Bug 4437469: RACGVIP NOT WORKING ON SERVER WITH ETHERNET
VIRTUALISATION (this was only required if using AIX Virtual Interfaces for the Oracle Database 10.2.0.1 RAC).
FIRST On node1
As root, execute /crs/11.1.0/root.sh
When finished, the CSS daemon should be active on node 1.
Check for the line "Cluster Synchronization Services is active on these nodes."
node1
{node1:root}/crs/11.1.0 # ./root.sh
WARNING: directory '/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/crs' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-rac node1
node 2: node2 node2-rac node2
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
Now formatting voting device: /dev/voting_disk1
Now formatting voting device: /dev/voting_disk2
Now formatting voting device: /dev/voting_disk3
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
node1
Cluster Synchronization Services is inactive on these nodes.
node2
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
{node1:root}/crs/11.1.0 #
The IBM AIX clustering layer (HACMP filesets) MUST NOT be installed if you've chosen an
implementation without HACMP. If this layer is installed for another purpose, the disk resources necessary to
install and run CRS will have to be part of an HACMP volume group resource.
If you have previously installed HACMP, you must remove :
rsct.hacmp.rte
rsct.compat.basic.hacmp.rte
rsct.compat.clients.hacmp.rte
If you did run a first installation of the Oracle Clusterware (CRS) with HACMP installed,
check whether the /opt/ORCLcluster directory exists; if so, remove it on all nodes.
TO BE ABLE TO RUN the root.sh script AGAIN on the node, you must :
Either
Clean the failed CRS installation, and start the CRS installation procedure again.
Metalink Note 239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install (same procedure for 10g and 11g)
This is the only method supported by Oracle.
OR
Do the following just to find and solve the problem, without reinstalling at each try. When solved,
follow Metalink Note 239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install to
clean the system properly, and start the installation again as supported by Oracle.
As root user on each node :
Change owner, group (oracle:dba) and permissions (660) for /dev/ocr_disk1 and /dev/voting_disk1
on each node of the cluster (node1 and node2).
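A minimal sketch of that reset, using the oracle:dba owner and 660 mode named above (run on node1, then repeat on node2) :
{node1:root}/ # chown oracle.dba /dev/ocr_disk1 /dev/voting_disk1
{node1:root}/ # chmod 660 /dev/ocr_disk1 /dev/voting_disk1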
THEN
Erase the OCR and Voting disks content (zero the disk headers) from one node :

{node1:root}/ # dd if=/dev/zero of=/dev/ocr_disk1 bs=1024 count=300
{node1:root}/ # dd if=/dev/zero of=/dev/ocr_disk2 bs=1024 count=300
{node1:root}/ # dd if=/dev/zero of=/dev/voting_disk1 bs=1024 count=300
{node1:root}/ # dd if=/dev/zero of=/dev/voting_disk2 bs=1024 count=300
{node1:root}/ # dd if=/dev/zero of=/dev/voting_disk3 bs=1024 count=300
THEN On node2 :
As root, execute /crs/11.1.0/root.sh
If CSS is not active on all nodes, or on one of the nodes, you may have a problem
with the network configuration, or with the shared disk configuration for accessing the OCR and Voting Disks.
Check your network and shared disk configuration, and the owner and access permissions
(read/write) on the OCR and Voting disks from each participating node. Then execute the
root.sh script again on the node having the problem.
Also check the following command, as crs user, from each node :
{node1:root}/ # su - crs
{node1:crs}/crs/11.1.0 # /crs/11.1.0/bin/olsnodes
node1
node2
{node1:crs}/crs/11.1.0 # rsh node2
{node2:crs}/crs/11.1.0 # olsnodes
node1
node2
{node2:crs}/crs/11.1.0 #
THEN On node2
As root, execute /crs/11.1.0/root.sh
When finished, the CSS daemon should be active on nodes 1 and 2.
Check for the line "Cluster Synchronization Services is active on these nodes."
node1
node2
{node2:root}/crs/11.1.0 # ./root.sh
WARNING: directory '/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/crs' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-rac node1
node 2: node2 node2-rac node2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
node1
node2
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
{node2:root}/crs/11.1.0 #
You should see :
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
{node2:root}/crs/11.1.0 #
IF NOT, check for the line :
...
Running vipca(silent) for configuring nodeapps
"en0 is not public. Public interfaces should be used to configure virtual IPs"
{node2:root}/crs/11.1.0 #

If you have :
"en0 is not public. Public interfaces should be used to configure virtual IPs"
THEN
Read the next pages to understand and solve it !!!
RUN VIPCA manually as explained on the next page !!!
OTHERWISE, skip the next VIPCA pages.
Just to remember !!!
en0 : Public network + VIP
en1 : RAC Interconnect
en2 : Backup

Public (en0)              VIP (en0)                   RAC Interconnect (en1)
Node Name   IP            Node Name   IP              Node Name   IP
node1       10.3.25.81    node1-vip   10.3.25.181     node1-rac   10.10.25.81
node2       10.3.25.82    node2-vip   10.3.25.182     node2-rac   10.10.25.82
2 of 2 :
In the Virtual IPs for cluster nodes screen, you must provide the VIP node name for node1, and press the TAB key to automatically fill in the rest.
Check the validity of the entries before proceeding.
Then click Next ...
Check that each network card configured for the public network is mapping a virtual IP.
On node 1 :
{node1:root}/ # ifconfig -a
en0:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
inet 10.3.25.81 netmask 0xffffff00 broadcast 10.3.25.255
inet 10.3.25.181 netmask 0xffffff00 broadcast 10.3.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en1:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
inet 10.10.25.81 netmask 0xffffff00 broadcast 10.10.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en2:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
inet 20.20.25.81 netmask 0xffffff00 broadcast 20.20.25.255
tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node1:root}/ #
On node 2 :
{node2:root}/ # ifconfig -a
en0:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
inet 10.3.25.82 netmask 0xffffff00 broadcast 10.3.25.255
inet 10.3.25.182 netmask 0xffffff00 broadcast 10.3.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en1:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
inet 10.10.25.82 netmask 0xffffff00 broadcast 10.10.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en2:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
inet 20.20.25.82 netmask 0xffffff00 broadcast 20.20.25.255
tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node2:root}/ #
If node1 is rebooted :
On node1 failure or reboot, THEN on node2 we will see the VIP from node1; the VIP from node1 is then still reachable.
{node2:root}/ # ifconfig -a
en0:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
inet 10.3.25.82 netmask 0xffffff00 broadcast 10.3.25.255
inet 10.3.25.182 netmask 0xffffff00 broadcast 10.3.25.255
inet 10.3.25.181 netmask 0xffffff00 broadcast 10.3.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en1:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
inet 10.10.25.82 netmask 0xffffff00 broadcast 10.10.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en2:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
inet 20.20.25.82 netmask 0xffffff00 broadcast 20.20.25.255
tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node2:root}/ #
Configuration Assistants :
Just click OK to continue ...
Check passed.
If all or some of the assistants failed or were not executed, check the logs for problems.
If you missed those steps or closed the previous runInstaller screen, you will have to run them manually before
moving to the next step. Just adapt the lines below to your own settings (node names, public/private network).
On one node as crs user :
For Oracle notification Server Configuration Assistant
/crs/11.1.0/bin/racgons add_config node1:6200 node2:6200
For crs Private Interconnect Assistant
/crs/11.1.0/bin/oifcfg setif -global en0/10.3.25.0:public en1/10.10.25.0:cluster_interconnect
For crs Cluster Verification Utility
/crs/11.1.0/bin/cluvfy stage -post crsinst -n node1,node2
End of Installation
12.3.1 Update the Clusterware unix user .profile
To be done on each node for the crs (in our case) or oracle unix user.
vi the $HOME/.profile file in crs's home directory. Add the following entries :
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
export PATH
if [ -s "$MAIL" ]
then echo "$MAILMSG"
fi
ENV=$HOME/.kshrc
export ENV
#The following line is added by License Use Management installation
export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin
export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORACLE_CRS_HOME=$CRS_HOME
export ORACLE_HOME=$ORACLE_CRS_HOME
export LD_LIBRARY_PATH=$CRS_HOME/lib:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$CRS_HOME/bin:$PATH
if [ -t 0 ]; then
stty intr ^C
fi
Do disconnect from the crs user, and reconnect to load the modified $HOME/.profile.
Content of the .kshrc file :
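A quick sanity check after reconnecting (plain shell; paths per the profile above) :
{node1:crs}/ # echo $CRS_HOME
/crs/11.1.0
{node1:crs}/ # which crs_stat
/crs/11.1.0/bin/crs_stat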
12.3.2 Verify parameter CSS misscount
Check css misscount. Default CSS misscount values* :
AIX       30 seconds
VMS       30 seconds
Windows   30 seconds
* The CSS misscount default value when using vendor (non-Oracle) clusterware is 600 seconds. This is to allow the vendor clusterware ample time to resolve any possible split brain scenarios.
Subject: CSS Timeout Computation in RAC 10g (10g Release 1 and 10g Release 2) Doc ID: Note:294430.1

Check css disktimeout : default is 200 seconds.
Check css reboottime.
To compute the right values, do read Metalink note 294430.1, and use the following note to change the values :
Subject: 10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout Doc ID: Note:284752.1

Set css misscount (30 in our case), and check it (a sketch follows below).
Restart all other nodes !!!
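A sketch of the check-and-set sequence following Metalink note 284752.1 (crsctl get css misscount / crsctl set css misscount are the documented CRS commands; the printed confirmation is indicative) :
{node1:root}/crs/11.1.0/bin # ./crsctl get css misscount
30
{node1:root}/crs/11.1.0/bin # ./crsctl set css misscount 30
Configuration parameter misscount is now set to 30.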
12.3.3 Cluster Ready Services Health Check
You have completed the CRS install. Now you want to verify if the install is valid.
To ensure that the CRS install on all the nodes is valid, the following should be checked on all the nodes.
1. Ensure that you successfully completed running root.sh on all nodes during the install. (Please do not re-run root.sh: this is very dangerous and might corrupt your installation. The object of this step is only to confirm that root.sh was run successfully after the install.)
2. Run the command $ORA_CRS_HOME/bin/crs_stat. Ensure that this command does not error out, but
dumps the information for each resource. It does not matter what crs_stat returns for each resource: if
crs_stat exits after printing information about each resource, then the CRSD daemon is up and
the client crs_stat utility can communicate with it.
This also indicates that CRSD can read the OCR.
If crs_stat errors out with CRS-0202: No resources are registered, then no resources are
registered yet. This is not an error: it is mostly because, at this stage, the VIP configuration has not been done yet.
If crs_stat errors out with CRS-0184: Cannot communicate with the CRS daemon, then
the CRS daemons are not started.
Execute crs_stat -t on one node, as oracle user :
Execute crs_stat -ls on one node, as oracle user :
{node2:crs}/crs/11.1.0/bin # crs_stat -t
Name           Type          Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application   ONLINE    ONLINE    node1
ora.node1.ons  application   ONLINE    ONLINE    node1
ora.node1.vip  application   ONLINE    ONLINE    node1
ora.node2.gsd  application   ONLINE    ONLINE    node2
ora.node2.ons  application   ONLINE    ONLINE    node2
ora.node2.vip  application   ONLINE    ONLINE    node2
{node2:crs}/crs/11.1.0/bin #
{node1:root}/ # su - crs
{node1:crs}/crs # olsnodes
node1
node2
{node1:crs}/crs # rsh node2
{node2:crs}/crs # olsnodes
node1
node2
{node2:crs}/crs #
{node1:crs}/crs # olsnodes -n -p -i
node1   1   node1-rac   node1-vip
node2   2   node2-rac   node2-vip
{node1:crs}/crs #
4. The output of crsctl check crs / cssd / crsd / evmd returns "... daemon appears healthy".
CRS health check
cssd, crsd, evmd health check
CRS software version query
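A sketch of those health and version checks (standard crsctl subcommands for this release) :
{node1:crs}/crs # crsctl check crs
{node1:crs}/crs # crsctl check cssd
{node1:crs}/crs # crsctl check crsd
{node1:crs}/crs # crsctl check evmd
{node1:crs}/crs # crsctl query crs softwareversion
{node1:crs}/crs # crsctl query crs activeversion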
12.3.4 Adding enhanced crs stat script
(Script output : the six resources, gsd, ons and vip on node1 and node2, all show Target ONLINE and State ONLINE on their respective nodes.)
12.3.5 Interconnect Network configuration Checkup
After the CRS installation is completed, verify that the public and cluster interconnect interfaces have been set to the desired
values, by entering the following commands as root :

oifcfg getif [-node <nodename> | -global] [-if <if_name>[/<subnet>] [-type <if_type>]]
oifcfg setif {-node <nodename> | -global} {<if_name>[/<subnet>]:<if_type>}...
oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
oifcfg [-help]
{node1:crs}/crs # oifcfg getif

Subject: How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster Doc ID: Note:283684.1
This command should return values for global public and global cluster_interconnect; for example:
{node1:root}/crs/11.1.0/bin # oifcfg getif
en0 10.3.25.0 global public
en1 10.10.25.0 global cluster_interconnect
{node1:root}/crs/11.1.0/bin #
If the command does not return a value for global cluster_interconnect, enter the following
commands:
{node1:crs}/crs/11.1.0/bin # oifcfg delif -global
# oifcfg setif -global <interface name>/<subnet>:public
# oifcfg setif -global <interface name>/<subnet>:cluster_interconnect
For example:
{node1:crs}/crs/11.1.0/bin # oifcfg delif -global
{node1:crs}/crs/11.1.0/bin # oifcfg setif -global en0/10.3.25.0:public
{node1:crs}/crs/11.1.0/bin # oifcfg setif -global en1/10.10.25.0:cluster_interconnect
If necessary, and only for troubleshooting purposes, disable the automatic reboot of AIX nodes when a node
fails to communicate with the CRS daemons, or fails to access the OCR and Voting disks.
Subject: 10g RAC: Stopping Reboot Loops When CRS Problems Occur Doc ID: Note:239989.1
Subject: 10g RAC: Troubleshooting CRS Reboots Doc ID: Note:265769.1
If one node crashed after running the dbca or netca tools, with a CRS core dump and an Authentication OSD error,
check the crsd.log file for a missing $CRS_HOME/crs/crs/auth directory.
THEN you need to re-create the missing auth directory manually, with the correct owner, group, and permissions,
using the following Metalink note :
Subject: Crs Crashed With Authentication Osd Error Doc ID: Note:358400.1
12.3.6 Oracle Cluster Registry content Check and Backup
As oracle user, execute ocrcheck, and check the OCR disks locations :

{node1:crs}/crs # ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     306972
         Used space (kbytes)      :       5716
         Available space (kbytes) :     301256
         ID                       : 1928316120
         Device/File Name         : /dev/ocr_disk1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/ocr_disk2
                                    Device/File integrity check succeeded

Check css votedisk :
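The voting disk locations can be listed with the standard crsctl query (a sketch; the three devices are the ones formatted by root.sh earlier, and the output layout is indicative) :
{node1:crs}/crs # crsctl query css votedisk
0.     0    /dev/voting_disk1
1.     0    /dev/voting_disk2
2.     0    /dev/voting_disk3
located 3 votedisk(s).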
{node1:crs}/crs # ocrconfig
Name:
ocrconfig - Configuration tool for Oracle Cluster Registry.
Synopsis:
ocrconfig [option]
option:
-export <filename> [-s online]
- Export cluster register contents to a file
-import <filename>
- Import cluster registry contents from a file
-upgrade [<user> [<group>]]
- Upgrade cluster registry from previous version
-downgrade [-version <version string>]
- Downgrade cluster registry to the specified version
-backuploc <dirname>
- Configure periodic backup location
-showbackup [auto|manual]
- Show backup information
-manualbackup
- Perform OCR backup
-restore <filename>
- Restore from physical backup
-replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file
-overwrite
- Overwrite OCR configuration on disk
-repair ocr|ocrmirror <filename>
- Repair local OCR configuration
-help
- Print out this help information
Note:
A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
you have file creation privileges in the above directory before
running this tool.
{node1:crs}/crs #
Export content of OCR :
{node1:crs}/crs/11.1.0/bin # su
root's Password:
{node1:root}/crs/11.1.0/bin # ocrconfig -export /oracle/ocr_export.dmp -s online
{node1:root}/crs/11.1.0/bin # ls -la /oracle/*.dmp
-rw-r--r--    1 root     system       106420 Jan 30 18:30 /oracle/ocr_export.dmp
{node1:root}/crs/11.1.0/bin #
You must not edit/modify this exported file !
After a few hours, days, or weeks, ocrconfig -showbackup could display the following.
View the OCR automatic periodic backups managed by Oracle Clusterware :
{node1:crs}/crs # ocrconfig -showbackup
node2   2008/03/16 14:45:48   /crs/11.1.0/cdata/crs_cluster/backup00.ocr
node2   2008/03/16 10:45:47   /crs/11.1.0/cdata/crs_cluster/backup01.ocr
node2   2008/03/16 06:45:47   /crs/11.1.0/cdata/crs_cluster/backup02.ocr
node2   2008/03/15 06:45:46   /crs/11.1.0/cdata/crs_cluster/day.ocr
node2   2008/03/07 02:52:46   /crs/11.1.0/cdata/crs_cluster/week.ocr
node1   2008/02/24 08:09:21   /crs/11.1.0/cdata/crs_cluster/backup_20080224_080921.ocr
node1   2008/02/24 08:08:48   /crs/11.1.0/cdata/crs_cluster/backup_20080224_080848.ocr
{node1:crs}/crs #
Usage: srvctl config database
Usage: srvctl config database -d <name> [-a] [-t]
Usage: srvctl config service -d <name> [-s <service_name>] [-a] [-S <level>]
Usage: srvctl config nodeapps -n <node_name> [-a] [-g] [-s] [-l] [-h]
Usage: srvctl config asm -n <node_name>
Usage: srvctl config listener -n <node_name>
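As an example of these commands, once the ASM resources are registered (done later in this guide), the per-node ASM configuration can be queried; a sketch, output indicative :
{node1:asm}/oracle/asm # srvctl config asm -n node1
+ASM1 /oracle/asm/11.1.0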
Usage: srvctl disable database -d <name>
Usage: srvctl disable instance -d <name> -i "<inst_name_list>"
Usage: srvctl disable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl disable asm -n <node_name> [-i <inst_name>]
Usage: srvctl enable database -d <name>
Usage: srvctl enable instance -d <name> -i "<inst_name_list>"
Usage: srvctl enable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl enable asm -n <node_name> [-i <inst_name>]
Usage: srvctl getenv database -d <name>
Usage: srvctl getenv instance -d <name> -i "<inst_name_list>"
Usage: srvctl getenv service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl getenv asm -n <node_name> [-i <inst_name>]
Usage: srvctl modify database -d <name> [-n <db_name] [-o <ohome>] [-m <domain>] [-p <spfile>] [-r {PRIMARY |
PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl modify instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
Usage: srvctl modify service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <name> -s <service_name> -n -i <preferred_list> [-a <available_list>] [-f]
Usage: srvctl modify asm -n <node_name> -i <asm_inst_name> [-o <oracle_home>] [-p <spfile>]
Usage: srvctl relocate service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl remove database | instance | service | nodeapps | asm | listener ...
Usage: srvctl setenv database | instance | service | asm ...
Usage: srvctl start database | instance | service | nodeapps | asm | listener ...
Usage: srvctl status database | instance | service | nodeapps | asm ...
Usage: srvctl stop database | instance | service | nodeapps | asm | listener ...
(run srvctl -h for the full argument list of each command)
AS root user :
Command to start/stop the CRS daemons (a sketch follows below) :
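A sketch of those commands (crsctl start crs / crsctl stop crs is the standard syntax for this release) :
{node1:root}/ # /crs/11.1.0/bin/crsctl stop crs
{node1:root}/ # /crs/11.1.0/bin/crsctl start crs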
To view CRS
logs
cd /crs/11.1.0/log/nodename/.
In our case nodename will be node1 for CRS logs on node1
cd /crs/11.1.0/log/node1
And nodename will be node2 for CRS logs on node2
cd /crs/11.1.0/log/node2
Contents example of /crs/11.1.0/log/node1 on node1 :
{node1:root}/crs/11.1.0/log/node1 # ls -la
total 256
drwxr-xr-t    8 root     dba         256 Mar 12 16:21 .
drwxr-xr-x    4 oracle   dba         256 Mar 12 16:21 ..
drwxr-x---    2 oracle   dba         256 Mar 12 16:21 admin
-rw-rw-r--    1 root     dba       24441 Apr 17 12:27 alertnode1.log
drwxr-x---    2 oracle   dba       98304 Apr 17 12:27 client
drwxr-x---    2 root     dba         256 Apr 16 13:56 crsd
drwxr-x---    4 oracle   dba         256 Mar 22 21:56 cssd
drwxr-x---    2 oracle   dba         256 Mar 12 16:22 evmd
drwxrwxr-t    5 oracle   dba        4096 Apr 16 12:48 racg
{node1:root}/crs/11.1.0/log/node1 #
Oracle Clusterware consolidated logging (as in 10gR2) :
Look at following Metalink Note to get all logs necessary for needed support diagnostics :
Subject: CRS 10g R2 Diagnostic Collection Guide Doc ID: Note:330358.1
Reboot node2 :
Check on node1 : the VIP from node2 should appear there while node2 is out of order.
When node2 is back with CRS up and running, the VIP will come back to its original position on node2.

Reboot both nodes :
After reboot, both nodes will come back with CRS up and running, with the VIP from each node in its respective position : VIP (node1) on node1 and VIP (node2) on node2.
At this stage :
The Oracle Cluster Registry and Voting Disks are created and configured.
Oracle Cluster Ready Services (CRS) is installed and started on all nodes.
The VIP (Virtual IP), GSD and ONS application resources are configured on all nodes.
Options Description :
-n <node_name>         Node name.
-o <oracle_home>       Oracle home for the cluster database.
-A <new_vip_address>   The node level VIP address (<name|ip>/netmask[/if1[|if2|...]]).

An example of the 'modify nodeapps' command is as follows :
$ srvctl stop nodeapps -n node1
$ srvctl modify nodeapps -n node1 -A 10.3.25.181/255.255.255.0/en0
$ srvctl start nodeapps -n node1
Note: This command should be run as root.
Metalink Note 298895.1- Modifying the default gateway address used by the Oracle 10g VIP
Metalink Note 264847.1- How to Configure Virtual IPs for 10g RAC
Metalink Note 239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install
On both nodes :
As root user on each node :
$ORA_CRS_HOME/bin/crsctl stop crs (to stop any remaining crs daemons)
rmitab h1 (this will remove the Oracle CRS entries from /etc/inittab)
rmitab h2
rmitab h3
rm -Rf /opt/ORCL*
Kill remaining processes found in the output of : ps -ef | grep crs and ps -ef | grep d.bin
rm -R /etc/oracle/*
You should now remove the CRS installation; 2 options :
1/ You want to keep the oraInventory, as it's used by other Oracle products which are installed and
used: THEN run runInstaller as oracle user to uninstall the CRS installation. When done, remove the
content of the CRS directory on both nodes : rm -Rf /crs/11.1.0/*
OR
2/ You don't care about the oraInventory, and there's no other Oracle product installed on
the nodes: THEN remove the full content of $ORACLE_BASE, including the oraInventory
directory : rm -Rf /oracle/*
Change owner, group and permissions for /dev/ocr_disk* and /dev/voting_disk* on each node of the cluster :
On node1
{node1:root}/ # chown root.oinstall /dev/ocr_disk*
{node1:root}/ # chown crs.oinstall /dev/voting_disk*
{node1:root}/ # chmod 660 /dev/ocr_disk*
{node1:root}/ # chmod 660 /dev/voting_disk*
On node2
{node1:root}/ # rsh node2
{node2:root}/ # chown root.oinstall /dev/ocr_disk*
{node2:root}/ # chown crs.oinstall /dev/voting_disk*
{node2:root}/ # chmod 660 /dev/ocr_disk*
{node2:root}/ # chmod 660 /dev/voting_disk*
To erase ALL OCR and Voting disks content, format (zero) the disks from one node :
for i in 1 2
do
dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300 &
done
300+0 records in.
300+0 records out.
.
for i in 1 2 3
do
dd if=/dev/zero of=/dev/voting_disk$i bs=1024 count=300 &
done
300+0 records in.
300+0 records out.
At this stage, Oracle Clusterware is installed and MUST be started on all nodes.
ASM (Oracle Automatic Storage Management) is an Oracle cluster file system; as a CFS, it can be
independent from the database software, and upgraded to a newer release independently from the database software.
That is the reason why we will install the ASM software in its own ORACLE_HOME directory, which we'll define as
ASM_HOME ( /oracle/asm/11.1.0 ).
Now, in order to make ASM cluster file systems available to the Oracle RAC database, we need to :
13.1 Installation
The Oracle ASM installation has to be done from one node only. Once the first node is installed, the Oracle
OUI automatically starts copying the mandatory files to the second node, using the rcp or scp command. This step
can last long, depending on the network speed (up to one hour), without any message. So, don't assume the OUI is
stalled; look at the network traffic before canceling the installation !
You can also create a staging area. The names of the subdirectories are in the format Disk1 to Disk3.
On each node :
{node1:root}/ # xhost +
access control disabled, clients can connect from any hosts
{node1:root}/ #
Login as asm (in our case) or oracle user, and follow the procedure hereunder.
Setup and export your DISPLAY, TMP and TEMP variables, with /tmp or another destination having enough free space (about 500 MB) on each node :
{node1:asm}/ # export DISPLAY=node1:0.0
If not set in asm's .profile, do :
{node1:asm}/ # export TMP=/tmp
{node1:asm}/ # export TEMP=/tmp
{node1:asm}/ # export TMPDIR=/tmp
Check that Oracle Clusterware (including VIP, ONS and GSD) is started on each node !!!
As asm user from node1 :
{node1:asm}/oracle/asm # crs_stat -t
Name            Type          Target    State     Host
------------------------------------------------------------
ora....E1.lsnr  application   ONLINE    ONLINE    node1
ora.node1.gsd   application   ONLINE    ONLINE    node1
ora.node1.ons   application   ONLINE    ONLINE    node1
ora.node1.vip   application   ONLINE    ONLINE    node1
ora....E2.lsnr  application   ONLINE    ONLINE    node2
ora.node2.gsd   application   ONLINE    ONLINE    node2
ora.node2.ons   application   ONLINE    ONLINE    node2
ora.node2.vip   application   ONLINE    ONLINE    node2
{node1:asm}/oracle/asm #
{node1:asm}/distrib/SoftwareOracle/rdbms11gr1/aix/database # ls
doc  install  response  runInstaller  stage  welcome.html
{node1:asm}/distrib/SoftwareOracle/rdbms11gr1/aix/database # ./runInstaller
Starting Oracle Universal Installer
Checking Temp space: must be greater than 190 MB. Actual 1950 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3584 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-02-13_03-47-34PM. Please wait
{node1:asm}/distrib/SoftwareOracle/rdbms11gr1/aix/database # Oracle Universal Installer, Version 11.1.0.6.0
Production
Copyright (C) 1999, 2007, Oracle. All rights reserved.
At the OUI Welcome screen
You can check the installed Products :
Details of the prerequisite checks done by runInstaller :
Create Database :
Choose Install database Software Only
Choose Configure Automatic Storage
Management (ASM), we want to install the
ASM software in its own ORACLE_HOME at this
stage.
Summary :
The Summary screen will be presented.
Check the Cluster Nodes and Remote Nodes lists.
The OUI will install the Oracle 11g ASM software onto the local node, and then copy it to the other selected nodes.

Install :
The Oracle Universal Installer will perform the installation on the first node, then automatically copy the code to the other selected nodes.
It may take some time to get past the 50% mark; don't be afraid, the installation is progressing, running a test script on the remote nodes.
Just wait for the next screen ...
At this stage, you could hit a message about a Failed Attached Home !!!
If a similar screen message appears, just run the specified command on the specified node as asm user.
Do execute the script as below (check and adapt the script to your message). When done, click on OK :
From node2 :
{node2:root}/oracle/asm/11.1.0 # su - asm
{node2:asm}/oracle/asm -> /oracle/asm/11.1.0/oui/bin/runInstaller -attachHome
-noClusterEnabled ORACLE_HOME=/oracle/asm/11.1.0 ORACLE_HOME_NAME=OraAsm11g_home1
CLUSTER_NODES=node1,node2 INVENTORY_LOCATION=/oracle/oraInventory LOCAL_NODE=node2
Starting Oracle Universal Installer
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be
executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
AttachHome was successful.
{node2:asm}/oracle/asm/11.1.0/bin #
Execute Configuration
Scripts :
/oracle/asm/11.1.0/root.sh to
execute on each node as root
user, on node1, then on node2.
On node1 :
{node1:root}/oracle/asm/11.1.0 # ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= asm
ORACLE_HOME= /oracle/asm/11.1.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
{node1:root}/oracle/asm/11.1.0 #
On node2 :
{node2:root}/oracle/asm/11.1.0 # ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= asm
ORACLE_HOME= /oracle/asm/11.1.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
{node2:root}/oracle/asm/11.1.0 #
End of Installation :
Click on Installed Products to check with the oraInventory that ASM is installed on each node.
ENV=$HOME/.kshrc
export ENV
#The following line is added by License Use Management installation
export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin
export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORACLE_CRS_HOME=$CRS_HOME
export ORACLE_HOME=$ORACLE_BASE/asm/11.1.0
export
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib:$ORACLE_HOME/lib32:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_SID=+ASM1
if [ -t 0 ]; then
stty intr ^C
fi
On node2, ORACLE_SID will be set as below :
...
export ORACLE_SID=+ASM2
At this stage, the Oracle ASM software is installed in its own ORACLE_HOME directory, which we'll define as
ASM_HOME ( /oracle/asm/11.1.0 ).
Now, we need to :
In order to configure the ASM instances, and then create the ASM Disk Groups, we need to configure and start a default
listener on each node.
The listener on each node must :
Click Next
Click Next
Click Next
Select Add
Click Next
Click Next
Click Next
Using another port will mean that the configuration of LOCAL and REMOTE listeners will be MANDATORY.
Later, if you need to configure more than one listener, for example one for each database in the cluster,
you should :
In the case of extra listeners, they should be configured from their own $ORACLE_HOME directory, and not from the
$ASM_HOME directory. So the creation of extra listeners will have to be done when the RDBMS is installed ...
Click Next
THEN
On the telnet session, we can see :
Oracle Net Services
Configuration:
Configuring Listener:LISTENER
node1...
node2...
Listener configuration complete.
Oracle Net Configuration Assistant,
Listener Configuration, Listener
Configuration Done :
Name            Type          Target    State     Host
------------------------------------------------------------
ora....E1.lsnr  application   OFFLINE   OFFLINE
ora.node1.gsd   application   ONLINE    ONLINE    node1
ora.node1.ons   application   ONLINE    ONLINE    node1
ora.node1.vip   application   ONLINE    ONLINE    node1
ora....E2.lsnr  application   ONLINE    ONLINE    node2
ora.node2.gsd   application   ONLINE    ONLINE    node2
ora.node2.ons   application   ONLINE    ONLINE    node2
ora.node2.vip   application   ONLINE    ONLINE    node2
{node1:asm}/oracle/asm/11.1.0/bin #
Name            Type          Target    State     Host
------------------------------------------------------------
ora....E1.lsnr  application   ONLINE    ONLINE    node1
ora.node1.gsd   application   ONLINE    ONLINE    node1
ora.node1.ons   application   ONLINE    ONLINE    node1
ora.node1.vip   application   ONLINE    ONLINE    node1
ora....E2.lsnr  application   ONLINE    ONLINE    node2
ora.node2.gsd   application   ONLINE    ONLINE    node2
ora.node2.ons   application   ONLINE    ONLINE    node2
ora.node2.vip   application   ONLINE    ONLINE    node2
{node1:asm}/oracle/asm/11.1.0/bin #
To get information about the listener resource on one node, from the Oracle Clusterware Registry (node1 for example) :
Check that the entry is OK, showing LISTENER_NODE1, with the VIP from node1 and the static IP from node1 :
LISTENER_NODE1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
Check that the entry is OK, showing LISTENER_NODE2, with the VIP from node2 and the static IP from node2.
It may happen that on remote nodes the listener.ora file does not get properly configured, and shows the same
content as the one from node1. Check on extra nodes, if any.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_NODE1
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date                13-FEB-2008 21:39:08
Uptime                    0 days 0 hr. 13 min. 20 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The listener supports no services
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/bin #
To get help on listener commands :
stop   version   trace   quit   show*   status   reload   spawn   exit
{node1:asm}/oracle/asm/11.1.0/bin #
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
TNS for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Unix Domain Socket IPC NT Protocol Adaptor for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Oracle Bequeath NT Protocol Adapter for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/bin #
13.5.1 Thru DBCA
Database Configuration
Assistant : Welcome
Database Configuration
Assistant : Operations
Select Configure
Automatic Storage
Management
THEN click Next
for next screen
Database Configuration
Assistant : Node Selection
11gRAC/ASM/AIX
49 of 393
Database Configuration
Assistant : Create ASM Instance
On this screen, you can :
Click on
ASM_DISKGROUPS *
Description: This value is the list of the Disk Group names to be mounted by the ASM at startup or when ALTER DISKGROUP ALL MOUNT
command is used.
ASM_DISKSTRING *
Description: A comma separated list of paths used by the ASM to limit the set of disks considered for discovery when a new disk is added to a
Disk Group. The disk string should match the path of the disk, not the directory containing the disk. For example: /dev/rdsk/*.
ASM_POWER_LIMIT *
Description: This value is the maximum power on the ASM instance for disk rebalancing.
Range of Values: 1 to 11
Default Value: 1
CLUSTER_DATABASE
Description : Set CLUSTER_DATABASE to TRUE to enable Real Application Clusters option.
Range of Values: TRUE | FALSE
Default Value : FALSE
DB_UNIQUE_NAME
DIAGNOSTIC_DEST
Replaces USER_DUMP_DEST, CORE_DUMP_DEST and BACKGROUND_DUMP_DEST.
Specifies the pathname for a directory where the server will write debugging trace files on behalf of a user process.
Specifies the pathname (directory or disc) where trace files are written for the background processes (LGWR, DBWn, and so on) during
Oracle operations. It also defines the location of the database alert file, which logs significant events and messages.
The directory name specifying the core dump location (for UNIX).
INSTANCE_TYPE
LARGE_POOL_SIZE
Description : Specifies the size of the large pool allocation heap, which is used by Shared Server for session memory, parallel execution for
message buffers, and RMAN backup and recovery for disk I/O buffers.
Range of Values: 600K (minimum); >= 20000M (maximum is operating system specific).
Default Value : 0, unless parallel execution or DBWR_IO_SLAVES are configured
LOCAL_LISTENER
Description : An Oracle Net address list which identifies database instances on the same machine as the Oracle Net listeners. Each instance
and dispatcher registers with the listener to enable client connections. This parameter overrides MTS_LISTENER_ADDRESS and
MTS_MULTIPLE_LISTENERS parameters that were obsolete as 8.1.
Range of Values: A valid Oracle Net address list.
Default Value : (ADDRESS_LIST=(Address=(Protocol=TCP)(Host=localhost)(Port=1521)) (Address=(Protocol=IPC)(Key=DBname)))
SHARED_POOL_SIZE
Description : Specifies the size of the shared pool in bytes. The shared pool contains objects such as shared cursors, stored procedures,
control structures, and Parallel Execution message buffers. Larger values can improve performance in multi-user systems.
Range of Values: 300 Kbytes - operating system dependent.
Default Value : If 64 bit, 64MB, else 16MB
SPFILE
Description: Specifies the name of the current server parameter file in use.
Range of Values: static parameter
Default Value: The SPFILE parameter can be defined in a client side PFILE to indicate the name of the server parameter file to use. When the
default server parameter file is used by the server, the value of SPFILE will be internally set by the server.
INSTANCE_NUMBER
Description : A Cluster Database parameter that assigns a unique number for mapping the instance to one free list group owned by a
database object created with storage parameter FREELIST GROUPS. Use this value in the INSTANCE clause of the ALTER TABLE ...
ALLOCATE EXTENT statement to dynamically allocate extents to this instance.
Range of Values: 1 to MAX_INSTANCES (specified at database creation).
Default Value : Lowest available number (depends on instance startup order and on the INSTANCE_NUMBER values assigned to other
instances)
LOCAL_LISTENER
Description : An Oracle Net address list which identifies database instances on the same machine as the Oracle Net listeners. Each instance
and dispatcher registers with the listener to enable client connections. This parameter overrides MTS_LISTENER_ADDRESS and
MTS_MULTIPLE_LISTENERS parameters that were obsolete as 8.1.
Range of Values: A valid Oracle Net address list.
Default Value : (ADDRESS_LIST=(Address=(Protocol=TCP)(Host=localhost)(Port=1521)) (Address=(Protocol=IPC)(Key=DBname)))
Please read the following Metalink note :
Subject: SGA sizing for ASM instances and databases that use ASM Doc ID: Note:282777.1
Choose the type of parameter file that you would like to use for the new ASM instances :
OPTION 1 :
Create initialization parameter file (IFILE), and specify the filename.
OPTION 2 :
Select Create server parameter file (SPFILE), and replace {ORACLE_HOME}/dbs/spfile+ASM.ora by /dev/ASMspf_disk
Database Configuration
Assistant :
Click on OK, DBCA will create
an ASM Instance on each node
Database Configuration
Assistant :
DBCA is creating the ASM
instances.
Database Configuration
Assistant :
ASM instances are created,
Click on Create New to create
an ASM Disk Group.
su - asm
export ORACLE_SID=+ASM1
export ORACLE_HOME=/oracle/asm/11.1.0
sqlplus /nolog
connect / as sysdba
Then execute following SQL :
CREATE DISKGROUP DATA_DISKGROUP
NORMAL REDUNDANCY
FAILGROUP GROUP1 DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3';
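Once created, the disk group can be checked from the same SQL*Plus session through the standard ASM views (a sketch; the values shown are indicative for three 4 GB disks in normal redundancy) :
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;

NAME            STATE       TYPE     TOTAL_MB    FREE_MB
--------------- ----------- ------ ---------- ----------
DATA_DISKGROUP  MOUNTED     NORMAL      12288      12100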
GROUP2
o /dev/ASM_disk4
o /dev/ASM_disk5
Click on start.
GROUP1 with
o /dev/ASM_disk1
o /dev/ASM_disk2
GROUP2 with
o /dev/ASM_disk4
o /dev/ASM_disk5
Database Configuration
Assistant : ASM Disk Groups
ASM disk group is then
displayed with information as :
From DBCA, if you want to create a new ASM diskgroup, click on Create New, and ONLY candidate disks will be displayed :
/dev/ASM_disk1, /dev/ASM_disk2, /dev/ASM_disk4 and /dev/ASM_disk5 are no longer Candidate !!!
From DBCA, if you want to add disks to an ASM diskgroup, select the DiskGroup DATA_DG1, and click on ...
kfdp_queryBg(): 9
kfdp_query(): 10
kfdp_queryBg(): 10
SUCCESS: refreshed membership for 1/0xba2b9ad0 (DATA_DG1)
SUCCESS: ALTER DISKGROUP ALTER DISKGROUP DATA_DG1 ADD FAILGROUP GROUP1 DISK '/dev/ASM_disk3' SIZE 4096M FAILGROUP
GROUP2 DISK '/dev/ASM_disk6' SIZE 4096M;
NOTE: starting rebalance of group 1/0xba2b9ad0 (DATA_DG1) at power 1
Starting background process ARB0
Thu Feb 14 12:25:49 2008
ARB0 started with pid=20, OS id=606380
NOTE: assigning ARB0 to group 1/0xba2b9ad0 (DATA_DG1)
NOTE: F1X0 copy 3 relocating from 65534:4294967294 to 4:2
Thu Feb 14 12:26:13 2008
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0xba2b9ad0 (DATA_DG1)
Thu Feb 14 12:26:16 2008
NOTE: requesting all-instance membership refresh for group=1
NOTE: initiating PST update: grp = 1
kfdp_update(): 11
Thu Feb 14 12:26:19 2008
kfdp_updateBg(): 11
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp = 1
kfdp_update(): 12
kfdp_updateBg(): 12
NOTE: PST update grp = 1 completed successfully
NOTE: membership refresh pending for group 1/0xba2b9ad0 (DATA_DG1)
kfdp_query(): 13
kfdp_queryBg(): 13
SUCCESS: refreshed membership for 1/0xba2b9ad0 (DATA_DG1)
Thu Feb 14 12:44:31 2008
GROUP1 with
o /dev/ASM_disk1
o /dev/ASM_disk2
o /dev/ASM_disk3
GROUP2 with
o /dev/ASM_disk4
o /dev/ASM_disk5
o /dev/ASM_disk6
Database Configuration
Assistant :
Click on Finish, then
Yes to exit
13.5.2 Manual Steps
On node1, do create a pfile initASM.ora in /oracle/asm/11.1.0/dbs with the specified values.
On each node, node1 then node2, as asm user, do create the following directories :
On node1
{node1:asm}/oracle/asm # cd ..
{node1:asm}/oracle # mkdir admin
{node1:asm}/oracle # mkdir admin/+ASM
{node1:asm}/oracle # mkdir admin/+ASM/hdump
{node1:asm}/oracle # mkdir admin/+ASM/pfile
On node2
{node2:asm}/oracle/asm # cd ..
{node2:asm}/oracle # mkdir admin
{node2:asm}/oracle # mkdir admin/+ASM
{node2:asm}/oracle # mkdir admin/+ASM/hdump
{node2:asm}/oracle # mkdir admin/+ASM/pfile
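The same directories can be created in one command per node; mkdir -p creates the intermediate directories as needed :
{node1:asm}/oracle # mkdir -p admin/+ASM/hdump admin/+ASM/pfile
{node2:asm}/oracle # mkdir -p admin/+ASM/hdump admin/+ASM/pfile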
{node1:asm}/oracle/asm # sqlplus /nolog
SQL> connect / as sysasm
Connected to an idle instance.
SQL> startup nomount pfile=/oracle/asm/11.1.0/initASM.ora ;
ASM instance started

Total System Global Area  125829120 bytes
Fixed Size                  1301456 bytes
Variable Size             124527664 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
SQL>
On node2 :
{node2:asm}/oracle/asm # sqlplus /nolog
SQL> connect / as sysasm
Connected to an idle instance.
SQL> startup nomount
ASM instance started
SQL> select INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, VERSION, STATUS from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME            VERSION           STATUS
--------------- ---------------- -------------------- ----------------- ------------
              2 +ASM2            node2                11.1.0.6.0        STARTED

SQL>

Do query the global (cluster) view, known as gv$instance :

SQL> select INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, VERSION, STATUS from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME            VERSION           STATUS
--------------- ---------------- -------------------- ----------------- ------------
              2 +ASM2            node2                11.1.0.6.0        STARTED
              1 +ASM1            node1                11.1.0.6.0        STARTED

SQL>
On node1
{node1:asm}/oracle/asm/11.1.0/dbs # orapwd file=/oracle/asm/11.1.0/dbs/orapw+ASM1 password=oracle entries=20
{node1:asm}/oracle/asm/11.1.0/dbs #
{node1:asm}/oracle/asm/11.1.0/dbs # ls -la orapw*
-rw-r-----    1 asm      oinstall       1536 Feb 14 10:53 orapw+ASM1
{node1:asm}/oracle/asm/11.1.0/dbs #

On node2
{node2:asm}/oracle/asm/11.1.0/dbs # orapwd file=/oracle/asm/11.1.0/dbs/orapw+ASM2 password=oracle entries=20
{node2:asm}/oracle/asm/11.1.0/dbs #
{node2:asm}/oracle/asm/11.1.0/dbs # ls -la orapw*
-rw-r-----    1 asm      oinstall       1536 Feb 14 10:53 orapw+ASM2
{node2:asm}/oracle/asm/11.1.0/dbs #

Checking Oracle Clusterware
{node1:asm}/oracle/asm # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:asm}/oracle/asm #
For node1
{node1:asm}/oracle/asm # srvctl add asm -n node1 -o /oracle/asm/11.1.0
{node1:asm}/oracle/asm #
For node2
{node1:asm}/oracle/asm # srvctl add asm -n node2 -o /oracle/asm/11.1.0
{node1:asm}/oracle/asm #
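You can verify the registration with srvctl config asm; this check is not in the original transcript, and the exact output shape may differ slightly by release :

{node1:asm}/oracle/asm # srvctl config asm -n node1
+ASM1 /oracle/asm/11.1.0
{node1:asm}/oracle/asm # srvctl config asm -n node2
+ASM2 /oracle/asm/11.1.0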
Checking Oracle Clusterware after ASM resources registration.
{node1:asm}/oracle/asm # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:asm}/oracle/asm #
Add an entry in /etc/oratab on each node :
Add +ASM1:/oracle/asm/11.1.0:N on node1, and +ASM2:/oracle/asm/11.1.0:N on node2.
On node1 :
{node1:asm}/oracle/asm # cat /etc/oratab
+ASM1:/oracle/asm/11.1.0:N
{node1:asm}/oracle/asm #
On node2 :
{node2:asm}/oracle/asm # cat /etc/oratab
+ASM2:/oracle/asm/11.1.0:N
{node2:asm}/oracle/asm #
{node1:asm}/oracle/asm #
On node2
{node2:asm}/oracle/asm #
Check for node1, from node1 :

NAME              TYPE     VALUE
----------------- -------- ------------------------------
local_listener    string   LISTENER_+ASM1
remote_listener   string   LISTENERS_+ASM

Check for node2, from node2 :

NAME              TYPE     VALUE
----------------- -------- ------------------------------
local_listener    string   LISTENER_+ASM2
remote_listener   string   LISTENERS_+ASM

remote_listener from node1 and node2 MUST BE THE SAME, and ENTRIES MUST BE
PRESENT in the tnsnames.ora on each node.
local_listener from node1 and node2 are different, and ENTRIES MUST BE PRESENT
in the tnsnames.ora on each node.
local_listener from node1 and node2 are not the ones defined in the listener.ora files on
each node.
Checking the listener status on each node, we will see 2 instances registered, +ASM1 and +ASM2
{node1:asm}/oracle/asm/11.1.0/network/admin # lsnrctl status
LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 19-FEB-2008
00:01:57
Copyright (c) 1991, 2007, Oracle.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_NODE1
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date                13-FEB-2008 22:04:55
Uptime                    5 days 1 hr. 57 min. 2 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/network/admin #
Checking the listener services on each node, we will see 2 instances registered, +ASM1 and +ASM2
{node1:asm}/oracle/asm/11.1.0/network/admin # lsnrctl services
LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 19-FEB-2008
00:02:47
Copyright (c) 1991, 2007, Oracle.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/network/admin #
{node1:asm}/oracle/asm #
To see resources read/write permissions and ownership within the clusterware :
Check the ASM daemons, for example on node1 :
{node1:root}/ # su - asm
{node1:asm}/oracle/asm # ps -ef|grep ASM
asm  163874       1   0   Feb 14      - 10:07 asm_lmon_+ASM1
asm  180340       1   0   Feb 14      -  0:18 asm_gmon_+ASM1
asm  221216       1   0   Feb 14      -  2:19 asm_rbal_+ASM1
asm  229550       1   0   Feb 14      -  0:12 asm_lgwr_+ASM1
asm  237684       1   0   Feb 14      - 11:14 asm_lmd0_+ASM1
asm  364670       1   0   Feb 14      -  0:08 asm_mman_+ASM1
asm  417982       1   0   Feb 14      - 17:15 asm_dia0_+ASM1
asm  434260       1   0   Feb 14      -  0:36 /oracle/asm/11.1.0/bin/racgimon daemon ora.node1.ASM1.asm
asm  458862       1   0   Feb 14      -  0:15 asm_ckpt_+ASM1
asm  463014       1   0   Feb 14      -  0:36 asm_lck0_+ASM1
asm  540876       1   0   Feb 14      -  0:07 asm_psp0_+ASM1
asm  544906       1   0   Feb 14      -  1:24 asm_pmon_+ASM1
asm  577626       1   0   Feb 14      -  0:26 asm_ping_+ASM1
asm  598146       1   0   Feb 14      -  2:15 asm_diag_+ASM1
asm  606318       1   0   Feb 14      -  0:10 asm_dbw0_+ASM1
asm  630896       1   0   Feb 14      - 12:30 asm_lms0_+ASM1
asm  635060       1   0   Feb 14      -  0:07 asm_smon_+ASM1
asm  639072       1   0   Feb 14      -  2:27 asm_vktm_+ASM1
asm  913652  954608   0 09:52:02 pts/5  0:00 grep ASM
{node1:asm}/oracle/asm #
To access the content of the ASM Disk Groups, from node1 :
{node1:asm}/oracle/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm # export ORACLE_SID=+ASM1
{node1:asm}/oracle/asm # asmcmd -a sysasm -p
ASMCMD [+] > help
asmcmd [-v] [-a <sysasm|sysdba>] [-p] [command]
lsdsk
remap
Subject: ASMCMD - ASM command line utility, Doc ID: Note 332180.1
     V$ASM_ALIAS.NAME
-l   V$ASM_ALIAS.NAME, V$ASM_ALIAS.SYSTEM_CREATED,
     V$ASM_FILE.TYPE, V$ASM_FILE.REDUNDANCY,
     V$ASM_FILE.STRIPED, V$ASM_FILE.MODIFICATION_DATE
-s   V$ASM_ALIAS.NAME,
     V$ASM_FILE.BLOCK_SIZE, V$ASM_FILE.BLOCKS,
     V$ASM_FILE.BYTES, V$ASM_FILE.SPACE

If the user specifies both flags, then the command shows a union of
their respective columns, with duplicates removed.
If an entry in the list is a user-defined alias or a directory,
then -l displays only the V$ASM_ALIAS columns, and -s shows only
the alias name and its size, which is zero because it is negligible.
Moreover, the displayed name contains a suffix that is in the form of
an arrow pointing to the absolute path of the system-created filename
it references:
t_db1.f => +diskgroupName/DBName/DATAFILE/SYSTEM.256.1
See the -L option below for an exception to this rule.
For disk group information, this command queries V$ASM_DISKGROUP_STAT
by default, which can be modified by the -c and -g flags.
The remaining flags have the following meanings:
-d
If an argument is a directory,
(not its contents).
-r
-t
-L
-a
-c
-g
-H
Note that "ls +" would return information on all diskgroups, including
whether they are mounted.
Not all possible file columns or disk group columns are included.
To view the complete set of columns for a file or a disk group,
query the V$ASM_FILE and V$ASM_DISKGROUP views.
ASMCMD [+] >
If group is
-g
-H
Not all possible disk group columns are included. To view the
complete set of columns for a disk group, query the V$ASM_DISKGROUP
view.
ASMCMD [+] >
ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
MOUNTED  NORMAL  N         512   4096  1048576     24576    24382             4096           10143              0  DATA_DG1/

ASMCMD [+] > lsdg -g
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
      1  MOUNTED  NORMAL  N         512   4096  1048576     24576    24382             4096           10143              0  DATA_DG1/
      2  MOUNTED  NORMAL  N         512   4096  1048576     24576    24382             4096           10143              0  DATA_DG1/
Subject: How to clean up ASM installation (RAC and Non-RAC), Doc ID: Note 311350.1
Subject: Backing Up an ASM Instance, Doc ID: Note 333257.1
Subject: How to Reconfigure ASM Disk Group?, Doc ID: Note 331661.1
Subject: Assigning a Physical Volume ID (PVID) To An Existing ASM Disk Corrupts the ASM Disk Header, Doc ID: Note 353761.1
Subject: How To Reclaim ASM Disk Space?, Doc ID: Note 351866.1
Subject: Recreating ASM Instances and Diskgroups, Doc ID: Note 268481.1
Subject: How To Connect To ASM Instance Remotely, Doc ID: Note 340277.1
At this stage :
The Oracle Cluster Registry and Voting Disk are created and configured.
Oracle Cluster Ready Services is installed, and started on all nodes.
The VIP (Virtual IP), GSD and ONS application resources are configured on all nodes.
The ASM Home is installed.
The default node listeners for ASM are created.
The ASM instances are created and started.
The ASM diskgroup is created.
For 11g RAC database(s), all new features from 11g ASM will be available for use.
For 10g RAC database(s), not all 11g ASM new features are available for use; you have to use different ASM
diskgroups for 10g and 11g RAC database(s), and set the proper ASM diskgroup compatible attribute.
At this stage :
BUT for now, we need to install the Oracle 11g Database Software in its own $ORACLE_HOME
(/oracle/rdbms/11.1.0) using the rdbms unix user.
{node1:root}/ # xhost +
access control disabled, clients can connect from any hosts
{node1:root}/ #
Login as rdbms (in our case), or oracle user and follow the procedure hereunder
Setup and export your DISPLAY, TMP and TEMP variables, with /tmp or another destination having
enough free space, about 500 MB on each node.
{node1:rdbms}/ # export DISPLAY=node1:0.0
If not set in asm .profile, do :
{node1:rdbms}/ # export TMP=/tmp
{node1:rdbms}/ # export TEMP=/tmp
{node1:rdbms}/ # export TMPDIR=/tmp
Check that Oracle Clusterware (including VIP, ONS, GSD, listeners and ASM resources) is started on
each node !!! ASM instances and listeners are not mandatory for the RDBMS installation.
As asm user from
node1 :
{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:asm}/oracle/asm #
{node1:rdbms}/distrib/SoftwareOracle/rdbms11gr1/aix/database # ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 190 MB. Actual 1933 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3584 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-02-14_02-41-49PM. Please wait
...{node1:rdbms}/distrib/SoftwareOracle/rdbms11gr1/aix/database # Oracle Universal Installer, Version 11.1.0.6.0
Production
Copyright (C) 1999, 2007, Oracle. All rights reserved.
Details of the prerequisite checks done by runInstaller : all checks Passed.
Summary :
The Summary screen will be presented.
Confirm that the RAC database software and
other selected options will be installed.
Check the Cluster Nodes and Remote Nodes lists.
The OUI will install the Oracle 11g software on
the local node, and then copy it to the other
selected nodes.
Then click Install ...
Install :
The Oracle Universal Installer will proceed with the
installation on the first node, then automatically copy
the code to the other selected nodes.
At 50%, it may take some time to get past 50%;
don't be afraid, the installation is still processing,
running a test script on the remote nodes.
Just wait for the next screen ...
At this stage, you could hit a message about a Failed Attached Home !!!
If a similar screen message appears, just run the specified command on the specified node as rdbms user.
Execute the script as below (check and adapt the script to your message). When done, click on OK :
From node2 :
{node2:root}/oracle/rdbms/11.1.0 # su rdbms
{node2:rdbms}/oracle/rdbms/11.1.0 -> /oracle/rdbms/11.1.0/oui/bin/runInstaller -attachHome
-noClusterEnabled ORACLE_HOME=/oracle/rdbms/11.1.0 ORACLE_HOME_NAME=OraDb11g_home1
CLUSTER_NODES=node1,node2 INVENTORY_LOCATION=/oracle/oraInventory LOCAL_NODE=node2
Starting Oracle Universal Installer
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
AttachHome was successful.
{node2:rdbms}/oracle/rdbms/11.1.0/bin #
A dialog will pop up :
As root, execute root.sh on each node.
In our case, this script is located in /oracle/rdbms/11.1.0
{node1:root}/ # id
uid=0(root) gid=0(system) groups=2(bin),3(sys),7(security),8(cron),10(audit),11(lp)
{node1:root}/ # /oracle/rdbms/11.1.0/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= rdbms
ORACLE_HOME= /oracle/rdbms/11.1.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
{node1:root}/ #
--------------------------------------------------------------{node2:root}/ # id
uid=0(root) gid=0(system) groups=2(bin),3(sys),7(security),8(cron),10(audit),11(lp)
{node2:root}/ # /oracle/rdbms/11.1.0/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= rdbms
ORACLE_HOME= /oracle/rdbms/11.1.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Coming back to this previous screen, just click OK.
End of Installation :
This screen will automatically appear.
Check that it is successful and write down the
URL list of the J2EE applications that have
been deployed (isqlplus, ).
ENV=$HOME/.kshrc
export ENV
#The following line is added by License Use Management installation
export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin
export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORA_CRS_HOME=$CRS_HOME
export ASM_HOME=$ORACLE_BASE/asm/11.1.0
export ORA_ASM_HOME=$ASM_HOME
export ORACLE_HOME=$ORACLE_BASE/rdbms/11.1.0
export
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib:$ORACLE_HOME/lib32:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
export TNS_ADMIN=$ASM_HOME/network/admin
export ORACLE_SID=
if [ -t 0 ]; then
stty intr ^C
fi
Disconnect and reconnect to load the modified $HOME/.profile.
Change permissions to allow the rdbms user to write on directories owned by the asm user.
MANDATORY
{node2:root}/ # rsh node1 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node2 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node1 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node2 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node1 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node2 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node1 chmod -R g+w /oracle/diag
{node2:root}/ # rsh node2 chmod -R g+w /oracle/diag
From node1 :
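The command used to launch the assistant is not shown in the extraction; as rdbms user with the DISPLAY exported, it would typically be :

{node1:rdbms}/ # export DISPLAY=node1:0.0
{node1:rdbms}/ # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/ # $ORACLE_HOME/bin/dbca &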
Operations :
Select the Create a Database option.
Then click Next ...
Node Selection :
Database Templates :
Select General Purpose
Or Custom Database if you want to generate
the creation scripts.
Then click Next ...
11gRAC/ASM/AIX
106 of 393
Database Identification :
Specify the Global Database Name
The SID Prefix will be automatically
updated. (by default it is the Global Database
Name)
For our example : JSC1DB
Then click Next ...
Management Options :
Check Configure the database with Enterprise Manager if you want to use
Database Control (local administration).
Or don't check it if you plan to administer the database using Grid Control
(global network administration).
You can also set Alert and backup :
Storage Options :
Then Click OK
Recovery Configuration :
Select the DiskGroup to be used for the
Flash Recovery Area.
In our case : +DATA_DG1, with a size of 4096 MB.
Database Content :
Select the options needed
Initialization Parameters :
Select the parameters needed
You can click on All Initialization
Parameters to view or modify
Sizing
Character Sets
Connection Mode
About Oracle cluster database instance parameters, we'll have to look at these parameters :

Instance  Name                               Value
--------  ---------------------------------  ------------------------------------------------
*         control_files                      (+DATA_DG1/{DB_NAME}/control01.ctl,
                                             +DATA_DG1/{DB_NAME}/control02.ctl,
                                             +DATA_DG1/{DB_NAME}/control03.ctl)
*         db_create_file_dest                +DATA_DG1
                                             (By default, all files from the database will be
                                             created on the specified ASM diskgroup)
*         db_recovery_file_dest              +DATA_DG1
                                             (Default location for backups and archived logs)
*         remote_listener                    LISTENERS_JSC1DB
JSC1DB1   undo_tablespace                    UNDOTBS1
JSC1DB2   undo_tablespace                    UNDOTBS2
JSC1DB1   local_listener                     LISTENER_JSC1DB1
JSC1DB2   local_listener                     LISTENER_JSC1DB2
*         log_archive_dest_1                 'LOCATION=+DATA_DG1/'
JSC1DB1   instance_number                    1
JSC1DB2   instance_number                    2
JSC1DB1   thread                             1
JSC1DB2   thread                             2
*         db_name                            JSC1DB
*         db_recovery_file_dest_size         4294967296
*         diagnostic_dest                    (ORACLE_BASE)
                                             (Default location for the database trace and log
                                             files; it replaces the USER, BACKGROUND and DUMP
                                             destinations)
*         asm_preferred_read_failure_groups

If you want to set a different location than the default chosen ASM disk group for the control files, you
should modify the value of the control_files parameter.

In $ORACLE_HOME/network/admin, edit the tnsnames.ora file on each node, and add the following lines
(LISTENERS_JSC1DB could be automatically added by DBCA in the tnsnames.ora) :

LISTENERS_JSC1DB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

LISTENER_JSC1DB1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
  )

LISTENER_JSC1DB2 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )
Security Settings :
Select the options needed.
By default, keep the enhanced 11g default security settings.
Then click Next ...
Database Storage :
Check the datafiles organization.
Add one extra log member on each thread now with the assistant.
REMINDER :
Change permissions to allow the rdbms user to write on directories owned by the asm user.
{node2:root}/ # rsh node1 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node2 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node1 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node2 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node1 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node2 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node1 chmod -R g+w /oracle/diag
{node2:root}/ # rsh node2 chmod -R g+w /oracle/diag
Creation Options :
Select the options needed : Create Database.
Summary :
Check the description, and save the HTML summary file if needed.
Then click Ok ...
Click Ok ...
Passwords Management :
Enter password management if you need to change passwords, and unlock
some user accounts that are locked by default (for security purposes).
Execute crs_stat -t (here via the authors' crsstat wrapper) on one node as rdbms user :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                    Target    State
-----------                    ------    -----
ora.JSC1DB.JSC1DB1.inst        OFFLINE   OFFLINE
ora.JSC1DB.JSC1DB2.inst        OFFLINE   OFFLINE
ora.JSC1DB.db                  OFFLINE   OFFLINE
ora.node1.ASM1.asm             ONLINE    ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr  ONLINE    ONLINE on node1
ora.node1.gsd                  ONLINE    ONLINE on node1
ora.node1.ons                  ONLINE    ONLINE on node1
ora.node1.vip                  ONLINE    ONLINE on node1
ora.node2.ASM2.asm             ONLINE    ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr  ONLINE    ONLINE on node2
ora.node2.gsd                  ONLINE    ONLINE on node2
ora.node2.ons                  ONLINE    ONLINE on node2
ora.node2.vip                  ONLINE    ONLINE on node2
{node1:rdbms}/oracle/rdbms #

Starting cluster database

Execute crs_stat -t (crsstat) again on one node as rdbms user :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                    Target    State
-----------                    ------    -----
ora.JSC1DB.JSC1DB1.inst        ONLINE    ONLINE on node1
ora.JSC1DB.JSC1DB2.inst        ONLINE    ONLINE on node2
ora.JSC1DB.db                  ONLINE    ONLINE on node1
ora.node1.ASM1.asm             ONLINE    ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr  ONLINE    ONLINE on node1
ora.node1.gsd                  ONLINE    ONLINE on node1
ora.node1.ons                  ONLINE    ONLINE on node1
ora.node1.vip                  ONLINE    ONLINE on node1
ora.node2.ASM2.asm             ONLINE    ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr  ONLINE    ONLINE on node2
ora.node2.gsd                  ONLINE    ONLINE on node2
ora.node2.ons                  ONLINE    ONLINE on node2
ora.node2.vip                  ONLINE    ONLINE on node2
{node1:rdbms}/oracle/rdbms #
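The command issued between the two listings above is not shown in the extraction; it would typically be :

{node1:rdbms}/oracle/rdbms # srvctl start database -d JSC1DB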
On node1 :
{node1:root}/ # su - rdbms
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/scripts
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/adump
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/hdump
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/dpdump
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/pfile

On node2 :
{node2:root}/ # su - rdbms
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/scripts
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/adump
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/hdump
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/dpdump
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/pfile
From node1 :
{node1:root}/ # su asm
{node1:asm}/home/asm #
{node1:asm}/home/asm # export ORACLE_SID=+ASM1
{node1:asm}/home/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Mar 28 10:28:54 2008
Copyright (c) 1982, 2007, Oracle.
HEADER_STATU PATH
------------ --------------------------------
MEMBER       /dev/ASM_disk1
MEMBER       /dev/ASM_disk2
MEMBER       /dev/ASM_disk3
MEMBER       /dev/ASM_disk4
MEMBER       /dev/ASM_disk5
MEMBER       /dev/ASM_disk6
CANDIDATE    /dev/ASM_disk7
CANDIDATE    /dev/ASM_disk8
CANDIDATE    /dev/ASM_disk9
CANDIDATE    /dev/ASM_disk10
CANDIDATE    /dev/ASM_disk11
CANDIDATE    /dev/ASM_disk12

12 rows selected.

SQL>
You can use the disks that have a disk header status of CANDIDATE or FORMER :
REDUNDANCY
Diskgroup created.
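Only a fragment of the CREATE DISKGROUP statement for DATA_DG2 remains above. Based on the disk header status after creation (disks 7 to 11 become MEMBER), it was of this shape; the redundancy level and failure group split are assumptions :

SQL> CREATE DISKGROUP DATA_DG2
     NORMAL REDUNDANCY
     FAILGROUP GROUP1 DISK '/dev/ASM_disk7', '/dev/ASM_disk8', '/dev/ASM_disk9'
     FAILGROUP GROUP2 DISK '/dev/ASM_disk10', '/dev/ASM_disk11';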
SQL> select HEADER_STATUS,PATH from v$asm_disk order by PATH;

HEADER_STATU PATH
------------ --------------------------------
MEMBER       /dev/ASM_disk1
MEMBER       /dev/ASM_disk2
MEMBER       /dev/ASM_disk3
MEMBER       /dev/ASM_disk4
MEMBER       /dev/ASM_disk5
MEMBER       /dev/ASM_disk6
MEMBER       /dev/ASM_disk7
MEMBER       /dev/ASM_disk8
MEMBER       /dev/ASM_disk9
MEMBER       /dev/ASM_disk10
MEMBER       /dev/ASM_disk11
CANDIDATE    /dev/ASM_disk12

12 rows selected.

SQL>
SQL> select INST_ID, NAME, STATE, OFFLINE_DISKS from gv$asm_diskgroup;

   INST_ID NAME                           STATE       OFFLINE_DISKS
---------- ------------------------------ ----------- -------------
         2 DATA_DG1                       MOUNTED                 0
         2 DATA_DG2                       DISMOUNTED              0
         1 DATA_DG1                       MOUNTED                 0
         1 DATA_DG2                       MOUNTED                 0

6 rows selected.

SQL>
We need to mount DATA_DG2 on the second node !!!
On node2 :
{node2:root}/oracle/diag # su - asm
{node2:asm}/oracle/asm # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Mar 28 13:41:23 2008
Copyright (c) 1982, 2007, Oracle.
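The connect and mount commands are missing from the extraction; in substance they would be :

SQL> connect / as sysasm
Connected.
SQL> alter diskgroup DATA_DG2 mount;
Diskgroup altered.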
NAME                           STATE       OFFLINE_DISKS
------------------------------ ----------- -------------
DATA_DG1                       MOUNTED                 0
DATA_DG2                       MOUNTED                 0
DATA_DG1                       MOUNTED                 0
DATA_DG2                       MOUNTED                 0

6 rows selected.

SQL>
3.
Create the necessary sub-directories for the JSC2DB database within the chosen ASM diskgroup :
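The actual commands were lost in extraction; one way to create such directories is with asmcmd, as in this sketch (directory names are assumptions matching the database name) :

{node1:asm}/oracle/asm # export ORACLE_SID=+ASM1
{node1:asm}/oracle/asm # asmcmd -p
ASMCMD [+] > mkdir +DATA_DG2/JSC2DB
ASMCMD [+] > ls +DATA_DG2
JSC2DB/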
4.
Cluster-wide parameters for database "JSC2DB" :
##############################################################################
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
##############################################################################
###########################################
# Archive
###########################################
log_archive_dest_1='LOCATION=+DATA_DG2/'
log_archive_format=%t_%s_%r.dbf
###########################################
# Cache and I/O
###########################################
db_block_size=8192
###########################################
# Cluster Database
###########################################
cluster_database_instances=2
#remote_listener=LISTENERS_JSC2DB
###########################################
# Cursors and Library Cache
###########################################
open_cursors=300
###########################################
# Database Identification
###########################################
db_domain=""
db_name=JSC2DB
###########################################
# File Configuration
###########################################
db_create_file_dest=+DATA_DG2
db_recovery_file_dest=+DATA_DG2
db_recovery_file_dest_size=4294967296
###########################################
# Miscellaneous
###########################################
asm_preferred_read_failure_groups=""
compatible=11.1.0.0.0
diagnostic_dest=/oracle
memory_target=262144000
###########################################
# Processes and Sessions
###########################################
processes=150
###########################################
# Security and Auditing
###########################################
audit_file_dest=/oracle/admin/JSC2DB/adump
audit_trail=db
remote_login_passwordfile=exclusive
###########################################
# Shared Server
###########################################
dispatchers="(PROTOCOL=TCP) (SERVICE=JSC2DBXDB)"
###########################################
# Cluster Database
###########################################
db_unique_name=JSC2DB
control_files=/oracle/admin/JSC1DB/scripts/tempControl.ctl
JSC2DB1.instance_number=1
JSC2DB2.instance_number=2
JSC2DB2.thread=2
JSC2DB1.thread=1
JSC2DB1.undo_tablespace=UNDOTBS1
JSC2DB2.undo_tablespace=UNDOTBS2
5.
From node1 :
{node1:root}/ # su - rdbms
{node1:rdbms}/home/rdbms #
{node1:rdbms}/home/rdbms # export ORACLE_SID=JSC2DB1
{node1:rdbms}/home/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/home/rdbms # sqlplus /nolog
connect / as sysdba
6.
From node1 :
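The body of step 6 was lost in extraction; before the CREATE DATABASE of step 7, the instance must be started in NOMOUNT, in substance (the pfile path is an assumption based on the admin directories created above) :

SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup nomount pfile='/oracle/admin/JSC2DB/scripts/init.ora';
ORACLE instance started.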
7.
Create the Database (ASM diskgroup(s) must be created and mounted on all nodes) :
From node1 :
SQL> CREATE DATABASE "JSC2DB"
MAXINSTANCES 32
MAXLOGHISTORY 1
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE SIZE 300M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE SIZE 120M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 20M AUTOEXTEND ON NEXT 640K
MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K
MAXSIZE UNLIMITED
CHARACTER SET WE8ISO8859P15
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 SIZE 51200K, GROUP 2 SIZE 51200K;
8.
From node1 :
9.
From node1 :
10.
From node1 :
11.
Edit init<SID>.ora and set appropriate values for the 2nd instance on the 2nd Node:
instance_name=JSC2DB2
instance_number=2
local_listener=LISTENER_JSC2DB2
thread=2
undo_tablespace=UNDOTBS2
12.
From node1 :
SQL> connect SYS/oracle as SYSDBA
SQL> shutdown immediate;
SQL> startup mount pfile="/oracle/admin/JSC1DB/scripts/init.ora";
SQL> alter database archivelog;
SQL> alter database open;
SQL> select group# from v$log where group# =3;
SQL> select group# from v$log where group# =4;
SQL> select group# from v$log where group# =6;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 SIZE 51200K,
GROUP 4 SIZE 51200K,
GROUP 6 SIZE 51200K;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;
SQL> create spfile='+DATA_DG2/JSC2DB/spfileJSC2DB.ora' FROM
pfile='/oracle/admin/JSC2DB/scripts/init.ora';
SQL> shutdown immediate;
From node1 :
From node1 :
{node1:rdbms}/oracle/rdbms/11.1.0/bin # orapwd
file=/oracle/rdbms/11.1.0/dbs/orapwJSC2DB1 password=oracle force=y
From node2 :
{node2:rdbms}/oracle/rdbms/11.1.0/bin # orapwd
file=/oracle/rdbms/11.1.0/dbs/orapwJSC2DB2 password=oracle force=y
14.
Start the second Instance. (Assuming that your cluster configuration is up and running)
15.
16.
17.
Check /etc/oratab
The file should contain a reference to the database name, not to the instance name.
The last field should always be N in a RAC environment, to avoid two instances of the same name being
started.
{node1:rdbms}/oracle/rdbms # cat /etc/oratab
+ASM1:/oracle/asm/11.1.0:N
JSC1DB:/oracle/rdbms/11.1.0:N
JSC2DB:/oracle/rdbms/11.1.0:N
{node1:rdbms}/oracle/rdbms #
18.
From node1 :
{node1:rdbms}/home/rdbms # srvctl add database -d JSC2DB -o /oracle/rdbms/11.1.0
{node1:rdbms}/home/rdbms # srvctl add instance -d JSC2DB -i JSC2DB1 -n node1
{node1:rdbms}/home/rdbms # srvctl add instance -d JSC2DB -i JSC2DB2 -n node2
{node1:rdbms}/home/rdbms #
{node1:rdbms}/oracle/rdbms # crsstat JSC2DB
HA Resource                Target    State
-----------                ------    -----
ora.JSC2DB.JSC2DB1.inst    ONLINE    ONLINE on node1
ora.JSC2DB.JSC2DB2.inst    ONLINE    ONLINE on node2
ora.JSC2DB.db              ONLINE    ONLINE on node1
{node1:rdbms}/oracle/rdbms #
Update the RDBMS unix user .profile
To be done on each node for the rdbms (in our case) or oracle unix user.
vi the $HOME/.profile file in the rdbms user's home directory. Add the entries shown in bold blue color :
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
export PATH
if [ -s "$MAIL" ]           # This is at Shell startup.  In normal
then echo "$MAILMSG"        # operation, the Shell checks
fi                          # periodically.
ENV=$HOME/.kshrc
export ENV
#The following line is added by License Use Management installation
export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin
export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORA_CRS_HOME=$CRS_HOME
export ASM_HOME=$ORACLE_BASE/asm/11.1.0
export ORA_ASM_HOME=$ASM_HOME
export ORACLE_HOME=$ORACLE_BASE/rdbms/11.1.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib:$ORACLE_HOME/lib32:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
export TNS_ADMIN=$ASM_HOME/network/admin
export ORACLE_SID=JSC1DB1
if [ -t 0 ]; then
stty intr ^C
fi
Disconnect from the rdbms user, and reconnect to load the modified $HOME/.profile.
THEN on node2 :
..
export ORACLE_SID=JSC1DB2
..
Administer and Check
{node1:rdbms}/oracle/rdbms # crs_stat -p ora.JSC1DB.JSC1DB1.inst
NAME=ora.JSC1DB.JSC1DB1.inst
TYPE=application
ACTION_SCRIPT=/oracle/rdbms/11.1.0/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=300
DESCRIPTION=CRS application for Instance
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=node1
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=ora.node1.ASM1.asm
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=900
STOP_TIMEOUT=300
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crs_stat -p ora.JSC1DB.db
NAME=ora.JSC1DB.db
TYPE=application
ACTION_SCRIPT=/crs/11.1.0/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=600
DESCRIPTION=CRS application for the Database
FAILOVER_DELAY=0
FAILURE_INTERVAL=60
FAILURE_THRESHOLD=1
HOSTING_MEMBERS=
OPTIONAL_RESOURCES=
PLACEMENT=balanced
REQUIRED_RESOURCES=
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=600
START_TIMEOUT=600
STOP_TIMEOUT=600
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
{node1:rdbms}/oracle/rdbms #
JSC1DB1.thread=1
JSC1DB1.undo_tablespace='UNDOTBS1'
JSC1DB2.undo_tablespace='UNDOTBS2'
{node1:rdbms}/oracle/rdbms #
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_NODE1
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date                19-FEB-2008 13:59:26
Uptime                    1 days 7 hr. 15 min. 36 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "JSC1DB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DBXDB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DB_XPT" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
The command completed successfully
{node1:rdbms}/oracle/rdbms #
Check also :
{node1:rdbms}/oracle/rdbms # lsnrctl services
About Oracle cluster database instance parameters, we'll have to look at these parameters :

Instance  Name             Value
--------  ---------------  ------------------
*         remote_listener  LISTENERS_JSC1DB
JSC1DB1   local_listener   LISTENER_JSC1DB1
JSC1DB2   local_listener   LISTENER_JSC1DB2

remote_listener from node1 and node2 MUST BE THE SAME, and ENTRIES MUST BE PRESENT in the
tnsnames.ora on each node.
local_listener from node1 and node2 are different, and ENTRIES MUST BE PRESENT in the
tnsnames.ora on each node.
local_listener from node1 and node2 are not the ones defined in the listener.ora files on each node.

From node1 :

NAME              TYPE     VALUE
----------------- -------- ------------------------------
local_listener    string
remote_listener   string   LISTENERS_JSC1DB

From node2 :

NAME              TYPE     VALUE
----------------- -------- ------------------------------
local_listener    string
remote_listener   string   LISTENERS_JSC1DB

ONLY the remote_listener parameter is set; the local_listener parameter for each instance is not mandatory as we're using
port 1521 (automatic registration) for the default node listeners. But if not using port 1521, it is mandatory to set them
to have proper load balancing and failover of user connections.
$ORACLE_HOME/network/admin/tnsnames.ora, which will be in /oracle/asm/11.1.0/network/admin.
MANDATORY !!!
On all nodes, for the JSC1DB database, you should have or add the following lines :
LISTENERS_JSC1DB =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
LISTENER_JSC1DB1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
)
LISTENER_JSC1DB2 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
If remote_listener and local_listener are not set, empty, or not well set, you should issue the following
commands :

From node1 :

NAME              TYPE     VALUE
----------------- -------- ------------------------------
local_listener    string   LISTENER_JSC1DB1
remote_listener   string   LISTENERS_JSC1DB

From node2 :

NAME              TYPE     VALUE
----------------- -------- ------------------------------
local_listener    string   LISTENER_JSC1DB2
remote_listener   string   LISTENERS_JSC1DB
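The ALTER SYSTEM commands themselves were lost in extraction; in substance, using the tnsnames.ora aliases defined above, they would be :

SQL> alter system set remote_listener='LISTENERS_JSC1DB' scope=both sid='*';
SQL> alter system set local_listener='LISTENER_JSC1DB1' scope=both sid='JSC1DB1';
SQL> alter system set local_listener='LISTENER_JSC1DB2' scope=both sid='JSC1DB2';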
15.5.1 Creation through the srvctl command
-d  Database name
-s  Service name
-a  for services, a list of available instances; this list cannot include preferred instances
-P  for services, the TAF preconnect policy : NONE, BASIC, PRECONNECT
-r  for services, a list of preferred instances; this list cannot include available instances
-u  updates the preferred or available list for the service to support the specified instance.
    Only one instance may be specified with the -u switch. Instances that already support the
    service should not be included.
Examples for a 4-node RAC cluster,
with a cluster database named ORA and
4 instances named ORA1, ORA2, ORA3 and ORA4.
Add a STD_BATCH service to an existing database with preferred instances (-r) and available instances (-a).
Use basic failover to the available instances.
srvctl add service -d RAC -s STD_BATCH -r ORA1,ORA2 -a ORA3,ORA4
Add a STD_BATCH service to an existing database with preferred instances in list one and
available instances in list two. Use preconnect at the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r ORA1,ORA2 -a ORA3,ORA4 -P PRECONNECT
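The exact commands used for this cookbook's services are not in the extraction; based on the preferred/available placement observed in the listings below, they were presumably of this shape :

srvctl add service -d JSC1DB -s OLTP  -r JSC1DB1,JSC1DB2
srvctl add service -d JSC1DB -s BATCH -r JSC1DB2 -a JSC1DB1
srvctl add service -d JSC1DB -s DISCO -r JSC1DB1 -a JSC1DB2
srvctl start service -d JSC1DB -s OLTP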
{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
ora.JSC1DB.db  application    ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #
To query after OLTP service creation :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
ora....DB1.srv application    OFFLINE   OFFLINE
ora....DB2.srv application    OFFLINE   OFFLINE
ora....OLTP.cs application    OFFLINE   OFFLINE
ora.JSC1DB.db  application    ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #

To query and get the full name of the OLTP service resources (entries added by the
creation of Oracle services at Oracle Clusterware level) :

HA Resource                    Target    State
-----------                    ------    -----
ora.JSC1DB.OLTP.JSC1DB1.srv    OFFLINE   OFFLINE
ora.JSC1DB.OLTP.JSC1DB2.srv    OFFLINE   OFFLINE
ora.JSC1DB.OLTP.cs             OFFLINE   OFFLINE
{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
ora....DB1.srv application    ONLINE    ONLINE    node1
ora....DB2.srv application    ONLINE    ONLINE    node2
ora....OLTP.cs application    ONLINE    ONLINE    node1
ora.JSC1DB.db  application    ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #
To query after stopping the OLTP service :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
ora....DB1.srv application    OFFLINE   OFFLINE
ora....DB2.srv application    OFFLINE   OFFLINE
ora....OLTP.cs application    OFFLINE   OFFLINE
ora.JSC1DB.db  application    ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #
THEN YOU MUST add the following lines in the tnsnames.ora file on all nodes.
The srvctl tool does not add them when creating the service.
Looking at the description, we can see :
OLTP =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = OLTP)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
A good method is to modify the tnsnames.ora on one node, then remote-copy the
modified tnsnames.ora file to all other nodes, making sure that all nodes have the
same content.
Let's stop the OLTP service to see if we can still connect through SQL*Plus.
From node2 :
{node2:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s OLTP
{node2:rdbms}/oracle/rdbms # crsstat | grep OLTP
ora.JSC1DB.OLTP.JSC1DB1.srv    OFFLINE   OFFLINE
ora.JSC1DB.OLTP.JSC1DB2.srv    OFFLINE   OFFLINE
ora.JSC1DB.OLTP.cs             OFFLINE   OFFLINE
{node2:rdbms}/oracle/rdbms #
{node2:rdbms}/oracle/rdbms # sqlplus 'sys/password@oltp as sysdba'
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 09:10:18 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus
{node2:rdbms}/oracle/rdbms #
From node1 :
{node1:rdbms}/oracle/rdbms # sqlplus 'sys/password@oltp as sysdba'
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 09:09:35 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus
{node1:rdbms}/oracle/rdbms #
Conclusion : if the OLTP service is stopped, no new connections will be allowed on any database
instance, even though the node listeners are still up and running !!!
Restarting the OLTP service will enable the connections :
{node2:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s OLTP
To query after BATCH service creation :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....DB2.srv application    OFFLINE   OFFLINE
ora....ATCH.cs application    OFFLINE   OFFLINE
ora....B1.inst application    ONLINE    ONLINE    node1
ora....B2.inst application    ONLINE    ONLINE    node2
ora....DB1.srv application    ONLINE    ONLINE    node1
ora....DB2.srv application    ONLINE    ONLINE    node2
ora....OLTP.cs application    ONLINE    ONLINE    node1
ora.JSC1DB.db  application    ONLINE    ONLINE    node2
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
{node1:rdbms}/oracle/rdbms #

After creation, the BATCH service resources are OFFLINE :

HA Resource                     Target    State
-----------                     ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv    OFFLINE   OFFLINE
ora.JSC1DB.BATCH.cs             OFFLINE   OFFLINE

Once started, they come ONLINE on node2 :

HA Resource                     Target    State
-----------                     ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE    ONLINE on node2
ora.JSC1DB.BATCH.cs             ONLINE    ONLINE on node2
THEN YOU MUST add the following lines in the tnsnames.ora file on all nodes.
The srvctl tool does not add them when creating the service.
Looking at the description, we can see :
BATCH =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = BATCH)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
A good method is to modify the tnsnames.ora on one node, then remote-copy the
modified tnsnames.ora file to all other nodes, making sure that all nodes have the
same content.
SQL>
Test connection using the BATCH connect string from node1 :
{node1:rdbms}/oracle/rdbms # sqlplus 'sys/password@batch as sysdba'
SQL>
As you can see, connections will only go to instance JSC1DB2 (node2), as this is the preferred instance
for the BATCH service; no connections will go to instance JSC1DB1 (node1) unless node2 fails, in which case
BATCH will use the available instance as set in the configuration.
Let's stop the BATCH service to see if we can still connect through SQL*Plus.
From node2 :
{node2:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s BATCH
{node2:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                     Target    State
-----------                     ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv    OFFLINE   OFFLINE
ora.JSC1DB.BATCH.cs             OFFLINE   OFFLINE
{node2:rdbms}/oracle/rdbms #
Let's crash node2, which hosts the BATCH service's preferred instance JSC1DB2, to see what will
happen to existing connections.
The BATCH service MUST BE started !!!
From node2 :
{node2:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                     Target    State
-----------                     ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv    ONLINE    ONLINE on node2
ora.JSC1DB.BATCH.cs             ONLINE    ONLINE on node2
{node2:rdbms}/oracle/rdbms #
SQL>
From node2 :
{node2:root}/ # reboot
Rebooting . . .
SQL>
With node2 down, its VIP has failed over to node1, all other node2 resources are OFFLINE, and the
BATCH service has failed over to node1 :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                      Target    State
-----------                      ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv     ONLINE    ONLINE on node1
ora.JSC1DB.BATCH.cs              ONLINE    ONLINE on node1
ora.JSC1DB.JSC1DB1.inst          ONLINE    ONLINE on node1
ora.JSC1DB.JSC1DB2.inst          ONLINE    OFFLINE
ora.JSC1DB.OLTP.JSC1DB1.srv      ONLINE    ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv      ONLINE    OFFLINE
ora.JSC1DB.OLTP.cs               ONLINE    ONLINE on node1
ora.JSC1DB.db                    ONLINE    ONLINE on node1
ora.node1.ASM1.asm               ONLINE    ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr    ONLINE    ONLINE on node1
ora.node1.gsd                    ONLINE    ONLINE on node1
ora.node1.ons                    ONLINE    ONLINE on node1
ora.node1.vip                    ONLINE    ONLINE on node1
ora.node2.ASM2.asm               ONLINE    OFFLINE
ora.node2.LISTENER_NODE2.lsnr    ONLINE    OFFLINE
ora.node2.gsd                    ONLINE    OFFLINE
ora.node2.ons                    ONLINE    OFFLINE
ora.node2.vip                    ONLINE    ONLINE on node1
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                      Target    State
-----------                      ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv     ONLINE    ONLINE on node1
ora.JSC1DB.BATCH.cs              ONLINE    ONLINE on node1
When node2 is back ONLINE, Oracle Clusterware will start; the VIP from node2, which had failed over to
node1, will get back to its home node (node2); ASM instance 2 and the database instance will restart, as
will the other resources linked to node2.

{node2:rdbms}/oracle/rdbms # crsstat
HA Resource                      Target    State
-----------                      ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv     ONLINE    ONLINE on node1
ora.JSC1DB.BATCH.cs              ONLINE    ONLINE on node1
ora.JSC1DB.JSC1DB1.inst          ONLINE    ONLINE on node1
ora.JSC1DB.JSC1DB2.inst          ONLINE    ONLINE on node2
ora.JSC1DB.OLTP.JSC1DB1.srv      ONLINE    ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv      ONLINE    ONLINE on node2
ora.JSC1DB.OLTP.cs               ONLINE    ONLINE on node1
ora.JSC1DB.db                    ONLINE    ONLINE on node1
ora.node1.ASM1.asm               ONLINE    ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr    ONLINE    ONLINE on node1
ora.node1.gsd                    ONLINE    ONLINE on node1
ora.node1.ons                    ONLINE    ONLINE on node1
ora.node1.vip                    ONLINE    ONLINE on node1
ora.node2.ASM2.asm               ONLINE    ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr    ONLINE    ONLINE on node2
ora.node2.gsd                    ONLINE    ONLINE on node2
ora.node2.ons                    ONLINE    ONLINE on node2
ora.node2.vip                    ONLINE    ONLINE on node2
{node2:rdbms}/oracle/rdbms #

BUT the service status will not change unless we decide to switch back to JSC1DB2 as the preferred
instance for the BATCH service.
To relocate the service on its preferred instance JSC1DB2 :
{node2:rdbms}/oracle/rdbms # srvctl relocate service -d JSC1DB -s BATCH -i JSC1DB1 -t JSC1DB2 -f
{node2:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s BATCH
Service BATCH is running on instance(s) JSC1DB2
{node2:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource                      Target    State
-----------                      ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv     ONLINE    ONLINE on node2
ora.JSC1DB.BATCH.cs              ONLINE    ONLINE on node1
SQL>
We switched back to the preferred instance.
To query after DISCO service creation :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                      Target    State
-----------                      ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv     ONLINE    ONLINE on node2
ora.JSC1DB.BATCH.cs              ONLINE    ONLINE on node1
ora.JSC1DB.DISCO.JSC1DB1.srv     OFFLINE   OFFLINE
ora.JSC1DB.DISCO.cs              OFFLINE   OFFLINE
ora.JSC1DB.JSC1DB1.inst          ONLINE    ONLINE on node1
ora.JSC1DB.JSC1DB2.inst          ONLINE    ONLINE on node2
ora.JSC1DB.OLTP.JSC1DB1.srv      ONLINE    ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv      ONLINE    ONLINE on node2
ora.JSC1DB.OLTP.cs               ONLINE    ONLINE on node1
ora.JSC1DB.db                    ONLINE    ONLINE on node1
ora.node1.ASM1.asm               ONLINE    ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr    ONLINE    ONLINE on node1
ora.node1.gsd                    ONLINE    ONLINE on node1
ora.node1.ons                    ONLINE    ONLINE on node1
ora.node1.vip                    ONLINE    ONLINE on node1
ora.node2.ASM2.asm               ONLINE    ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr    ONLINE    ONLINE on node2
ora.node2.gsd                    ONLINE    ONLINE on node2
ora.node2.ons                    ONLINE    ONLINE on node2
ora.node2.vip                    ONLINE    ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat DISCO
HA Resource                      Target    State
-----------                      ------    -----
ora.JSC1DB.DISCO.JSC1DB1.srv     OFFLINE   OFFLINE
ora.JSC1DB.DISCO.cs              OFFLINE   OFFLINE
After starting the DISCO service :

HA Resource                      Target    State
-----------                      ------    -----
ora.JSC1DB.DISCO.JSC1DB1.srv     ONLINE    ONLINE on node1
ora.JSC1DB.DISCO.cs              ONLINE    ONLINE on node1

After stopping the DISCO service :

HA Resource                      Target    State
-----------                      ------    -----
ora.JSC1DB.DISCO.JSC1DB1.srv     OFFLINE   OFFLINE
ora.JSC1DB.DISCO.cs              OFFLINE   OFFLINE
THEN YOU MUST add the following lines in the tnsnames.ora file on all nodes.
The srvctl tool does not add them when creating the service.
Looking at the description, we can see :
DISCO =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DISCO)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
A good method is to modify the tnsnames.ora on one node, then remote-copy the
modified tnsnames.ora file to all other nodes, making sure that all nodes have the
same content.
OLTP running on preferred database instances JSC1DB1 on node1 and JSC1DB2 on node2.
BATCH running on preferred database instance JSC1DB2 on node2.
DISCO running on preferred database instance JSC1DB1 on node1.
{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                      Target    State
-----------                      ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv     ONLINE    ONLINE on node2
ora.JSC1DB.BATCH.cs              ONLINE    ONLINE on node1
ora.JSC1DB.DISCO.JSC1DB1.srv     ONLINE    ONLINE on node1
ora.JSC1DB.DISCO.cs              ONLINE    ONLINE on node1
ora.JSC1DB.JSC1DB1.inst          ONLINE    ONLINE on node1
ora.JSC1DB.JSC1DB2.inst          ONLINE    ONLINE on node2
ora.JSC1DB.OLTP.JSC1DB1.srv      ONLINE    ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv      ONLINE    ONLINE on node2
ora.JSC1DB.OLTP.cs               ONLINE    ONLINE on node1
ora.JSC1DB.db                    ONLINE    ONLINE on node1
ora.node1.ASM1.asm               ONLINE    ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr    ONLINE    ONLINE on node1
ora.node1.gsd                    ONLINE    ONLINE on node1
ora.node1.ons                    ONLINE    ONLINE on node1
ora.node1.vip                    ONLINE    ONLINE on node1
ora.node2.ASM2.asm               ONLINE    ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr    ONLINE    ONLINE on node2
ora.node2.gsd                    ONLINE    ONLINE on node2
ora.node2.ons                    ONLINE    ONLINE on node2
ora.node2.vip                    ONLINE    ONLINE on node2
{node1:rdbms}/oracle/rdbms #

Service resources only :

HA Resource                      Target    State
-----------                      ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv     ONLINE    ONLINE on node2
ora.JSC1DB.DISCO.JSC1DB1.srv     ONLINE    ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB1.srv      ONLINE    ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv      ONLINE    ONLINE on node2
At database level, you should check the database parameter service_names, ensuring that the services
created are listed :
From node1 :
From node2 :
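The query outputs were lost in extraction; a check of this shape, run on each node, is meant here — the VALUE shown is illustrative for a node running all three services :

SQL> show parameter service_names

NAME            TYPE     VALUE
--------------- -------- ------------------------------
service_names   string   OLTP, BATCH, DISCO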
If service_names is not set, empty, or incorrectly set, you should issue the following commands.
From node1 :
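The exact commands are not reproduced here ; a minimal sketch, assuming the services were created through srvctl (stopping and restarting a service makes the instance re-register it, which updates service_names) :

{node1:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s OLTP
{node1:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s OLTP

Then check that services are registered with the listener, using lsnrctl status. On node1 with listener_node1 :

{node1:rdbms}/oracle/rdbms # lsnrctl status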
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_NODE1
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date                22-FEB-2008 07:57:23
Uptime                    1 days 13 hr. 48 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "BATCH" has 1 instance(s).
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "DISCO" has 1 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Service "JSC1DB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DBXDB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DB_XPT" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "OLTP" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
The command completed successfully
{node1:rdbms}/oracle/rdbms #
Check that services are registered with the command lsnrctl status. On node2 with listener_node2 :
{node2:rdbms}/oracle/rdbms # lsnrctl status
LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 23-FEB-2008 21:55:48
Copyright (c) 1991, 2007, Oracle.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_NODE2
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date                22-FEB-2008 11:11:32
Uptime                    1 days 10 hr. 44 min. 17 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File         /oracle/diag/tnslsnr/node2/listener_node2/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.182)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.82)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Instance "+ASM2", status READY, has 2 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Instance "+ASM2", status READY, has 2 handler(s) for this service...
Service "BATCH" has 1 instance(s).
Instance "JSC1DB2", status READY, has 2 handler(s) for this service...
Service "DISCO" has 1 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Service "JSC1DB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 2 handler(s) for this service...
Service "JSC1DBXDB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DB_XPT" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 2 handler(s) for this service...
Service "OLTP" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Let's check what happens at database level when node2 fails.
Before the failure of one node, crs_stat -t will show :
{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                                   Target    State
-----------                                   ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv                  ONLINE    ONLINE on node2
ora.JSC1DB.BATCH.cs                           ONLINE    ONLINE on node2
{node1:rdbms}/oracle/rdbms #
If node2 fails, or reboots for any reason : what should we see on node1 ?
After the failure, on node1 : as BATCH was preferred on JSC1DB2 and available on JSC1DB1,
after the node2 failure the BATCH service is switched to the available instance JSC1DB1.
{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 22:14:50 2008
Copyright (c) 1982, 2007, Oracle.
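The SQL that followed is not reproduced here ; a minimal sketch for verifying that BATCH is now served by JSC1DB1 (v$active_services lists the services currently running on the instance) :

SQL> connect / as sysdba
SQL> select name from v$active_services;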
The ONS, GSD, listener, ASM2 instance, and JSC1DB2 instance of node2 are switched to OFFLINE state.
The OLTP service is switched to OFFLINE state on node2, but is still in ONLINE state on node1.
{node1:rdbms}/oracle/rdbms # crsstat
HA Resource                                   Target    State
-----------                                   ------    -----
ora.JSC1DB.BATCH.JSC1DB2.srv                  ONLINE    ONLINE on node1
ora.JSC1DB.BATCH.cs                           ONLINE    ONLINE on node1
ora.JSC1DB.JSC1DB1.inst                       ONLINE    ONLINE on node1
ora.JSC1DB.JSC1DB2.inst                       ONLINE    OFFLINE
ora.JSC1DB.OLTP.JSC1DB1.srv                   ONLINE    ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv                   ONLINE    OFFLINE
ora.JSC1DB.OLTP.cs                            ONLINE    ONLINE on node1
ora.JSC1DB.db                                 ONLINE    ONLINE on node1
ora.node1.ASM1.asm                            ONLINE    ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr                 ONLINE    ONLINE on node1
ora.node1.gsd                                 ONLINE    ONLINE on node1
ora.node1.ons                                 ONLINE    ONLINE on node1
ora.node1.vip                                 ONLINE    ONLINE on node1
ora.node2.ASM2.asm                            ONLINE    OFFLINE
ora.node2.LISTENER_NODE2.lsnr                 ONLINE    OFFLINE
ora.node2.gsd                                 ONLINE    OFFLINE
ora.node2.ons                                 ONLINE    OFFLINE
ora.node2.vip                                 ONLINE    ONLINE on node1
{node1:rdbms}/oracle/rdbms #

Note that ora.node2.vip is ONLINE on node1 : the node2 VIP has failed over to node1, so client connection attempts against node2-vip fail over quickly instead of waiting for a TCP timeout.
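Once node2 is back, its instance restarts, but services do not fail back automatically. A minimal sketch for moving BATCH back to its preferred instance (the relocate syntax is shown in the srvctl reference later in this document) :

{node1:rdbms}/oracle/rdbms # srvctl relocate service -d JSC1DB -s BATCH -i JSC1DB1 -t JSC1DB2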
$ORACLE_HOME/network/admin/listener.ora, which will be in /oracle/asm/11.1.0/network/admin :
LISTENER resources, one per node :

Target    State
------    -----
ONLINE    ONLINE on node1
ONLINE    ONLINE on node2
$ORACLE_HOME/network/admin/tnsnames.ora, which will be in /oracle/asm/11.1.0/network/admin :
REMOTE_LISTENER
for ASM instances :
+ASM1
+ASM2
(Added by netca)
LOCAL_LISTENER
for ASM instance :
+ASM2
LISTENER_+ASM2 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
(Added manually)
LOCAL_LISTENER
for ASM instance :
+ASM1
LISTENER_+ASM1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
)
(Added manually)
Load Balancing
Failover (Session
and Select)
ASM =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = +ASM)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
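One quick way to validate that the ASM entry resolves correctly from a node (tnsping only tests address resolution and listener reachability, not authentication) :

{node1:rdbms}/oracle/rdbms # tnsping ASM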
REMOTE_LISTENER
for Database
instances :
JSC1DB1
JSC1DB2
LISTENERS_JSC1DB =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
(Added by netca/dbca)
LOCAL_LISTENER
for database instance :
JSC1DB2
(Added manually)
LISTENER_JSC1DB2 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
LOCAL_LISTENER
for database instance :
JSC1DB1
(Added manually)
This connection string will only allow connecting to database instance JSC1DB2 :
NO Failover
NO Load Balancing
LISTENER_JSC1DB1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
)
JSC1DB2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = JSC1DB)
(INSTANCE_NAME = JSC1DB2)
)
)
(Added by netca/dbca)
This connection string will only allow connecting to database instance JSC1DB1 :
NO Failover
NO Load Balancing
JSC1DB1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = JSC1DB)
(INSTANCE_NAME = JSC1DB1)
)
)
(Added by netca/dbca)
This connection string will allow connecting to database instances JSC1DB1 and JSC1DB2 :
NO Failover
NO Load Balancing
(Added by netca)
JSC1DB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = JSC1DB)
)
)
Without lines such as :
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
Load Balancing will be achieved, BUT NO Failover will be ensured !!!
So the entry for JSC1DB should be modified as below :
This connection string will allow connecting to database instances JSC1DB1 and JSC1DB2 :
Failover (Session and Select)
Load Balancing
(Added by netca)
JSC1DB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = JSC1DB)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
Connection String for the OLTP Database cluster Service, with :
Load Balancing
Failover (Session and Select)
Preferred, Available and Not Used instances managed through Oracle Clusterware
(Added manually)

Connection String for the BATCH Database cluster Service, with :
Load Balancing
Failover (Session and Select)
Preferred, Available and Not Used instances managed through Oracle Clusterware
(Added manually)

Connection String for the DISCO Database cluster Service, with :
Load Balancing
Failover (Session and Select)
Preferred, Available and Not Used instances managed through Oracle Clusterware
(Added manually)
OLTP =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = OLTP)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
BATCH =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = BATCH)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
DISCO =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DISCO)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
{node1:rdbms}/oracle/rdbms/11.1.0/network/admin #
IMPORTANT : what should be on the client side, in the local tnsnames.ora file ?
For load balancing and failover across all nodes without going through Oracle Clusterware services, I will have to
add the JSC1DB entry to my local client tnsnames.ora.
The name of the connection string JSC1DB could be replaced by another name, as long as the description, including the SERVICE_NAME, is not changed !!!
JSC1DB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = JSC1DB)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
For load balancing and failover across all nodes going through Oracle Clusterware services, I will have to
add the OLTP entry to my local client tnsnames.ora.
The name of the connection string OLTP could be replaced by another name, as long as the description, including the SERVICE_NAME, is not changed !!!
OLTP =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = OLTP)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
For example, changing the name of the connection string OLTP to TEST will look like this :
TEST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = OLTP)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
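A minimal client-side check of such an entry ; the scott/tiger account is only an assumption for illustration, any valid database account will do :

$ tnsping TEST
$ sqlplus scott/tiger@TEST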
For instance 2 as preferred instance and failover to available instance 1 through Oracle Clusterware services, I
will have to add the BATCH entry to my local client tnsnames.ora.
For instance 1 as preferred instance and failover to available instance 2 through Oracle Clusterware services, I
will have to add the DISCO entry to my local client tnsnames.ora.
IMPORTANT : on the client side, make sure IP/hostname resolution for the VIPs is handled either by DNS or by the
hosts file, even if you replaced the VIP hostname with the real IP. Otherwise you may end up with intermittent
connection problems.
15.7 About DBCONSOLE
15.7.1
Checking DBCONSOLE
Since 10gRAC R2, and with 11gRAC as well, dbconsole is only configured on the first node at database creation.
When using DBCONSOLE, each database will have its own dbconsole !!!
Look at these Metalink Notes :
Subject: How to manage DB Control 10.2 for RAC Database with emca Doc ID: Note:395162.1
Subject: Troubleshooting Database Control Startup Issues Doc ID: Note:549079.1
How to start/stop the dbconsole agent :
When using DBCONSOLE, the oracle agent starts and stops automatically with the dbconsole.
If needed, it is still possible to start/stop the agent using the emctl tool, as follows :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl start agent
OR
{node1:rdbms}/oracle/rdbms # emctl stop agent
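To check the agent afterwards, emctl also accepts a status verb :

{node1:rdbms}/oracle/rdbms # emctl status agent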
Commands to start DBCONSOLE :
{node1:rdbms}/oracle/rdbms # emctl start dbconsole

Commands to stop DBCONSOLE :
{node1:rdbms}/oracle/rdbms # emctl stop dbconsole

Check status of dbconsole :
{node1:rdbms}/oracle/rdbms # emctl status dbconsole
Look at these Metalink Notes to discover dbconsole, or to get help solving any issues :
Subject: How to configure dbconsole to display information from the ASM DISKGROUPS Doc ID: Note:329581.1
Subject: Em Dbconsole 10.1.0.X Does Not Discover Rac Database As A Cluster Doc ID: Note:334546.1
Subject: DBConsole Shows Everything Down Doc ID: Note:332865.1
Subject: How To Config Dbconsole (10.1.x or 10.2) EMCA With Another Hostname Doc ID: Note:336017.1
Subject: How To Change The DB Console Language To English Doc ID: Note:370178.1
Subject: Dbconsole Fails After A Physical Move Of Machine To New Domain/Hostnam Doc ID: Note:401943.1
Subject: How to Troubleshoot Failed Login Attempts to DB Control Doc ID: Note:404820.1
Subject: How to manage DB Control 10.2 for RAC Database with emca Doc ID: Note:395162.1
Subject: How To Drop, Create And Recreate DB Control In A 10g Database Doc ID: Note:278100.1
Subject: Overview Of The EMCA Commands Available for DB Control 10.2 Installations Doc ID: Note:330130.1
Subject: How to change the password of the 10g database user dbsnmp Doc ID: Note:259387.1
Subject: EM 10.2 DBConsole Displaying Wrong / Old Information Doc ID: Note:336179.1
Subject: How To Configure A Listener Target From Grid Control 10g Doc ID: Note:427422.1
For each RAC node, test whether dbconsole is working or not !!!

IF dbconsole is reachable with http://node1:1158/em
   (log in as user sys, connected as sysdba, with its password)
THEN dbconsole is OK on node 1
ELSE troubleshoot !!!
END
Troubleshooting Tips

The dbconsole may not start after doing an install through Oracle Enterprise Manager (OEM) and selecting a database.

The solution is to :
- Edit the file : ${ORACLE_HOME}/<hostname>_${ORACLE_SID}/sysman/config/emd.properties
- Locate the entry where the EMD_URL is set. This entry should have the format :
  EMD_URL=http://<hostname>:%EM_SERVLET_PORT%/emd/main
- If you see the string %EM_SERVLET_PORT% in the entry, then replace that complete string with an
  unused port number that is not defined in the /etc/services file. If this string is missing and no port
  number is in its place, then insert an unused port number (not defined in /etc/services)
  between the "http://<hostname>:" and the "/emd/main" strings.
- Use the command emctl start dbconsole to start the dbconsole after making this change.

For example :
EMD_URL=http://myhostname.us.oracle.com:5505/emd/main
15.7.2
Moving from dbconsole to Grid Control
You must use either DBCONSOLE or GRID CONTROL, not both at the same time !!!
To move from Locally Managed (DBConsole) to Centrally Managed (Grid Control) :
You must have an existing Grid Control, or install a new Grid Control on a separate LPAR or server. Grid
Control is available on AIX 5L, but could be installed on any supported operating system.
You must install the Oracle Grid Agent on each RAC node (AIX LPAR), with the same unix user as the Oracle
Clusterware owner, and in the same oraInventory.
Subject: How to change a 10.2.0.x Database from Locally Managed to Centrally Managed Doc ID:
Note:400476.1
For database
{node1:rdbms}/oracle/rdbms # srvctl start database -d ASMDB
{node1:rdbms}/oracle/rdbms # srvctl stop database -d ASMDB
For instance 1
{node1:rdbms}/oracle/rdbms # srvctl start instance -d ASMDB -i ASMDB1   to start the database instance
{node1:rdbms}/oracle/rdbms # srvctl stop instance -d ASMDB -i ASMDB1    to stop the database instance
For instance 2
{node2:rdbms}/oracle/rdbms # srvctl start instance -d ASMDB -i ASMDB2   to start the database instance
{node2:rdbms}/oracle/rdbms # srvctl stop instance -d ASMDB -i ASMDB2    to stop the database instance
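To check the result of these commands, the matching status verbs can be used with the same -d / -i options :

{node1:rdbms}/oracle/rdbms # srvctl status database -d ASMDB
{node1:rdbms}/oracle/rdbms # srvctl status instance -d ASMDB -i ASMDB1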
From node2 :
{node2:rdbms}/oracle/rdbms # export ORACLE_SID=+ASM2
{node2:rdbms}/oracle/rdbms # sqlplus /nolog
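The SQL*Plus steps that followed are not reproduced here ; a minimal sketch, assuming the intent is to stop and restart the +ASM2 instance manually (in 11g, ASM administration uses the SYSASM privilege) :

SQL> connect / as sysasm
SQL> shutdown immediate
SQL> startup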
Check Oracle Cluster Registry integrity.
As oracle user, execute ocrcheck :

{node1:crs}/crs # ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     306972
         Used space (kbytes)      :       6540
         Available space (kbytes) :     300432
         ID                       : 1928316120
         Device/File Name         : /dev/ocr_disk1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/ocr_disk2
                                    Device/File integrity check succeeded
Export online Oracle Cluster Registry content (you must not edit/modify the exported file).
As root user :

{node1:crs}/crs/11.1.0/bin # su
root's Password:
{node1:root}/crs/11.1.0/bin # ocrconfig -export /oracle/ocr_export4.dmp -s online
{node1:root}/crs/11.1.0/bin # ls -la /oracle/*.dmp
-rw-r--r--   1 root   system   106420 Jan 30 18:30 /oracle/ocr_export.dmp
{node1:crs}/crs/11.1.0/bin #
View the OCR automatic periodic backups managed by Oracle Clusterware :

{node1:crs}/crs/11.1.0 # ocrconfig -showbackup
node1  2008/03/25 09:24:19  /crs/11.1.0/cdata/crs_cluster/backup00.ocr
node1  2008/03/25 05:24:18  /crs/11.1.0/cdata/crs_cluster/backup01.ocr
node1  2008/03/25 01:24:18  /crs/11.1.0/cdata/crs_cluster/backup02.ocr
node1  2008/03/23 13:24:15  /crs/11.1.0/cdata/crs_cluster/day.ocr
node2  2008/03/14 06:45:44  /crs/11.1.0/cdata/crs_cluster/week.ocr
node2  2008/03/16 17:13:39  /crs/11.1.0/cdata/crs_cluster/backup_20080316_171339.ocr
node1  2008/02/24 08:09:21  /crs/11.1.0/cdata/crs_cluster/backup_20080224_080921.ocr
node1  2008/02/24 08:08:48  /crs/11.1.0/cdata/crs_cluster/backup_20080224_080848.ocr
{node1:crs}/crs/11.1.0 #
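In 11g, an on-demand OCR backup can also be taken with the -manualbackup option (listed in the ocrconfig help later in this chapter), as root :

{node1:root}/crs/11.1.0/bin # ocrconfig -manualbackup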
Content Type: TEXT/XHTML
Creation Date: 05-DEC-2003
Last Revision Date: 15-MAR-2005
PURPOSE
-------
This document is to provide additional information on CRS (Cluster Ready Services)
in 10g Real Application Clusters.

SCOPE & APPLICATION
-------------------
This document is intended for RAC Database Administrators and Oracle support
engineers.

CRS and 10g REAL APPLICATION CLUSTERS
-------------------------------------
CRS (Cluster Ready Services) is a new feature for 10g Real Application Clusters
that provides a standard cluster interface on all platforms and performs
new high availability operations not available in previous versions.

CRS KEY FACTS
-------------
Prior to installing CRS and 10g RAC, there are some key points to remember about
CRS and 10g RAC:
- CRS is REQUIRED to be installed and running prior to installing 10g RAC.
- CRS can either run on top of the vendor clusterware (such as Sun Cluster,
HP Serviceguard, IBM HACMP, TruCluster, Veritas Cluster, Fujitsu Primecluster,
etc...) or can run without the vendor clusterware. The vendor clusterware
was required in 9i RAC but is optional in 10g RAC.
- The CRS HOME and ORACLE_HOME must be installed in DIFFERENT locations.
- Shared Location(s) or devices for the Voting File and OCR (Oracle
Configuration Repository) file must be available PRIOR to installing CRS. The
voting file should be at least 20MB and the OCR file should be at least 100MB.
- CRS and RAC require that the following network interfaces be configured prior
to installing CRS or RAC:
- Public Interface
- Private Interface
- Virtual (Public) Interface
For more information on this, see Note 264847.1.
- The root.sh script at the end of the CRS installation starts the CRS stack.
If your CRS stack does not start, see Note 240001.1.
- Only one set of CRS daemons can be running per RAC node.
- On Unix, the CRS stack is run from entries in /etc/inittab with "respawn".
- If there is a network split (nodes lose communication with each other), one
or more nodes may reboot automatically to prevent data corruption.
- The supported method to start CRS is booting the machine. MANUAL STARTUP OF
THE CRS STACK IS NOT SUPPORTED UNTIL 10.1.0.4 OR HIGHER.
- The supported method to stop is to shut down the machine or use "init.crs stop".
- Killing CRS daemons is not supported unless you are removing the CRS
installation via Note 239998.1 because flag files can become mismatched.
- For maintenance, go to single user mode at the OS.
Once the stack is started, you should be able to see all of the daemon processes
with a ps -ef command:
[rac1]/u01/home/beta> ps -ef | grep crs
  oracle  1363   999  0 11:23:21  ?  0:00 /u01/crs_home/bin/evmlogger.bin -o /u01
  oracle   999     1  0 11:21:39  ?  0:01 /u01/crs_home/bin/evmd.bin
    root  1003     1  0 11:21:39  ?  0:01 /u01/crs_home/bin/crsd.bin
  oracle  1002     1  0 11:21:39  ?  0:01 /u01/crs_home/bin/ocssd.bin
- Runs as Oracle.
- Restarted automatically on failure
$ORA_CRS_HOME/evm/init - PID and lock files for EVM. Core files for EVM should
also be written here. Note 1812.1 could be used to debug these.
$ORA_CRS_HOME/srvm/log - Log files for OCR.
[Example crs_stat -t output from the Metalink note : 13 resources (database, instances, services and nodeapps), all with Target ONLINE and State ONLINE, spread across nodes opcbsol1 and opcbsol2.]
EXAMPLES:
Status of the database, all instances and all services.
srvctl status database -d ORACLE -v
Status of named instances with their current services.
srvctl status instance -d ORACLE -i RAC01, RAC02 -v
Status of a named service.
srvctl status service -d ORACLE -s ERP -v
Status of all nodes supporting database applications.
srvctl status node
START CRS RESOURCES
srvctl start database -d <database-name> [-o < start-options>]
[-c <connect-string> | -q]
srvctl start instance -d <database-name> -i <instance-name>
[,<instance-name-list>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start service -d <database-name> [-s <service-name>[,<service-name-list>]]
[-i <instance-name>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start nodeapps -n <node-name>
srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]
EXAMPLES:
Start the database with all enabled instances.
srvctl start database -d ORACLE
Start named instances.
srvctl start instance -d ORACLE -i RAC03, RAC04
Start named services. Dependent instances are started as needed.
srvctl start service -d ORACLE -s CRM
Start a service at the named instance.
srvctl start service -d ORACLE -s CRM -i RAC04
Start node applications.
srvctl start nodeapps -n myclust-4
EXAMPLES:
Stop the database, all instances and all services.
srvctl stop database -d ORACLE
Stop named instances, first relocating all existing services.
srvctl stop instance -d ORACLE -i RAC03,RAC04
Stop the service.
srvctl stop service -d ORACLE -s CRM
Stop the service at the named instances.
srvctl stop service -d ORACLE -s CRM -i RAC04
Stop node applications. Note that instances and services also stop.
srvctl stop nodeapps -n myclust-4
OPTIONS: -A, -a, -m, -n, -o, -P, -r, -s, -u
EXAMPLES:
Add a new node:
srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A
139.184.201.1/255.255.255.0/hme0
Add a new database.
srvctl add database -d ORACLE -o $ORACLE_HOME
Add named instances to an existing database.
srvctl add instance -d ORACLE -i RAC01 -n myclust-1
srvctl add instance -d ORACLE -i RAC02 -n myclust-2
srvctl add instance -d ORACLE -i RAC03 -n myclust-3
Add a service to an existing database with preferred instances (-r) and
available instances (-a). Use basic failover to the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04
Add a service to an existing database with preferred instances in list one and
available instances in list two. Use preconnect at the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04
-P PRECONNECT
REMOVE CRS RESOURCES
srvctl remove database -d <database-name>
srvctl remove instance -d <database-name> [-i <instance-name>]
srvctl remove service -d <database-name> -s <service-name> [-i <instance-name>]
srvctl remove nodeapps -n <node-name>
EXAMPLES:
Remove the applications for a database.
srvctl remove database -d ORACLE
Remove the applications for named instances of an existing database.
srvctl remove instance -d ORACLE -i RAC03
srvctl remove instance -d ORACLE -i RAC04
Remove the service.
srvctl remove service -d ORACLE -s STD_BATCH
Remove the service from the instances.
srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
Remove all node applications from a node.
srvctl remove nodeapps -n myclust-4
MODIFY CRS RESOURCES
srvctl modify database -d <name> [-n <db_name] [-o <ohome>] [-m <domain>]
[-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}]
[-s <start_options>]
srvctl modify instance -d <database-name> -i <instance-name> -n <node-name>
srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
srvctl modify service -d <database-name> -s <service_name> -i <instance-name>
-t <instance-name> [-f]
srvctl modify service -d <database-name> -s <service_name> -i <instance-name>
-r [-f]
srvctl modify nodeapps -n <node-name> [-A <address-description> ] [-x]
OPTIONS:
-i <instance-name> -t <instance-name> the instance name (-i) is replaced by the
instance name (-t)
-i <instance-name> -r the named instance is modified to be a preferred instance
-A address-list for VIP application, at node level
-s <asm_inst_name> add or remove ASM dependency
EXAMPLES:
Modify an instance to execute on another node.
srvctl modify instance -d ORACLE -n myclust-4
Modify a service to execute on another node.
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC01 -t RAC02
Modify an instance to be a preferred instance for a service.
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC02 -r
RELOCATE SERVICES
srvctl relocate service -d <database-name> -s <service-name> [-i <instance-name>] -t <instance-name> [-f]
EXAMPLES:
Relocate a service from one instance to another
srvctl relocate service -d ORACLE -s CRM -i RAC04 -t RAC01
ENABLE CRS RESOURCES (The resource may be up or down to use this function)
srvctl enable database -d <database-name>
srvctl enable instance -d <database-name> -i <instance-name> [,<instance-name-list>]
srvctl enable service -d <database-name> -s <service-name>] [, <service-name-list>]
[-i <instance-name>]
EXAMPLES:
Enable the database.
srvctl enable database -d ORACLE
Enable the named instances.
srvctl enable instance -d ORACLE -i RAC01, RAC02
Enable the service.
srvctl enable service -d ORACLE -s ERP,CRM
Enable the service at the named instance.
srvctl enable service -d ORACLE -s CRM -i RAC03
DISABLE CRS RESOURCES (The resource must be down to use this function)
srvctl disable database -d <database-name>
srvctl disable instance -d <database-name> -i <instance-name> [,<instance-name-list>]
srvctl disable service -d <database-name> -s <service-name>] [,<service-name-list>]
[-i <instance-name>]
EXAMPLES:
Disable the database globally.
srvctl disable database -d ORACLE
Disable the named instances.
srvctl disable instance -d ORACLE -i RAC01, RAC02
Disable the service globally.
srvctl disable service -d ORACLE -s ERP,CRM
Disable the service at the named instance.
srvctl disable service -d ORACLE -s CRM -i RAC03,RAC04
For more information on this see the Oracle 10g Real Application Clusters
Administrator's Guide - Appendix B
RELATED DOCUMENTS
Oracle 10g Real Application Clusters Installation and Configuration
Oracle 10g Real Application Clusters Administrator's Guide
19.2 About RAC ...
282036.1 - Minimum software versions and patches required to Support Oracle Products on ...
283743.1 - Pre-Install checks for 10g RDBMS on AIX
220970.1 - RAC: Frequently Asked Questions
183408.1 - Raw Devices and Cluster Filesystems With Real Application Clusters
293750.1 - 10g Installation on Aix 5.3, Failed with Checking operating system version mu...
296856.1 - Configuring the IBM AIX 5L Operating System for the Oracle 10g VIP
294336.1 - Changing the check interval for the Oracle 10g VIP
276434.1 - Modifying the VIP of a Cluster Node
298895.1 - Modifying the default gateway address used by the Oracle 10g VIP
264847.1 - How to Configure Virtual IPs for 10g RAC
240052.1 - 10g Manual Database Creation in Oracle (Single Instance and RAC)
ocrconfig : to backup, export, import, repair ... the OCR (Oracle Cluster Registry) contents.
-upgrade [<user> [<group>]]            - Upgrade cluster registry from previous version
-downgrade [-version <version string>] - Downgrade cluster registry to the specified version
-backuploc <dirname>                   - Configure periodic backup location
-showbackup [auto|manual]              - Show backup information
-manualbackup                          - Perform OCR backup
-restore <filename>                    - Restore from physical backup
-replace ocr|ocrmirror [<filename>]    - Add/replace/remove a OCR device/file
-overwrite                             - Overwrite OCR configuration on disk
-repair ocr|ocrmirror <filename>       - Repair local OCR configuration
-help                                  - Print out this help information

Note:
A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
you have file creation privileges in the above directory before
running this tool.
{node1:crs}/crs #
srvctl config database
srvctl config database -d <name> [-a] [-t]
srvctl config service -d <name> [-s <service_name>] [-a] [-S <level>]
srvctl config nodeapps -n <node_name> [-a] [-g] [-s] [-l] [-h]
srvctl config asm -n <node_name>
srvctl config listener -n <node_name>

Usage: srvctl disable database -d <name>
Usage: srvctl disable instance -d <name> -i "<inst_name_list>"
Usage: srvctl disable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl disable asm -n <node_name> [-i <inst_name>]

Usage: srvctl enable database -d <name>
Usage: srvctl enable instance -d <name> -i "<inst_name_list>"
Usage: srvctl enable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl enable asm -n <node_name> [-i <inst_name>]
Usage: srvctl modify database -d <name> [-n <db_name>] [-o <ohome>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY |
LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl modify instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
Usage: srvctl modify service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <name> -s <service_name> -n -i <preferred_list> [-a <available_list>] [-f]
Usage: srvctl modify asm -n <node_name> -i <asm_inst_name> [-o <oracle_home>] [-p <spfile>]
Usage: srvctl relocate service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl remove {database|instance|service|nodeapps|asm|listener} ...
Usage: srvctl setenv {database|instance|service|nodeapps} ...
Usage: srvctl start {database|instance|service|nodeapps|asm|listener} ...
Usage: srvctl status {database|instance|service|nodeapps|asm} ...
Usage: srvctl stop {database|instance|service|nodeapps|asm|listener} ...
SYNTAX (overview):
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
cluvfy comp { -list | -help }
cluvfy comp <component-name> <component-specific options> [-verbose]
{node1:crs}/crs #
SYNTAX (for Stages):
cluvfy stage -post hwos -n <node_list> [ -s <storageID_list> ] [-verbose]
cluvfy stage -pre cfs -n <node_list> -s <storageID_list> [-verbose]
cluvfy stage -post cfs -n <node_list> -f <file_system> [-verbose]
cluvfy stage -pre crsinst -n <node_list>
    [-r { 10gR1 | 10gR2 | 11gR1 } ]
    [ -c <ocr_location> ] [ -q <voting_disk> ]
    [ -osdba <osdba_group> ]
    [ -orainv <orainventory_group> ] [-verbose]
cluvfy stage -post crsinst -n <node_list> [-verbose]
cluvfy stage -pre dbinst -n <node_list>
    [-r { 10gR1 | 10gR2 | 11gR1 } ]
    [ -osdba <osdba_group> ] [-verbose]
cluvfy stage -pre dbcfg -n <node_list> -d <oracle_home> [-verbose]
{node1:crs}/crs #
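For example, a post-Clusterware-installation check across the two nodes used in this cookbook could be run as follows :

{node1:crs}/crs # cluvfy stage -post crsinst -n node1,node2 -verbose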
SYNTAX (for Components):
cluvfy comp nodereach -n <node_list> [ -srcnode <node> ] [-verbose]
cluvfy comp nodecon -n <node_list> [ -i <interface_list> ] [-verbose]
cluvfy comp cfs [ -n <node_list> ] -f <file_system> [-verbose]
cluvfy comp ssa [ -n <node_list> ] [ -s <storageID_list> ] [-verbose]
cluvfy comp space [ -n <node_list> ] -l <storage_location>
    -z <disk_space> {B|K|M|G} [-verbose]
cluvfy comp sys [ -n <node_list> ] -p { crs | database } [-r { 10gR1 | 10gR2 | 11gR1 } ]
    [ -osdba <osdba_group> ] [ -orainv <orainventory_group> ] [-verbose]
cluvfy comp clu [ -n <node_list> ] [-verbose]
cluvfy comp clumgr [ -n <node_list> ] [-verbose]
cluvfy comp ocr [ -n <node_list> ] [-verbose]
cluvfy comp crs [ -n <node_list> ] [-verbose]
cluvfy comp nodeapp [ -n <node_list> ] [-verbose]
cluvfy comp admprv [ -n <node_list> ] [-verbose]
    -o user_equiv [-sshonly]
    -o crs_inst [-orainv <orainventory_group> ]
    -o db_inst [-osdba <osdba_group> ]
    -o db_config -d <oracle_home>
cluvfy comp peer [ -refnode <node> ] -n <node_list>
    [-r { 10gR1 | 10gR2 | 11gR1 } ]
    [ -orainv <orainventory_group> ] [ -osdba <osdba_group> ] [-verbose]
{node1:crs}/crs #
Each node MUST have the same network interface layout and usage. Identify your PUBLIC and PRIVATE network interfaces, and fill in the table on the next page.
Node     Network Interface name   Host Type   Defined hostname   Assigned IP   Observation   Done ?
-------  -----------------------  ----------  -----------------  ------------  ------------  ------
Node 1   en____                   Public
                                  Virtual
         en____                   Private
Node 2   en____                   Public
                                  Virtual
         en____                   Private
Node 3   en____                   Public
                                  Virtual
         en____                   Private
Node 4   en____                   Public
                                  Virtual
         en____                   Private
Node 5   en____                   Public
                                  Virtual
         en____                   Private
Node 6, Node ... : repeat the same three rows (Public / Virtual / Private) for each additional node.
LUNs                              Node 1                      Node 2                      Node 3                      Node 4
Number   Device Name              hdisk    Major  Minor       hdisk    Major  Minor       hdisk    Major  Minor       hdisk    Major  Minor
                                           Num.   Num.                 Num.   Num.                 Num.   Num.                 Num.   Num.
------   -----------------        ------   -----  -----       ------   -----  -----       ------   -----  -----       ------   -----  -----
L0       /dev/example_disk        hdisk1   30     2           hdisk0   30     2           hdisk1   30     2           hdisk1   30     2