AIX Tuning For Oracle DB
IBM Power Systems Technical University | October 10-14 | Fontainebleau Miami Beach | Miami, FL
Copyright IBM Corporation 2011. Materials may not be reproduced in whole or in part without the prior written permission of IBM.
Agenda

CPU
  Power 7
Memory
  AIX VMM tuning
  Active Memory Expansion
IO
  Storage considerations
  AIX LVM striping
  Disk / Fibre Channel driver optimization
  Virtual disk / Fibre Channel driver optimization
  AIX mount options
  Asynchronous IO
Power 7 (Socket/Chip/Core/Threads)
(Diagram: hardware view of one POWER7 chip with its cores, each core running 4 SMT hardware threads, and the corresponding software view in AIX.)
In AIX, when SMT is enabled, each SMT thread is seen as a logical CPU:
32 sockets = 32 chips = 256 cores = 1024 SMT threads = 1024 AIX logical CPUs
Use SMT4
  Gives a CPU performance boost by handling more concurrent threads in parallel.

Disable HW prefetching
  Usually improves performance for database workloads on large SMP Power systems (> Power 750):
  # dscrctl -n -b -s 1
  (dynamically disables HW memory prefetching and keeps this configuration across reboots)
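For reference, a quick way to check and set the SMT mode from AIX (smtctl is the standard AIX command; this sequence is an illustrative sketch, not part of the original slide):

  # smtctl                 (display the current SMT mode and the logical CPUs per core)
  # smtctl -t 4 -w now     (switch to SMT4 immediately)
  # smtctl -t 4 -w boot    (keep SMT4 for the next boot; smtctl indicates if a bosboot is required)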
AIX VMM tuning

1 - AIX is started; applications load computational pages into memory. Like any UNIX system, AIX tries to take advantage of the free memory by using it as a file cache to reduce IO on the physical drives.
2 - Activity increases; the DB needs more memory but there are no free pages available. LRUD (the AIX page stealer) starts to free pages in memory.
3 - On older versions of AIX (< AIX 6.1) with default settings, LRUD may page out computational pages instead of removing only pages from the file system cache (FS CACHE).

Objective:
Tune the VMM to protect computational pages (programs, SGA, PGA) from being paged out and force LRUD to steal pages from the FS cache only.
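As a hedged sketch (not from the original slide), these are the VMM tunables commonly cited to reach that objective; on AIX 6.1/7.1 most of them are already the defaults or restricted tunables, so verify before changing anything:

  # vmo -p -o minperm%=3 -o maxperm%=90 -o maxclient%=90
  # vmo -p -o lru_file_repage=0      (AIX 5.3: steal file pages before computational pages)
  # vmstat -v | egrep "numperm|numclient|file pages"   (monitor the file-cache share of memory)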
Memory : use AIX dynamic LPAR (DLPAR) jointly with Oracle dynamic memory allocation (AMM)

Initial configuration:
  Memory_max_size = 18 GB
  Real memory = 12 GB (AIX + free, plus SGA + PGA with memory_target = 8 GB)

Scenario:
The Oracle tuning advisor indicates that SGA + PGA need to be increased to 11 GB. memory_target can be increased dynamically to 11 GB, but real memory is only 12 GB, so it needs to be increased as well.

Final configuration:
  Memory_max_size = 18 GB
  Memory allocated to the system has been increased dynamically, using AIX DLPAR.
  Memory allocated to Oracle (SGA and PGA) has been increased on the fly.
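A minimal sketch of the two sides of that change (the HMC managed-system and LPAR names and the 4 GB DLPAR quantity are illustrative assumptions; the 11G value comes from the scenario above):

  # On the HMC: dynamically add 4 GB of memory to the LPAR
  chhwres -r mem -m p750_sys1 -o a -p oracle_lpar -q 4096

  # In the Oracle instance (AMM enabled), from SQL*Plus as SYSDBA: grow SGA + PGA on the fly
  SQL> ALTER SYSTEM SET memory_target = 11G SCOPE = BOTH;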
Active Memory Expansion (AME) is a new POWER7 feature that expands a system's effective memory capacity by dynamically compressing real memory. It is activated at the LPAR level and is transparent to applications. AME's goal is to improve system memory usage: it allows the global system throughput to be increased and/or the memory-per-core ratio required by applications to be reduced, with a low impact on performance.
Memory config (sample, test 0): Mem Size: 120 GB, AME factor: disabled

TEST               0               1               2
Nb CPU             24              24              24
CPU consumption    avg: 16.3 cpu   avg: 16.8 cpu   avg: 17.5 cpu
The impact of AME on batch duration is very low (< 10%), with little CPU overhead (~7%), even with three times less memory.
The POWER7+ processor embeds on-chip hardware compression: expect lower CPU consumption for even more compressed memory.
Note: this is an illustrative scenario based on a sample workload. The data represents measured results in a controlled lab environment; your results may vary.
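A hedged sketch of how AME is usually sized and monitored on AIX (amepat and lparstat are the standard tools; the interval and sample counts are illustrative):

  # amepat 10 6       (AME Planning and Advisory Tool: 10-minute samples, 6 samples, suggests expansion factors)
  # lparstat -c 5 3   (once AME is enabled on the LPAR: compression activity and CPU overhead)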
IO : Database Layout
Having a good storage configuration is a key point:
  - Because disk is the slowest part of the infrastructure
  - Reconfiguration can be difficult and time consuming

Stripe And Mirror Everything (S.A.M.E.) approach (Oracle recommendation):
  - Goal is to balance I/O activity across all disks, loops, adapters, etc.
  - Avoid/eliminate I/O hotspots
  - Manual file-by-file data placement is time consuming, resource intensive and iterative

Additional advice to implement SAME:
  - Apply the SAME strategy to data and indexes
  - If possible, separate redo logs (+ archive logs)
Storage: RAID-5 vs. RAID-10 performance comparison

(Table: relative HW-striping performance by I/O profile — sequential read, sequential write, random read, random write.)

With enterprise-class storage (with a large cache), RAID-5 performance is comparable to RAID-10 for most customer workloads.
Consider RAID-10 for workloads with a high percentage of random write activity (> 25%) and high I/O access densities (peak > 50%).
Use RAID-5 or RAID-10 to create striped LUNs.
If possible, minimize the number of LUNs per RAID array to avoid contention on the physical disks.
AIX LVM striping

(Diagram: LUNs built on the storage HW striping are presented to AIX as hdisks, grouped into a Volume Group, with Logical Volumes striped across the hdisks.)

1. LUNs are striped across physical disks (stripe size of the physical RAID: ~64k, 128k, 256k).
2. LUNs are seen as hdisk devices on the AIX server.
3. Create AIX Volume Group(s) (VG) with LUNs from multiple arrays.
4. Stripe the Logical Volumes across the hdisks (stripe size: 8M, 16M, 32M or 64M) => each read/write access to the LV is well balanced across the LUNs and uses the maximum number of physical disks for best performance (see the sketch below).
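A minimal sketch of what steps 3 and 4 could look like (VG/LV names, the number of partitions and the 16 MB strip size are illustrative assumptions):

  # Scalable volume group built from LUNs coming from different arrays
  mkvg -S -y oradatavg hdisk2 hdisk3 hdisk4 hdisk5

  # Logical volume of 512 partitions, striped across the 4 hdisks with a 16 MB strip size
  mklv -y oradatalv -t jfs2 -S 16M oradatavg 512 hdisk2 hdisk3 hdisk4 hdisk5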
Disk driver optimization: ODM definitions

1. Check whether the definition of your disk subsystem is present in the ODM. If the description shown in the output of lsdev -Cc disk contains the word "Other", AIX does not have a correct definition of your disk device in the ODM and uses a generic device definition.

   # lsdev -Cc disk

   In general, a generic device definition provides far from optimal performance because it does not properly customize the hdisk device (example: hdisks are created with queue_depth=1). Contact your vendor or go to their web site to download the correct ODM definition for your storage subsystem: it will set up the hdisks properly, according to your hardware, for optimal performance.

2. If AIX is connected to the storage subsystem with several Fibre Channel cards for performance, don't forget to install a multipath device driver or path control module:
   - sdd or sddpcm for IBM DS6000/DS8000
   - powerpath for EMC disk subsystems
   - hdlm for Hitachi
   - etc.
LVM pbufs

(Diagram: each LVM physical volume in the volume group has its own pbuf pool.)

1. Each LVM physical volume has a physical buffer pool (pbuf).
2. The # vmstat -v command helps you detect a lack of pbufs.
3. If there is a lack of pbufs, two solutions:
   - Add more LUNs (this will add more pbufs), or
   - Increase the pbuf size: # lvmo -v <vg_name> -o pv_pbuf_count=XXX
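As a hedged illustration of step 2 (counter and option names as they appear on recent AIX levels; the VG name is illustrative):

  # vmstat -v | grep "blocked with no pbuf"    (IOs blocked for lack of a pbuf; should not keep growing)
  # lvmo -a -v oradatavg                       (per-VG pbuf statistics, including pv_pbuf_count)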
hdisk queue_depth

(Diagram: iostat -D output — queue: avgtime 2.2 ms; read/write: avgserv 0.2 ms.)

1. Each AIX hdisk has a queue whose size is set by the queue_depth attribute. This parameter sets the number of parallel requests that can be sent to the physical disk.
2. To know whether you have to increase queue_depth, use iostat -D and monitor avgserv and avgtime:
   - If avgserv < 2-3 ms => the storage behaves well (it can handle more load),
   - and avgtime > 1 ms => the disk queue is full and IOs wait to be queued,
   => INCREASE the hdisk queue depth: # chdev -l hdiskXX -a queue_depth=YYY
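A hedged example of checking those two fields for a given disk (disk name, interval and count are illustrative):

  # iostat -D hdisk4 5 3
  (read/write "avgserv" = average service time seen from the storage, should stay in the low ms;
   queue "avgtime" = average time spent waiting in the hdisk queue, should stay close to 0 ms)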
Fibre Channel adapter tuning: num_cmd_elems

(Diagram: hdisks with their pbufs, MPIO/sddpcm multipathing, and the fcs0/fcs1 adapter queues (num_cmd_elems) toward the external storage.)

1. Each FC HBA has a queue, num_cmd_elems. This queue plays the same role for the HBA as queue_depth does for the disk.
2. Rule of thumb: num_cmd_elems = (sum of the queue_depths) / number of HBAs
   Changing num_cmd_elems: # chdev -l fcsX -a num_cmd_elems=YYY
3. You can also change max_xfer_size=0x200000 and lg_term_dma=0x800000 with the same command.
   These changes use more memory and must be made with caution; check first with: # fcstat fcsX
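A hedged sketch of how the adapter settings are typically verified (the fcstat counter names below are those of typical output; treat the grep patterns as illustrative):

  # lsattr -El fcs0 | egrep "num_cmd_elems|max_xfer_size|lg_term_dma"
  # fcstat fcs0 | egrep -i "No Command Resource Count|No DMA Resource Count"
  (if these two counters keep increasing, num_cmd_elems or the DMA area is undersized)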
Virtual SCSI

Virtual I/O helps reduce hardware costs by sharing disk drives.

(Diagram: client micro-partitions accessing DS8000/EMC LUNs through the VIOS FC adapters and the SAN.)

Client side:
  The micro-partition sees disks as vSCSI (Virtual SCSI) devices.
  Virtual SCSI devices are added to the partition via the HMC.
  LUNs on the VIOS are accessed as vSCSI disks.
  The VIOS must be active for the client to boot.

VIOS side:
  The VIOS owns the physical disk resources.
  LVM-based storage on the VIO Server.
  Physical storage can be SCSI or FC, local or remote (SAN).
Virtual SCSI: disk and adapter tuning (client LPAR)

(Diagram: on the client, an AIX MPIO hdisk with its queue_depth behind vscsi0; through the hypervisor to vhost0 on the VIOS, which owns the backing hdisk, its queue_depth and the fcs0 queue.)

# lsdev -Cc disk
hdisk0 Available Virtual SCSI Disk Drive

The storage driver cannot be installed on the LPAR. Default queue_depth=3 !!! Bad performance:
  # chdev -l hdisk0 -a queue_depth=20
  then monitor svctime / wait time with nmon to adjust the queue depth.
  You have to set the same queue_depth for the source hdisk on the VIOS.

HA (dual-VIOS configuration): change vscsi_err_recov to fast_fail:
  # chdev -l vscsi0 -a vscsi_err_recov=fast_fail

Performance: no tuning can be made on the vscsi adapter itself. Each vscsi adapter can handle 512 command elements (2 are reserved for the adapter and 3 for each vdisk), so use the following formula to find the number of disks you can attach behind a vscsi adapter:
  Nb_luns = (512 - 2) / (Q + 3), with Q = the queue_depth of each disk.
If more disks are needed => add another vscsi adapter.
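Worked example (using the queue_depth=20 value suggested above): Nb_luns = (512 - 2) / (20 + 3) = 510 / 23 ≈ 22, so about 22 vdisks with queue_depth=20 can sit behind one vscsi adapter before an additional adapter is needed.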
On the VIOS:
  Monitor the VIOS FC activity with nmon (interactive: press 'a' or '^'; recording: -^ option).
  Adapt num_cmd_elems according to the sum of the queue_depths; check with: # fcstat fcsX
  HA: change the following parameters:
    # chdev -l fscsi0 -a fc_err_recov=fast_fail
    # chdev -l fscsi0 -a dyntrk=yes
N-Port ID Virtualization (NPIV)

(Diagram: POWER6 or later — client LPARs with virtual FC adapters, a VIOS sharing its physical FC adapters, the SAN, DS8000/EMC disks and a tape library.)

Client side:
  LPARs own virtual FC adapters and have direct visibility on the SAN (zoning/masking).
  Virtual adapters from multiple operating systems can share the same physical adapter.
  Tape library support.

VIOS side:
  The VIOS owns the physical FC adapters and virtualizes FC to the client partitions.
  The VIOS FC adapter supports multiple World Wide Port Names / source identifiers: the physical adapter appears as multiple virtual adapters to the SAN / end-point devices.
  The VIOS must be active for the client to boot.
(Diagram: side-by-side comparison of the Virtual SCSI and NPIV configurations on POWER6 or later — in both cases the VIOS FC adapters connect to the SAN and the DS8000/EMC storage; with NPIV the client LPAR uses a shared FC adapter and sees the SAN directly.)
NPIV: disk and adapter tuning

(Diagram: on the client, an MPIO/sddpcm hdisk with its queue_depth behind a virtual FC adapter with its own WWPN; through the hypervisor to vfchost0 and the physical fcs0 queue on the VIOS.)

Client LPAR:
  # lsdev -Cc disk
  hdisk0 Available MPIO FC 2145
  The storage driver must be installed on the LPAR; the default queue_depth is set by the driver.
  Monitor svctime / wait time with nmon or iostat to tune the queue depth.
  HA (dual-VIOS configuration): change the following parameters:
    # chdev -l fscsi0 -a fc_err_recov=fast_fail
    # chdev -l fscsi0 -a dyntrk=yes
  Performance: monitor FC activity with nmon (interactive: option 'a' or '^'; recording: -^ option), adapt num_cmd_elems, check with # fcstat fcsX.

VIOS:
  Performance: monitor FC activity with nmon (interactive: option '^' only; recording: -^ option), adapt num_cmd_elems, check with # fcstat fcsX.
  num_cmd_elems should be equal to the sum of the num_cmd_elems of the virtual FC adapters (vfcs) connected to the backend device.
IO : AIX mount options

If Oracle data is stored in a filesystem, some mount options can improve performance:

Direct IO (DIO), introduced in AIX 4.3: data is transferred directly from the disk to the application buffer, bypassing the file buffer cache and hence avoiding double caching (filesystem cache + Oracle SGA). It emulates a raw-device implementation.
To mount a filesystem in DIO: $ mount -o dio /data

(Benchmark chart: throughput over the run duration; a higher tps indicates better performance.)
Benefits:
1. Avoid double caching: some data is already cached in the application layer (SGA).
2. Faster access to the backend disks and reduced CPU utilization.
3. The inode lock is disabled, allowing several threads to read and write the same file concurrently (CIO only).

Restrictions:
1. Because data transfers bypass the AIX buffer cache, jfs2 prefetching and write-behind cannot be used. These functions can be handled by Oracle instead (Oracle parameter db_file_multiblock_read_count = 8, 16, 32, ..., 128 according to the workload).
2. When using DIO/CIO, the IO requests made by Oracle must be aligned with the jfs2 block size to avoid demoted IO (falling back to normal IO after a Direct IO failure).
   => When you create a JFS2 filesystem, use the mkfs -o agblksize=XXX option to adapt the FS block size to the application needs.
   Rule: IO request = n x agblksize.
   Examples: if the DB block size is 4k or larger, then jfs2 agblksize=4096. Redo logs are always written in 512-byte blocks, so the jfs2 agblksize must be 512.
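A hedged sketch of creating the filesystems with matching block sizes (VG/LV names, sizes and mount points are illustrative assumptions):

  # Datafile filesystem: 4 KB agblksize (DB block size is a multiple of 4 KB)
  crfs -v jfs2 -g oradatavg -m /oradata -A yes -a size=200G -a agblksize=4096

  # Redo log filesystem: 512-byte agblksize, because redo is written in 512-byte blocks
  crfs -v jfs2 -g oradatavg -m /oraredo -A yes -a size=20G -a agblksize=512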
(Illustration: the application issues 512-byte writes to a filesystem created with agblksize=4096; the IO cannot be served directly and is demoted.)

To conclude: a demoted IO consumes:
  - more CPU time (kernel)
  - more physical IO: 1 IO write = 1 physical IO read + 1 physical IO write
IO : Direct IO demoted

Extract from an Oracle AWR report (test made in Montpellier with Oracle 10g):

Waits on redo log (with demoted IO, FS blk=4k):
  Waits: 2,229,324   %Time-outs: 0.00   Total Wait Time (s): 62,628   Avg wait (ms): 28   Waits/txn: 1.53

Waits on redo log (without demoted IO, FS blk=512):
  Waits: 494,905     %Time-outs: 0.00   Total Wait Time (s): 1,073    Avg wait (ms): 2    Waits/txn: 1.00

How to detect demoted IO:
  # trace -aj 59B,59C ; sleep 2 ; trcstop ; trcrpt -o directio.trcrpt
  # grep -i demoted directio.trcrpt
Summary of mount options:

  Oracle binaries  : mount -o rw            => cached by AIX (fscache)
  Oracle datafiles : mount -o noatime,cio   => cached by Oracle (SGA)
  Oracle redo logs : mount -o noatime,cio   => cached by Oracle (SGA)
  Archive logs     : mount -o noatime,rbrw  => uses the jfs2 cache, but memory is released after read/write
  Other filesystems: mount -o noatime       => cached by AIX (fscache)

(The default mount -o rw leaves every filesystem cached by AIX fscache.)
IO : Asynchronous IO (AIO)

Asynchronous IO allows multiple requests to be sent without having to wait until the disk subsystem has completed the physical IO. Using asynchronous IO is strongly advised whatever the type of filesystem and mount option implemented (JFS, JFS2, CIO, DIO).

(Diagram: the application queues IO requests into the AIO queue; aioserver kernel processes submit them to the disk.)

POSIX vs Legacy: since AIX 5L V5.3, two types of AIO are available, Legacy and POSIX. For the moment, the Oracle code uses the Legacy AIO servers.
IO : AIO fastpath

With fsfastpath, IOs are queued directly from the application into the LVM layer, without any aioserver kproc operation:
  - Better performance compared to non-fastpath
  - No need to tune the min and max number of aioservers
  - No aioserver processes => # ps -k | grep aio | wc -l is not relevant; use iostat -A instead

(Diagram: the application submits IOs directly through the AIX kernel to the disk.)

ASM: enable the asynchronous IO fastpath:
  AIX 5L : chdev -a fastpath=enable -l aio0 (default since AIX 5.3)
  AIX 6.1: ioo -p -o aio_fastpath=1 (default setting)
  AIX 7.1: ioo -p -o aio_fastpath=1 (default setting + restricted tunable)
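A hedged way to confirm that asynchronous IO is actually being used once fastpath is active (interval/count values are illustrative):

  # iostat -A 5 3                                       (AIO activity reported together with disk statistics)
  # ioo -F -a | egrep "aio_fastpath|aio_fsfastpath"     (AIX 6.1/7.1: -F displays the restricted AIO tunables)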
How to set the filesystemio_options parameter

Possible values:
  ASYNCH  : enables asynchronous I/O on filesystem files (default)
  DIRECTIO: enables direct I/O on filesystem files (disables AIO)
  SETALL  : enables both asynchronous and direct I/O on filesystem files
  NONE    : disables both asynchronous and direct I/O on filesystem files

Since version 10g, Oracle opens data files located on a JFS2 filesystem with the O_CIO option (O_CIO_R with Oracle 11.2.0.2 and AIX 6.1 or later) if the filesystemio_options initialization parameter is set to either DIRECTIO or SETALL.
Advice: set this parameter to ASYNCH and let the system manage CIO via the mount options (see the CIO/DIO implementation advice).

If needed, you can re-mount an already mounted filesystem on another mount point to access it with different mount options. Example: your Oracle datafiles are on a CIO-mounted filesystem and you want to copy them for a cold backup; to back them up faster, you would prefer to access them through the filesystem cache. Simply re-mount this filesystem on another mount point in rw mode only.
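A minimal sketch of that re-mount trick, following the slide's suggestion (the LV device, directories and sizes are illustrative assumptions):

  # /oradata is normally mounted with CIO for the running instance
  mount -o noatime,cio /dev/oradatalv /oradata

  # Mount the same filesystem a second time, plain rw (cached), for the cold backup copy
  mkdir -p /backup/oradata
  mount -v jfs2 -o rw /dev/oradatalv /backup/oradata
  cp -rp /backup/oradata/* /stage/coldbackup/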
NUMA architecture

NUMA stands for Non-Uniform Memory Access. It is a computer memory design used in multiprocessor systems, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors.

(Diagram: local, near and far memory accesses between POWER7 chips.)
Oracle DB NUMA support was introduced in 1998 on the first NUMA systems. It provides a memory/process model relying on specific OS features to perform better on this kind of architecture.
On AIX, the NUMA support code has been ported; the default is off in Oracle 11g. _enable_NUMA_support=true is required to enable the NUMA features.
When NUMA is enabled, Oracle checks for an AIX rset named ${ORACLE_SID}/0 at startup. For now, it is assumed that it will use the rsets ${ORACLE_SID}/0, ${ORACLE_SID}/1, ${ORACLE_SID}/2, etc., if they exist.
The test is done on a POWER7 machine with the following CPU and memory distribution (dedicated LPAR). It has 4 domains, each with 8 cores (32 logical CPUs) and more than 27 GB of memory. If the lssrad output shows unevenly distributed domains, fix the problem before proceeding.
Listing the SRADs (affinity domains):

# lssrad -va
REF1  SRAD       MEM      CPU
0
         0   27932.94    0-31
         1   31285.00    32-63
1
         2   29701.00    64-95
         3   29701.00    96-127
We will set up 4 rsets, namely SA/0, SA/1, SA/2, and SA/3, one for each domain.
# mkrset -c 0-31   -m 0 SA/0
# mkrset -c 32-63  -m 0 SA/1
# mkrset -c 64-95  -m 0 SA/2
# mkrset -c 96-127 -m 0 SA/3
Before starting the DB, let's set vmo options to cause process private memory to be local:
# vmo -o memplace_data=1 -o memplace_stack=1
The following messages are found in the alert log: Oracle finds the 4 rsets and treats them as NUMA domains.

LICENSE_MAX_USERS = 0
SYS auditing is disabled
NUMA system found and support enabled (4 domains - 32,32,32,32)
Starting up Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
Shared memory segments (from ipcs): there are 7 in total, one of which is owned by ASM. With NUMA enabled, the SA instance has 6 shared memory segments instead of 1.

Without NUMA optimization (segment sizes in bytes):
  285,220,864 (ASM) and 54,089,760,768 (single SGA segment)

With NUMA optimization:
  285,220,864 (ASM), plus 6 SGA segments: 2,952,790,016; 2,952,790,016; 20,480; 3,087,007,744; 42,144,366,592; 2,952,790,016
(Diagram: without NUMA — a single SGA (including the java pool) shared by server processes 1 and 2 and the background processes, each with its own PGA.)
(Diagram: with NUMA — the SGA is split across SRAD0, SRAD1, SRAD2 and SRAD3, and the server and background processes, each with its own PGA, are distributed across the four domains.)
If Oracle shadow processes are allowed to migrate across domains, the benefit of NUMA-enabling Oracle will be lost. Therefore, arrangements need to be made to affinitize the user connections. For network connections, multiple listeners can be arranged with each listener affinitized to a different domain. The Oracle shadow processes are children of the individual listeners and inherit the affinity from the listener. For local connections, the client process can be affinitized to the desired domain/rset. These connections do not go through any listener, and the shadows are children of the individual clients and inherit the affinity from the client.
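As a hedged illustration of affinitizing one listener to one domain (the rset follows the SA/<n> naming used above; the listener name and this exact execrset invocation are illustrative):

  # Start a listener attached to the SA/0 resource set; the Oracle shadow processes
  # spawned by this listener inherit the SA/0 affinity
  execrset SA/0 -e lsnrctl start LISTENER_0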
(Diagram: four listeners, Listener 1 to Listener 4, each affinitized to one domain, SRAD0 to SRAD3.)
Four Oracle users, each with its own schema and tables, are defined. The 4 schemas are identical except for the name. Each user connection performs queries using random numbers as keys and repeats the operation until the end of the test. The DB cache is big enough to hold all 4 schemas entirely; it is therefore an in-memory test. All test cases are the same except for the domain-attachment control. Each test runs a total of 256 connections, 64 for each Oracle user.
Relative Performance

* RoundRobin = 16 connections of each Oracle user run in each domain.
** Partitioned = 64 connections of one Oracle user run in each domain.

The relative performance shown applies only to this individual test and can vary widely with different workloads.
Oracle configuration (11gR2):
  - parallel_degree_policy = auto in the spfile
  - optimizer_features_enable set to the exact version number of the Oracle engine
  - Calibrate IO through DBMS_RESOURCE_MANAGER.CALIBRATE_IO when there is no activity on the database (see the sketch below)
  - Update statistics

Other Oracle configuration parameters:
  - parallel_min_time_threshold (default 30s)
  - parallel_min_servers
  - parallel_max_servers
  - parallel_degree_limit (default: cpu)
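A hedged sketch of running the IO calibration mentioned above from SQL*Plus as SYSDBA while the database is idle (the disk count and latency values are illustrative assumptions):

  SET SERVEROUTPUT ON
  DECLARE
    l_iops PLS_INTEGER;
    l_mbps PLS_INTEGER;
    l_lat  NUMBER;
  BEGIN
    -- num_physical_disks should reflect the real number of spindles/LUNs behind the database
    DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
      num_physical_disks => 32,
      max_latency        => 10,
      max_iops           => l_iops,
      max_mbps           => l_mbps,
      actual_latency     => l_lat);
    DBMS_OUTPUT.PUT_LINE('max_iops=' || l_iops || ' max_mbps=' || l_mbps || ' latency=' || l_lat);
  END;
  /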
Some benchmarks showed a performance improvement with In-Memory PX when deactivating NUMA, up to 5x for some queries.
Hypothesis (still under investigation): In-Memory PX uses a subpart of the SGA which cannot be split and therefore lives in only one rset. Either time is lost when CPU and RAM are not aligned, or time is lost when the IMPX memory is moved from one rset to another.
Various Findings

Slow access to time operations (such as sysdate) in Oracle when using the Olson TZ format on AIX 6.1:
  - Workaround: set TZ using POSIX values.
  - Example: Olson: TZ=Europe/Berlin -> POSIX: TZ=MET-1MST-2,M3.5.0/02:00,M10.5.0/03:00

Database performance progressively degrades over time until the instance is restarted:
  - The issue is exposed by a change in RDBMS 11.2.0.3 and triggered by a large number of connections plus terabyte segments.
  - Fixed in AIX 7.1 TL1 SP5.
  - Workaround for earlier versions: disable terabyte segments:
    # vmo -r -o shm_1tb_unsh_enable=0 (+ reboot)
Session Evaluations

PE129

Win prizes by submitting evaluations online. The more evaluations submitted, the greater the chance of winning.
Our customer benchmark center is the place to validate the proposed IBM solution in a simulated production environment, or to focus on specific IBM Power / AIX technologies.

Standard benchmarks: dedicated infrastructure, dedicated technical support.
Light benchmarks: shared infrastructure, second-level support.
Questions ?
[email protected] [email protected]
Thank You

Cảm ơn (Vietnamese) · Dankie (Afrikaans) · Gracias (Spanish) · Siyabonga (Zulu) · Danke (German) · Obrigado (Brazilian Portuguese) · Grazie (Italian) · Merci (French) · Dziękuję (Polish) · Tak (Danish / Norwegian) · Tack (Swedish) · and thank you in Russian, Thai, Arabic, Traditional and Simplified Chinese, Japanese, Korean and Tamil.