9i DBA Performance Tuning R2 V2 - D37163
Oracle9i Database Performance Tuning
D11299GC20
Edition 2.0
August 2002
D37163
Authors
Peter Kilpatrick
Shankar Raman
Jim Womack
Technical Contributors and
Reviewers
Mirza Ahmad
David Austin
Ruth Baylis
Howard Bradley
Pietro Colombo
Michele Cyran
Benoit Dagerville
Connie Dialeris
Joel Goodman
Scott Gossett
Lilian Hobbs
Alexander Hunold
Sushil Kumar
Roderick Manalac
Howard Ostrow
Sander Rekveld
Maria Senise
Ranbir Singh
Janet Stern
Wayne Stokes
Tracy Stollberg
Publisher
John B Dawson
This material or any portion of it may not be copied in any form or by any means
without the express prior written permission of Oracle Corporation. Any other copying
is a violation of copyright law and may result in civil and/or criminal penalties.
The information in this document is subject to change without notice. If you find any
problems in the documentation, please report them in writing to Education Products,
Oracle Corporation, 500 Oracle Parkway, Box SB-6, Redwood Shores, CA 94065.
Oracle Corporation does not warrant that this document is error-free.
Oracle and all references to Oracle and Oracle products are trademarks or registered
trademarks of Oracle Corporation.
All other products or company names are used for identification purposes only, and
may be trademarks of their respective owners.
Contents
12 Managing Statistics
Objectives 12-2
Managing Statistics 12-3
Table Statistics 12-5
Collecting Segment-Level Statistics 12-6
Querying Segment-Level Statistics 12-7
Using Dynamic Sampling 12-8
Enabling Dynamic Sampling 12-9
Index Statistics 12-10
20 Workshop Overview
Objectives 20-2
Approach to Workshop 20-3
Company Information 20-4
Physical Workshop Configuration 20-5
Workshop Database Configuration 20-6
Workshop Procedure 20-7
Choosing a Scenario 20-8
Workshop Scenarios 20-9
Collecting Information 20-10
Generating a Workshop Load 20-11
Results 20-12
Summary 20-13
A Appendix A: Practice Solutions Using SQL*Plus
B Appendix B: Practice Solutions Using Enterprise Manager
C Appendix C: Tuning Workshop
Objectives
Tablespace
Segments
Extents
Blocks
Space Management
The efficient management of space in the database is important to its performance. This
section of the lesson examines how to manage extents and blocks in the database.
Blocks
In an Oracle database, the block is the smallest unit of data file I/O and the smallest unit of
space that can be allocated. An Oracle block consists of one or more contiguous operating
system blocks.
Extents
An extent is a logical unit of database storage space allocation consisting of a number of
contiguous data blocks. One or more extents make up a segment. When the existing space in
a segment is completely used, the Oracle server allocates a new extent for the segment.
Segments
A segment is a set of extents that contains all the data for a specific logical storage structure
within a tablespace. For example, for each table, the Oracle server allocates one or more extents to form that table's data segment. For each index, the Oracle server allocates one or more extents to form its index segment.
Allocation of Extents
Allocation of Extents
When database operations cause the data to grow and exceed the space that is allocated, the
Oracle server extends the segment. Dynamic extension (extending the segment while an INSERT or UPDATE statement is executing) reduces performance because the server executes several recursive SQL statements to find free space and add the extent to the data dictionary. This is not the case for locally managed tablespaces, which avoid recursive space management operations.
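A minimal sketch of avoiding this overhead with a locally managed, uniform-extent tablespace follows; the tablespace name, file path, and sizes are placeholders rather than values from the course environment:

-- Extent allocation is tracked in bitmaps inside the tablespace itself,
-- so no recursive data dictionary updates are needed when a segment extends.
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/db01/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;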
Pros
Are less likely to extend dynamically
Deliver small performance benefit
Enable you to read the entire extent map with a single I/O operation
Cons
Free space may not be available
Unused space
High-water mark
Extent 1
Empty blocks (rows deleted)
Extent 2
In a full table scan, the Oracle server reads in all blocks below the high-water mark. Empty
blocks above the high-water mark may waste space, but should not degrade performance;
however, underused blocks below the high-water mark may degrade performance.
Clusters
In clusters, space is allocated for all cluster keys, whether they contain data or not. The
amount of space allocated depends on the value of the SIZE parameter specified when
creating the cluster and the type of cluster:
In a hash cluster, because the number of hash keys is specified in the CREATE CLUSTER statement, space is allocated below the high-water mark for every hash key.
In an index cluster, there is space allocated for every entry into the cluster index.
Table Statistics
Table Statistics
Using the dbms_stats package you can analyze the storage characteristics of tables,
indexes, and clusters to gather statistics, which are then stored in the data dictionary. You
can use these statistics to determine whether a table or index has unused space.
Query the dba_tables view to see the resulting statistics:
num_rows: Number of rows in the table
blocks: Number of blocks below the high-water mark of the table
empty_blocks: Number of blocks above the high-water mark of the table
avg_space: Average free space in bytes in the blocks below the high-water mark
avg_row_len: Average row length, including row overhead
chain_cnt: Number of chained or migrated rows in the table
avg_space_freelist_blocks: The average freespace of all blocks on a free list
num_freelist_blocks: The number of blocks on the free list
empty_blocks represents blocks that have not yet been used, rather than blocks that were
full and are now empty.
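For example, a sketch of gathering and then querying these statistics for one table; the HR.EMPLOYEES names are placeholders for whatever table you are examining:

-- Gather statistics, then read them back from dba_tables
EXECUTE dbms_stats.gather_table_stats('HR','EMPLOYEES');

SELECT num_rows, blocks, empty_blocks, avg_space,
       chain_cnt, avg_row_len
FROM   dba_tables
WHERE  owner = 'HR'
AND    table_name = 'EMPLOYEES';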
You can also use the supplied dbms_space package to obtain information about space use in segments. It contains two procedures:
unused_space returns information about unused space in an object (table, index, or cluster). Its specification is:
unused_space (segment_owner             IN  VARCHAR2,
              segment_name              IN  VARCHAR2,
              segment_type              IN  VARCHAR2,
              total_blocks              OUT NUMBER,
              total_bytes               OUT NUMBER,
              unused_blocks             OUT NUMBER,
              unused_bytes              OUT NUMBER,
              last_used_extent_file_id  OUT NUMBER,
              last_used_extent_block_id OUT NUMBER,
              last_used_block           OUT NUMBER);
These procedures are created by and documented in the dbmsutil.sql script that is run
by catproc.sql. When running this package, you must provide a value for the
FREE_LIST_GROUP_ID. Use a value of 1, unless you are using Oracle Parallel Server.
The following script prompts the user for the table owner and table name, executes
dbms_space.unused_space, and displays the space statistics:
DECLARE
  owner    varchar2(30);
  name     varchar2(30);
  seg_type varchar2(30);
  tblock   number;
  tbyte    number;
  ublock   number;
  ubyte    number;
  lue_fid  number;
  lue_bid  number;
  lublock  number;
BEGIN
  dbms_space.unused_space('&owner','&table_name','TABLE',
      tblock,tbyte,ublock,ubyte,lue_fid,lue_bid,lublock);
  dbms_output.put_line('Total blocks allocated to table = '
      ||to_char(tblock));
  dbms_output.put_line('Total bytes allocated to table = '
      ||to_char(tbyte));
  dbms_output.put_line('Unused blocks (above HWM) = '
      ||to_char(ublock));
  dbms_output.put_line('Unused bytes (above HWM) = '
      ||to_char(ubyte));
  dbms_output.put_line('Last extent used file id = '
      ||to_char(lue_fid));
  dbms_output.put_line('Last extent used beginning block id = '
      ||to_char(lue_bid));
  dbms_output.put_line('Last used block in last extent = '
      ||to_char(lublock));
END;
/
Recovering Space
Recovering Space
Dropping or Truncating the Table
When deciding whether to drop or truncate the table, consider the following:
Both actions have the same result: no data in the table.
The DROP command removes all information regarding this table from the data
dictionary. For example, the extents used by the table will be deallocated.
The TRUNCATE command has the option to keep all allocated space, by specifying REUSE STORAGE.
If you use the DROP TABLE command, then give careful consideration to the export COMPRESS option, because there might not be a single contiguous area large enough for the entire space allocated to the table when importing.
If the table is stored in a dictionary-managed tablespace, then the deallocation from the
DROP or TRUNCATE (if using the default setting) and the allocation (at the import
stage) of extents could be a major time factor, depending on the number of extents (not
the size).
Move the Table
After the table is moved, all indexes are marked unusable and must be rebuilt.
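A hedged sketch of the move-and-rebuild approach; the table, tablespace, and index names are placeholders:

-- Moving the table compacts it below a new high-water mark,
-- but marks its indexes UNUSABLE, so they are rebuilt immediately.
ALTER TABLE hr.new_emp MOVE TABLESPACE users;
ALTER INDEX hr.new_emp_pk REBUILD;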
Segments
Extents
Blocks
Characteristics
When the database is created, the block size is determined by the value of the
DB_BLOCK_SIZE parameter.
It is the minimum I/O unit for data file reads.
The default block size on most Oracle platforms is either 2 or 4 KB.
Some operating systems allow block sizes of up to 64 KB. Check your operating system-specific documentation, specifically the Oracle database installation and configuration guides, to determine the maximum Oracle block size for your platform.
The size cannot be changed without re-creating or duplicating the database. This
makes it difficult to test applications with different block sizes. The Oracle database
can have multiple block sizes. However, the base block size (that of the system
tablespace) cannot be changed.
The database block size should be an integer multiple of the operating system block
size.
If your operating system reads the next block during sequential reads and your application performs many full table scans, then the database block size should be large, but it should not exceed the operating system I/O size.
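A sketch of adding a tablespace with a nonstandard block size, assuming a 16 KB block size that differs from the base block size and that a matching buffer cache is configured first; the names, sizes, and path are illustrative:

-- A buffer cache of the matching size must exist before the tablespace is created
ALTER SYSTEM SET db_16k_cache_size = 16M;

CREATE TABLESPACE large_blocks
  DATAFILE '/u01/oradata/db01/large_blocks01.dbf' SIZE 100M
  BLOCKSIZE 16K;

The base block size of the SYSTEM tablespace still cannot be changed this way.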
Small Block Size
Pros
Reduces block contention
Is good for small rows
Is good for random access
Cons
Has a relatively large overhead
Has a small number of rows per block
Can cause more index blocks to be read
Large Block Size
Pros
Less overhead
Good for sequential access
Good for very large rows
Better performance of index reads
Cons
Increases block contention
Uses more space in the buffer cache
PCTFREE
Default is 10
Zero if no UPDATE activity
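For example, a table expected to receive many UPDATEs that lengthen its rows might reserve more free space than the default; the table name, columns, and values below are illustrative only:

-- Reserve 20% of each block for row growth; allow reuse once usage falls below 40%
CREATE TABLE hr.big_rows (
  id    NUMBER,
  notes VARCHAR2(2000))
PCTFREE 20 PCTUSED 40;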
Chaining
Index
Table
You can identify the existence of migrated and chained rows in a table or cluster by using
the ANALYZE command. This command will count the number of migrated and chained
rows and place this information into the chain_cnt column of dba_tables.
The num_rows column provides the number of rows stored in the analyzed table or cluster.
Compute the ratio of chained and migrated rows to the number of rows to decide whether
migrated rows need to be eliminated.
The Table Fetch Continued Row Statistic
You can also detect migrated or chained rows by checking the Table Fetch Continued Row
statistic in v$sysstat or in the Statspack report under Instance Activity Stats for DB.
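A quick way to check the statistic from SQL*Plus, using the name under which it is recorded in v$sysstat:

SELECT name, value
FROM   v$sysstat
WHERE  name = 'table fetch continued row';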
Guidelines
Increase PCTFREE to avoid migrated rows. If you leave more free space available in the
block for updates, the row has room to grow. You can also reorganize (re-create) tables and
indexes with a high deletion rate.
You can identify migrated and chained rows in a table or cluster by using the ANALYZE
command with the LIST CHAINED ROWS option. This command collects information
about each migrated or chained row and places this information into a specified output table.
To create the table that holds the chained rows, execute the utlchain.sql script:
If you create this table manually, it must have the same column names, data types, and sizes
as the chained_rows table.
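A sketch of the full sequence, assuming the chained_rows table has already been created with utlchain.sql; the table name new_emp is a placeholder:

-- Populate chained_rows, then list the affected row addresses
ANALYZE TABLE hr.new_emp LIST CHAINED ROWS INTO chained_rows;

SELECT owner_name, table_name, head_rowid
FROM   chained_rows
WHERE  table_name = 'NEW_EMP';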
Export/Import
Export the table.
Drop or truncate the table.
Import the table.
When using this script, you must disable any foreign key constraints that would be violated
when the rows are deleted.
Index Reorganization
Index Reorganization
Indexes on volatile tables can also present a performance problem.
In data blocks, the Oracle server replaces deleted rows with inserted ones; however, in index
blocks, the Oracle server orders entries sequentially. Values must go in the correct block,
together with others in the same range.
Many applications insert in ascending index order and delete older values. But even if a
block contains only one entry, it must be maintained. In this case, you may need to rebuild
your indexes regularly.
If you delete all the entries from an index block, the Oracle server puts the block back on the
free list.
To coalesce indexes (alternative to REBUILD):
SQL> ALTER INDEX oe.customers_pk COALESCE;
Coalesce
Rebuilding Indexes
You may decide to rebuild an index if the deleted entries represent 20% or more of the
current entries, although this depends on your application and priorities. You can use the
query in the previous slide to find the ratio. Use the ALTER INDEX REBUILD statement
to reorganize or compact an existing index or to change its storage characteristics. The
REBUILD statement uses the existing index as the basis for the new index. All index storage
commands are supported, such as STORAGE (for extent allocation), TABLESPACE (to move
the index to a new tablespace), and INITRANS (to change the initial number of transaction entries).
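The slide query referred to above is not reproduced in this copy of the notes; a typical way to compute the ratio of deleted entries, sketched here with a placeholder index name, is:

-- VALIDATE STRUCTURE populates index_stats for the current session only
ANALYZE INDEX hr.new_emp_pk VALIDATE STRUCTURE;

SELECT (del_lf_rows_len / lf_rows_len) * 100 AS deleted_pct
FROM   index_stats;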
SQL> EXECUTE dbms_stats.gather_index_stats('HR','LOC_COUNTRY_IX');
NOMONITORING USAGE;
Summary
Practice 13
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console. (Solutions for Oracle Enterprise Manager can be found in Appendix B).
1. Connect using sys/oracle AS sysdba and query the tablespace_name and
extent_management columns of dba_tablespaces to determine which
tablespaces are locally managed and which are dictionary managed. Record which
tablespaces are dictionary managed.
2. Alter the hr user to have the tools tablespace as the default.
3. Examine the v$system_event view and note the total waits for the statistic
enqueue.
Note: On a production system you would be more likely to pick up the contention
through the Statspack report.
4. Also examine the v$enqueue_stat view for eq_type 'ST' to determine the
total_wait# for the ST enqueue, which is the space management enqueue.
5. Exit out of the SQL*Plus session and change the directory to
$HOME/STUDENT/LABS. Run the lab13_04.sh script (lab13_04.bat for
Enterprise Manager) from the operating system prompt. This script will log five users
onto the database simultaneously and then each user creates and drops tables. The
tables each have many extents. The script must be run from the
$HOME/STUDENT/LABS directory or it will fail.
Note: Record the difference in the number of waits for the ST enqueue for extent management using a dictionary-managed tablespace. This value is found by subtracting the first wait value (from practice 13-04) from the second wait value (from practice 13-06).
7. Create a new locally managed tablespace test, name the data file test01.dbf,
and place it in the $HOME/ORADATA/u06 directory. Set the size to 120 MB and a
uniform extent size of 20 KB.
Note: The same steps are covered again. This time you are looking for the number of
waits for the ST enqueue caused by locally managed tablespaces.
Practice 13 (continued)
9. Examine and record the initial total_wait# for 'ST' in the v$enqueue_stat
view.
10. Exit out of the SQL*Plus session and change directory to $HOME/STUDENT/LABS.
Run the script lab13_04.sh (lab13_04.bat for Enterprise Manager) from the
operating system prompt. This script will log five users onto the database
simultaneously and then each user creates and drops tables. The tables each have many
extents. The script must be run from the $HOME/STUDENT/LABS directory or it will
fail.
11. Again examine and record the final total_wait# for 'ST' in the
v$enqueue_stat view.
Note: Record the difference in the total_wait# for the ST enqueue for extent management using a locally managed tablespace. This value is found by subtracting the first wait value (from practice 13-09) from the second wait value (from practice 13-11). Compare the two results for the different tablespaces. The locally managed tablespace shows far less contention for extent management because it manages the space within the tablespace itself.
13. Run analyze on the new_emp table and query the dba_tables view to determine
the value of chain_cnt for the new_emp table. Record this value.
16. Resolve the migration caused by the previous update, by using the ALTER TABLE
MOVE command. This will cause the index to become unusable and should be rebuilt
using the ALTER INDEX REBUILD command before reanalyzing the new_emp
table. Confirm that the migration has been resolved by querying the chain_cnt column
in the user_tables view and confirm that the index is valid by querying the
user_indexes view.
Objectives
Heap table
Cluster
Index-organized table
Partitioned table
Organization by value: heap, clustered, sorted
Table size
Row size, row group, and block size
Small or large transactions
Using parallel queries to load or for SELECT statements
Clusters

Unclustered storage:

ORD_NO  PROD   QTY
------  -----  ---
101     A4102   20
102     A2091   11
102     G7830   20
102     N9587   26
101     A5675   19
101     W0824   10

ORD_NO  ORD_DT     CUST_CD
------  ---------  -------
101     05-JAN-97  R01
102     07-JAN-97  N45
...

Clustered storage, Cluster Key (ORD_NO):

101  ORD_DT     CUST_CD
     05-JAN-97  R01
     PROD   QTY
     A4102   20
     A5675   19
     W0824   10

102  ORD_DT     CUST_CD
     07-JAN-97  N45
     PROD   QTY
     A2091   11
     G7830   20
     N9587   26
Definition of Clusters
A cluster is a group of one or more tables that share the same data blocks because they share
common columns and are often used together in join queries. Storing tables in clusters offers
the DBA a method to denormalize data. Clusters are transparent to the end user and
programmer.
Performance Benefits of Clusters
Disk I/O is reduced and access time improved for joins of clustered tables.
Each cluster key value is stored only once for all the rows of the same key value; it
therefore uses less storage space.
Performance Consideration
Full table scans are generally slower on clustered tables than on nonclustered tables.
Cluster Types
Index cluster
Hash cluster
Hash function
Cluster Types
Index Clusters
An index cluster uses an index, known as the cluster index, to maintain the data within the
cluster. The cluster index must be available to store, access, or maintain data in an index
cluster.
The cluster index is used to point to the block that contains the rows with a given key value.
The structure of a cluster index is similar to that of a normal index.
Although a normal index does not store null key values, cluster indexes store null keys.
There is only one entry for each key value in the cluster index. Therefore, a cluster index is
likely to be smaller than a normal index on the same set of key values.
Hash Clusters
A hash cluster uses a hash algorithm (either user-defined or system-generated) to calculate
the location of a row, both for retrieval and for DML operations.
For equality searches that use the cluster key, a hash cluster can provide greater performance
gains than an index cluster, because there is only one segment to scan (no index access is
needed).
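A minimal sketch of creating an index cluster for the ORD_NO example shown earlier; the cluster, index, and table names and the SIZE value are illustrative:

-- Cluster, cluster index, then the clustered tables
CREATE CLUSTER ord_clu (ord_no NUMBER) SIZE 512;

CREATE INDEX ord_clu_idx ON CLUSTER ord_clu;

CREATE TABLE ord
  (ord_no NUMBER, ord_dt DATE, cust_cd VARCHAR2(3))
  CLUSTER ord_clu (ord_no);

CREATE TABLE ord_item
  (ord_no NUMBER, prod VARCHAR2(5), qty NUMBER)
  CLUSTER ord_clu (ord_no);

For a hash cluster, you would specify HASHKEYS (and optionally HASH IS) in the CREATE CLUSTER statement instead of creating a cluster index.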
Comparison criteria: index cluster versus hash cluster
Partitioning Methods
Range partitioning
Hash partitioning
List partitioning
Composite partitioning
Partitioning Methods
A major driving force for supporting partitioned tables and indexes is the dramatic increase
in the size of these database objects. This capability reduces down time (because of
scheduled maintenance or data failures), improves performance through partition pruning,
and improves manageability and ease of configuration.
Range Partitioning
Range partitioning uses ranges of column values to map rows to partitions. Partitioning by
range is well suited for historical databases. However, it is not always possible to know
beforehand how much data will map to a given range and in some cases sizes of partitions
may differ quite substantially, resulting in sub-optimal performance for certain operations
like parallel DML.
Hash Partitioning
This method uses a hash function on the partitioning columns to stripe data into partitions. It
controls the physical placement of data across a fixed number of partitions and gives you a
highly tunable method of data placement.
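A hedged sketch of range partitioning by month, in the spirit of the sales example used later in this lesson; the table, column, and partition names are illustrative:

CREATE TABLE sales_99
  (sales_date   DATE,
   sales_amount NUMBER)
PARTITION BY RANGE (sales_date)
 (PARTITION p_99jan VALUES LESS THAN
    (TO_DATE('01-FEB-1999','DD-MON-YYYY')),
  PARTITION p_99feb VALUES LESS THAN
    (TO_DATE('01-MAR-1999','DD-MON-YYYY')),
  PARTITION p_max   VALUES LESS THAN (MAXVALUE));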
Hash Partitioning
Easy to implement
Enables better performance for PDML and partition-wise joins
Inserts rows into partitions automatically based on the hash of the partition key
Supports (hash) local indexes
Does not support (hash) global indexes
This command splits the default partition into two partitions: p3 with the values 'ND' and 'SD', and p4 as the new default partition holding all values not covered by any other partition.
Partition Pruning

sales table, partitioned by month: 99-Jan, 99-Feb, 99-Mar, 99-Apr, 99-May, 99-Jun

SQL> SELECT SUM(sales_amount)
  2  FROM sales
  3  WHERE sales_date BETWEEN
  4        TO_DATE('01-MAR-1999',
  5                'DD-MON-YYYY') AND
  6        TO_DATE('31-MAY-1999',
  7                'DD-MON-YYYY');
Partition Pruning
Depending on the SQL statement, the Oracle server can explicitly recognize partitions and
subpartitions (of tables and indexes) that need to be accessed and the ones that can be
eliminated. This optimization is called partition pruning. This can result in substantial
improvements in query performance. However, the optimizer cannot prune partitions if the
SQL statement applies a function to the partitioning column.
If you partition the index and table on different columns (with a global, partitioned index), partition pruning also eliminates index partitions even when the underlying table's partitions cannot be eliminated. However, the optimizer cannot use an index if the SQL statement applies a function to the indexed column, unless it is a function-based index.
The Oracle server considers only the relevant partitions for predicates such as c IN (10, 30) or c = 10 OR c = 30.
For hash partitioning, partition pruning is limited to equality or IN-list predicates.
The optimizer can also perform partition pruning when subquery predicates are used.
Partition-Wise Join
Slide diagram: three parallel join techniques on partitioned tables, numbered 1 (nonpartition-wise join) to 3 (full partition-wise join).
Partition-Wise Joins
Partition-wise joins minimize the amount of data exchanged among parallel slaves during
parallel joins execution by taking into account data distribution. This significantly reduces
response time and resource utilization, in terms of both CPU and memory. In Oracle Parallel Server environments, partition-wise joins can also limit the data traffic over the interconnect (if the relevant partitions are co-located), which is important for the performance of massive join operations.
Full Partition-Wise Join
A full partition-wise join is a join performed on two tables that are equipartitioned in
advance on their join keys. The Oracle server performs the joins sequentially if the query is
executed serially or in parallel if a separate query slave joins each pair. The Oracle server
can join the results of each parallel scan without the need to redistribute data. The number of partitions limits the degree of parallelism available to a partition-wise join.
Note: The Oracle server supports partition-wise joins on range, hash, composite, or list
partitioned tables.
Suppose that the optimizer uses a hash join in this case. You can reduce the processing time
for this join if both tables are partitioned correctly because this enables a partition-wise join.
Multiple partitioning methods can be used, but not all of them activate the partition-wise
join:
Hash-Hash: The customer and sales tables are equipartitioned by hash on
s_customerid and c_customerid respectively. In this case, the technique used
by the Oracle server to make this join parallel is number 3 in the partition-wise join
slide. This is the most efficient. It can become even better in a cluster or MPP
environment if each corresponding hash-partition is co-located (placed on the same
local disk).
Composite-Hash: The sales table is partitioned by range on s_saledate (each
partition representing a quarter) and subpartitioned by hash on s_customerid. The
customer table is partitioned by hash on c_customerid so that both tables are
equipartitioned on the hash dimension. In this case, because the pruning on the sales
table restricts the scan to the subpartitions corresponding to quarter 3 of 1994, the same
technique as in the previous example can be used.
Range-Hash: However, if the sales table is partitioned by range on s_saledate
and the customer table is partitioned by hash on c_customerid, then the
technique used by the Oracle server in this case is number 2 in the partition-wise join
slide. Only a partial partition-wise join can be used.
Range-Range: If both tables are range partitioned, then the only available technique is
number 1 in the partition-wise join slide. This is probably the worst case.
CALL dbms_stats.gather_table_stats (
  ownname     => 'o901',
  tabname     => 'sales',
  partname    => 'feb99',
  granularity => 'partition');

CALL dbms_stats.gather_index_stats (
  ownname  => 'o901',
  indname  => 'isales',
  partname => 's1');
Summary
Application Tuning
Objectives
To perform an online redefinition of a table you must perform the following steps:
1. Choose one of the following two methods of redefinition:
- Use the primary keys to perform the redefinition. This is the default and preferred
method. The pre-redefinition and the post-redefinition versions of the tables have
the same primary key columns.
- Use the rowids. This method is used if the primary key method cannot be used.
Index-organized tables should not be redefined with this method.
B-Tree Indexes
Index entry
Root
Branch
Leaf
B-Tree Indexes
When to Create B-Tree Indexes
B-tree indexes typically improve the performance of queries that select a small percentage
of rows from a table. As a general guideline, you should create indexes on tables that are
often queried for less than 5% of the table's rows. If you select 5% or more of the data, the query may visit just about every portion of the table anyway, making an index access less efficient than
a full scan. On the other hand, an index can be used to eliminate a potentially expensive sort
when many rows are selected, if all data can be retrieved from an index, or where the
indexed columns can be used for joining to other tables.
How Indexes Grow
Indexes are always balanced and they grow from the bottom up. As rows are added, the leaf
block fills. When the leaf block is full, the Oracle server splits it into two blocks and puts 50% of the block's contents into the original leaf block and 50% into a new leaf block.
When a new block is added to the index, it must be registered in a directory entry in the parent branch block. If the parent branch block is full, it is split in a similar way to the leaf block, with 50% of its existing contents divided between the existing and new branch blocks. If required, this pattern repeats up the tree until the root block itself becomes a branch block and a new root block is added.
Rebuilding Indexes
Rebuilding Indexes
The more levels an index has, the less efficient it may be. Additionally, an index with many
rows deleted might not be efficient. Typically, if 15% of the index data is deleted, then you
should consider rebuilding the index.
You should rebuild your indexes regularly. However, this can be a time-consuming task,
especially if the base table is very large. In the Oracle database, you can create and rebuild
indexes online. Parallelization is possible as well. While the index is being rebuilt, the
associated base table remains available for queries and DML operations.
The ONLINE keyword specifies that DML operations on the table or partition be allowed
during rebuilding of the index.
Restriction: Parallel DML is not supported during online index building. If you specify
ONLINE and then issue parallel DML statements, an error is returned.
You can also compute statistics for the cost-based optimizer while rebuilding the index. Adding the COMPUTE STATISTICS clause has very little effect on the performance of the rebuild because the entire index is being accessed anyway. The command is:
SQL> ALTER INDEX I_name REBUILD COMPUTE STATISTICS;
You can combine the ONLINE and COMPUTE STATISTICS clauses into one statement.
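Combined, and still using the same placeholder index name, that would be:
SQL> ALTER INDEX I_name REBUILD ONLINE COMPUTE STATISTICS;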
Compressed Indexes
Key Compression
Specify COMPRESS to enable key compression, which eliminates the repeated occurrence of
key column values and may substantially reduce storage. Use an integer to specify the prefix
length (the number of prefix columns to compress).
For unique indexes, the valid range of prefix length values is from 1 to the number of key columns minus 1 (the default value). In a unique index with a single attribute key, key compression is not possible: there is only the unique piece, so there are no grouping pieces to share.
For nonunique indexes, the valid range of prefix length values is from 1 to the number of
key columns. The default prefix length is the number of key columns.
Key compression is useful in many different scenarios, such as:
In a nonunique regular index, the Oracle database stores duplicate keys with the
rowid appended to the key to break the duplicate rows. If key compression is used,
then the duplicate key is stored as a prefix entry on the index block without the
rowid. The rest of the rows are suffix entries that consist of only the rowid.
This same behavior can be seen in a unique index that has a key of the form (item, time stamp), for example (stock_ticker, transaction_time). Thousands of rows can have the same stock_ticker value, with transaction_time preserving uniqueness.
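A sketch of key compression for that kind of index; the table and index names are placeholders, and the prefix length of 1 compresses the repeated stock_ticker values:

CREATE UNIQUE INDEX trades_uk
  ON trades (stock_ticker, transaction_time)
  COMPRESS 1;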
Bitmap Indexes
Slide diagram: a table stored in file 3 (blocks 10 through 13) and a bitmap index whose entries consist of a key, a start ROWID, an end ROWID, and a bitmap.
Bitmap Indexes
With bitmap indexes, you can store data that has few distinct values within the column, such
as gender or job_title. The index consists of a range of rows stored in the index and
then a map of binary values for each key. The value is "on", that is 1, if the key is true for that row. In the example on the slide, the value in the bitmap is on for only one color: if the row's item is blue, then the value is 1 for blue and 0 for all the other colors.
Bitmap indexes perform best when there are few variations for the value, but millions of
rows. Bitmap indexes also help resolve Boolean type constraints; for example, if the user
requires all items that are blue and yellow or if the green or red items are wanted.
DML statements do not perform well with bitmap indexes, so for high DML activity, do not
use a bitmap index.
Bitmap Indexes
Bitmap Indexes
When to Create Bitmapped Indexes
Bitmap indexes are intended for low-cardinality columns that contain a limited number
of values.
If you use a query with multiple WHERE conditions, the Oracle server can use logical
bit-AND or bit-OR operations to combine this bitmap with bitmaps for other columns.
Performance Considerations
Bitmap indexes use little storage space: one entry per distinct key value, stored in a
compressed form. Each bitmap is divided into bitmap segments (up to one-half block).
They work very fast with multiple predicates on low-cardinality columns.
They are particularly suited to large, read-only systems such as decision support
systems.
DML statements slow down performance:
- They are not suited for OLTP applications.
- Locking is at the bitmap-segment, not entry, level.
Bitmap indexes store null values, whereas B-tree indexes do not.
Parallel query, parallel data manipulation language (PDML), and parallelized CREATE
statements work with bitmap indexes.
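A minimal sketch of creating a bitmap index on a low-cardinality column; the employees_hist and job_id names follow the practice at the end of this lesson, and the indx tablespace is assumed to exist:

CREATE BITMAP INDEX bitmap_emp_hist_idx
  ON employees_hist (job_id)
  TABLESPACE indx;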
Maintenance Considerations
In a data warehousing environment, data is usually maintained by way of bulk inserts and
updates. Index maintenance is deferred until the end of each DML operation. For example, if
you insert 1,000 rows, then the inserted rows are placed into a sort buffer and then the
updates of all 1,000 index entries are batched. Thus each bitmap segment is updated only
once per DML operation, even if more than one row in that segment changes.
In the Oracle database, different parameters affect the sort area depending on the value of WORKAREA_SIZE_POLICY. See the chapter titled "Optimizing Sort Operations" for more details.
B-Tree Indexes                        Bitmap Indexes
Row-level locking                     Locking at the bitmap-segment level
More storage                          Less storage
KEY    ROWID
-----  -------------------
1257   0000000F.0002.0001
2877   0000000F.0006.0001
4567   0000000F.0004.0001
6657   0000000F.0003.0001
8967   0000000F.0005.0001
9637   0000000F.0001.0001
9947   0000000F.0000.0001
...    ...

employees table

EMPLOYEE_ID  LAST_NAME  ...
-----------  ---------
7499         ALLEN
7369         SMITH
7521         WARD
7566         JONES
7654         MARTIN
7698         BLAKE
7782         CLARK
...          ...
INDEX_NAME  INDEX_TYPE
----------  ----------
I2_T1       NORMAL/REV
Index-Organized Tables
Slide diagram: regular table access (an index entry with a ROWID pointing to the table row) compared with IOT access (a single B-tree entry containing the row header, the key column, and the non-key columns).
Index-Organized Tables
Definition
An index-organized table is like a regular table with an index on one or more of its columns,
but instead of maintaining two separate segments for the table and the B-tree index, the
database system maintains one single B-tree structure that contains both the primary key
value and the other column values for the corresponding row.
For index-organized tables, a primary key constraint is mandatory.
Index-organized tables are suitable for frequent data access through the primary key or
through any key that is a prefix of the primary key, such as in applications that use inverted
indexes. An inverted index keeps the value and all its locations together; this meets the
requirement for the index-organized table having a primary key. Each word has one entry
and that entry records all the places in which the word occurs. This means that the index can be used to re-create the entire document. Inverted indexes are used in Oracle interMedia.
Benefits
No duplication of the primary key values (which a conventional table stores in both the table and its index), so less storage is required.
Faster key-based access for queries that involve an exact match, a range search, or
both.
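A minimal sketch of an index-organized table; the table and column names are illustrative, and the mandatory primary key constraint is what allows the table to be stored as a single B-tree:

CREATE TABLE countries_iot
  (country_id   CHAR(2) CONSTRAINT countries_iot_pk PRIMARY KEY,
   country_name VARCHAR2(40))
ORGANIZATION INDEX;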
Index-Organized Tables and Heap Tables
Logical rowids
Index-organized tables do not have regular (physical) rowids. However, the concept of
logical rowids was introduced to overcome certain restrictions caused by the lack of
physical rowids. Logical rowids give the fastest possible access to rows in IOTs by
using two methods:
A physical guess whose access time is equal to that of physical rowids
Access without the guess (or after an incorrect guess); this performs a primary key
access of the IOT
The guess is based on knowledge of the file and block that a row resides in. This information is accurate when the index is created, but becomes stale if the leaf block splits. If the
guess is wrong and the row no longer resides in the specified block, then the remaining
portion of the logical rowid entry, the primary key, is used to get the row.
Logical rowids are stored as a variable-length field, where the size depends on the primary
key value being stored.
The UROWID data type enables applications to use logical rowids in the same way they
use rowids, for example, selecting rowids for later update or as part of a cursor.
UROWID can also be used to store rowids from other databases, accessed by way of
gateways. The UROWID type can also be used to reference physical rowids.
Storing large entries in index leaf blocks slows down index searches and scans. You can
specify that the rows go into an overflow area by setting a threshold value that represents a
percentage of block size.
The primary key column must always be stored in the IOT index blocks as a basis for
searching. But you can place non-key values in a separate area, the row overflow area, so
that the B-tree itself remains densely clustered.
users tablespace:
Segment = COUNTRY_C_ID_PK, IOT_type = IOT, Segment_type = INDEX, Index_type = IOT - TOP
Segment = SYS_IOT_OVER_n, IOT_type = IOT_OVERFLOW, Segment_type = TABLE
Rows within PCTTHRESHOLD
This clause specifies the percentage of space reserved in the index block for an index-organized table row. If a row exceeds the size calculated based on this value, all columns after the column named in the INCLUDING clause are moved to the overflow segment. If OVERFLOW is not specified, then rows exceeding the threshold are rejected.
PCTTHRESHOLD defaults to 50 and must be a value from 0 to 50.
The INCLUDING Clause
This clause specifies the column at which to divide an index-organized table row into index
and overflow portions. The server accommodates all non-key columns up to the column
specified in the INCLUDING clause in the index leaf block, provided it does not exceed the
specified threshold.
The OVERFLOW Clause and Segment
This clause affects the index-organized table data rows that exceed the threshold set. The overflow data is placed in the data segment defined by the segment's attributes, which specify the tablespace, storage, and block utilization parameters.
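Putting the three clauses together, a hedged sketch in which the table, column, and tablespace names are illustrative:

CREATE TABLE docs_iot
  (doc_id NUMBER CONSTRAINT docs_iot_pk PRIMARY KEY,
   title  VARCHAR2(100),
   body   VARCHAR2(4000))
ORGANIZATION INDEX
  PCTTHRESHOLD 20
  INCLUDING title
  OVERFLOW TABLESPACE users;

Here rows that would exceed 20% of the index block keep doc_id and title in the B-tree, and the remaining columns are stored in the overflow segment in the users tablespace.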
IOT_TYPE      IOT_NAME
------------  --------
IOT
IOT_OVERFLOW  COUNTRY
If an overflow area has been specified, then its name appears as an additional new table in
the list of table names. The overflow table is called sys_iot_over_nnnn (where nnnn is
the object ID of the table segment) in dba_tables. The column iot_type is set to IOT_OVERFLOW for this table, and iot_name is set to the name of the index-organized table to which it belongs.
Querying dba_indexes and dba_segments for IOT information

SQL> SELECT index_name, index_type,
  2         tablespace_name, table_name
  3  FROM dba_indexes;

INDEX_NAME       INDEX_TYPE  TABLESPACE  TABLE_NAME
---------------  ----------  ----------  ----------
COUNTRY_C_ID_PK  IOT - TOP   INDX        COUNTRY
A bitmap index on an IOT is similar to a bitmap index on a heap table, except that the
rowids used in the bitmap index on an IOT are those of a mapping table and not the base
table. The mapping table maintains a mapping of physical rowids (needed by the bitmap
index code) to logical rowids (needed to access the IOT). There is one mapping table per
IOT and it is used by all the bitmap indexes created on that IOT.
In a heap-organized base table, a bitmap index is accessed using a search key. If the key is
found, the bitmap entry is converted to a physical rowid used to access the base table. In an
IOT, a bitmap index is also accessed using a search key. If the key is found, the bitmap entry
is converted to a physical rowid used to access the mapping table. The access to the
mapping table yields a logical rowid. This logical rowid is used to access the IOT using
either the guess data block address (if it is valid) or the primary key. Though a bitmap index
on an IOT does not store logical rowids, it is still logical in nature.
The movement of rows in an IOT does not leave the bitmap indexes built on that IOT unusable. The movement of rows in the IOT invalidates the guess data block address in some of the mapping table's logical rowid entries; however, the IOT can still be accessed using the primary key.
Note: To drop a mapping table, there must be no bitmap indexes on the IOT.
To create a mapping table on the countries IOT, use a command such as the one sketched below.
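The command itself did not survive in this copy of the notes. On Oracle9i, a mapping table can be created for an existing IOT with the MAPPING TABLE clause of ALTER TABLE ... MOVE, along these lines:

-- Rebuilds the IOT and creates its mapping table if one does not already exist
ALTER TABLE countries MOVE MAPPING TABLE;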
OLTP Systems
Availability
Speed
Concurrency
Recoverability
OLTP Systems
Typical OLTP Applications
Airline reservation systems
Large order-entry applications
Banking applications
Requirements
High availability (7 days a week/24 hours a day)
High speed
High concurrency
Reduced time recovery
OLTP Requirements
Materialized views
Index-organized tables
OLTP Requirements
Space Allocation
Avoid the performance load of dynamic extent allocation; allocate space explicitly to
tables, clusters, and indexes.
Check growth patterns regularly to find the rate at which extents are being allocated so
that you can create extents appropriately.
Indexing
Indexing is critical to data retrieval in OLTP systems. DML statements on indexed
tables need index maintenance and this is a significant performance overhead. Your
indexing strategy must be closely geared to the real needs of the application.
Indexing a foreign key helps child data to be modified without locking the parent data.
B-tree indexing is better than bitmap indexing, because of locking issues affecting
DML operations: when a B-tree index entry is locked, a single row is locked, whereas
when a bitmap index entry is locked, a whole range of rows is locked.
Reverse key indexes avoid frequent B-tree block splits for sequence columns.
You should rebuild indexes regularly.
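A sketch of the reverse key index mentioned above, for a sequence-populated column; the table, column, and index names are placeholders:

-- Reversing the key bytes spreads sequential values across leaf blocks,
-- avoiding repeated splits of the rightmost block.
CREATE INDEX orders_id_rev_idx
  ON orders (order_id)
  REVERSE;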
Data Warehouse Systems
Storage allocation: Set the block size and DB_FILE_MULTIBLOCK_READ_COUNT carefully.
Data warehouse applications typically perform many table scans; therefore consider a higher
value for the block size parameter. Even if this means re-creating a large database, it is
almost certainly worthwhile, because a larger block size facilitates read-intensive operations,
which are characteristic of data warehouse applications.
Another option is available; you can create a new tablespace with the required block size,
thus having multiple block sizes in the same database.
The DB_FILE_MULTIBLOCK_READ_COUNT Parameter
Pay particular attention to setting the DB_FILE_MULTIBLOCK_READ_COUNT parameter.
During full table scans and fast full index scans this parameter determines how many
database blocks are read into the buffer cache with a single operating system read call. A
larger value decreases the estimated cost of full table scans and therefore the CBO will favor
table scans over index searches.
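A sketch of adjusting the parameter for the current session before a scan-heavy workload; the value 32 is illustrative and must respect the platform's maximum I/O size:

ALTER SESSION SET db_file_multiblock_read_count = 32;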
Parsing Time
The time taken to parse SELECT statements is likely to be a very small proportion of the
time taken to execute the query. Tuning the library cache is much less of an issue for data
warehouse than for OLTP systems.
Your priority is an optimal access path in the execution plan; small variations can cost
minutes or hours. Developers must:
Use parallelized queries, which enable multiple processes to work together
simultaneously to process a single SQL statement
Use symmetric multiprocessors (SMP), clustered, or massively parallel processing
(MPP) configurations. These configurations gain the largest performance benefits
because the operation can be effectively split among many CPUs on a single system.
Use the EXPLAIN PLAN command to tune the SQL statements and hints to control
access paths
If your application logic uses bind variables, you lose the benefit of this feature: the
optimizer makes a blanket assumption about the selectivity of the column, whereas with
literals, the cost-based optimizer uses histograms. Be careful when setting the CURSOR_SHARING parameter; it could force a change from literal values to system-generated bind variables.
Hybrid Systems

OLTP
CURSOR_SHARING set to SIMILAR can assist performance
PCTFREE according to expected update activity

Data Warehouse
CURSOR_SHARING should be left on EXACT
Generates histograms
Summary
Practice 15
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console. (Solutions for Oracle Enterprise Manager can be found in Appendix B).
1. Connect as hr/hr, drop the new_employees table, and create an IOT called
new_employees in the hr schema. Give the table the same columns as the
hr.employees table. Make the employee_id column the primary key and name
the primary key index new_employees_employee_id_pk.
2. Confirm the creation of the table by querying the user_tables and the
user_indexes views
3. Populate the new_employees table with the rows from the hr.employees table.
4. Create a secondary B-tree index on the last_name column of the
new_employees table. Place the index in the indx tablespace. Name the index
last_name_new_employees_idx. Collect the statistics for the secondary index.
5. Confirm the creation of the index by using the user_indexes view in the data
dictionary. Query the index_name, index_type, blevel, and leaf_blocks.
7. Confirm the creation of the index and that it is a reverse key index, by querying the
user_indexes view in the data dictionary. Query the index_name,
index_type, blevel, and leaf_blocks.
8. Create a bitmap index on the job_id column of the employees_hist table. Place
the index in the indx tablespace. Name the index bitmap_emp_hist_idx.
9. Confirm the creation of the index and that it is a bitmapped index by querying the
user_indexes view in the data dictionary. Query the index_name,
index_type, blevel, and leaf_blocks.
Objectives
Materialized Views
Refresh modes:
Manual
Automated (synchronous or asynchronous)
Materialized Views
A materialized view (MV) stores both the definition of a view and the rows resulting from
the execution of the view. Like a view, it uses a query as the basis, but the query is executed
at the time the view is created and the results are stored in a table. You can define the table
with the same storage parameters as any other table and place it in the tablespace of your
choice. You can also index and partition the materialized view table, like other tables, to
improve the performance of queries executed against them.
When a query can be satisfied with data in a materialized view, the server transforms the
query to reference the view rather than the base tables. By using a materialized view,
expensive operations such as joins and aggregations do not need to be re-executed; instead
the statement is rewritten to query the materialized view.
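A hedged sketch of such a materialized view, in the spirit of the cust_sales view built in the practice at the end of this lesson; the sh column names used here (cust_last_name, amount_sold, cust_id) are assumptions about the sample schema:

CREATE MATERIALIZED VIEW cust_sales
  BUILD IMMEDIATE
  REFRESH COMPLETE
  ENABLE QUERY REWRITE
AS
  SELECT c.cust_last_name, SUM(s.amount_sold) AS total_sales
  FROM   sales s, customers c
  WHERE  s.cust_id = c.cust_id
  GROUP BY c.cust_last_name;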
dbms_mview.refresh ('CUST_SALES', parallelism => 10);
dbms_mview.refresh_dependent ('SALES');
dbms_mview.refresh_all_mviews;
Nested materialized views by level:
Level 2: TOTAL_SALES
Level 1: PROD_MV, SALES_CUST_MV
Level 0: PRODUCTS, CUSTOMERS, SALES
The materialized view can be fast refreshed by using the following command:
SQL> EXECUTE dbms_mview.refresh('employees_mv', 'F', '', TRUE, FALSE, 0, 0, 0, FALSE);
Query Rewrites
Query Rewrites
The optimizer performs a rewrite of the query; the rewrite is transparent to the application.
The ability to perform rewrites must be enabled either at the session level or at the instance
level by using the QUERY_REWRITE_ENABLED parameter.
To enable or disable individual materialized views for query rewrites requires the GLOBAL
QUERY REWRITE or the QUERY REWRITE system privilege. Both privileges allow users
to enable materialized views in their own schema. The GLOBAL QUERY REWRITE
privilege requires that the materialized view be in the user's schema, whereas the QUERY
REWRITE privilege requires that the user own the base tables and the materialized view.
Overview of the Summary Advisor in the dbms_olap Package
Materialized views provide high performance for complex, data-intensive queries. The
summary advisor helps you achieve this performance benefit by choosing the proper set of
materialized views for a given workload. In general, as the number of materialized views
and space allocated to materialized views is increased, query performance improves. But the
additional materialized views have some cost: they consume additional storage space and
must be refreshed, which increases maintenance time. The summary advisor considers these
costs and makes the most cost-effective trade-off when recommending the creation of new
materialized views and evaluating the performance of existing materialized views.
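A sketch of turning rewrite on for a session and for one materialized view; the view name follows the cust_sales example used elsewhere in this lesson:

ALTER SESSION SET query_rewrite_enabled = TRUE;

ALTER MATERIALIZED VIEW cust_sales ENABLE QUERY REWRITE;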
Initialization parameters:
OPTIMIZER_MODE
QUERY_REWRITE_ENABLED
QUERY_REWRITE_INTEGRITY
Dimensions
Summary
Practice 16
In this practice you will make use of the AUTOTRACE feature and create the plan_table table. These are covered in detail in the chapter titled "SQL Statement Tuning".
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console. (Solutions for Oracle Enterprise Manager can be found in Appendix B).
1. Connect as sh/sh and confirm that the plan_table table exists. If the table does
exist then truncate it, otherwise create the plan_table table using
$ORACLE_HOME/rdbms/admin/utlxplan.sql.
2. Create a materialized view cust_sales having two columns, cust_last_name
and the total_sales for that customer. This will mean joining the sales and
customers tables using cust_id and grouping the results by cust_last_name.
Make sure that query rewrite is enabled on the view.
3. Confirm the creation of the materialized view cust_sales by querying the
user_mviews data dictionary view, selecting the columns mview_name,
rewrite_enabled and query.
4. Set AUTOTRACE to Traceonly Explain, to generate the explain plan for the query
$HOME/STUDENT/LABS/lab16_04.sql
y
l
n
5. Set the QUERY_REWRITE_ENABLED parameter to True for the session and run the
same query, $HOME/STUDENT/LABS/lab16_04.sql, as in the previous practice.
Note the change in the explain plan due to the query rewrite. Set AUTOTRACE to Off
and disable query rewrite after the script has completed running.
Objectives
Locking Mechanism
Automatic management
High level of data concurrency
Row-level locks for DML transactions
No locks required for queries
Multi-version consistency
Exclusive and Share lock modes
Locks held until commit or rollback operations
are performed
Lock Management
The Oracle server automatically manages locking. The default locking mechanisms lock
data at the lowest level of restriction to guarantee data consistency while allowing the
highest degree of data concurrency.
Note: The default mechanism can be modified by the ROW_LOCKING parameter. The default value is ALWAYS, which leads the Oracle server to always lock at the lowest and least restrictive level (the row level, not the table level) during DML statements. The other possibility is to set the value to INTENT, which leads the Oracle server to lock at a more constraining level (the table level), except for a SELECT FOR UPDATE statement, for which a row-level lock is used.
Data Concurrency

Transaction 1                          Transaction 2
SQL> UPDATE employees
  2  SET salary = salary*1.1
  3  WHERE id = 24877;
1 row updated.
Data Concurrency
Locks are designed to allow a high level of data concurrency; that is, many users can
safely access the same data at the same time.
Data Manipulation Language (DML) locking is at row level.
A query holds no locks, unless the user specifies that it should.
Data Consistency
The Oracle server also provides multi-version consistency; that is, the user sees a static
picture of the data, even if other users are changing it.
Duration
Locks are held until the transaction is committed, rolled back, or terminated. If a
transaction terminates abnormally, then the PMON process cleans up the locks.
In Share lock mode, several transactions can acquire share locks on the same resource.
Example: Shared locks are set at table level for DML transactions (the slide shows Transaction 1 and Transaction 2 each holding a share-mode lock on the same resource).
Transaction 2 must wait because it wants to update the same row as Transaction 1. As soon
as Transaction 1 is committed, Transaction 2 can update the row, because it has acquired
the requested lock.
DML Locks
DML locks guarantee the integrity of data being accessed concurrently by multiple users
for incorporating changes. They prevent destructive interference of simultaneous
conflicting DML and DDL operations.
DML Levels: A table-level lock (TM type) is set for any DML transaction that modifies a
table: INSERT, UPDATE, DELETE, SELECT...FOR UPDATE, or LOCK TABLE. The
table lock prevents DDL operations that would conflict with the transaction.
Example
Transaction 1 holds a table (TM) lock on the employees table; Transaction 2 then attempts a conflicting DDL statement:

SQL> DROP TABLE employees;
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified
DDL Locks
A DDL lock protects the definition of a schema object while that object is acted upon or
referred to by an ongoing DDL operation. The Oracle server automatically acquires a DDL
lock to prevent any destructive interference from other DDL operations that might modify
or reference the same schema object.
DML Locks
Enqueue Mechanism
Enqueue Mechanism
The Oracle server maintains all locks as enqueues. The enqueue mechanism keeps track
of:
Users waiting for locks held by other users
The lock mode these users require
The order in which users requested the lock
If three users want to update the same row at the same time, all of them get the shared
table lock but only one (the first) gets the row lock. The table-locking mechanism keeps
track of who holds the row lock and who waits for it.
You can increase the overall number of locks available for an instance by increasing the
values of the DML_LOCKS and ENQUEUE_RESOURCES parameters. This may be
necessary in a Real Application Clusters configuration.
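For example, the current settings of these parameters can be checked as follows (illustrative only):
SQL> SHOW PARAMETER DML_LOCKS
SQL> SHOW PARAMETER ENQUEUE_RESOURCES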
SQL> SELECT id, salary
  2  FROM employees
  3  WHERE id = 24877
  4  FOR UPDATE;

       ID    SALARY
--------- ---------
    24877      1100

SQL> COMMIT;
Commit complete.

Table(s) Locked.
Share (S)
No DML operations allowed
Implicitly used for referential integrity
Often there are good application reasons for explicit locking but if you get lock contention
you may want to check with the developers. Non-Oracle developers sometimes use
unnecessarily high locking levels.
The table locking modes available for manual locking include:
Share (S) Lock Mode
This lock mode permits other transactions only to query the locked table or to lock
specific rows with SELECT ... FOR UPDATE. It prevents any modification to the table.
Exclusive (X)
No DML or DDL operations allowed by other
sessions
No manual locks allowed by other sessions
Queries are allowed
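For illustration, a session can request these table lock modes manually; the table name is only an example:
SQL> LOCK TABLE employees IN SHARE MODE;
SQL> LOCK TABLE employees IN EXCLUSIVE MODE NOWAIT;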
The slide shows a data block whose header contains lock bytes pointing to row 6 and row 1.
Technical Note
This locking information is not cleared out when transactions are committed but rather
when the next query reads the block. This is known as delayed block cleanout.
The query that does the cleaning must check the status of the transaction and the system
change number (SCN) in the transaction table held in the rollback segment header.
Within blocks, the Oracle server keeps an identifier for each active transaction in the block
header. At row level, the lock byte stores an identifier for the slot containing the
transaction.
Example: In the diagram shown in the slide, the transaction using slot 1 is locking row 6
and the transaction in slot 2 is locking row 1.
DDL Locks
DDL Locks
You are unlikely to see contention for DDL locks because they are held only briefly and
are requested in NOWAIT mode. There are three types of DDL locks.
While Transaction 1 holds a conflicting lock on the employees table, Transaction 2 issues:

SQL> ALTER TABLE employees
  2  DISABLE PRIMARY KEY;
ORA-00054: resource busy and acquire with NOWAIT specified
Transaction 1:
UPDATE employees
SET salary = salary * 1.1;

Transaction 2:
UPDATE employees
SET salary = salary * 1.1
WHERE empno = 1000;

Transaction 3:
UPDATE employees
SET salary = salary * 1.1
WHERE empno = 2000;

Views for diagnosing lock contention: v$lock, v$locked_object, dba_waiters, dba_blockers
If the value of xidusn is 0, then the session with the corresponding session ID is
requesting, and waiting for, the lock held by the session whose xidusn value is nonzero.
The utllockt.sql Script
You can also use the utllockt.sql script to display lock wait-for relationships in a hierarchy.
The script prints the sessions that are waiting for locks and the sessions that are blocking them.
You must run the catblock.sql script (found in the $ORACLE_HOME/rdbms/admin
directory) as a SYSDBA user before using utllockt.sql. The catblock.sql script
creates the dba_locks and dba_blockers views, along with others that are used
by utllockt.sql.
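As an illustrative sketch (not from the course), blocking and waiting sessions can also be seen directly in v$lock:
SQL> SELECT sid, type, id1, id2, lmode, request, block
  2  FROM v$lock
  3  WHERE block = 1 OR request > 0
  4  ORDER BY id1, id2;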
For example, in the following output session 9 is waiting for session 8, and sessions 7 and 10
are waiting for session 9.
WAITING
SESSION  TYPE  MODE            MODE            LOCK      LOCK
               REQUESTED       HELD            ID1       ID2
-------  ----  --------------  --------------  --------  ----
8        NONE  None            None            0         0
9        TX    Share (S)       Exclusive (X)   65547     16
7        RW    Exclusive (X)   S/Row-X (SSX)   33554440  2
10       RW    Exclusive (X)   S/Row-X (SSX)   33554440  2
Transaction 1                          Transaction 2
9:00   UPDATE employees
       SET salary = salary * 1.1
       WHERE empno = 1000;
9:05                                   UPDATE employees
                                       SET salary = salary * 1.1
                                       WHERE empno = 1000;
                                       (waits)
10:30  > COMMIT/ROLLBACK;
       > ALTER
  SID   SERIAL#  USERNAME
-----  --------  --------
    8       122  SYSTEM
   10        23  SCOTT
Deadlocks

       Transaction 1                     Transaction 2
9:00   UPDATE employees                  UPDATE employees
       SET salary = salary * 1.1         SET salary = salary * 1.1
       WHERE empno = 1000;               WHERE empno = 2000;

9:15   UPDATE employees                  UPDATE employees
       SET manager = 1342                SET manager = 1342
       WHERE empno = 2000;               WHERE empno = 1000;

9:16   ORA-00060:
       Deadlock detected while
       waiting for resource
Deadlocks
A deadlock can arise when two or more users wait for data locked by each other.
The Oracle server automatically detects and resolves deadlocks by rolling back the
statement that detected the deadlock.
Deadlocks (continued)
If the second update in Transaction 1 detects the deadlock, the Oracle server rolls back that
statement and returns the message. Although the statement that caused the deadlock is
rolled back, the transaction is not, and you receive an ORA-00060 error. Your next action
should be to roll back the remainder of the transaction.
Technical Note
Deadlocks most often occur when transactions explicitly override the default locking of
the Oracle server. Distributed deadlocks are handled in the same way as nondistributed
deadlocks.
Deadlocks
When ORA-00060 (deadlock detected while waiting for resource) is raised, the server
process writes a trace file, SID_ora_PID.trc on UNIX, in the USER_DUMP_DEST directory.
Trace File
A deadlock situation is recorded in a trace file in the USER_DUMP_DEST directory. It is
advisable to monitor trace files for deadlock errors to determine whether there are
problems with the application. The trace file contains the rowids of the locking rows.
I
A
In distributed transactions, local deadlocks are detected by analyzing a waits-for graph,
and global deadlocks are detected by a time-out.
When detected, nondistributed and distributed deadlocks are handled by the database and
application in the same way.
Summary
Practice 17
The objective of this practice is to use available diagnostic tools to monitor lock
contention. You will need to start three sessions in separate windows. Log in as hr/hr in
two separate sessions (sessions 1 and 3) and as sys/oracle as sysdba in another
session (session 2). Throughout this practice Oracle Enterprise Manager can be used if
desired. SQL Worksheet can be used instead of SQL*Plus and there are many uses for the
Oracle Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be
found in Appendix B).
1. In session 1 (user hr/hr), update the salary by 10% for all employees with a salary
< 15000 in the temp_emps table. Do not COMMIT.
2. In session 2 connect as sys/oracle AS sysdba and check to see whether any
locks are being held by querying the v$lock view.
3. In session 3 (the session not yet used), connect as hr/hr and drop the temp_emps
table. Does it work?
4. In session 3 (hr/hr), update the salary by 5% for all employees with a salary >
15000 in the temp_emps table. Do not COMMIT.
5. In session 2, check to see what kind of locks are being held on the temp_emps
table, using the v$lock view.
6. In session 3, roll back the changes you made and set the manager_id column to 10
for all employees who have a salary < 15000.
Note: This session will be hanging, so do not wait for the statement to complete.
7. In session 2, check to see what kind of locks are being held on the temp_emps
table, using the v$lock view.
Lock Matrix
The Lock Matrix summarizes, for each type of request, the table lock mode acquired and the modes it conflicts with:
Queries acquire no table lock (no locks on reads).
LOCK TABLE IN ROW SHARE MODE acquires mode 2, the least restrictive table lock; it conflicts only with mode 6, so it prevents exclusive DDL.
DML (update/insert/delete), including DML on a partitioned table, and LOCK TABLE IN ROW EXCLUSIVE MODE acquire mode 3; mode 3 conflicts with modes 4, 5, and 6, but not with another mode 3, so concurrent updates are allowed.
LOCK TABLE (or a table partition) IN SHARE MODE acquires mode 4, which conflicts with modes 3, 5, and 6; it allows SELECT ... FOR UPDATE and other share locks, and no ORA-1555 error is possible on the locked table.
LOCK TABLE IN SHARE ROW EXCLUSIVE MODE acquires mode 5, which conflicts with modes 3, 4, 5, and 6; it allows SELECT ... FOR UPDATE only, with no other share locks.
LOCK TABLE (or a table partition) IN EXCLUSIVE MODE acquires mode 6, which conflicts with modes 2 through 6; only queries are allowed.
DDL requires a mode 6 lock requested with NOWAIT, so it fails if any other lock mode is held on the table or table partition.
Objectives
Overview
The slide shows OLTP and DSS users connecting to an Oracle9i instance, with fewer resources allocated to the DSS user.
Overview
By using the Database Resource Manager, the database administrator (DBA) has more
control over certain resource utilization than is normally possible through operating system
resource management alone. With the Oracle database it is possible to have control over
CPU utilization and the degree of parallelism. If resource management decisions are left in
the hands of the operating system, this can cause inefficient scheduling or the
rescheduling of Oracle server processes while latches are being held.
With the Database Resource Manager, the DBA can:
Guarantee groups of users a minimum amount of processing resources regardless of
the load on the system and the number of users.
Distribute available processing resources, by allocating percentages of CPU time to
different users and applications. In an OLTP environment, a higher priority can be
given to OLTP applications than to DSS applications during normal business hours.
Limit the degree of parallelism that a set of users can use.
Configure an instance to use a particular plan for allocating resources. A DBA can
dynamically change the method, for example, from a daytime setup to a nighttime
setup, without having to shut down and restart the instance.
Database Resource
Management Concepts
Resource consumer group: user groups with similar resource needs
(one active resource consumer group per session)
Resource plan
Resource plan directives
The slide shows an example resource plan with MAILDB PLAN and BUGDB PLAN subplans; CPU is allocated by level (30% @ L1, 40% @ L1, 80% @ L1, 20% @ L1, 100% @ L2, 100% @ L3).
Method        Resource           Recipient
Round-robin   CPU to sessions    Groups
Emphasis      CPU to groups      Plans
Absolute      Parallel degree    Plans
Allocation methods

Group           CPU_P1  CPU_P2  CPU_P3
SYS_GROUP       100%    0%      0%
OTHER_GROUPS    0%      100%    0%
LOW_GROUP       0%      0%      100%
dbms_resource_manager.create_pending_area();
dbms_resource_manager.create_consumer_group
(consumer_group => 'OLTP', comment => 'Online users');
dbms_resource_manager.create_plan (
  plan    => 'NIGHT',
  comment => 'DSS/Batch priority, ...');

dbms_resource_manager.create_plan_directive (
  plan                     => 'NIGHT',
  group_or_subplan         => 'SYS_GROUP',
  comment                  => '...',
  cpu_p1                   => 100,
  parallel_degree_limit_p1 => 20);
dbms_resource_manager.create_plan (
  plan => 'NIGHT', comment => 'DSS/Batch priority, ...');

dbms_resource_manager.create_plan_directive (
  plan => 'NIGHT', group_or_subplan => 'SYS_GROUP',
  comment => '...', cpu_p1 => 100,
  parallel_degree_limit_p1 => 20);
QUEUEING_P1
Indicates how long, in seconds, any session will
wait on the queue before aborting the current
operation
Default is 1000000
Example: the slide shows a plan with consumer groups OLTP and BATCH, using the directive values ACTIVE_SESS_POOL_P1 = 5 and QUEUEING_P1 = 600.
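A minimal sketch of a directive implementing this example; the plan name DAY is an assumption:
dbms_resource_manager.create_plan_directive (
  plan                => 'DAY',
  group_or_subplan    => 'BATCH',
  comment             => 'limit BATCH to 5 active sessions',
  active_sess_pool_p1 => 5,
  queueing_p1         => 600);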
Undo Quota
Is specified in kilobytes
Default is 1000000
Undo Quota
The DBA can use the Database Resource Manager to limit the undo space consumed by the
transactions of a resource consumer group. This limit is set with the UNDO_POOL resource plan directive parameter.
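A minimal sketch of such a directive; the plan name DAY, the group DSS, and the quota value are assumptions:
dbms_resource_manager.create_plan_directive (
  plan             => 'DAY',
  group_or_subplan => 'DSS',
  comment          => 'cap undo generated by DSS',
  undo_pool        => 10240);   -- kilobytes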
dbms_resource_manager.validate_pending_area();
dbms_resource_manager.submit_pending_area();
dbms_resource_manager_privs.grant_switch_consumer_group (
  grantee_name => 'MOIRA', consumer_group => 'OLTP',
  grant_option => FALSE );
dbms_resource_manager.set_initial_consumer_group (
  user => 'MOIRA', consumer_group => 'OLTP' );
If this error is encountered, then the instance must be shut down, the parameter modified to
show a correct value and the instance restarted. You can also activate, deactivate, or change
the current top plan by using the ALTER SYSTEM statement. If the resource plan is
changed using this command, then it takes effect immediately.
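For example (illustrative only), the top plan can be switched dynamically with:
SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'NIGHT';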
dbms_session.switch_current_consumer_group (
  new_consumer_group     => 'DSS',
  old_consumer_group     => v_old_group,
  initial_group_on_error => FALSE );
For example, an online application that wants to generate a report at the end of a user
session could execute the command shown so that the report runs at a different priority than
the rest of the application. The old value is returned to the calling application. If necessary,
the consumer group can be switched back to the user's initial group within the application.
The third argument, if True, sets the current consumer group of the invoker to the initial
consumer group in the event of an error.
A status of Active indicates that the plan has been submitted and can be used, whereas
a status of Pending shows that the plan has been created, but is still in the pending area.
If the mandatory column is assigned a value of Yes then the plan cannot be deleted.
GRANTED_GROUP                  GRA  INI
------------------------------ ---  ---
BATCH                          NO   N
BATCH                          NO   Y
OLTP                           YES  N
OLTP                           YES  Y
DEFAULT_CONSUMER_GROUP         YES  Y
LOW_GROUP                      NO   N
SYS_GROUP                      NO   Y
The dba_rsrc_manager_system_privs view lists all the users and roles that have
been granted the administer_resource_manager system privilege :
SQL> SELECT *
  2  FROM dba_rsrc_manager_system_privs;

GRANTEE            PRIVILEGE                    ADM
------------------ ---------------------------- ---
DBA                ADMINISTER RESOURCE MANAGER  YES
EXP_FULL_DATABASE  ADMINISTER RESOURCE MANAGER  NO
IMP_FULL_DATABASE  ADMINISTER RESOURCE MANAGER  NO
CPU Utilization
There are at least three different views in the system that can provide you with information
about the CPU utilization inside the Oracle database:
v$rsrc_consumer_group shows CPU utilization statistics on a per consumer
group basis, if you are running the Oracle Database Resource Manager. This view
displays data related to currently active resource consumer groups.
v$sysstat shows the Oracle database CPU usage for all sessions. The statistic
CPU used by this session shows the aggregate CPU used by all sessions.
v$sesstat shows the Oracle database CPU usage per session. You can use this view
to determine which particular session is using the most CPU.
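As an illustrative sketch (not from the course), the instance-wide and per-session CPU figures can be queried as follows:
SQL> SELECT name, value
  2  FROM v$sysstat
  3  WHERE name = 'CPU used by this session';

SQL> SELECT ss.sid, ss.value
  2  FROM v$sesstat ss, v$statname sn
  3  WHERE ss.statistic# = sn.statistic#
  4  AND sn.name = 'CPU used by this session'
  5  ORDER BY ss.value DESC;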
The v$rsrc_consumer_group View
Summary
Objectives
The slide shows the operating system hosting both Oracle and non-Oracle processes, together with the OS and database files.
System Architectures
System Architectures
Uniprocessor Systems
Uniprocessor systems have a single CPU and a single memory.
Symmetric Multiprocessing (SMP) Systems
SMP systems have multiple CPUs. The number commonly ranges from two to 64. All of the
CPUs in an SMP machine share the same memory, system bus, and I/O system. A single
copy of the operating system controls all of the CPUs.
Massively Parallel Processing (MPP) Systems
MPP systems consist of several nodes connected together. Each node has its own CPU,
memory, bus, disks, and I/O system. Each node runs its own copy of the operating system.
Clustered (Cluster) Systems
A cluster consists of several nodes loosely coupled using local area network (LAN)
interconnection technology. Each of the individual nodes can contain one or more CPUs. In
a cluster, system software balances the workload among the nodes and provides for high
availability.
The slide shows the MMU mapping virtual memory to physical memory through a page table, possibly with ISM.
Virtual Memory
Operating systems make use of virtual memory. Virtual memory gives the application the
feeling that it is the only application on the system. Each application sees a complete
isolated memory area starting at address zero. This virtual memory area is divided into
memory pages, which are usually 4 or 8 KB in size. The operating system maps these virtual
memory pages into physical memory by the use of a memory management unit (MMU). The
mapping between virtual and physical memory is under the control of a page table. On most
operating systems, each process has its own page table. This can cause memory wastage if
many processes need to access a very large area of shared memory. On some platforms,
Solaris for example, this memory wastage can be avoided by sharing the page table entries
for a shared memory area. This is called intimate shared memory (ISM). An additional
benefit of using ISM is that the shared memory area gets locked into physical memory.
The slide shows a process page being written out to the swap device.
Tuning Memory
Tuning Memory
DB Tuning and Its Effects on Paging
Besides tuning the SGA, the DBA can also affect paging and swapping performance in
another way.
On some operating systems, the DBA can lock the SGA into real memory by setting the
LOCK_SGA initialization parameter to True, so it is never paged out to disk. Obviously, the
Oracle server performs better if the entire SGA is kept in real memory.
This should be used only on systems that have sufficient memory to hold all the SGA pages
without degrading performance in other areas.
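A minimal sketch, assuming a platform that supports SGA locking and an spfile-based instance:
SQL> ALTER SYSTEM SET LOCK_SGA = TRUE SCOPE = spfile;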
Monitor Memory Usage
Real and virtual memory usage and paging and swapping can usually be monitored by
process or for the entire operating system. The amount of paging and swapping that is
acceptable varies by operating system; some tolerate more than others.
Tuning I/O
The slide shows memory and the CPU on the system bus, which connects to multiple I/O controllers.
Tuning I/O
The system administrator improves the performance of disk I/O by balancing the load across
disks and disk controllers.
I/O-intensive systems, such as database servers, perform better with many small disks
instead of a few large disks. More disks reduce the likelihood that a disk becomes a
bottleneck. Parallel Query operations also benefit by distributing the I/O workload over
multiple disk drives.
Raw Devices
A raw device is a disk or disk partition without a file or directory structure. They are more
difficult to administer than operating system files.
Monitoring
I/O performance statistics usually include the number of reads and writes, reads and writes
per second, and I/O request queue lengths. Acceptable loads vary by device and controller.
CPU Tuning
Guidelines:
Maximum CPU busy rate: 90%
Maximum OS/User processing ratio: 40/60
CPU load balanced across CPUs
Monitoring:
CPU
Process
Oracle processes
The slide contrasts the two process models: on UNIX, the SQL*Plus user process and the background processes (smon, pmon, arc0, ckpt, lgwr, dbw0) run as separate operating system processes; on Windows, they run as threads inside the single oracle.exe process.
Summary
Workshop Overview
Objectives
Approach to Workshop
Approach to Workshop
Group-Oriented and Interactive
The workshop is structured to enable groups of individuals to work together to perform
tuning diagnostics and resolution. Each group is encouraged to share its tuning diagnostic
and resolution approach with other groups in the class.
Intensive Hands-On Diagnosis and Problem Resolution
The intent is to provide you with as much hands-on experience as possible to diagnose and
work through a performance tuning methodology, diagnosis, and resolution. The experience
and knowledge gained from the first four days of this course play a major role in
successfully completing the workshop.
Company Information
Company Information
Present Situation
The company has two OLTP-type users and two DSS-type users. The system was set up by a
trainee DBA and, though it works, the performance is not acceptable. The company rents
space on a Sun server which it shares with 10 other companies. Due to this there is a
requirement that resources used be kept to a minimum.
Future Goals
At present the company has 4 employees. The company is expanding and is expected to
have 20 concurrent database users. You have been invited in to get the system ready for the
new workload. It is expected that there will be 10 of each type of user.
After collecting the statistics that you think are necessary, implement whatever database
changes your investigation determines would improve the situation: for example, which
parameter values to set, which tables would benefit from an index, and so on.
Database Configuration
Use the Oracle data dictionary views to obtain a complete picture of the database
configuration.
Setup for Statspack
Ensure that the job scheduler is set to collect statistics every 10 minutes. This is done by
connecting as the user perfstat and executing the following statement:
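The exact statement is not reproduced here; a minimal sketch using DBMS_JOB, assuming the standard statspack.snap procedure, might look like this:
SQL> VARIABLE jobno NUMBER
SQL> BEGIN
  2    dbms_job.submit(:jobno, 'statspack.snap;',
  3                    SYSDATE, 'SYSDATE + 10/1440');
  4    COMMIT;
  5  END;
  6  /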
Workshop Procedure
1. Choose a scenario.
2. Create a Statspack report.
3. Run the workload generator.
4. Create a Statspack report.
5. Determine what changes should take place.
6. Implement the changes.
7. Return to the second step to check that the changes have made an improvement.
8. When the changes have improved performance, choose another scenario.
Workshop Procedure
The workshop is executed against the local database. There is a WORKSHOP group on your
desktop. This group includes icons for choosing a scenario and applying a workload.
Choose a Scenario
Seven scenarios are available. Each scenario is represented in the WORKSHOP group as a
.bat file; to set the database for scenario 1, select the icon labeled 1.bat.
Choosing a Scenario
Workshop Scenarios
Workshop Scenarios
Workshop Scenario 5
After completing this scenario, change the database back to Automatic Undo Management mode.
This can be performed manually or by running the script 0.bat.
Workshop Scenario 7
This scenario provides a mixture of database tuning problems. Some of the problems are the
same as in other scenarios.
Collecting Information
Physical Investigation
Perform a physical investigation of your workshop database environment. Remember to use
the tools that are available to you within the Oracle environment, such as the v$ dynamic
performance views, data dictionary views, table statistics, Statspack, and Oracle Enterprise
Manager. Use the statistics and ratios presented during the first four days of the class to
analyze the baseline. Depending on the scenario used, there are a number of statistics in the
report file that are contrary to a well-tuned database.
Note: While the workload generator is executing, you can use available tools to monitor
database performance. Also ensure that you are in a writable directory when you run the
spreport.sql script.
Results
Conclusions
Each group presents its conclusions and findings to the rest of class. The intent is to
demonstrate the effectiveness of the tuning strategy and show what effect the modifications
to the instance and database parameters had on overall performance. Include the following
items in your presentation:
What was done?
What was the justification?
What were the results?
Are there any pending issues?
What would you do differently?
Summary
Practice 2
The goal of this practice is to familiarize you with the different methods of collecting
statistical information. Throughout this practice Oracle Enterprise Manager can be used if
desired. SQL Worksheet can be used instead of SQL*Plus and there are many uses for the
Oracle Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be
found in Appendix B).
1. Log on as directed by the instructor. If the database is not already started, connect to
SQL*Plus using sys/oracle as sysdba, then start up the instance using the
STARTUP command. Ensure that the password for the user system is set to oracle.
Check that TIMED_STATISTICS has been set to True; if it has not, then set it
using the ALTER SYSTEM statement.
SQL> CONNECT sys/oracle AS sysdba
SQL> ALTER USER system IDENTIFIED BY oracle;
SQL> SHOW PARAMETER TIMED_STATISTICS
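If TIMED_STATISTICS is not already True, a minimal sketch of the missing step (not the official solution) is:
SQL> ALTER SYSTEM SET TIMED_STATISTICS = TRUE;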
2. Connect to SQL*Plus as the system user and issue a command that will create a
trace file for this session. Run a query to count the number of rows in the
dba_tables dictionary view. To locate your new trace file easier, if possible,
delete all the trace files in the USER_DUMP_DEST directory before running the
trace. Remember to disable the trace command after running the query.
SQL> CONNECT system/oracle
SQL> ALTER SESSION SET SQL_TRACE = TRUE;
SQL> SELECT COUNT(*) FROM dba_tables;
SQL> ALTER SESSION SET SQL_TRACE = FALSE;
3. At the operating system level view the resulting trace file located in the directory set
by USER_DUMP_DEST. Do not try to interpret the content of the trace file, because
this is the topic of a later lesson.
$ cd $HOME/ADMIN/UDUMP
$ ls -l
Practice 2 (continued)
4. Open two sessions, the first as hr/hr, and the second as sys/oracle as
sysdba. From the second session generate a user trace file for the first session using
the dbms_system.set_sql_trace_in_session procedure. Get the SID
and serial# from v$session.
From Session 1
$ sqlplus hr/hr
Change to Session 2
$ sqlplus "sys/oracle as sysdba"
SQL> SELECT username, sid, serial#
2 FROM v$session
3 WHERE username = 'HR';
SQL> BEGIN
2 dbms_system.set_sql_trace_in_session
3 (&SID,&SERIALNUM,TRUE);
4 END;
5 /
Change to Session 1
SQL> SELECT * FROM employees;
Change to Session 2
SQL> BEGIN
  2  dbms_system.set_sql_trace_in_session
  3  (&SID,&SERIALNUM,FALSE);
  4  END;
  5  /
5. Confirm that the trace file has been created in the directory set by
USER_DUMP_DEST.
$ cd $HOME/ADMIN/UDUMP
$ ls -l
-rw-r----- 1 dba01 dba   dba01_ora_3270.trc
-rw-r----- 1 dba01 dba   dba01_ora_3281.trc
Practice 2 (continued)
7. Confirm and record the amount of free space available within the tools tablespace
by querying the dba_free_space view. Also check that the tablespace is
dictionary managed.
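A minimal sketch of such queries (not the official solution):
SQL> SELECT SUM(bytes) FROM dba_free_space
  2  WHERE tablespace_name = 'TOOLS';
SQL> SELECT tablespace_name, extent_management
  2  FROM dba_tablespaces
  3  WHERE tablespace_name = 'TOOLS';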
9. Query dba_free_space to determine the amount of free space left in the tools
tablespace. The difference between this value and the one recorded in step 7 will be
the space required for the initial installation of Statspack.
O
e
s
U
Note: The amount of storage space required will increase in proportion to the
amount of information stored within the Statspack tables, that is, the number of
snapshots.
Subtract the value obtained now from the value recorded in step 7 to get the amount
of space required to install Statspack.
10. Manually collect current statistics using Statspack by running the snap.sql script
located in $HOME/STUDENT/LABS. This will return the snap_id for the
snapshot just taken, which should be recorded.
Practice 2 (continued)
11. To have Statspack automatically collect statistics every three minutes execute the
spauto.sql script located in your $HOME/STUDENT/LABS directory. Query the
database to confirm that the job has been registered using the user_jobs view.
SQL> @$HOME/STUDENT/LABS/spauto.sql
SQL> SELECT job, next_date, next_sec, last_sec
  2  FROM user_jobs;

SQL> SELECT snap_id,
  2         TO_CHAR(startup_time, 'dd Mon "at" HH24:mi:ss') instart_fm,
  3         TO_CHAR(snap_time, 'dd Mon YYYY HH24:mi') snap_date,
  4         snap_level lvl
  5  FROM stats$snapshot
  6  ORDER BY snap_id;
Note: If the job scheduler is not working, check the value of the
JOB_QUEUE_PROCESSES parameter. The value should be greater than 0.
13. When there are at least two snapshots, start to generate a report. This is performed
using the spreport.sql script found in the $HOME/STUDENT/LABS directory.
The script lists the snapshot options available and then requests the beginning snap
id and the end snap id. The user is then requested to give a filename for the
report. It is often best left to the default.
SQL> @$HOME/STUDENT/LABS/spreport.sql
14. Locate the report file in the user's current directory; then, using any text editor, open
and examine the report. The first page shows a collection of the most queried
statistics.
$ vi sp_X_Y.lst
where X is the starting snapshot and Y is the ending snapshot (this is true if the
default report filename was used).
Practice 2 (continued)
16. Query the database to determine what system wait events have been registered since
startup using v$system_event.
SQL> SELECT event, total_waits, time_waited
2 FROM v$system_event;
18. Stop the automatic collection of statistics by removing the job. This is performed by
connecting as perfstat/perfstat and querying the user_jobs view to get
the job number. Then execute the dbms_job.remove procedure.
SQL> CONNECT perfstat/perfstat
SQL> SELECT job, log_user
  2  FROM user_jobs;
SQL> EXECUTE dbms_job.remove (&job_to_remove);
19. Connect to your database using Oracle Enterprise Manager. The lecturer will supply
the information required to connect to the Oracle Management Server. After you
have connected to the database, use Oracle Enterprise Manager to explore the
database. Examine items such as the number of tablespaces, users, and tables.
Practice 2 (continued)
Oracle Classroom Only (continued)
d. Close the MSDOS window.
Start the Oracle Enterprise Manager Console and set the Administrator to
sysman and the password to oem_temp. When prompted, change the
password to oracle. Select Discover Nodes from the Navigator and enter
the host name of the server of your working database.
i. From the Start menu > Programs > Oracle OracleHome >
Enterprise Manager Console
ii. Make sure the Login to the Oracle Management Server is selected.
iii. Administrator: sysman
iv. Password: oem_temp
v. Management server is your machine.
vi. When prompted, change the sysman password to oracle.
vii. Select Navigator > Discover Nodes from the console menu, or select
Discover Nodes from the right mouse shortcut menu to open the
Discover Nodes dialog box.
viii. From the Discovery Wizard: Introduction page, click Next, enter the
name of your UNIX database server, and click Next.
ix. Click Next, give your regular administrator access to your database.
x. Click Finish, then OK. If your discovery was not successful contact
your instructor.
20. From Oracle Enterprise Manager load Oracle Expert and create a new tuning session.
Limit the tuning scope to Check for Instance Optimizations. This is done to reduce
the time taken to collect information. Collect a new set of data.
Note: Do not implement the changes that Oracle Expert recommends, because this
will be done during the course.
Practice 3
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be found in
Appendix B).
1. Connect as system/oracle and diagnose database file configuration by querying
the v$datafile, v$logfile and v$controlfile dynamic performance
views.
SQL> CONNECT system/oracle
SQL> SELECT name FROM v$datafile
  2  UNION
  3  SELECT member FROM v$logfile
  4  UNION
  5  SELECT name FROM v$controlfile
  6  UNION
  7  SELECT value FROM v$parameter
  8  WHERE (name LIKE 'log_archive_dest%'
  9  AND name NOT LIKE 'log_archive_dest_state%')
 10  OR name IN
 11  ('log_archive_dest','log_archive_duplex_dest');
3. Determine whether there are waits for redo log files by querying the
v$system_event dynamic performance view, where the waiting event is log file
sync or log file parallel write.
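A minimal sketch of such a query (not the official solution):
SQL> SELECT event, total_waits, time_waited
  2  FROM v$system_event
  3  WHERE event IN ('log file sync', 'log file parallel write');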
Waits for log file sync are indicative of slow disks that store the online logs or
unbatched commits. The log file parallel write is much less useful because this event
only shows how often LGWR waits, not how often server processes wait. If LGWR
waits without impacting user processes, there is no performance problem. If LGWR
waits, it is likely that the log file sync event (mentioned above) will also be evident.
Practice 3 (continued)
4. Connect as perfstat/perfstat and diagnose file usage from Statspack.
a. Generate a Statspack report using
$HOME/STUDENT/LABS/spreport.sql
b. Locate and open the report file.
c. Examine the report and search for the File IO Stats string.
Note: On a production database care should be taken in monitoring the disk
and controller usage by balancing the workload across all devices. If your
examination shows a distinct over-utilization of a particular data file, consider
resolving the cause of the amount of I/O. For example, investigate the number
of full table scans, clustering of files on a specific device and under-utilization
of indexes. If after this the problem remains then look at placing the data file on
a low utilization device.
5. Connect as system/oracle and enable checkpoints to be logged in the alert file
by setting the value of the LOG_CHECKPOINTS_TO_ALERT parameter to True
using the ALTER SYSTEM SET command.
SQL> CONNECT system/oracle
SQL> ALTER SYSTEM SET LOG_CHECKPOINTS_TO_ALERT = True;
7. At the operating system level use the editor to open the alert log file (located in the
directory specified by BACKGROUND_DUMP_DEST). Then determine the
checkpoint frequency for your instance by searching for messages containing the
phrase Completed Checkpoint. The time difference between two consecutive
messages is the checkpoint interval.
Open the alert log file using an editor and search for the line: Completed checkpoint.
The line before this will be the time at which the checkpoint occurred. Search for the
following checkpoint time and then subtract to get the time between checkpoints.
Practice 4
The objective of this practice is to use diagnostic tools to monitor and tune the shared pool.
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be found in
Appendix B).
1. Connect using sys/oracle as sysdba and check the size of the shared pool.
SQL> CONNECT sys/oracle AS sysdba
SQL> SHOW PARAMETER SHARED_POOL
NAME                          TYPE         VALUE
----------------------------  -----------  ---------------
shared_pool_reserved_size     big integer  2516582
shared_pool_size              big integer  50331648
3. To simulate user activity against the database open two operating system sessions. In
session 1 connect as hr/hr and run the
$HOME/STUDENT/LABS/lab04_03_1.sql script. In the second session
connect as hr/hr and run the $HOME/STUDENT/LABS/lab04_03_2.sql
script.
In session 1:

In session 2:
4. Connect as system/oracle and measure the pin-to-reload ratio for the library
cache by querying v$librarycache. Determine whether it is a good ratio or not.
l
c
a
r
O
SQL> CONNECT system/oracle
SQL> SELECT SUM(pins), SUM(reloads),
  2         SUM(pins) * 100 / SUM(pins+reloads) "Pin Hit%"
  3  FROM v$librarycache;
Practice 4 (continued)
5. Connect as system/oracle and measure the get-hit ratio for the data dictionary
cache by querying v$rowcache. Determine whether it is a good ratio or not using
the dynamic view.
SQL> CONNECT system/oracle
SQL> SELECT SUM(getmisses), SUM(gets),
  2         SUM(getmisses)*100/SUM(gets) "MISS %"
  3  FROM v$rowcache;
If GETMISSES is lower than 15% of GETS, then it is a good ratio.
6. Connect as perfstat/perfstat and run the
$HOME/STUDENT/LABS/snap.sql script to collect a statistic snapshot and
obtain the snapshot number. Record this number.
SQL> CONNECT perfstat/perfstat
SQL> @$HOME/STUDENT/LABS/snap.sql
Practice 4 (continued)
8. Analyze the generated report in the current directory. What would you consider
doing if the library hit ratio (found under the heading Instance Efficiency
Percentages) is less than 98%?
Increase the SHARED_POOL_SIZE parameter.
9. Connect as system/oracle and determine which packages, procedures, and
triggers are pinned in the shared pool by querying v$db_object_cache.
SQL> CONNECT system/oracle
SQL> SELECT name, type, kept
2 FROM v$db_object_cache
3 WHERE type IN
4 ('PACKAGE', 'PROCEDURE', 'TRIGGER', 'PACKAGE
BODY');
10. Connect using sys/oracle as sysdba and pin one of the Oracle supplied
packages that must be kept in memory, such as sys.standard using the
dbms_shared_pool.keep procedure, which is created by running the
$ORACLE_HOME/rdbms/admin/dbmspool.sql script.
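A minimal sketch of these commands (not the official solution):
SQL> CONNECT sys/oracle AS sysdba
SQL> @$ORACLE_HOME/rdbms/admin/dbmspool.sql
SQL> EXECUTE dbms_shared_pool.keep('SYS.STANDARD');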
11. Determine the amount of session memory used by your session by querying the
v$mystat view. Limit the output by including the clause:
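The exact clause is not reproduced here; a minimal sketch of such a query, assuming the session UGA memory statistic, is:
SQL> SELECT SUM(m.value) "session memory"
  2  FROM v$mystat m, v$statname n
  3  WHERE m.statistic# = n.statistic#
  4  AND n.name = 'session uga memory';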
Note: Because you are not using the Oracle Shared Server configuration this
memory resides outside the SGA.
12. Determine the amount of session memory used for all sessions, using v$sesstat
and v$statname views:
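A minimal sketch of such a query (not the official solution):
SQL> SELECT SUM(s.value) "total session memory"
  2  FROM v$sesstat s, v$statname n
  3  WHERE s.statistic# = n.statistic#
  4  AND n.name = 'session uga memory';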
Practice 5
The objective of this practice is to use available diagnostic tools to monitor and tune the
database buffer cache. Throughout this practice Oracle Enterprise Manager can be used if
desired. SQL Worksheet can be used instead of SQL*Plus and there are many uses for the
Oracle Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be
found in Appendix B).
1. Connect as perfstat/perfstat and run a statistic snapshot. Make a note of the
snapshot number. The snapshot can be taken by running the
$HOME/STUDENT/LABS/snap.sql script file.
SQL> CONNECT perfstat/perfstat
SQL> @$HOME/STUDENT/LABS/snap.sql
2. To simulate user activity against the database, connect as the hr/hr user and run
the lab05_02.sql script.
SQL> CONNECT hr/hr
SQL> @$HOME/STUDENT/LABS/lab05_02.sql
3. Connect as system/oracle and measure the hit ratio for the database buffer
cache using the v$sysstat view. Determine whether it is a good ratio or not.
SQL> CONNECT system/oracle
SQL> SELECT 1 - ((phy.value - lob.value - dir.value)
  2             / ses.value) "CACHE HIT RATIO"
  3  FROM   v$sysstat ses, v$sysstat lob,
  4         v$sysstat dir, v$sysstat phy
  5  WHERE  ses.name = 'session logical reads'
  6  AND    dir.name = 'physical reads direct'
  7  AND    lob.name = 'physical reads direct (lob)'
  8  AND    phy.name = 'physical reads';
5. Use the report from Statspack between the last two snapshots to check the buffer
cache hit ratio, using the $HOME/STUDENT/LABS/spreport.sql script. Then
analyze the buffer hit % in the Instance Efficiency Percentages section.
r
O
SQL> @$HOME/STUDENT/LABS/spreport.sql
Practice 5 (continued)
Note: On a production database if the ratio is bad, add new buffers, run steps 2 to 5,
and examine the new ratio to verify that the ratio has improved. If the ratio is good,
remove buffers, run steps 2 to 5, and verify if the ratio is still good.
6. Connect as system/oracle and determine the size of the temp_emps table in
the hr schema that you want to place in the keep buffer pool. Do this by using the
dbms_stats.gather_table_stats procedure and then query the blocks
column of the dba_tables view for the temp_emps table.
SQL> CONNECT system/oracle
SQL> EXECUTE dbms_stats.gather_table_stats ('HR','TEMP_EMPS');
SQL> SELECT table_name, blocks
  2  FROM dba_tables
  3  WHERE table_name IN ('TEMP_EMPS');
7. Keep temp_emps in the keep pool. Use the ALTER SYSTEM command to set
DB_KEEP_CACHE_SIZE to 4 MB for the keep pool. Limit the scope of this
command to the spfile.
SQL> ALTER SYSTEM SET DB_KEEP_CACHE_SIZE=4M
SCOPE=spfile;
8. For the keep pool to be allocated the database needs to be restarted. You will need to
be connected as a sysdba user to perform this task.
SQL> CONNECT sys/oracle AS sysdba
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
Practice 5 (continued)
10. Connect as hr/hr and run the $HOME/STUDENT/LABS/lab05_10.sql script.
This will execute a query against the temp_emps table in the hr schema.
SQL> CONNECT hr/hr
SQL> @$HOME/STUDENT/LABS/lab05_10.sql
11. Connect using sys/oracle as sysdba and check for the hit ratio in different
buffer pools, using the v$buffer_pool_statistics view.
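A minimal sketch of such a query (not the official solution):
SQL> CONNECT sys/oracle AS sysdba
SQL> SELECT name, physical_reads, db_block_gets, consistent_gets
  2  FROM v$buffer_pool_statistics;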
Practice 6
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be found in
Appendix B).
1. Connect as sys/oracle AS sysdba and, without restarting the instance, resize
the DB_CACHE_SIZE to 12 MB. Limit the effect of this command to memory, so as
not to modify the spfile.
SQL> CONNECT sys/oracle AS sysdba
SQL> ALTER SYSTEM SET DB_CACHE_SIZE = 12M
  2  SCOPE = memory;
Note: This will encounter an error because the total SGA size will be bigger than
SGA_MAX_SIZE. To overcome this you must either change the value of
SGA_MAX_SIZE and restart the instance (which is what dynamic allocation is
meant to avoid) or resize a component, thus making memory available for the
increase in the buffer cache.
2. Reduce the memory used by the shared pool. Limit the effect of this command to
memory, so as not to modify the spfile.
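A minimal sketch; the target size of 32M is an assumption, not the course value:
SQL> ALTER SYSTEM SET SHARED_POOL_SIZE = 32M
  2  SCOPE = memory;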
3. Without restarting the instance, resize the DB_CACHE_SIZE to 12 MB. Limit the
effect of this command to memory, so as not to modify the spfile.
Note: This time the memory is available so the command will be executed.
4. To return the SGA to the original configuration, restart the instance. You must be
connected as a sysdba user to perform this task.
Practice 7
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be found in
Appendix B).
1. Connect as perfstat/perfstat and collect a snapshot of the current statistics
by running the $HOME/STUDENT/LABS/snap.sql script. Record the snapshot
ID for later use.
SQL> CONNECT perfstat/perfstat
SQL> @$HOME/STUDENT/LABS/snap;
SQL> CONNECT system/oracle
SQL> SELECT rbar.name, rbar.value, re.name, re.value
  2  FROM v$sysstat rbar, v$sysstat re
  3  WHERE rbar.name = 'redo buffer allocation retries'
  4  AND re.name = 'redo entries';
Practice 7 (continued)
5. Connect as sys/oracle AS sysdba and increase the size of the redo log buffer
in the spfile by changing the value of the LOG_BUFFER parameter. Because this
parameter is static you must specify spfile.
SQL> CONNECT sys/oracle AS sysdba
SQL> ALTER SYSTEM SET LOG_BUFFER = 128000
  2  SCOPE = spfile;
6. To have the new value for the LOG_BUFFER take effect, you must restart the
instance. Then confirm that the change has occurred.
SQL> SHUTDOWN immediate
SQL> STARTUP
SQL> SHOW PARAMETER LOG_BUFFER
Practice 9
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be found in
Appendix B).
1. Set the database to use the manual sort option by changing the value of the
WORKAREA_SIZE_POLICY parameter to Manual. Set the SORT_AREA_SIZE
parameter to 512 bytes.
SQL> CONNECT sys/oracle AS sysdba
SQL> ALTER SYSTEM SET WORKAREA_SIZE_POLICY = manual
  2  SCOPE = both;
SQL> ALTER SYSTEM SET SORT_AREA_SIZE = 512
  2  SCOPE = spfile;
2. For the new value of the SORT_AREA_SIZE parameter to take effect, you must restart the
instance. Then query the v$sysstat view and record the values for sorts (memory)
and sorts (disk).
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> SELECT name, value
  2  FROM v$sysstat
  3  WHERE name LIKE 'sorts%';
Note: The statistics in v$sysstat are collected from startup. If you need to obtain
accurate statistics per statement, you must record the statistics before the statement runs
and again afterwards. Subtracting the two values gives the statistics for the statement.
3. To perform a sort that spills to disk, connect as sh/sh and
execute the $HOME/STUDENT/LABS/lab09_03.sql script.
Note: If this script fails due to a lack of free space in the temp tablespace then
connect as system/oracle and resize the temporary tablespace.
4. Connect as system/oracle, query the v$sysstat view again, and record the
value for sorts (memory) and sorts (disk). Subtract the values from the recorded
value in question 2. If the ratio of Disk to Memory sorts is greater than 5% then
increase the sort area available.
Practice 9 (continued)
SQL> CONNECT system/oracle
SQL> SELECT name, value
  2  FROM v$sysstat
  3  WHERE name LIKE 'sorts%';
Note: If this statement returns no rows, it means that all sort operations since startup
have completed in memory.
6. To decrease the number of sorts going to a temporary tablespace, increase the value
of the SORT_AREA_SIZE parameter to 512000 using the ALTER SESSION
command.
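A minimal sketch of this step (not the official solution):
SQL> ALTER SESSION SET SORT_AREA_SIZE = 512000;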
Practice 10
The objective of this practice is to use available diagnostic tools to monitor and tune the
rollback segments. This would require setting the database to Manual Undo Management
mode. Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be found in
Appendix B).
1. Set the database in Manual Undo Mode by connecting as sys/oracle AS
sysdba and change the following parameters to the values shown:
undo_management = Manual
undo_tablespace = Null
Restart the database and confirm that the UNDO_MANAGEMENT parameter is set to
Manual and that UNDO_TABLESPACE is Null.
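A minimal sketch of these steps (not the official solution):
SQL> CONNECT sys/oracle AS sysdba
SQL> ALTER SYSTEM SET UNDO_MANAGEMENT = MANUAL SCOPE = spfile;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = '' SCOPE = spfile;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> SHOW PARAMETER UNDO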
Note: This is not to be an UNDO tablespace and you must specify that it is to be
dictionary managed.
3. For the purposes of this practice, create a new rollback segment called rbsx in the
rbs_test tablespace. For the storage parameters, use 64 KB for the INITIAL and
NEXT extent sizes with MINEXTENTS value set to 20. Set the OPTIMAL value so
that the segment shrinks back to 1280 KB automatically.
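A minimal sketch of this statement (not the official solution):
SQL> CREATE ROLLBACK SEGMENT rbsx
  2  TABLESPACE rbs_test
  3  STORAGE (INITIAL 64K NEXT 64K MINEXTENTS 20 OPTIMAL 1280K);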
Practice 10 (continued)
4. Bring the rbsx rollback segment online and ensure that any others (except the
system rollback segment) are offline. Query the dba_rollback_segs view to
get the segment_name and status of the rollback segments to be taken offline using
the ALTER ROLLBACK SEGMENT command.
SQL> ALTER ROLLBACK SEGMENT rbsx ONLINE;
SQL> SELECT segment_id, segment_name, status
2 FROM dba_rollback_segs;
5. Before executing a new transaction, find the number of bytes written so far in the
rbsx rollback segment, using the writes column of v$rollstat.
SQL> SELECT usn, writes
2 FROM v$rollstat
3 WHERE usn>0;
In session 1:

In Session 2:
SQL> CONNECT system/oracle
SQL> SELECT usn, writes
  2  FROM v$rollstat
  3  WHERE usn>0;
Note: The number of writes in the rollback segment between questions 5 and 6 is the
difference in the value of the writes column at the respective times.
Practice 10 (continued)
8. Return to the hr session (the first session) and commit the insert. Run the
$HOME/STUDENT/LABS/del_temps.sql script. Do not COMMIT. The script
deletes the hundred rows you have just inserted. As user system (in the second
session), check the amount of rollback space used, using the writes column of
v$rollstat. Note the difference between the return value and that found in
question 6.
In session 1:
SQL> COMMIT;
SQL> @$HOME/STUDENT/LABS/del_temps.sql
In session 2:
SQL> SELECT usn, writes
2 FROM v$rollstat
3 WHERE usn>0;
9. In session 2, connect as system/oracle and find out if you have had any rollback
segment contention since startup, using the waits and gets columns in the
v$rollstat view.
SQL> SELECT SUM(waits)/SUM(gets) "Ratio",
  2         SUM(waits) "Waits", SUM(gets) "Gets"
  3  FROM v$rollstat;
10. Does the v$system_event view show any waits related to rollback segments?
Using session 2, query in v$system_event view for the undo segment tx slot
entry.
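A minimal sketch of such a query (not the official solution):
SQL> SELECT event, total_waits, time_waited
  2  FROM v$system_event
  3  WHERE event = 'undo segment tx slot';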
Note: Because only one session is making changes it is unlikely that there will be
any contention for the undo segment transaction slot.
11. In session 1 commit the transaction. Then connect as hr/hr and run the
$HOME/STUDENT/LABS/ins_temps.sql script again, allocating the
transaction to a specific rollback segment rbsx, using the set transaction use
rollback segment command. In session 2, check that the transaction is using the
defined rollback segment by joining the v$rollstat, v$session, and
v$transaction views.
Practice 10 (continued)
In session 1:
SQL> COMMIT;
SQL> SET TRANSACTION USE ROLLBACK SEGMENT rbsx;
SQL> @$HOME/STUDENT/LABS/ins_temps.sql
In session 2:
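A minimal sketch of such a join (not the official solution):
SQL> SELECT s.username, t.xidusn, r.usn, r.writes
  2  FROM v$session s, v$transaction t, v$rollstat r
  3  WHERE s.taddr = t.addr
  4  AND t.xidusn = r.usn;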
12. Close session 2, then in session 1 connect as sys/oracle AS sysdba and set
the database in Auto Undo Mode by changing the following parameters to the values
shown:
undo_management = Auto
undo_tablespace = undotbs
Restart the database and confirm that the UNDO_MANAGEMENT parameter is set to
Auto and that UNDO_TABLESPACE is undotbs.
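A minimal sketch of these steps (not the official solution):
SQL> CONNECT sys/oracle AS sysdba
SQL> ALTER SYSTEM SET UNDO_MANAGEMENT = AUTO SCOPE = spfile;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = undotbs SCOPE = spfile;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> SHOW PARAMETER UNDO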
Practice 11
The objective of this practice is to familiarize you with SQL statement execution plans and
to interpret the formatted output of a trace file generated using SQL Trace and the
formatted output generated by TKPROF. Throughout this practice Oracle Enterprise
Manager can be used if desired. SQL Worksheet can be used instead of SQL*Plus and
there are many uses for the Oracle Enterprise Manager console. (Solutions for Oracle
Enterprise Manager can be found in Appendix B).
1. Connect as hr/hr and create the plan_table table under the hr schema, if it is
not already created, by running the
$ORACLE_HOME/rdbms/admin/utlxplan.sql script.
SQL> CONNECT hr/hr
SQL> @$ORACLE_HOME/rdbms/admin/utlxplan.sql
Note: If plan_table already exists and holds rows then truncate the table.
2. Set the optimizer mode to rule based using the ALTER SESSION command and
generate the explain plan for the statement
$HOME/STUDENT/LABS/lab11_02.sql. View the generated plan by querying
object name, operation, option, and optimizer from the plan_table table.
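A minimal sketch of the surrounding commands; the content of lab11_02.sql is not reproduced here:
SQL> ALTER SESSION SET OPTIMIZER_MODE = RULE;
SQL> SELECT object_name, operation, options, optimizer
  2  FROM plan_table;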
3. Truncate the plan_table table. Change the optimizer mode to cost based by
setting the value to All_rows and rerun the explain plan for
$HOME/STUDENT/LABS/lab11_02.sql. Notice that the optimizer mode and
the explain plan have changed.
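A minimal sketch of the changed settings (not the official solution):
SQL> TRUNCATE TABLE plan_table;
SQL> ALTER SESSION SET OPTIMIZER_MODE = ALL_ROWS;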
Note: Although exactly the same scripts are being run, due to the different optimizer
settings, different explain paths are found. With rule-based, one of the rules is to use
any index that is on the columns in the where clause. By using cost-based optimizer
mode, the server has been able to determine that it will be faster to just perform a full
table scan, due to the number of rows being returned by the script.
Practice 11 (continued)
4. Truncate the plan_table table and set the optimizer mode to Rule by using the
ALTER SESSION command. This time generate the explain plan for the
$HOME/STUDENT/LABS/lab11_04.sql script. Examine the script, which is a
copy of $HOME/STUDENT/LABS/lab11_02.sql except that it changes the
SELECT * line to include an optimizer hint, /*+ all_rows */. View the
generated execution plan by querying object name, operation, options, and optimizer
from the plan_table table.
Note: this step is performed only to make it easier to find the trace file generated. It
is not a requirement of SQL Trace.
6. Connect as sh/sh and enable SQL Trace, using the ALTER SESSION command,
to collect statistics for the script $HOME/STUDENT/LABS/lab11_06.sql. Run
the script. After the script has completed, disable SQL Trace, then format your trace
file using TKPROF. Use the options SYS=NO and EXPLAIN=sh/sh. Name the
output file myfile.txt.
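A sketch of the whole sequence (the trace file name below is a placeholder; the actual name depends on the server process ID):
SQL> CONNECT sh/sh
SQL> ALTER SESSION SET sql_trace = TRUE;
SQL> @$HOME/STUDENT/LABS/lab11_06.sql
SQL> ALTER SESSION SET sql_trace = FALSE;
SQL> EXIT
$ cd $HOME/ADMIN/UDUMP
$ tkprof <your_trace_file>.trc myfile.txt SYS=NO EXPLAIN=sh/sh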
7. View the output file myfile.txt and note the CPU, current, and query figures for
the fetch phase. Do not spend time analyzing the contents of this file because the
only objective here is to become familiar and comfortable with running TKPROF and
SQL*Trace.
$ more myfile.txt
Practice 12
The objective of this practice is to familiarize you with the dbms_stats package.
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be found in
Appendix B).
1. Connect as hr/hr and create a table new_employees as a copy of the
employees table. Gather statistics on the new_employees table and determine
the current number of rows in the new_employees table. Record the number of
rows for comparison later.
SQL> CONNECT hr/hr
SQL> CREATE TABLE new_employees
  2  AS SELECT *
  3  FROM employees;
SQL> EXECUTE dbms_stats.gather_table_stats('HR','NEW_EMPLOYEES');
SQL> SELECT table_name, num_rows
2 FROM user_tables
3 WHERE table_name = 'NEW_EMPLOYEES';
2. Increase the size of the new_employees table by running the
$HOME/STUDENT/LABS/lab12_02.sql script.
SQL> @$HOME/STUDENT/LABS/lab12_02.sql
3. Confirm that the statistics have not been changed in the data dictionary by re-issuing
the same statement as in question 1.
4. Connect as hr/hr and gather statistics for all objects under the hr schema using the
dbms_stats package. While gathering the new statistics, save the current statistics
in a table named stats.
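The creation of the local statistics table and the gathering of the new schema statistics are not reproduced above; a sketch using standard dbms_stats calls is:
SQL> EXECUTE dbms_stats.create_stat_table('HR','STATS');
SQL> REM after saving the current statistics (step b below), gather the new ones:
SQL> EXECUTE dbms_stats.gather_schema_stats('HR');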
b. Save the current schema statistics into your local statistics table.
SQL> EXECUTE dbms_stats.export_schema_stats('HR','STATS');
Practice 12 (continued)
5. Determine that the current number of rows in the new_employees table has been
updated in the data dictionary. This should be twice the number of rows recorded in
question 1.
SQL> SELECT table_name, num_rows
2 FROM user_tables
3 WHERE table_name = 'NEW_EMPLOYEES';
6. Remove all schema statistics from the dictionary and restore the original statistics
you saved in step b.
SQL> EXECUTE dbms_stats.delete_schema_stats('HR');
SQL> EXECUTE dbms_stats.import_schema_stats('HR','STATS');
7. Confirm that the number of rows in the new_employees table recorded in the data
dictionary has returned to the previous value collected in question 1.
SQL> SELECT table_name, num_rows
2 FROM user_tables
3 WHERE table_name = 'NEW_EMPLOYEES';
Practice 13
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be found in
Appendix B).
1. Connect using sys/oracle AS sysdba and query the tablespace_name
and extent_management columns of dba_tablespaces to determine which
tablespaces are locally managed and which are dictionary managed. Record which
tablespaces are dictionary managed.
SQL> CONNECT / AS sysdba
SQL> SELECT tablespace_name, extent_management
2 FROM dba_tablespaces;
3. Examine the v$system_event view and note the total waits for the enqueue wait
event.
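A minimal query for this step:
SQL> SELECT event, total_waits
  2  FROM v$system_event
  3  WHERE event = 'enqueue';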
Note: On a production system you would be more likely to pick up the contention
through the Statspack report.
4. Also examine the v$enqueue_stat view for eq_type 'ST' to determine the
total_wait# for the ST enqueue, which is the space management enqueue.
SQL> SELECT *
2 FROM v$enqueue_stat
3 WHERE eq_type = 'ST';
$ cd $HOME/STUDENT/LABS
$ ./lab13_04.sh
Practice 13 (continued)
6. Connect as system/oracle and again examine the v$enqueue_stat view for
eq_type 'ST' to determine the value of total_wait# for the ST enqueue, which
is the space management enqueue.
$ sqlplus system/oracle
SQL> SELECT *
2 FROM v$enqueue_stat
3 WHERE eq_type = 'ST';
Note: Record the difference in the number of waits for the ST enqueue for extent
management using a dictionary managed tablespace. This value is found by subtracting
the first wait value (from practice 13-04) from the second wait value (from practice
13-06).
7. Create a new locally managed tablespace test, name the data file test01.dbf,
and place it in the $HOME/ORADATA/u06 directory. Set the size to 120 MB and a
uniform extent size of 20 KB.
SQL> CREATE TABLESPACE test
  2  DATAFILE '$HOME/ORADATA/u06/test01.dbf' SIZE 120M
  3  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 20K;
Note: The same steps are covered again. This time you are looking for the number of
waits for the ST enqueue caused by locally managed tablespaces.
9. Examine and record the initial total_wait# for 'ST' in the v$enqueue_stat
view.
SQL> SELECT *
2 FROM v$enqueue_stat
3 WHERE eq_type = 'ST';
10. Exit out of the SQL*Plus session and change directory to $HOME/STUDENT/LABS.
Run the lab13_04.sh script from the operating system prompt. This script will
log five users onto the database simultaneously and then each user creates and drops
tables. The tables each have many extents. The script must be run from the
$HOME/STUDENT/LABS directory or it will fail.
$ cd $HOME/STUDENT/LABS
$ ./lab13_04.sh
Practice 13 (continued)
11. Again examine and record the final total_wait# for 'ST' in the
v$enqueue_stat view.
SQL> SELECT *
2 FROM v$enqueue_stat
3 WHERE eq_type = 'ST';
Note: Record the difference in the total_wait# for the ST enqueue for extent
management using a locally managed tablespace. This value is found by subtracting
the first wait value (from practice 13-09) from the second wait value (from practice
13-11). Compare the two results for the different tablespaces. The locally managed
tablespace causes far less contention for extent management because space is managed
within the tablespace itself.
12. Connect as the hr/hr user and run the
$HOME/STUDENT/LABS/lab13_12.sql script. This creates a table (new_emp)
similar to the employees table but with PCTFREE = 0. The table is then
populated with data from the employees table.
SQL> CONNECT hr/hr
SQL> @$HOME/STUDENT/LABS/lab13_12.sql;
13. Run ANALYZE on the new_emp table and query the dba_tables view to
determine the value of chain_cnt for the new_emp table. Record this value.
SQL>
SQL>
2
3
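A sketch of the commands (the same ANALYZE and chain_cnt query are repeated in step 15 below):
SQL> ANALYZE TABLE new_emp COMPUTE STATISTICS;
SQL> SELECT table_name, chain_cnt
  2  FROM user_tables
  3  WHERE table_name = 'NEW_EMP';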
Practice 13 (continued)
15. Run the $HOME/STUDENT/LABS/lab13_15.sql script, which updates the rows
of the new_emp table. Analyze the new_emp table again and query the
user_tables view to get the new value of chain_cnt. Record this value. Also
check the status of the new_emp_name_idx index.
SQL> @$HOME/STUDENT/LABS/lab13_15.sql
SQL> ANALYZE TABLE new_emp COMPUTE STATISTICS;
SQL> SELECT table_name, chain_cnt
  2  FROM user_tables
  3  WHERE table_name = 'NEW_EMP';
SQL> SELECT index_name, status
  2  FROM user_indexes
  3  WHERE index_name = 'NEW_EMP_NAME_IDX';
16. Resolve the migration caused by the previous update by using the ALTER TABLE
MOVE command. This makes the index unusable, so rebuild it with the ALTER INDEX
REBUILD command before reanalyzing the new_emp table. Confirm that the
migration has been resolved by querying the chain_cnt column in the
user_tables view, and confirm that the index is valid by querying the
user_indexes view.
SQL>
2
SQL>
SQL>
SQL>
2
3
SQL>
2
3
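A possible sequence (a sketch; the MOVE is done within the current tablespace, which is enough to reorganize the migrated rows):
SQL> ALTER TABLE new_emp MOVE;
SQL> ALTER INDEX new_emp_name_idx REBUILD;
SQL> ANALYZE TABLE new_emp COMPUTE STATISTICS;
SQL> SELECT table_name, chain_cnt
  2  FROM user_tables
  3  WHERE table_name = 'NEW_EMP';
SQL> SELECT index_name, status
  2  FROM user_indexes
  3  WHERE index_name = 'NEW_EMP_NAME_IDX';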
Practice 15
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be found in
Appendix B).
1. Connect as hr/hr, drop the new_employees table, and create an IOT called
new_employees in the hr schema. Give the table the same columns as the
hr.employees table. Make the employee_id column the primary key and
name the primary key index new_employees_employee_id_pk.
SQL> CONNECT hr/hr
SQL> DROP TABLE new_employees;
SQL> CREATE TABLE new_employees
  2  (employee_id    NUMBER(6),
  3   first_name     VARCHAR2(20),
  4   last_name      VARCHAR2(25),
  5   email          VARCHAR2(25),
  6   phone_number   VARCHAR2(20),
  7   hire_date      DATE,
  8   job_id         VARCHAR2(10),
  9   salary         NUMBER(8,2),
 10   commission_pct NUMBER(2,2),
 11   manager_id     NUMBER(6),
 12   department_id  NUMBER(4),
 13   CONSTRAINT new_employees_employee_id_pk
 14   PRIMARY KEY (employee_id))
 15  ORGANIZATION INDEX;
2. Confirm the creation of the table by querying the user_tables and the
user_indexes views.
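For example (the iot_type column should show IOT, and the primary key index should be listed in user_indexes):
SQL> SELECT table_name, iot_type
  2  FROM user_tables
  3  WHERE table_name = 'NEW_EMPLOYEES';
SQL> SELECT index_name, index_type
  2  FROM user_indexes
  3  WHERE index_name = 'NEW_EMPLOYEES_EMPLOYEE_ID_PK';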
3. Populate the new_employees table with the rows from the hr.employees
table.
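A minimal sketch:
SQL> INSERT INTO new_employees
  2  SELECT * FROM employees;
SQL> COMMIT;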
Practice 15 (continued)
4. Create a secondary B-tree index on the last_name column of the
new_employees table. Place the index in the indx tablespace. Name the index
last_name_new_employees_idx. Collect the statistics for the secondary
index.
SQL>
2
3
SQL>
>
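A sketch of the two commands (statistics are gathered here with dbms_stats.gather_index_stats; collecting them in the CREATE INDEX statement with COMPUTE STATISTICS would also work):
SQL> CREATE INDEX last_name_new_employees_idx
  2  ON new_employees (last_name)
  3  TABLESPACE indx;
SQL> EXECUTE dbms_stats.gather_index_stats('HR','LAST_NAME_NEW_EMPLOYEES_IDX');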
5. Confirm the creation of the index by using the user_indexes view in the data
dictionary. Query the index_name, index_type, blevel, and
leaf_blocks.
SQL> SELECT index_name, index_type, blevel, leaf_blocks
2 FROM user_indexes
3 WHERE index_name = 'LAST_NAME_NEW_EMPLOYEES_IDX';
Note: If the values for blevel and leaf_blocks are null then there were no
statistics collected. Confirm that the value of index_type is normal.
7. Confirm the creation of the index and that it is a reverse key index, by querying the
user_indexes view in the data dictionary. Query the index_name,
index_type, blevel, and leaf_blocks.
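A sketch of the confirmation query (the index name emp_hist_dept_id_idx follows the naming given for the reverse key index in these practices):
SQL> SELECT index_name, index_type, blevel, leaf_blocks
  2  FROM user_indexes
  3  WHERE index_name = 'EMP_HIST_DEPT_ID_IDX';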
Note: This time the values of blevel and leaf_blocks should be null, because
you did not collect statistics for this index while creating it. Also the value for index
type should now be normal/reverse.
Practice 15 (continued)
8. Create a bitmap index on the job_id column of the employees_hist table.
Place the index in the indx tablespace. Name the index
bitmap_emp_hist_idx.
SQL> CREATE BITMAP INDEX bitmap_emp_hist_idx
2 ON employees_hist (job_id)
3 TABLESPACE indx;
9. Confirm the creation of the index and that it is a bitmapped index by querying the
user_indexes view in the data dictionary. Query the index_name,
index_type, blevel, and leaf_blocks.
SQL> SELECT index_name, index_type
2 FROM user_indexes
3 WHERE index_name = 'BITMAP_EMP_HIST_IDX';
Practice 16
In this practice you will make use of the AUTOTRACE feature and create the
plan_table table. These are covered in detail in the chapter titled SQL Statement Tuning.
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be found in
Appendix B).
1. Connect as sh/sh and confirm that the plan_table table exists. If the table
exists, truncate it; otherwise, create the plan_table table by running
$ORACLE_HOME/rdbms/admin/utlxplan.sql.
SQL> CONNECT sh/sh
SQL> DESC plan_table
Note: The rewrite_enabled column must have a value of Y in order for the
practice on query rewrite to work.
Practice 16 (continued)
4. Set AUTOTRACE to TRACEONLY EXPLAIN to generate the explain plan for the query
in $HOME/STUDENT/LABS/lab16_04.sql.
SQL> SET AUTOTRACE Traceonly Explain
SQL> @$HOME/STUDENT/LABS/lab16_04.sql
5. Set the QUERY_REWRITE_ENABLED parameter to True for the session and run the
same query, $HOME/STUDENT/LABS/lab16_04.sql, as in the previous
practice. Note the change in the explain plan due to the query rewrite. Set
AUTOTRACE to Off and disable query rewrite after the script has completed running.
SQL>
SQL>
SQL>
SQL>
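A sketch of the four commands:
SQL> ALTER SESSION SET query_rewrite_enabled = TRUE;
SQL> @$HOME/STUDENT/LABS/lab16_04.sql
SQL> SET AUTOTRACE OFF
SQL> ALTER SESSION SET query_rewrite_enabled = FALSE;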
Practice 17
The objective of this practice is to use available diagnostic tools to monitor lock
contention. You will need to start three sessions in separate windows. Log in as hr/hr in
two separate sessions (sessions 1 and 3) and as sys/oracle as sysdba in another
session (session 2). Throughout this practice Oracle Enterprise Manager can be used if
desired. SQL Worksheet can be used instead of SQL*Plus and there are many uses for the
Oracle Enterprise Manager console. (Solutions for Oracle Enterprise Manager can be
found in Appendix B).
1. In session 1 (user hr/hr), update the salary by 10% for all employees with a salary
< 15000 in the temp_emps table. Do not COMMIT.
SQL> CONNECT hr/hr
SQL> UPDATE temp_emps
  2  SET salary = salary * 1.1
  3  WHERE salary < 15000;
3. In session 3 (the session not yet used), connect as hr/hr and drop the temp_emps
table. Does it work?
SQL> CONNECT hr/hr
SQL> DROP TABLE hr.temp_emps;
Note: The DDL statement requires an exclusive table lock. It cannot obtain it
because session 1 already holds a row exclusive table lock on the temp_emps table.
4. In session 3 (hr/hr), update the salary by 5% for all employees with a salary >
15000 in the temp_emps table. Do not COMMIT.
SQL> CONNECT hr/hr
SQL> UPDATE temp_emps
  2  SET salary = salary * 1.05
  3  WHERE salary > 15000;
Practice 17 (continued)
5. In session 2, check to see what kind of locks are being held on the temp_emps
table, using the v$lock view.
SQL> SELECT sid, type, id1, id2, lmode, request
  2  FROM v$lock
  3  WHERE id1 =
  4  (SELECT object_id FROM dba_objects
  5   WHERE object_name = 'TEMP_EMPS'
  6   AND object_type = 'TABLE');
6. In session 3, roll back the changes you made and set the manager_id column to 10
for all employees who have a salary < 15000.
SQL> ROLLBACK;
SQL> UPDATE hr.temp_emps
  2  SET manager_id = 10
  3  WHERE salary < 15000;
Note: This session will hang, so do not wait for the statement to complete.
7. In session 2, check to see what kind of locks are being held on the temp_emps
table, using the v$lock view.
SQL> @$ORACLE_HOME/rdbms/admin/catblock.sql
SQL> SELECT waiting_session, holding_session
  2  FROM dba_waiters;
SQL> SELECT sid, serial#, username
  2  FROM v$session
  3  WHERE sid = '&HOLDING_SESSION';
SQL> ALTER SYSTEM KILL SESSION '&SID,&SERIAL_NUM';
Practice Solutions
Using Enterprise Manager
Practice 2
The goal of this practice is to familiarize you with the different methods of collecting
statistical information. Throughout this practice Oracle Enterprise Manager can be used if
desired. SQL Worksheet can be used instead of SQL*Plus and there are many uses for the
Oracle Enterprise Manager console.
1. Log on as directed by the instructor. If the database is not already started, connect to
SQL*Plus using sys/oracle as sysdba then start up the instance using the
STARTUP command. Ensure that the password for user system is set to oracle.
Check that TIMED_STATISTICS has been set to True; if it has not, then set it using
the ALTER SYSTEM statement.
Use Enterprise Manager Console - Instance Configuration
Check All Initialization Parameters Looking for TIMED_STATISTICS
If a value of True is returned, then continue to question 2. If a value of False is
returned, then set the TIMED_STATISTICS parameter to True using the command:
Use Enterprise Manager SQL Worksheet
SQL> ALTER SYSTEM SET timed_statistics = TRUE
  2  SCOPE = BOTH;
2. Connect to SQL*Plus as the system user and issue a command that will create a
trace file for this session. Run a query to count the number of rows in the
dba_tables dictionary view. To make your new trace file easier to locate, delete
all the trace files in the USER_DUMP_DEST directory, if possible, before running the
trace. Remember to disable the trace command after running the query.
CONNECT system/oracle
ALTER SESSION SET SQL_TRACE = TRUE;
SELECT COUNT(*) FROM dba_tables;
ALTER SESSION SET SQL_TRACE = FALSE;
3. At the operating system level view the resulting trace file located in the directory set
by USER_DUMP_DEST. Do not try to interpret the content of the trace file, as this is
the topic of a later lesson.
$ cd $HOME/ADMIN/UDUMP
$ ls -l
Practice 2 (continued)
4. Open two sessions, the first as hr/hr and the second as sys/oracle as
sysdba. From the second session, generate a user trace file for the first session using
the dbms_system.set_sql_trace_in_session procedure. Get the SID
and serial# from v$session.
Open multiple SQL Worksheets. Add the connect string for your database.
From Session 1
SQL> CONNECT hr/hr
Change to Session 2
SQL>
SQL>
2
3
SQL>
2
3
4
5
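A sketch of the enabling commands (they mirror the disabling block shown below, with the third argument set to TRUE):
SQL> SELECT sid, serial#, username
  2  FROM v$session
  3  WHERE username = 'HR';
SQL> BEGIN
  2  dbms_system.set_sql_trace_in_session
  3  (&SID,&SERIALNUM,TRUE);
  4  END;
  5  /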
Change to Session 1
SQL> SELECT * FROM employees;
Change to Session 2
SQL> BEGIN
  2  dbms_system.set_sql_trace_in_session
  3  (&SID,&SERIALNUM,FALSE);
  4  END;
  5  /
5. Confirm that the trace file has been created in the directory set by USER_DUMP_DEST.
$ cd $HOME/ADMIN/UDUMP
$ ls -l
-rw-r----- 1 dba01 dba
-rw-r----- 1 dba01 dba
Practice 2 (continued)
7. Confirm and record the amount of free space available within the tools tablespace
by querying the dba_free_space view. Also check that the tablespace is
dictionary managed.
Use Enterprise Manager Console - Storage - Tablespaces
8. Connect using sys/oracle as sysdba, then install Statspack using the
spcreate.sql script located in your E:\LABS\LABS directory. Use the following
settings when asked by the installation program:
- User's Default Tablespace = TOOLS
- User's Temporary Tablespace = TEMP
SQL> CONNECT sys/oracle AS sysdba
SQL> @E:\LABS\LABS\spcreate.sql
9. Query dba_free_space to determine the amount of free space left in the tools
tablespace. The difference between this value and the one recorded in step 7 will be the
space required for the initial installation of Statspack.
Use Enterprise Manager Console - Storage - Tablespaces
Note: The amount of storage space required will increase in proportion to the amount
of information stored within the Statspack tables, that is, the number of snapshots.
Subtract the value received now from the value received in step 7 to get the amount of
space required to install Statspack.
10. Manually collect current statistics using Statspack by running the snap.sql script
located in E:\LABS\LABS. This will return the snap_id for the snapshot just taken,
which should be recorded.
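For example (as in the other snapshot steps in this practice):
SQL> CONNECT perfstat/perfstat
SQL> @E:\LABS\LABS\snap.sql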
Practice 2 (continued)
11. To have Statspack automatically collect statistics every three minutes, execute the
spauto.sql script located in your E:\LABS\LABS directory. Query the database
to confirm that the job has been registered, using the user_jobs view.
SQL> @E:\LABS\LABS\spauto.sql
SQL> SELECT job, next_date, next_sec, last_sec
  2  FROM user_jobs;
Note: The spauto.sql script in the E:\LABS\LABS directory has been altered
from the spauto.sql script shipped with the Oracle database. The alteration has
changed the time between snapshots from 1 hour to 3 minutes.
12. After waiting more than three minutes, query the stats$snapshot view to list the
snapshots that have been collected. There must be at least two snapshots before
moving to the next step.
SQL> SELECT snap_id,
  2  TO_CHAR(startup_time,'dd Mon "at" HH24:mi:ss')
  3  instart_fm,
  4  TO_CHAR(snap_time,'dd Mon YYYY HH24:mi') snap_date,
  5  snap_level "level"
  6  FROM stats$snapshot
  7  ORDER BY snap_id;
Note: If the job scheduler is not working, check the value of the
JOB_QUEUE_PROCESSES parameter. The value should be greater than 0.
13. When there are at least two snapshots, start to generate a report. This is performed
using the spreport.sql script found in the E:\LABS\LABS directory. The script
lists the snapshot options available and then requests the beginning snap id and the
end snap id. The user is then requested to give a filename for the report. It is often
best left to the default.
SQL> @E:\LABS\LABS\spreport.sql
14. Locate the report file in the user's current directory, then, using any text editor, open
and examine the report. The first page shows a collection of the most queried statistics.
$ vi sp_X_Y.lst
where X is the starting snapshot and Y is the ending snapshot (this is true if the default
report filename was used).
Practice 2 (continued)
16. Query the database to determine what system wait events have been registered since
startup using v$system_event.
SQL> SELECT event, total_waits, time_waited
2 FROM v$system_event;
18. Stop the automatic collection of statistics by removing the job. This is performed by
connecting as perfstat/perfstat and querying the user_jobs view to get the
job number. Then execute the dbms_job.remove procedure.
SQL> CONNECT perfstat/perfstat
SQL> SELECT job, log_user
  2  FROM user_jobs;
SQL> EXECUTE dbms_job.remove(&job_to_remove);
19. Connect to your database using Oracle Enterprise Manager. The lecturer will supply
the information required to connect to the Oracle Management Server. After you have
connected to the database, use Oracle Enterprise Manager to explore the database.
Examine items such as the number of tablespaces, users, and tables.
Practice 2 (continued)
Oracle Classroom Only (continued)
d. Close the MSDOS window.
Start the Oracle Enterprise Manager Console and set the Administrator to
sysman and the password to oem_temp. When prompted, change the
password to oracle. Select Discover Nodes from the Navigator and enter the
host name of the server of your working database.
i. From the Start menu > Programs > Oracle OracleHome >
Enterprise Manager Console
ii. Make sure the Login to the Oracle Management Server is selected.
iii. Administrator: sysman
iv. Password: oem_temp
v. Management server is your machine.
vi. When prompted, change the sysman password to oracle.
vii. Select Navigator > Discover Nodes from the console menu, or select
Discover Nodes from the right mouse shortcut menu to open the Discover
Nodes dialog box.
viii. From the Discovery Wizard: Introduction page, click Next, enter the name
of your UNIX database server, and click Next.
ix. Click Next, give your regular administrator access to your database.
x. Click Finish, then OK. If your discovery was not successful contact
your instructor.
20. From Oracle Enterprise Manager load Oracle Expert and create a new tuning session.
Limit the tuning scope to Check for Instance Optimizations. This is done to reduce
the time taken to collect information. Collect a new set of data.
Note: Do not implement the changes that Oracle Expert recommends, because this will
be done during the course.
Practice 3
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console.
1. Connect as system/oracle and diagnose database file configuration by querying
the v$datafile, v$logfile and v$controlfile dynamic performance
views.
Use Enterprise Manager Console - Storage - Controlfile
Use Enterprise Manager Console - Storage - Datafiles
Use Enterprise Manager Console - Storage - Redo Log Groups
2. Diagnose database file usage by querying the v$filestat dynamic performance
view, combine with v$datafile to get the data file names.
Use Enterprise Manager Performance Manager - I/O - File Statistics
3. Determine whether there are waits for redo log files by querying the
v$system_event dynamic performance view, where the waiting event is log file
sync or log file parallel write.
Use Enterprise Manager Performance Manager - Wait Events
Waits for log file sync indicate slow disks storing the online redo logs, or unbatched
commits. The log file parallel write event is less useful because it shows only how
often LGWR waits, not how often server processes wait. If LGWR waits without
impacting user processes, there is no performance problem; if LGWR waits do impact
users, the log file sync event will usually also be evident.
Practice 3 (continued)
4. Connect as perfstat/perfstat and diagnose file usage from Statspack.
a. Generate a Statspack report using E:\LABS\LABS\spreport.sql
b. Locate and open the report file.
c. Examine the report and search for the File IO Stats string.
Note: On a production database, monitor disk and controller usage and balance the
workload across all devices. If your examination shows distinct over-utilization of a
particular data file, first try to resolve the cause of the I/O; for example, investigate
the number of full table scans, the clustering of files on a specific device, and
under-utilization of indexes. If the problem remains after this, consider placing the
data file on a device with low utilization.
5. Connect as system/oracle and enable checkpoints to be logged in the alert file by
setting the value of the LOG_CHECKPOINTS_TO_ALERT parameter to True using
the ALTER SYSTEM SET command.
Use Enterprise Manager Console - Instance - Configuration - All Initialization
Parameters
6. Connect as sh/sh and execute the E:\LABS\LABS\lab03_06.sql script to
provide a workload against the database.
7. At the operating system level use the editor to open the alert log file (located in the
directory specified by BACKGROUND_DUMP_DEST). Then determine the checkpoint
frequency for your instance by searching for messages containing the phrase
Completed Checkpoint. The time difference between two consecutive messages is
the checkpoint interval.
Open the alert log file using an editor and search for the line: Completed checkpoint.
The line before this will be the time at which the checkpoint occurred. Search for the
following checkpoint time and then subtract to get the time between checkpoints.
Practice 4
The objective of this practice is to use diagnostic tools to monitor and tune the shared pool.
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console.
1. Connect using sys/oracle as sysdba and check the size of the shared pool.
Use Enterprise Manager Console - Instance - Configuration - All Initialization
Parameters
2. Connect as perfstat/perfstat, execute the E:\LABS\LABS\snap.sql
script to collect initial snapshot of statistics and note the snapshot number.
SQL> CONNECT perfstat/perfstat
SQL> @E:\LABS\LABS\snap.sql
3. To simulate user activity against the database open two operating system sessions. In
session 1 connect as hr/hr and run the E:\LABS\LABS\lab04_03_1.sql
script. In the second session connect as hr/hr and run the
E:\LABS\LABS\lab04_03_2.sql script.
Open multiple SQL Worksheets. Add the connect string for your database.
In session 1
SQL> CONNECT hr/hr
SQL> @E:\LABS\LABS\lab04_03_1.sql
In session 2
SQL> CONNECT hr/hr
SQL> @E:\LABS\LABS\lab04_03_2.sql
4. Connect as system/oracle and measure the pin-to-reload ratio for the library
cache by querying v$librarycache. Determine whether it is a good ratio or not.
Practice 4 (continued)
5. Connect as system/oracle and measure the get-hit ratio for the data dictionary
cache by querying v$rowcache. Determine whether it is a good ratio or not using
the dynamic view.
Use Enterprise Manager Performance Manager - Memory
If GETMISSES is lower than 15% of GETS, the ratio is good.
6. Connect as perfstat/perfstat and run the E:\LABS\LABS\snap.sql
script to collect a statistic snapshot and obtain the snapshot number. Record this
number.
SQL> CONNECT perfstat/perfstat
SQL> @E:\LABS\LABS\snap.sql
You can also determine an appropriate size for the Shared Pool by using:
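One option (a sketch; it assumes the v$shared_pool_advice advisory view is populated on your instance):
SQL> SELECT * FROM v$shared_pool_advice;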
Practice 4 (continued)
8. Analyze the generated report in the current directory. What would you consider doing
if the library hit ratio (found under the heading Instance Efficiency Percentages) is
less than 98%?
Increase the SHARED_POOL_SIZE parameter.
9. Connect as system/oracle and determine which packages, procedures and triggers
are pinned in the shared pool by querying v$db_object_cache.
SQL> CONNECT system/oracle
SQL> SELECT name, type, kept
  2  FROM v$db_object_cache
  3  WHERE type IN
  4  ('PACKAGE', 'PROCEDURE', 'TRIGGER', 'PACKAGE BODY');
10. Connect using sys/oracle as sysdba and pin one of the Oracle-supplied
packages that needs to be kept in memory, such as sys.standard, using the
dbms_shared_pool.keep procedure, which is created by running the
$ORACLE_HOME/rdbms/admin/dbmspool.sql script.
SQL>
SQL>
SQL>
SQL>
2
3
4
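A sketch of the commands (dbmspool.sql only needs to be run once to create the package):
SQL> CONNECT sys/oracle AS sysdba
SQL> @$ORACLE_HOME/rdbms/admin/dbmspool.sql
SQL> EXECUTE dbms_shared_pool.keep('SYS.STANDARD');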
11. Determine the amount of session memory used by your session by querying the
v$mystat view. Limit the output by including the clause:
Note: Since you are not using the Oracle Shared Server configuration this memory
resides outside the SGA.
12. Determine the amount of session memory used for all sessions, using v$sesstat
and v$statname views:
SQL>
2
3
4
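A sketch of the query (it assumes the session uga memory statistic is the one of interest; v$statname is joined in to translate statistic numbers into names):
SQL> SELECT SUM(s.value) "Total session memory"
  2  FROM v$sesstat s, v$statname n
  3  WHERE s.statistic# = n.statistic#
  4  AND n.name = 'session uga memory';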
Practice 5
The objective of this practice is to use available diagnostic tools to monitor and tune the
database buffer cache. Throughout this practice Oracle Enterprise Manager can be used if
desired. SQL Worksheet can be used instead of SQL*Plus and there are many uses for the
Oracle Enterprise Manager console.
1. Connect as perfstat/perfstat and run a statistic snapshot. Make a note of the
snapshot number. The snapshot can be taken by running the
E:\LABS\LABS\snap.sql script file.
SQL> CONNECT perfstat/perfstat
SQL> @E:\LABS\LABS\snap.sql
2. To simulate user activity against the database, connect as the hr/hr user and run the
lab05_02.sql script.
SQL> CONNECT hr/hr
SQL> @E:\LABS\LABS\lab05_02.sql
3. Connect as system/oracle and measure the hit ratio for the database buffer cache
using the v$sysstat view. Determine whether it is a good ratio or not.
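A sketch of the classic hit ratio calculation from v$sysstat:
SQL> SELECT 1 - (phy.value / (cur.value + con.value)) "Cache hit ratio"
  2  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
  3  WHERE cur.name = 'db block gets'
  4  AND con.name = 'consistent gets'
  5  AND phy.name = 'physical reads';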
5. Use the report from Statspack between the last two snapshots to check the buffer cache
hit ratio, using the E:\LABS\LABS\spreport.sql script. Then analyze the
buffer hit % in the Instance Efficiency Percentages section.
SQL> @E:\LABS\LABS\spreport.sql
Practice 5 (continued)
Note: On a production database, if the ratio is bad, add buffers, rerun steps 2 to 5,
and examine the new ratio to verify that it has improved. If the ratio is good, remove
buffers, rerun steps 2 to 5, and verify that the ratio is still good.
6. Connect as system/oracle and determine the size of the temp_emps table in the
hr schema that you want to place in the keep buffer pool. Do this by using the
dbms_stats.gather_table_stats procedure and then query the blocks
column of the dba_tables view for the temp_emps table.
SQL> CONNECT system/oracle
SQL> EXECUTE dbms_stats.gather_table_stats('HR','TEMP_EMPS');
SQL> SELECT table_name, blocks
  2  FROM dba_tables
  3  WHERE table_name IN ('TEMP_EMPS');
7. Keep temp_emps in the keep pool. Use the ALTER SYSTEM command to set
DB_KEEP_CACHE_SIZE to 4 MB for the keep pool. Limit the scope of this
command to the spfile.
Use Enterprise Manager Console - Instance - Configuration - All Initialization
Parameters
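An equivalent SQL*Plus sketch (the STORAGE clause assigns the table to the keep pool; the 4 MB value comes from the question):
SQL> ALTER SYSTEM SET db_keep_cache_size = 4M SCOPE=SPFILE;
SQL> ALTER TABLE hr.temp_emps STORAGE (BUFFER_POOL KEEP);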
8. For the keep pool to be allocated the database needs to be restarted. You will need to
be connected as a sysdba user to perform this task.
Shut down and start up the instance using:
Enterprise Manager Console - Instance - Configuration
10. Connect as hr/hr and run the E:\LABS\LABS\lab05_10.sql script. This will
execute a query against the temp_emps table in the hr schema.
SQL> CONNECT hr/hr
SQL> @E:\LABS\LABS\lab05_10.sql
Practice 5 (continued)
11. Connect using sys/oracle as sysdba and check for the hit ratio in different
buffer pools, using the v$buffer_pool_statistics view.
Use Enterprise Manager Performance Manager - Database Instance - Instance
Efficiency Statistics
Practice 6
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console.
1. Connect as sys/oracle AS sysdba and, without restarting the instance, resize
the DB_CACHE_SIZE to 12 MB. Limit the effect of this command to memory, so as
not to modify the spfile.
Use Enterprise Manager Console - Instance - Configuration - All Initialization
Parameters
Note: This will encounter an error because the total SGA size would be bigger than
SGA_MAX_SIZE. To overcome this you must either change the value of
SGA_MAX_SIZE and restart the instance (which is what dynamic allocation is meant
to avoid) or resize another component, thus making memory available for the increase
in the buffer cache.
2. Reduce the memory used by the shared pool. Limit the effect of this command to
memory, so as not to modify the spfile.
3. Without restarting the instance, resize the DB_CACHE_SIZE to 12 MB. Limit the
effect of this command to memory, so as not to modify the spfile.
Note: This time the memory is available so the command will be executed.
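Taken together, steps 1 through 3 might look like the following in SQL*Plus (a sketch; the shared pool value shown is only an example):
SQL> REM step 1: this fails because the total SGA would exceed SGA_MAX_SIZE
SQL> ALTER SYSTEM SET db_cache_size = 12M SCOPE=MEMORY;
SQL> REM step 2: free some memory first (the value here is only an example)
SQL> ALTER SYSTEM SET shared_pool_size = 40M SCOPE=MEMORY;
SQL> REM step 3: now the resize succeeds
SQL> ALTER SYSTEM SET db_cache_size = 12M SCOPE=MEMORY;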
4. To return the SGA to the original configuration, restart the instance. You will need to
be connected as a sysdba user to perform this task.
Practice 7
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console.
1. Connect as perfstat/perfstat and collect a snapshot of the current statistics by
running the script E:\LABS\LABS\snap.sql. Record the snapshot ID for later
use.
SQL> CONNECT perfstat/perfstat
SQL> @E:\LABS\LABS\snap.sql
5. Connect as sys/oracle AS sysdba and increase the size of the redo log buffer
in the spfile by changing the value of the LOG_BUFFER parameter. Because this
parameter is static, you must specify SCOPE=SPFILE.
6. To have the new value for the LOG_BUFFER take effect, you must restart the instance.
Then confirm that the change has occurred.
Use Enterprise Manager Console - Instance - Configuration
Practice 9
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console.
1. Set the database to use the manual sort option by changing the value of the
WORKAREA_SIZE_POLICY parameter to Manual. Set the SORT_AREA_SIZE
parameter to 512 bytes.
Use Enterprise Manager Console - Instance - Configuration - All Initialization
Parameters
2. For the new parameter values to take effect, you must restart the instance. Then query
the v$sysstat view and record the values for sorts (memory) and sorts (disk).
Use Enterprise Manager Console - Instance - Configuration
SQL> SELECT name, value
2 FROM v$sysstat
3 WHERE name LIKE 'sorts%';
Note: The statistics in v$sysstat are collected from startup. If you need accurate
statistics for a single statement, you must record the statistics before the statement runs
and again afterwards. Subtracting the two values gives the statistics for the statement.
3. To perform a sort that writes to disk, connect as sh/sh and execute the
E:\LABS\LABS\lab09_03.sql script.
Note: If this script fails due to a lack of free space in the temp tablespace then
connect as system/oracle and resize the temporary tablespace.
4. Connect as system/oracle, query the v$sysstat view again, and record the
value for sorts (memory) and sorts (disk). Subtract the values from the recorded value
in question 2. If the ratio of Disk to Memory sorts is greater than 5% then increase the
sort area available.
Practice 9 (continued)
SQL> CONNECT system/oracle
SQL> SELECT name, value
  2  FROM v$sysstat
  3  WHERE name LIKE 'sorts%';
Note: If this statement returns no rows, it means that all sort operations since startup
have completed in memory.
6. To decrease the number of sorts going to a temporary tablespace, increase the value of
the SORT_AREA_SIZE parameter to 512000 using the ALTER SESSION
command.
SQL> ALTER SESSION SET SORT_AREA_SIZE = 512000;
Practice 10
The objective of this practice is to use available diagnostic tools to monitor and tune the
rollback segments. This would require setting the database to Manual Undo Management
mode. Throughout this practice Oracle Enterprise Manager can be used if desired. SQL
Worksheet can be used instead of SQL*Plus and there are many uses for the Oracle
Enterprise Manager console.
1. Set the database in Manual Undo Mode by connecting as sys/oracle AS
sysdba and change the following parameters to the values shown:
undo_management = Manual
undo_tablespace = Null
Restart the database and confirm that the UNDO_MANAGEMENT parameter is set to
Manual and that UNDO_TABLESPACE is Null.
Use Enterprise Manager Console - Instance - Configuration - All Initialization
Parameters
SQL> SHOW PARAMETER undo
Note: This is not to be an UNDO tablespace and you must specify that it is to be
dictionary managed.
3. For the purposes of this practice, create a new rollback segment called rbsx in the
rbs_test tablespace. For the storage parameters, use 64 KB for the INITIAL and
NEXT extent sizes with MINEXTENTS value set to 20. Set the OPTIMAL value so that
the segment shrinks back to 1280 KB automatically.
SQL>
2
3
4
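A sketch of the statement (the storage values come from the question):
SQL> CREATE ROLLBACK SEGMENT rbsx
  2  TABLESPACE rbs_test
  3  STORAGE (INITIAL 64K NEXT 64K
  4  MINEXTENTS 20 OPTIMAL 1280K);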
Practice 10 (continued)
4. Bring the rbsx rollback segment online and ensure that any others (except the
system rollback segment) are offline. Query the dba_rollback_segs view to
get the segment_name and status of the rollback segments, and take any other
online segments offline using the ALTER ROLLBACK SEGMENT command.
SQL> ALTER ROLLBACK SEGMENT rbsx ONLINE;
SQL> SELECT segment_id, segment_name, status
2 FROM dba_rollback_segs;
5. Before executing a new transaction, find the number of bytes written so far in the
rbsx rollback segment, using the writes column of v$rollstat.
SQL> SELECT usn, writes
2 FROM v$rollstat
3 WHERE usn>0;
In session 1:
SQL> CONNECT hr/hr
SQL> @E:\LABS\LABS\ins_temps.sql
In Session 2:
SQL> CONNECT system/oracle
SQL> SELECT usn, writes
  2  FROM v$rollstat
  3  WHERE usn > 0;
Note: The number of writes in the rollback segment between questions 5 and 6 is the
difference in the value of the writes column at the respective times.
Practice 10 (continued)
8. Return to the hr session (the first session) and commit the insert. Run the
E:\LABS\LABS\del_temps.sql script. Do not COMMIT. The script deletes the
hundred rows you have just inserted. As user system (in the second session), check
the amount of rollback space used, using the writes column of v$rollstat. Note
the difference between the return value and that found in question 6.
Open multiple SQL Worksheets. Add the connect string for your database.
In session 1:
SQL> COMMIT;
SQL> @E:\LABS\LABS\del_temps.sql
In session 2:
SQL> SELECT usn, writes
2 FROM v$rollstat
3 WHERE usn>0;
9. In session 2, connect as system/oracle and find out if you have had any rollback
segment contention since startup, using the waits and gets columns in the
v$rollstat view.
SQL> SELECT SUM(waits)/SUM(gets) "Ratio",
  2  SUM(waits) "Waits", SUM(gets) "Gets"
  3  FROM v$rollstat;
10. Does the v$system_event view show any waits related to rollback segments?
Using session 2, query the v$system_event view for the undo segment tx slot
entry.
Note: Since only one session is making changes it is unlikely that there will be any
contention for the undo segment transaction slot.
11. In session 1, commit the transaction. Then connect as hr/hr and run the
E:\LABS\LABS\ins_temps.sql script again, allocating the transaction to the
specific rollback segment rbsx by using the SET TRANSACTION USE ROLLBACK
SEGMENT command. In session 2, check that the transaction is using the defined
rollback segment by joining the v$rollstat, v$session, and v$transaction views.
Practice 10 (continued)
In session 1:
SQL> COMMIT;
SQL> SET TRANSACTION USE ROLLBACK SEGMENT rbsx;
SQL> @E:\LABS\LABS\ins_temps.sql
In session 2:
SQL>
2>
3>
4>
5>
12. Close session 2, then in session 1 connect as sys/oracle AS sysdba and set the
database in Auto Undo Mode by changing the following parameters to the values
shown:
undo_management = Auto
undo_tablespace = undotbs
Restart the database and confirm that the UNDO_MANAGEMENT parameter is set to
Auto and that UNDO_TABLESPACE is undotbs.
Practice 11
The objective of this practice is to familiarize you with SQL statement execution plans and
to interpret the formatted output of a trace file generated using SQL Trace and the formatted
output generated by TKPROF. Throughout this practice Oracle Enterprise Manager can be
used if desired. SQL Worksheet can be used instead of SQL*Plus and there are many uses
for the Oracle Enterprise Manager console.
1. Connect as hr/hr and create the plan_table table under the hr schema, if it is
not already created, by running the
@%ORACLE_HOME%\rdbms\admin\utlxplan.sql script.
SQL> CONNECT hr/hr
SQL> @%ORACLE_HOME%\rdbms\admin\utlxplan.sql
Note: If plan_table already exists and holds rows then truncate the table.
2. Set the optimizer mode to rule based using the ALTER SESSION command and
generate the explain plan for the statement E:\LABS\LABS\lab11_02.sql.
View the generated plan by querying object name, operation, option, and optimizer
from the plan_table table.
SQL>
SQL>
2
SQL>
2
3. Truncate the plan_table table. Change the optimizer mode to cost based by setting
the value to All_Rows and rerun the explain plan for
E:\LABS\LABS\lab11_02.sql. Notice that the optimizer mode and the explain
plan have changed.
SQL>
SQL>
SQL>
2
SQL>
2
Note: Although exactly the same scripts are being run, due to the different optimizer
settings, different explain paths are found. With rule based, one of the rules is to use
any index that is on the columns in the where clause. By using cost based optimizer
mode, the server has been able to determine that it will be faster to just perform a full
table scan, due to the number of rows being returned by the script.
Practice 11 (continued)
4. Truncate the plan_table table and set the optimizer mode to Rule by using the
ALTER SESSION command. This time generate the explain plan for the
E:\LABS\LABS\lab11_04.sql script. Examine the script which is a copy of
E:\LABS\LABS\lab11_02.sql except it changes the line SELECT * to
include a hint /*+ all_rows*/ for the optimizer. View the generated execution
plan by querying object name, operation, option, and optimizer from plan_table
table.
SQL>
SQL>
SQL>
2
SQL>
2
5. Exit out of SQL*Plus, change the directory to $HOME/ADMIN/UDUMP and delete all
the trace files already generated.
SQL> EXIT
$ cd $HOME/ADMIN/UDUMP
$ rm *.trc
Note: This step is performed only to make it easier to find the trace file that is
generated. It is not a requirement of SQL Trace.
6. Connect as sh/sh and enable SQL Trace, using the ALTER SESSION command, to
collect statistics for the script E:\LABS\LABS\lab11_06.sql. Run the script.
After the script has completed, disable SQL Trace, then format your trace file using
TKPROF. Use the options SYS=NO and EXPLAIN=sh/sh. Name the output file
myfile.txt.
7. View the output file myfile.txt and note the CPU, current and query figures for
the fetch phase. Do not spend time analyzing the contents of this file as the only
objective here is to become familiar and comfortable with running TKPROF and
SQL*Trace.
$ more myfile.txt
Practice 12
The objective of this practice is to familiarize you with the dbms_stats package.
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console.
1. Connect as hr/hr and create a table new_employees as a copy of the
employees table. Gather statistics on the new_employees table and determine
the current number of rows in the new_employees table. Record the number of
rows for comparison later.
SQL> CONNECT hr/hr
SQL> CREATE TABLE new_employees
  2  AS SELECT *
  3  FROM employees;
SQL> EXECUTE dbms_stats.gather_table_stats('HR','NEW_EMPLOYEES');
SQL> SELECT table_name, num_rows
  2  FROM user_tables
  3  WHERE table_name = 'NEW_EMPLOYEES';
2. Increase the size of the new_employees table by using the lab12_02.sql script.
SQL> @E:\LABS\LABS\lab12_02.sql
3. Confirm that the statistics have not been changed in the data dictionary by re-issuing
the same statement as in question 1.
4. Connect as hr/hr and gather statistics for all objects under the hr schema using the
dbms_stats package. While gathering the new statistics, save the current statistics in
a table named stats.
b. Save the current schema statistics into your local statistics table.
SQL> execute dbms_stats.export_schema_stats('HR','STATS');
Practice 12 (continued)
5. Determine that the current number of rows in the new_employees table has been updated in
the data dictionary. This should be twice the number of rows recorded in question 1.
SQL> SELECT table_name, num_rows
2 FROM user_tables
3 WHERE table_name = 'NEW_EMPLOYEES';
6. Remove all schema statistics from the dictionary and restore the original statistics you
saved in step b.
SQL> execute dbms_stats.delete_schema_stats('HR');
SQL> execute dbms_stats.import_schema_stats('HR','STATS');
7. Confirm that the number of rows in the new_employees table recorded in the data
dictionary has returned to the previous value collected in question 1.
SQL> SELECT table_name, num_rows
2 FROM user_tables
3 WHERE table_name = 'NEW_EMPLOYEES';
Practice 13
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console.
1. Connect using sys/oracle AS sysdba and query the tablespace_name and
extent_management columns of dba_tablespaces to determine which
tablespaces are locally managed and which are dictionary managed. Record which
tablespaces are dictionary managed.
Use Enterprise Manager Console - Storage - Tablespaces
2. Alter the hr user to have the tools tablespace as the default.
Use Enterprise Manager Console - Security - Users - HR
3. Examine the v$system_event view and note the total waits for the statistic
enqueue.
SQL> SELECT event, total_waits
2 FROM v$system_event
3 WHERE event = 'enqueue';
Note: On a production system you would be more likely to pick up the contention
through the Statspack report.
4. Also examine the v$enqueue_stat view for eq_type 'ST' to determine the
total_wait# for the ST enqueue, which is the space management enqueue.
SQL> SELECT *
2 FROM v$enqueue_stat
3 WHERE eq_type = 'ST';
5. Exit out of the SQL*Plus session and change directory to E:\LABS\LABS. Run the
lab13_04.bat script from the operating system prompt. This script will log five
users onto the database simultaneously and then each user creates and drops tables.
The tables each have many extents. The script must be run from the E:\LABS\LABS
directory or it will fail.
$ cd E:\LABS\LABS
$ lab13_04.bat
Practice 13 (continued)
6. Connect as system/oracle and again examine the v$enqueue_stat view for
eq_type 'ST' to determine the value of total_wait# for the ST enqueue, which is
the space management enqueue.
$ sqlplus system/oracle
SQL> SELECT *
2 FROM v$enqueue_stat
3 WHERE eq_type = 'ST';
Note: Record the difference in the number of waits for the ST enqueue for extent
management using a dictionary managed tablespace. This value is found by subtracting
the first wait value (from practice 13-04) from the second wait value (from practice 13-06).
7. Create a new locally managed tablespace test, name the data file test01.dbf and
place it in the directory $HOME/ORADATA/u06. Set the size to 120 MB and a
uniform extent size of 20 KB.
Use Enterprise Manager Console - Storage - Tablespaces
8. Alter the default tablespace of the hr user to test.
Use Enterprise Manager Console - Security - Users - HR
Note: The same steps are covered again. This time you are looking for the number of
waits for the ST enqueue caused by locally managed tablespaces.
9. Examine and record the initial total_wait# for 'ST' in the v$enqueue_stat
view.
SQL> SELECT *
2 FROM v$enqueue_stat
3 WHERE eq_type = 'ST';
10. Exit out of the SQL*Plus session and change directory to E:\LABS\LABS. Run the
lab13_04.bat script from the operating system prompt. This script will log five
users onto the database simultaneously and then each user creates and drops tables.
The tables each have many extents. The script must be run from the E:\LABS\LABS
directory or it will fail.
$ cd E:\LABS\LABS
$ ./lab13_04.bat
Practice 13 (continued)
11. Again examine and record the final total_wait# for 'ST' in the
v$enqueue_stat view.
SQL> SELECT *
2 FROM v$enqueue_stat
3 WHERE eq_type = 'ST';
Note: Record the difference in the total_wait# for the ST enqueue for extent
management using a locally managed tablespace. This value is found by subtracting
the first wait value (from practice 13-09) from the second wait value (from practice
13-11). Compare the two results for the different tablespaces. The locally managed
tablespace causes far less contention for extent management because space is
managed within the tablespace itself.
12. Connect as the hr/hr user and run the E:\LABS\LABS\lab13_12.sql script.
This creates a table (new_emp) similar to the employees table but with
PCTFREE = 0. The table is then populated with data from the employees table.
SQL> CONNECT hr/hr
SQL> @E:\LABS\LABS\lab13_12.sql
13. Run ANALYZE on the new_emp table and query the dba_tables view to determine
the value of chain_cnt for the new_emp table. Record this value.
SQL>
SQL>
2
3
15. Run the E:\LABS\LABS\lab13_15.sql script, which updates the rows of the
new_emp table. Analyze the new_emp table again and query the user_tables
view to get the new value of chain_cnt. Record this value. Also check the status of
the new_emp_name_idx index.
Practice 13 (continued)
SQL> @E:\LABS\LABS\lab13_15.sql
SQL> ANALYZE TABLE new_emp COMPUTE STATISTICS;
SQL> SELECT table_name, chain_cnt
  2  FROM user_tables
  3  WHERE table_name = 'NEW_EMP';
SQL> SELECT index_name, status
  2  FROM user_indexes
  3  WHERE index_name = 'NEW_EMP_NAME_IDX';
16. Resolve the migration caused by the previous update, by using the ALTER TABLE
MOVE command. This will cause the index to become unusable and should be rebuilt
using the ALTER INDEX REBUILD command before reanalyzing the new_emp
table. Confirm that the migration has been resolved by querying chain_cnt column
in the user_tables view and confirm that the index is valid by querying the
user_indexes view.
SQL>
2
SQL>
SQL>
SQL>
2
3
SQL>
2
3
Practice 15
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console.
1. Connect as hr/hr, drop the new_employees table and create an IOT called
new_employees in the hr schema. Give the table the same columns as the
hr.employees table. Make the employee_id column the primary key and name
the primary key index new_employees_employee_id_pk.
SQL> CONNECT hr/hr
SQL> DROP TABLE new_employees;
SQL> CREATE TABLE new_employees
  2  (employee_id    NUMBER(6),
  3   first_name     VARCHAR2(20),
  4   last_name      VARCHAR2(25),
  5   email          VARCHAR2(25),
  6   phone_number   VARCHAR2(20),
  7   hire_date      DATE,
  8   job_id         VARCHAR2(10),
  9   salary         NUMBER(8,2),
 10   commission_pct NUMBER(2,2),
 11   manager_id     NUMBER(6),
 12   department_id  NUMBER(4),
 13   CONSTRAINT new_employees_employee_id_pk
 14   PRIMARY KEY (employee_id))
 15  ORGANIZATION INDEX;
2. Confirm the creation of the table by querying the user_tables and the
user_indexes views
3. Populate the new_employees table with the rows from the hr.employees table.
Practice 15 (continued)
4. Create a secondary B-tree index on the last_name column of the
new_employees table. Place the index in the indx tablespace. Name the index
last_name_new_employees_idx. Collect the statistics for the secondary index.
SQL>
2
3
SQL>
>
5. Confirm the creation of the index by using the user_indexes view in the data
dictionary. Query the index_name, index_type, blevel and leaf_blocks.
SQL> SELECT index_name, index_type, blevel, leaf_blocks
2 FROM user_indexes
3 WHERE index_name = 'LAST_NAME_NEW_EMPLOYEES_IDX';
Note: If the values for blevel and leaf_blocks are null then there were no
statistics collected. Confirm that the value of index_type is normal.
6. Create a reverse key index on the department_id of the employees_hist
table. Place the index in the indx tablespace. Name the index
emp_hist_dept_id_idx.
SQL>
2
3
4
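A sketch of the statement (the names and placement come from the question):
SQL> CREATE INDEX emp_hist_dept_id_idx
  2  ON employees_hist (department_id)
  3  TABLESPACE indx
  4  REVERSE;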
7. Confirm the creation of the index and that it is a reverse key index, by querying the
user_indexes view in the data dictionary. Query the index_name,
index_type, blevel and leaf_blocks.
Note: This time the values of blevel and leaf_blocks should be null, because
you did not collect statistics for this index while creating it. Also the value for index
type should now be normal/reverse.
Practice 15 (continued)
8. Create a bitmap index on the job_id column of the employees_hist table. Place
the index in the indx tablespace. Name the index bitmap_emp_hist_idx.
SQL> CREATE BITMAP INDEX bitmap_emp_hist_idx
2 ON employees_hist (job_id)
3 TABLESPACE indx;
9. Confirm the creation of the index and that it is a bitmapped index by querying the
user_indexes view in the data dictionary. Query the index_name,
index_type, blevel, and leaf_blocks.
SQL> SELECT index_name, index_type
2 FROM user_indexes
3 WHERE index_name = 'BITMAP_EMP_HIST_IDX';
Practice 16
In this practice you will make use of the AUTOTRACE feature and create the
plan_table table. These are covered in detail in the chapter titled SQL Statement Tuning.
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console.
1. Connect as sh/sh and confirm that the plan_table table exists. If the table does
exist then truncate it, otherwise create the plan_table table using
$ORACLE_HOME/rdbms/admin/utlxplan.sql.
SQL> CONNECT sh/sh
SQL> DESC plan_table
Note: The rewrite_enabled column must have a value of Y in order for the
practice on query rewrite to work.
Practice 16 (continued)
4. Set AUTOTRACE to Traceonly Explain, to generate the explain plan for the query
E:\LABS\LABS\lab16_04.sql
SQL> SET AUTOTRACE Traceonly Explain
SQL> @E:\LABS\LABS\lab16_04.sql
5. Set the QUERY_REWRITE_ENABLED parameter to True for the session and run the
same query, E:\LABS\LABS\lab16_04.sql, as in the previous practice. Note
the change in the explain plan due to the query rewrite. Set AUTOTRACE to Off and
disable query rewrite after the script has completed running.
SQL>
SQL>
SQL>
SQL>
Practice 17
The objective of this practice is to use available diagnostic tools to monitor lock contention.
You will need to start three sessions in separate windows. Log in as hr/hr in two separate
sessions (sessions 1 and 3) and as sys/oracle as sysdba in another session (session 2).
Throughout this practice Oracle Enterprise Manager can be used if desired. SQL Worksheet
can be used instead of SQL*Plus and there are many uses for the Oracle Enterprise Manager
console.
1. In session 1 (user hr/hr), update the salary by 10% for all employees with a salary <
15000 in the temp_emps table. Do not COMMIT.
SQL> CONNECT hr/hr
SQL> UPDATE temp_emps
  2  SET salary = salary * 1.1
  3  WHERE salary < 15000;
2. In session 2 connect as sys/oracle AS sysdba and check to see if any locks are
being held by querying the v$lock view.
SQL>
SQL>
2
3
3. In session 3 ( the session not yet used), connect as hr/hr and drop the temp_emps
table. Does it work?
SQL> CONNECT hr/hr
SQL> DROP TABLE hr.temp_emps;
Note: The DDL statement requires an exclusive table lock. It cannot obtain it, because
session 1 already holds a row exclusive table lock on the temp_emps table.
&
l
a
4. In session 3 (hr/hr), update the salary by 5% for all employees with a salary > 15000
in the temp_emps table. Do not COMMIT.
SQL> CONNECT hr/hr
SQL> UPDATE temp_emps
  2  SET salary = salary * 1.05
  3  WHERE salary > 15000;
Practice 17 (continued)
5. In session 2, check to see what kind of locks are being held on the temp_emps table,
using the v$lock view.
SQL> SELECT sid, type, id1, id2, lmode, request
  2  FROM v$lock
  3  WHERE id1 =
  4    (SELECT object_id FROM dba_objects
  5     WHERE object_name = 'TEMP_EMPS'
  6     AND object_type = 'TABLE');
6. In session 3, roll back the changes you made and set the manager_id column to 10
for all employees who have a salary < 15000.
SQL> ROLLBACK;
SQL> UPDATE hr.temp_emps SET manager_id = 10
  2  WHERE salary < 15000;
Note: This session will hang, so do not wait for the statement to complete.
7. In session 2, check to see what kind of locks are being held on the temp_emps table,
using the v$lock view.
SQL> SELECT sid, type, id1, id2, lmode, request
  2  FROM v$lock
  3  WHERE id1 =
  4    (SELECT object_id
  5     FROM dba_objects
  6     WHERE object_name = 'TEMP_EMPS'
  7     AND object_type = 'TABLE');
SQL> @$ORACLE_HOME/rdbms/admin/catblock.sql
SQL> SELECT waiting_session, holding_session
  2  FROM dba_waiters;
SQL> SELECT sid, serial#, username
  2  FROM v$session
  3  WHERE sid = '&HOLDING_SESSION';
SQL> ALTER SYSTEM KILL SESSION '&SID,&SERIAL_NUM';
Tuning Workshop
Workshop Scenarios
Workshop Scenario
At the beginning of each scenario, the database is shut down and then restarted.
When resizing memory components, do not consume more memory than is actually required
to meet the requirements of the objects given. For example, there is little point in resizing
the shared pool to 500 MB. The purpose of the workshop is to be as realistic as possible.
The company concerned has at present two OLTP users and two DSS users. The system was
set up by a trainee DBA, and though it works, the performance is very slow. You have been
invited in to resolve the current performance problems, and to prepare the present system for
an increase in the number of users accessing the database. The company is about to expand
to 10 of each type of user (twenty users in all). At the same time the company is unwilling to
spend extra on new hardware components.
Therefore, the management has imposed a limit of 20 MB for the entire SGA.
Workshop Methodology
Workshop Methodology
Perform the following steps:
Make sure that the job scheduler is set to collect statistics at least every 10 minutes or
use manual snapshot collection.
Open the WORKLOAD window and select the desired workload.
Each scenario has an icon labeled with the scenario number.
Allow time for some statistics to be generated (at least 20 minutes). Shorter time
periods make it more difficult to determine where problems exist.
The workload generator has been set to run for a minimum of 20 minutes. After 20
minutes the users will complete what they are working on and then log off the
database. Check each window before closing for any errors that the user might have
received.
Generate a report by running the spreport.sql script. Choose a start and end time
that fall within the period during which the workload generator was running. Name the report
in a manner associated with the scenario; for example, for scenario 1 use
reportscn1.txt, for Scenario 5 use reportscn5.txt, and so on.
Look for the generated report in the directory from which SQL*Plus was executed.
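For example, the report can be generated from SQL*Plus with the script shipped in the Oracle
home ("?" is the SQL*Plus shorthand for ORACLE_HOME):
SQL> @?/rdbms/admin/spreport.sql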
Workshop Procedure
1. Choose a scenario.
2. Create a Statspack report.
3. Run the workload generator.
4. Create a Statspack report.
5. Determine what changes should take place.
6. Implement the changes.
7. Return to the second step to check that the changes have made an improvement.
8. When the changes have improved performance, choose another scenario.
Workshop Procedure
The workshop is executed against the local database. There is a WORKSHOP group on your
desktop. This group includes icons for choosing a scenario and applying a workload.
Choose a Scenario
There are seven scenarios available. Each scenario is represented in the WORKSHOP group
as a .bat file. To set the database for scenario 1 select the icon labeled 1.bat.
Workshop Scenario 1
(Shared Pool)
Workshop Scenario 1
Waits recorded on the shared pool and library cache latches can be indicative of a small
shared pool. However, before increasing the SHARED_POOL_SIZE, it would be advisable
to determine why the pool is too small. Some reasons are listed below:
Many SQL statements stored in the SQL area are executed only once, and they often
differ only in the literals in the WHERE clause. A case could be made here for using bind
variables, or for setting CURSOR_SHARING. To identify these SQL statements,
examine the report created by Statspack, or query v$sql using a LIKE condition in
the WHERE clause to collect information about similar SQL statements.
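For instance, a query of the following form groups statements that differ only in a literal;
the pattern shown here is only an illustration:
SQL> SELECT sql_text, executions
  2  FROM v$sql
  3  WHERE sql_text LIKE 'SELECT%FROM hr.employees WHERE employee_id%';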
Examine which packages are loaded using the query shown below:
SQL> SELECT *
  2  FROM v$db_object_cache
  3  WHERE sharable_mem > 10000
  4  AND (type = 'PACKAGE' OR type = 'PACKAGE BODY'
  5       OR type = 'FUNCTION' OR type = 'PROCEDURE')
  6  AND kept = 'NO';
With this information, determine whether the SQL statements can be converted into
procedures and stored in packages; this can help users share the same cursor.
After you have reduced the number of SQL statements as much as possible, run the
following query:
SQL> SELECT SUM(pins) "Executions", SUM(reloads) "Cache Misses",
  2         SUM(reloads)/SUM(pins)
  3  FROM v$librarycache;
Increase the shared pool to reduce cache misses. Record the improvement gained for each
increase in the shared pool, in order to determine whether the extra memory is worth the
benefit received.
The Dictionary Cache Stats section of the Statspack report contains a row for each area of
the data dictionary cache. Each area can then be checked for usage. For example, if there is
a large number of gets on dc_sequences, this is probably because sequence numbers are not
being cached. To reduce the number of gets on dc_sequences, consider increasing the number
of sequence numbers that are cached.
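As a sketch only (the sequence name is illustrative and not part of the scenario), caching
more numbers for a heavily used sequence looks like this:
SQL> ALTER SEQUENCE oe.orders_seq CACHE 100;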
&
l
a
n
r
te
n
I
e
r
O
l
c
a
Workshop Scenario 2
(Buffer Cache)
Workshop Scenario 2
The first indication that the buffer cache is too small is waits on the free buffer waits event.
Contention for the cache buffers lru chain latch may also indicate that the buffer cache is
too small. Waits on this latch may also signify that the DBWR process is not able to keep up
with the workload.
To determine which problem is causing the latch contention, examine the number of writes
in the file statistics found in the Statspack report.
On the front page of the Statspack report, the section named Instance Efficiency
Percentages lists the important ratios of the instance. For this scenario, the value of Buffer
Hit % is of interest.
The value of Buffer Hit % depends on the individual system. Ideally, this value should be
close to 100 percent. There may be several reasons why this goal cannot be realized. A low
percentage indicates that there are a lot of buffers being read into the database buffer cache.
Before increasing the size of the database buffer cache you should examine what SQL
statements are being run against the database. You are looking for statements that cause a
high number of buffer gets and how many times these statements are executed.
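One way to find such statements (a sketch; the threshold is arbitrary) is to query v$sql:
SQL> SELECT sql_text, executions, buffer_gets
  2  FROM v$sql
  3  WHERE buffer_gets > 100000
  4  ORDER BY buffer_gets DESC;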
When DB_CACHE_ADVICE has been set to ON, allow the system to run the required scripts to
collect information regarding buffer usage.
After the database executes a typical load, query the v$db_cache_advice view and set
a new value for the DB_CACHE_SIZE parameter. When a new size has been determined,
use the following command to dynamically change the size of the cache.
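For example, with an illustrative size:
SQL> ALTER SYSTEM SET db_cache_size = 16M;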
Run a test load with this new value. Collect the statistics again. If the increase has resolved
the problem then change the value in the parameter file.
Workshop Scenario 3
(Redo Log Buffer)
Workshop Scenario 3
Waits on the log buffer space event are an indication that your log buffer is too small.
On the first page of the Statspack report there is a section named Instance Efficiency
Percentages. Note the value of the Redo NoWait statistic. Although this statistic's ideal
value of 100 percent is seldom achieved, any lesser value could indicate that the redo log
buffer is not correctly sized. Consider reducing the amount of redo created, by the use of
NOLOGGING in appropriate statements. Query the data dictionary to determine the current size
of the redo log buffer.
Estimate the increase required by examining the amount of redo generated. The first page of
the Statspack report has this information under the heading Load Profile.
Edit the LOG_BUFFER parameter to set a new size for the redo log buffer. This parameter is
static, so you will need to bounce the database after making the change.
Rerun the workload generator and collect the statistics again. If the increase has not fully
resolved the problem, repeat the process with a larger redo log buffer.
Note: The redo log buffer does not always have the size stipulated in the parameter file. This
is due to minimum size, and rounding upwards to the nearest Oracle block. To confirm the
actual redo log buffer size use the v$sgastat view.
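For example:
SQL> SELECT name, bytes
  2  FROM v$sgastat
  3  WHERE name = 'log_buffer';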
&
l
a
n
r
te
n
I
e
r
O
l
c
a
O
e
y
l
n
s
U
Workshop Scenario 4
Several indexes have been deleted and performance has decreased. The results are seen in
the Statspack report where there are many indications that untuned SQL is running. For
example:
The buffer cache hit ratio is lower. This can be seen on the first page of the Statspack
report, in the Instance Efficiency Percentages section.
There are more waits on the free buffer waits event.
There are more full table scans occurring.
Missing or incorrect indexes cause too many full table scans. This can also be caused by
badly written SQL statements.
To resolve the problem you must determine which SQL statements are run on the database
during a normal workload. Either use the SQL Trace utility or examine the top resource
users in the Statspack report to collect a representation of these SQL statements.
After the appropriate statements are collected, examine the WHERE clauses. Any columns
referenced in the WHERE clause are good candidates for indexes.
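As a sketch only (the table, column, and index names are illustrative and are not part of the
scenario text), re-creating such an index might look like this:
SQL> CREATE INDEX oe.ord_customer_ix
  2  ON oe.orders (customer_id);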
Workshop Scenario 5
(Rollback Segments)
Workshop Scenario 5
Presently the company has four users that log on to the database. Currently, the system is set
up with one very large rollback segment.
The company is planning on increasing the number of users to twenty. Having only one
rollback segment is likely to cause contention.
Collect statistics during the running of the workload generator and generate a Statspack
report. In the report there is a section named Buffer Wait Statistics which has an undo
header statistic. This statistic indicates that there is contention for the rollback segment
header blocks.
Use either of the following two methods to resolve this problem:
Create extra rollback segments
Use Auto Managed Undo
Options for Resolving Rollback Contention in an Oracle9i Database
In Oracle9i, the problems around the number and size of rollback segments have been
eliminated. Instead of creating rollback segments, the DBA only has to create an undo
tablespace that is large enough.
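A sketch of the automatic undo approach (the tablespace name, file name, and size are
illustrative; switching UNDO_MANAGEMENT to AUTO requires an instance restart):
SQL> CREATE UNDO TABLESPACE undotbs2
  2  DATAFILE 'E:\oradata\ORCL\undotbs02.dbf' SIZE 200M;
SQL> ALTER SYSTEM SET undo_tablespace = undotbs2;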
Workshop Scenario 6
The DSS users log on and create a series of reports for management. This requires a lot of
sorting. Currently, the scripts are completing. However, management wants a more rapid
completion.
The first step is to run the workload and collect the Statspack report. Also make a note of
how many transactions are completed during your running period (should be at least twenty
minutes).
Because of the concern about sorts, the tendency is to jump straight to the Instance Activity
stats and look for the values of Sorts (Disk), Sorts (Memory), and Sorts (Rows). However,
doing so ignores some good information found on the front page.
On the front page, look at the buffer hit ratio. In a data warehouse environment you would
expect this ratio to drop; however, combining this information with the high number of
Buffer Busy Waits indicates that the buffer cache is too small for the number of sorts taking
place. So you would likely want to increase the size of the buffer cache.
Moving to the Instance Activity report, you find that a large number of sorts are having to go
to disk. Ideally, no sorts should go to disk; however, accomplishing this has a high cost in
memory. The ratio of sorts (Disk) to sorts (Memory) should be less than 5 percent.
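The same figures can be checked directly from the instance statistics, for example:
SQL> SELECT name, value
  2  FROM v$sysstat
  3  WHERE name IN ('sorts (memory)', 'sorts (disk)');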
Sample Statspack Report
DB Id: 995156390   DB Name: ORCL   Instance: orcl   Release: 9.2.0.1.0   Host: EDT3R4P1
Snapshots: 1 to 2, taken 30-Jul-02 from 15:14:22 to 16:01:23 (elapsed 47.02 minutes)
The report contains the standard Statspack sections: Load Profile; Instance Efficiency
Percentages; wait events (the event names visible in this report include enqueue, direct path
read, direct path write, latch free, smon timer, and the SQL*Net events); SQL ordered by
Buffer Gets, Physical Reads, Executions, and Parse Calls (dominated by the workload_generator
package and the Workload Generator modules); Instance Activity Statistics; Tablespace and
File I/O Statistics (SYSTEM, TEMP, TOOLS, UNDOTBS1); Instance Recovery Statistics; Buffer
Pool Statistics and Buffer Pool Advisory; Buffer Wait Statistics; PGA Aggregate Target
Statistics, Histogram, and Advisory; Enqueue Activity; Rollback Segment and Undo Segment
Statistics; Latch Activity, Latch Sleep Breakdown, and Latch Miss Sources; Dictionary Cache
and Library Cache Statistics; Shared Pool Advisory; SGA Memory Summary and SGA breakdown;
and the init.ora Parameters listed below.
init.ora Parameters for DB: ORCL  Instance: orcl  Snaps: 1 -2

Parameter Name                  Begin value                        End value (if different)
------------------------------  ---------------------------------  ------------------------
background_dump_dest            E:\orant\ora92\admin\ORCL\bdump
compatible                      9.2.0.0.0
control_files                   E:\orant\ora92\oradata\ORCL\contr
core_dump_dest                  E:\orant\ora92\admin\ORCL\cdump
db_block_size                   4096
db_cache_size                   4194304
db_domain
db_file_multiblock_read_count   16
db_keep_cache_size              0
db_name                         ORCL
db_recycle_cache_size           0
fast_start_mttr_target          10
hash_join_enabled               TRUE
instance_name                   ORCL
java_pool_size                  0
large_pool_size                 0
log_buffer                      64000
log_checkpoint_timeout          10
open_cursors                    300
pga_aggregate_target            25165824
processes                       150
query_rewrite_enabled           FALSE
remote_login_passwordfile       EXCLUSIVE
shared_pool_reserved_size       1024000
shared_pool_size                8388608
sort_area_size                  512
star_transformation_enabled     FALSE
timed_statistics                TRUE
undo_management                 AUTO
undo_retention                  10800
undo_tablespace                 UNDOTBS1
user_dump_dest                  E:\orant\ora92\admin\ORCL\udump
workarea_size_policy            MANUAL
-------------------------------------------------------------
End of Report
Redundant Arrays of
Inexpensive Disks Technology (RAID)
RAID 0 refers to simple data striping of multiple disks into a single logical volume, and has
no fault tolerance. When properly configured, it provides excellent response times for high
concurrency random I/O and excellent throughput for low concurrency sequential I/O.
Selection of the array and stripe sizes requires careful consideration in order to achieve the
promised throughput. For RAID 0, the total I/Os-per-second load generated against the array
is calculated directly from the application load, because there is no fault tolerance in this
configuration:
Total I/O per second load on array = (reads/transaction + writes/transaction) *
transactions/second
The size of each drive in the array can be calculated from the online volume requirements as
follows:
Drive size = [total space required by application / number of drives in array] Rounded
up to next drive size.
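For example, assuming (purely for illustration) 100 transactions per second with 4 reads and
2 writes per transaction, the array must sustain (4 + 2) * 100 = 600 I/Os per second.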
Below is a summary of RAID 0 characteristics:
- Random read performance: Excellent under all concurrency levels if each I/O
request fits within a single striping segment
- Random write performance: Same as random read performance
- Sequential read performance: Excellent with fine-grained striping at low
concurrency levels
- Sequential write performance: Same as sequential read performance
- Outage frequency: Poor; any single disk failure will cause application outage
- Outage duration: Poor; the duration of a RAID 0 outage is the time required to
detect the failure, replace the disk drive, and perform Oracle media recovery
- Performance degradation during outage: Poor; any disk failure causes all
applications requiring use of the array to crash
- Acquisition cost: Excellent, because there is no redundancy; you buy only
enough for storage and I/Os per second requirements
- Operational cost: Fair to poor; frequent media recoveries increase operational
costs and may outweigh the acquisition cost advantage
RAID Level 1, or disk mirroring, provides the best fault tolerance of any of the RAID
configurations. Each disk drive is backed up by an exact copy of itself on an identical drive.
A storage subsystem of mirrored drives can continue at full performance with a multiple
disk failure as long as no two drives in a mirrored pair have failed. The total I/Os per load
applied to a mirrored pair is calculated as follows:
Total I/O per second load on array = (reads/transaction + 2*writes/transaction) *
transactions/second
Note the multiplier of two on the writes/transaction factor. This is due to the fact that each write
request by an application to a mirrored pair actually results in two writes, one to the primary
disk and one to the backup disk. The size of the drive required is:
Drive size = [total space required by application / number of drives in array/2]
Rounded up to next drive size.
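With the same illustrative load as above (100 transactions per second, 4 reads and 2 writes
per transaction), a mirrored pair must sustain (4 + 2*2) * 100 = 800 I/Os per second.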
In the simplest RAID 1 configuration, the number of drives in the array is two: the primary
drive and its backup. The definition of RAID 1, however, includes the ability to expand the
array in units of two drives to achieve a striped and mirrored configuration. Striping occurs
in an array of four or more disks. Some industry literature (for example, Millsap, 1996) refers
to striped and mirrored configurations as RAID 0 + 1. The Compaq hardware used as an
example configuration in this document supports both configurations. Compaq uses only the
RAID 1 term to describe all 100% mirrored configurations in arrays of even-numbered
disks. Because the performance of a simple two-drive RAID 1 pair is somewhat different
from a striped and mirrored array, the figures for striped and mirrored are presented
separately under the RAID 0 + 1 section.
Below is a summary of characteristics of the two-disk array RAID configuration:
Random read performance: Good; if the implementation uses read-optimized RAID 1
controllers, which read from the drive with the smallest I/O setup cost, then slightly
better than an independent disk
Random write performance: Good (Application write requests are multiplied by two,
because the data must be written to two disks. Thus, some of the I/Os-per-second
capacity of the two drives is used up by the mirroring function.)
Sequential read performance: Fair; throughput is limited to the speed of one disk
Sequential write performance: Fair; same factors as are influencing the random write
performance
Outage frequency: Excellent
Outage duration: Excellent; with hot-swappable drives, no application outage is
caused by a single disk failure
As noted in the previous section, the striped and mirrored configuration is an expansion of
the RAID 1 configuration from a simple mirrored pair to an array of even-numbered drives.
This configuration offers the performance benefits of RAID 0 striping and the fault tolerance
of simple RAID 1 mirroring. The striped and mirrored configuration is especially valuable
for Oracle database files with high write rates, such as table data files and online
and archived redo log files. Unfortunately, it also presents the high costs of simple RAID 1.
The equations for the total I/Os per second and disk drive size calculations for RAID 0 + 1
are identical to those presented for RAID 1 above. The Compaq SMART array controller
used in the example configuration supports RAID 0 + 1 (RAID 1 in Compaq terminology) in
arrays up to 14 drives, providing the effective storage of 7 drives.
Below is a summary of characteristics of RAID 0 + 1 storage arrays:
Random read performance: Excellent under all concurrency levels if each I/O request
fits within a single striping segment (Using a stripe size that is too small can cause
dramatic performance breakdown at high concurrency levels.)
Random write performance: Good (Application write requests are multiplied by two
because the data must be written to two disks. Thus, some of the I/Os-per-second
capacity of the two drives is used up by the mirroring function.)
Sequential read performance: Excellent under all concurrency levels if each I/O
request fits within a single striping segment
Sequential write performance: Good
Outage frequency: Excellent; same as RAID 1
Outage duration: Excellent; same as RAID 1
Performance degradation during outage: Excellent; there is no degradation during a
disk outage (The resilvering operation that takes place when the failed disk is replaced
will consume a significant amount of the available I/Os-per-second capacity.)
Acquisition cost: Poor; same as RAID 1
Operational cost: Fair; same as RAID 1
In the RAID 3 configuration, disks are organized into arrays in which one disk is dedicated
to storage of parity data for the other drives in the array. The stripe size in RAID 3 is 1 bit.
This enables recovery time to be minimized, because data can be reconstructed with a
simple exclusive-OR operation. However, using a stripe size of 1 bit reduces I/O
performance. RAID 3 is not recommended for storing any Oracle database files. Also, RAID
3 is not supported by the Compaq SMART array controllers.
RAID 5 is similar to RAID 3, except that RAID 5 striping segment sizes are configurable,
and RAID 5 distributes parity across all the disks in an array. A RAID 5 striping segment
contains either data or parity.
Battery-backed cache greatly reduces the impact of the RAID 5 write overhead, but its
effectiveness is implementation-dependent. Large write-intensive batch jobs generally fill
the cache quickly, reducing its ability to offset the write-performance penalty inherent in the
RAID 5 definition.
The total I/Os per second load applied to a RAID 5 array is calculated as follows:
Total I/O per second load on array = (reads/transaction + 4*writes/ transaction) *
transactions/second
The writes/transaction figure is multiplied by four because the parity data must be written in
a six-step process:
1. Read the data drive containing the old value of the data to be overwritten. This requires
one I/O.
2. Read the parity drive. This requires one I/O.
3. Subtract the contribution of the old data from the parity value.
4. Add the contribution of the new data to the parity value.
5. Write the new parity value. This requires one I/O.
6. Write the new data value to the data drive. This requires one I/O.
Summing up all I/Os in this process yields four I/Os required for each write requested by the
application. This is the main reason that RAID 5 is not recommended for storing files with a
high I/O performance requirement; the 4 multiplier reduces the effective I/Os-per-second
capacity of the array.
The size of the drive required is:
Drive size = [total space required by application /(total number drives - number of
arrays)] Rounded up to next drive size.
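With the same illustrative load used earlier (100 transactions per second, 4 reads and
2 writes per transaction), a RAID 5 array must sustain (4 + 4*2) * 100 = 1,200 I/Os per
second, which illustrates why write-intensive files are poorly suited to RAID 5.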