
Oracle 10g Utilities

Session Objectives

• Utility discovery
• Utilities detailed
– data pump (impdp, expdp,
dbms_data_pump)
– dbms_file_transfer
– trcsess
What is a utility?

Webster's defines a utility as "a useful article or device"
Value Determination
• Documentation
– TechNet
– MetaLink
– Google
– OracleUtilities.com
• Research
– PL/SQL package specs
– OS executables (just run it!)
Utility Discovery

For new OS binaries …

dircmp –s newDir oldDir


New OS Binaries

agentok.buf, cemutls, clscfg, clsfmt, clsid, cmadmin, cmctl, cmgw, cmmigr, crsctl, ctxlc, dsm121dif, e2eme, emagent, emagtm, emctl, emtgtct1, emtgtct12, emutil, emwd, expdp, extjob, extproc32, genezi, impdp, isqlplusctl, kfod, lbuilder, lcsscan, localconfig, lsnodes, makeserverwlt, mkstore, netlaunch, nmb, nmei, nmo, nmocat, nmuct, nmupm, nmus, ocrcheck, ocrconfig, ocrdump, ocssd, oidca, olsadmintool, olsoidsync, onsctl, orajaxb, orapipe, orapki, osdbagrp, proxyserv, rawutl, repo_mig, runclass, searchctl, sslsetup, trcsess, xsl, xvm
Utility Discovery

For new PL/SQL packages …

select object_name from dba_objects
 where owner = 'SYS'
   and object_type = 'PACKAGE BODY'
minus
select object_name from dba_objects@ORCL92
 where owner = 'SYS'
   and object_type = 'PACKAGE BODY';
Utility Discovery

This approach also works for…


• Instance parameters – v$parameter
• Obsolete parameters – v$obsolete_parameter
• System events – v$event_name
• V$ tables – v$fixed_view_definition
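The same minus technique can be applied directly to these views. For example, a sketch that lists instance parameters new in 10g, assuming the same ORCL92 database link used for the package comparison:

```sql
-- Parameters present locally (10g) but not in the linked 9.2 database.
-- Assumes the ORCL92 database link from the package-comparison slide.
select name from v$parameter
minus
select name from v$parameter@ORCL92;
```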
New Packages

CONNECTIONINTERFACE, CURSORMANAGERINTERFACE, DATABASEINTERFACE, DATAPROVIDERINTERFACE, DBMSHSXP, DBMS_ADVANCED_REWRITE, DBMS_ADVISOR, DBMS_AQ_BQVIEW, DBMS_CDC_DPUTIL, DBMS_CDC_EXPDP, DBMS_CDC_EXPVDP, DBMS_CDC_IMPDP, DBMS_CDC_IPUBLISH, DBMS_CDC_ISUBSCRIBE, DBMS_CRYPTO, DBMS_CRYPTO_FFI, DBMS_CRYPTO_TOOLKIT, DBMS_CRYPTO_TOOLKIT_FFI, DBMS_DATAPUMP, DBMS_DBUPGRADE, DBMS_DBVERIFY, DBMS_DIMENSION, DBMS_FBT, DBMS_FEATURE_USAGE, DBMS_FEATURE_USAGE_INTERNAL, DBMS_FILE_TRANSFER, DBMS_FREQUENT_ITEMSET, DBMS_INDEX_UTL, DBMS_INTERNAL_SAFE_SCN, DBMS_ISCHED, DBMS_I_INDEX_UTL, DBMS_JAVA, DBMS_JAVA_DUMP, DBMS_JMS_PLSQL, DBMS_LDAP, DBMS_LDAP_API_FFI, DBMS_LDAP_UTL, DBMS_LOGREP_UTIL_INVOK, DBMS_LOGSTDBY_PUBLIC, DBMS_MONITOR, DBMS_PROFILER, DBMS_PRVT_TRACE, DBMS_REGISTRY_SERVER, DBMS_REPCAT_EXP, DBMS_REPCAT_MIGRATION, DBMS_RULE_EXP_UTLI, DBMS_SCHEDULER, DBMS_SCHED_CLASS_EXPORT, DBMS_SCHED_EXPORT_CALLOUTS, DBMS_SCHED_JOB_EXPORT, DBMS_SCHED_MAIN_EXPORT, DBMS_SCHED_PROGRAM_EXPORT, DBMS_SCHED_SCHEDULE_EXPORT, DBMS_SCHED_WINDOW_EXPORT, DBMS_SCHED_WINGRP_EXPORT, DBMS_SCHEMA_COPY, DBMS_SERVER_ALERT, DBMS_SERVER_ALERT_EXPORT, DBMS_SERVER_TRACE, DBMS_SERVICE, DBMS_SQLTUNE, DBMS_SQLTUNE_INTERNAL, DBMS_STAT_FUNCS, DBMS_STAT_FUNCS_AUX, DBMS_STREAMS_ADM_UTL_INVOK, DBMS_STREAMS_AUTH, DBMS_STREAMS_CDC_ADM, DBMS_STREAMS_DATAPUMP, DBMS_STREAMS_DATAPUMP_UTIL, DBMS_STREAMS_LCR_INT, DBMS_STREAMS_MESSAGING, DBMS_STREAMS_PUB_RPC, DBMS_STREAMS_RPC, DBMS_STREAMS_RPC_INTERNAL, DBMS_STREAMS_TABLESPACE_ADM, DBMS_STREAMS_TBS_INT, DBMS_STREAMS_TBS_INT_INVOK, DBMS_SUM_RWEQ_EXPORT, DBMS_SUM_RWEQ_EXPORT_INTERNAL, DBMS_SWRF_INTERNAL, DBMS_SWRF_REPORT_INTERNAL, DBMS_TRANSFORM_INTERNAL, DBMS_UNDO_ADV, DBMS_UPGRADE_INTERNAL, DBMS_WARNING, DBMS_WARNING_INTERNAL, DBMS_WORKLOAD_REPOSITORY, DBMS_XMLQUERY, DBMS_XMLSAVE, DBMS_XMLSTORE, DBMS_XSOQ, DBMS_XSOQ_ODBO, DBMS_XSOQ_UTIL, DEFINITIONMANAGERINTERFACE, EXF$DBMS_EXPFIL_SYSPACK, GENCONNECTIONINTERFACE, GENCURSORMANAGERINTERFACE, GENDATABASEINTERFACE, GENDATAPROVIDERINTERFACE, GENDATATYPEIDCONSTANTS, GENDEFINITIONMANAGERINTERFACE, GENFUNCTIONIDCONSTANTS, GENINTERRUPTABLEINTERFACE, GENMDMCLASSCONSTANTS, GENMDMOBJECTIDCONSTANTS, GENMDMPROPERTYIDCONSTANTS, GENMETADATAPROVIDERINTERFACE, GENPARAMETERIDCONSTANTS, GENSERVERINTERFACE, GENSNAPIINTERFACE, GET_ERROR$, INITJVMAUX, INTERRUPTABLEINTERFACE, JAVA_XA, KUPC$QUEUE, KUPC$QUEUE_INT, KUPC$QUE_INT, KUPD$DATA, KUPD$DATA_INT, KUPF$FILE, KUPF$FILE_INT, KUPM$MCP, KUPP$PROC, KUPV$FT, KUPV$FT_INT, KUPW$WORKER, LT, LTADM, LTAQ, LTDDL, LTDTRG, LTPRIV, LTRIC, LTUTIL, LT_CTX_PKG, LT_EXPORT_PKG, METADATAPROVIDERINTERFACE, OLAP_API_SESSION_INIT, OWM_9IP_PKG, OWM_BULK_LOAD_PKG, OWM_DDL_PKG, OWM_IEXP_PKG, OWM_MIG_PKG, OWM_MP_PKG, OWM_REPUTIL, OWM_VT_PKG, PRVT_ACCESS_ADVISOR, PRVT_ADVISOR, PRVT_DIMENSION_SYS_UTIL, PRVT_HDM, PRVT_SYS_TUNE_MVIEW, PRVT_TUNE_MVIEW, PRVT_UADV, PRVT_WORKLOAD, RMJVM, SERVERINTERFACE, SQLJUTL, SQLJUTL2, UD_TRIGS, UTL_COMPRESS, UTL_DBWS, UTL_I18N, UTL_LMS, UTL_RECOMP, UTL_SYS_COMPRESS, WM_DDL_UTIL, WM_ERROR, XML_SCHEMA_NAME_PRESENT
Packages – OS Comparison

[Bar chart: count of PL/SQL packages vs. OS executables by release (7.3.4, 8.1.7, 9.2.0, 10.1), y-axis 0–500; the PL/SQL package count grows much faster than the OS executable count.]
Why Emphasis on PL/SQL?

• Platform independence
• Speed for DB processing
• Mature development environment
Moving Data With Data Pump

Data Pump

• New and improved import and export
• Both OS and PL/SQL versions available
• Enough on this topic for an entire book or a three-hour presentation
Advantages over imp/exp

• Job interrupt/restart capability
• OS and PL/SQL versions available
• Interactive mode
• Job monitoring
• Fine-grained object selection
• Intelligence
• PERFORMANCE!
Data Pump Architecture
• Different than imp/exp
• New background processes
– Master DMnn
– Workers DWnn
• Master table created in the user's schema, named after the job (used for restart); it holds the job's metadata
• Requires a directory object
• Uses direct path or external tables
expdp

• Data Pump export
• Faster than traditional export
• Interactive mode
  – Ctrl-C will get you there (or ATTACH)
• Job interrupt and restart capability
  – ATTACH=job_name
SQL> select owner_name, job_name, state from dba_datapump_jobs;
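Reattaching from the command line might look like the following sketch (the job name and credentials are the ones used elsewhere in these slides):

```shell
$ expdp dave/dave attach=DAVE_TEST
Export> status
Export> stop_job
```

STATUS reports the running job's progress; STOP_JOB pauses it, and the job can later be resumed by reattaching and issuing START_JOB.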
expdp
• Usage is pretty much the same as exp
• Plenty of doc available on all options
• Disk space estimation
C:\ioug>expdp parfile=dp_options.par

Export: Release 10.1.0.2.0 - Production on Sunday, 27 February, 2005 17:08

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "DAVE"."DAVE_TEST": parfile=dp_options.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 636 MB

Processing object type TABLE_EXPORT/TABLE/TABLE
Sample par file contents

DUMPFILE=myexport.dmp
DIRECTORY=DATA_PUMP_DIR
JOB_NAME=MY_EXPORT_JOB
SCHEMAS=DAVE
INCLUDE=PROCEDURE
INCLUDE=VIEW
INCLUDE=TABLE:"like 'DAVE_TEST%'"
expdp options mapped (exp equivalent on the right)

dumpfile = file
logfile = log
include = grants, indexes, constraints
status = feedback

No expdp options for buffer, direct
impdp

• Data Pump import
• MUCH faster than traditional import !?!? (Oracle claims 15-45x)
• Interactive mode
• Job interrupt and restart capability
impdp options mapped (imp equivalent on the right)

content = rows Y/N
remap_schema = fromuser / touser
reuse_datafiles = destroy
sqlfile = indexfile
table_exists_action = ignore

No impdp options for filesize, commit
Performance Benchmark
• Averaged over 10 executions
• Performed on Unix and Windows
machines
• Used table with two million rows

SQL> desc two_million_rows;
Name                                      Null?    Type
----------------------------------------- -------- --------------
COL1                                               NUMBER
COL2                                               NUMBER
COL3                                               VARCHAR2(200)
COL4                                               VARCHAR2(200)
COL5                                               VARCHAR2(200)
COL6                                               DATE
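The slides don't show how the test table was built; a hypothetical script along these lines would produce a comparable table (the column values are illustrative assumptions, not the author's actual data):

```sql
-- Hypothetical test-data generator, not the author's actual script.
create table two_million_rows (
  col1 number,
  col2 number,
  col3 varchar2(200),
  col4 varchar2(200),
  col5 varchar2(200),
  col6 date);

insert /*+ append */ into two_million_rows
select level, level,
       rpad('a', 200, 'a'),
       rpad('b', 200, 'b'),
       rpad('c', 200, 'c'),
       sysdate
  from dual
connect by level <= 2000000;
commit;
```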
Par files used in test

Data Pump par file:
tables=two_million_rows
userid=dave/dave
job_name=dave_test

Import par file:
commit=n
buffer=64000
tables=two_million_rows
log=import_test.log
userid=dave/dave

Export par file:
compress=n
direct=y
buffer=1000
tables=two_million_rows
log=export_test.log
userid=dave/dave
Performance Tests
"One test is worth more than a thousand opinions"

Exec / Platform    Execution Time    Size of export file
exp on Win         145 secs          578 MB
expdp on Win       110 secs          574 MB
exp on Unix        69 secs           578 MB
expdp on Unix      59 secs           574 MB
imp on Win         4 min 25 secs     N/A
impdp on Win       1 min 40 secs     N/A
imp on Unix        12 minutes        N/A
impdp on Unix      1 min 11 secs     N/A

impdp consistently consumed more CPU than imp (7.7% vs. 1.3%)
Test Results
• expdp is faster than exp
• impdp is WAY faster than imp
• Size of the export file is nearly the
same (exp vs. expdp)
• Dump file produced by exp cannot
be used by impdp
• expdp dump file is platform
independent
DBMS_DATAPUMP
• Same capability from within
PL/SQL
– Uses same Oracle background
processes
– Leverage it from Oracle jobs
• Less documentation = harder to
use
Working DBMS_DATAPUMP example

DECLARE
  h1 NUMBER; -- handle for data pump job
BEGIN
  -- step 1: setup job
  h1 := DBMS_DATAPUMP.OPEN(
    OPERATION   => 'EXPORT',
    JOB_MODE    => 'SCHEMA',
    REMOTE_LINK => NULL,
    JOB_NAME    => 'DAVE_TEST',
    VERSION     => 'LATEST');

  -- step 2: setup export file
  DBMS_DATAPUMP.ADD_FILE(
    HANDLE    => h1,
    FILENAME  => 'twomillionrows.dmp',
    DIRECTORY => 'DATA_PUMP_DIR',
    FILESIZE  => NULL,
    FILETYPE  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

  -- step 3: add log file
  DBMS_DATAPUMP.ADD_FILE(
    HANDLE    => h1,
    FILENAME  => 'example1.log',
    DIRECTORY => 'DATA_PUMP_DIR',
    FILESIZE  => NULL,
    FILETYPE  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- step 4: specify owner filter
  DBMS_DATAPUMP.METADATA_FILTER(
    HANDLE      => h1,
    NAME        => 'SCHEMA_EXPR',
    VALUE       => 'IN (''DAVE'')',
    OBJECT_TYPE => NULL);

  -- step 5: specify table filter
  DBMS_DATAPUMP.METADATA_FILTER(
    HANDLE      => h1,
    NAME        => 'NAME_EXPR',
    VALUE       => 'IN (''TWO_MILLION_ROWS'')',
    OBJECT_TYPE => 'TABLE');

  -- step 6: specify parallel
  DBMS_DATAPUMP.SET_PARALLEL(
    HANDLE => h1,
    DEGREE => 1);

  -- step 7: let it roll
  DBMS_DATAPUMP.START_JOB(HANDLE => h1);
  DBMS_DATAPUMP.DETACH(HANDLE => h1);
END;
/
Step 1 – Prime the pump

h1 := DBMS_DATAPUMP.OPEN(
OPERATION => 'EXPORT',
JOB_MODE => 'SCHEMA',
REMOTE_LINK => NULL,
JOB_NAME => 'DAVE_TEST',
VERSION => 'LATEST');
Step 2 – Specify export file
DBMS_DATAPUMP.ADD_FILE(
HANDLE => h1,
FILENAME => 'twomillionrows.dmp',
DIRECTORY => 'DATA_PUMP_DIR',
FILESIZE => NULL,
FILETYPE => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
Step 3 – Specify log file
DBMS_DATAPUMP.ADD_FILE(
HANDLE => h1,
FILENAME => 'example1.log',
DIRECTORY => 'DATA_PUMP_DIR',
FILESIZE => NULL,
FILETYPE =>
DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
Step 4 – Specify Filter
DBMS_DATAPUMP.METADATA_FILTER(
HANDLE => h1,
NAME => 'SCHEMA_EXPR',
VALUE => 'IN (''DAVE'')',
OBJECT_TYPE => NULL);

DBMS_DATAPUMP.METADATA_FILTER(
HANDLE => h1,
NAME => 'NAME_EXPR',
VALUE => 'IN (''TWO_MILLION_ROWS'')',
OBJECT_TYPE => 'TABLE');
Step 5 – Let it roll

DBMS_DATAPUMP.START_JOB(HANDLE => h1);


dbms_datapump.detach(HANDLE => h1);
So many questions …

• What are valid values for name, value, filesize, filetype?
• How complicated can the logic be?

Review the package spec for the best doc
Data pump usage
• Leverage restart capability
– Fix problems
– Yield to other processing
• Use PL/SQL version from within
DBMS_JOB
• Use v$session_longops for an accurate estimate of time to completion
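A monitoring query along these lines can show progress (the opname pattern is an assumption; Data Pump jobs appear in v$session_longops under their job name):

```sql
-- Progress of a running Data Pump job; job name assumed to be DAVE_TEST.
select sid, serial#, sofar, totalwork,
       round(sofar / totalwork * 100, 1) pct_done,
       time_remaining
  from v$session_longops
 where opname like '%DAVE_TEST%'
   and sofar <> totalwork;
```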
Pseudocode

If data pump job is running then
  If it will not be done for a while then   -- dbms_datapump.get_status
    pause data pump job                     -- dbms_datapump.stop_job
    job_stopped = true
  else                                      -- yield
    job_stopped = false
    wait until it is done                   -- dbms_lock.sleep(time_remaining + 20)
  end if
End if

Proceed …

If job_stopped = true then
  restart data pump job                     -- dbms_datapump.start_job
End if
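The pause branch of the pseudocode might be sketched in PL/SQL like this (job name, owner, and the decision logic are assumptions for illustration):

```sql
DECLARE
  h1        NUMBER;
  job_state VARCHAR2(30);
  sts       ku$_Status;
BEGIN
  -- reattach to the running job by name (assumed DAVE_TEST, owned by DAVE)
  h1 := DBMS_DATAPUMP.ATTACH(JOB_NAME => 'DAVE_TEST', JOB_OWNER => 'DAVE');

  DBMS_DATAPUMP.GET_STATUS(
    HANDLE    => h1,
    MASK      => DBMS_DATAPUMP.KU$_STATUS_JOB_STATUS,
    TIMEOUT   => 0,
    JOB_STATE => job_state,
    STATUS    => sts);

  IF job_state = 'EXECUTING' THEN
    -- pause now; resume later with DBMS_DATAPUMP.START_JOB after reattaching
    DBMS_DATAPUMP.STOP_JOB(HANDLE => h1, IMMEDIATE => 0, KEEP_MASTER => 1);
  END IF;
END;
/
```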
When using DB Links

• No dump file is created
• Use filtering due to bandwidth
• Consider "create table as select" instead
  – For a table with 2 million rows, Data Pump took 2 minutes; CTAS took 50 secs
Utilize Data Pump Views

• DBA_DATAPUMP_JOBS
• DBA_DATAPUMP_SESSIONS
• V$DATAPUMP_JOB
• V$DATAPUMP_SESSION
• SCHEMA.<job name>
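A query joining the first two views shows which jobs are active and which sessions are attached (a sketch):

```sql
select j.owner_name, j.job_name, j.state, s.saddr
  from dba_datapump_jobs j,
       dba_datapump_sessions s
 where j.owner_name = s.owner_name
   and j.job_name   = s.job_name;
```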
Limited support in PL/SQL
ERROR at line 1:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2486
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2718
ORA-06512: at line 11

Why did this execution fail?


Limited support in PL/SQL
ERROR at line 1:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2486
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2718
ORA-06512: at line 11

The real cause of the error was that the export file already existed
in the target directory. Obvious?
Copying files with
DBMS_FILE_TRANSFER
DBMS_FILE_TRANSFER
• Provides a way to copy data files without any OS credentials (copying is restricted to files accessible through directory objects)
• Requires the following DB privs
  – Execute on dbms_file_transfer
  – Read on directory source_dir
  – Write on directory dest_dir
• Can copy local files (data files or Data Pump dumps) to a remote server or vice versa
• Created mainly for transportable tablespaces
The Three Procedures

• copy_file – for copying files locally on the db server
• get_file – for getting a file from a remote database
• put_file – for putting a local file to a remote database
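For example, pulling a file from a remote database with get_file might look like this sketch (directory objects and the MID101SO link are reused from the slides' other examples; the file name is an assumption):

```sql
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SOURCE_DIR',   -- on the remote database
    source_file_name             => 'users01.dbf',  -- assumed file name
    source_database              => 'MID101SO',     -- database link
    destination_directory_object => 'DEST_DIR',     -- local directory object
    destination_file_name        => 'users01.dbf');
END;
/
```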
Example: Trans Tsps

BEGIN
  DBMS_STREAMS_TABLESPACE_ADM.PULL_SIMPLE_TABLESPACE(
    tablespace_name      => 'TS_ORDERS',
    database_link        => 'MID101SO',
    directory_object     => 'DEST_DIR',
    conversion_extension => 'dbf');
END;

dbms_streams_tablespace_adm uses
dbms_file_transfer underneath
Example: Redistribute I/O
ALTER TABLESPACE ts_hotspot READ ONLY;
BEGIN
dbms_file_transfer.copy_file( source_directory_object => 'SOURCE_DIR',
source_file_name => 'hotspot.dbf',
destination_directory_object => 'DEST_DIR',
destination_file_name => 'hotspot.dbf');
END;
/
ALTER TABLESPACE ts_hotspot OFFLINE;
ALTER TABLESPACE ts_hotspot RENAME datafile
'/usr/oracle/databases/mid101so/datafiles/hotspot.dbf' TO '/tmp/hotspot.dbf';
ALTER TABLESPACE ts_hotspot ONLINE;
ALTER TABLESPACE ts_hotspot READ WRITE;
Detected the I/O hot spot (prior slide), changed the tablespace to read-only mode, copied the data file, renamed the data file, brought the tablespace back online …

NO MORE HOT SPOT.

All from PL/SQL!
Managing trc Files with trcsess
Does your user dump dest
look like this?
trcsess to the rescue
• trcsess consolidates data from multiple trace files – how useful!
• Java class embedded in a batch file
• Allows criteria to be specified
  – Client id, service, action, module
  – Can specify filenames with a wildcard (*.trc)
• Then it's back to regular tuning
New Contents in .trc

dbms_session.set_identifier('HERE IS THE CLIENTID');
dbms_application_info.set_module('DAVE_MODULE','DAVE_ACTION');

*** 2005-04-10 20:06:22.426
*** ACTION NAME:(DAVE_ACTION) 2005-04-10 20:06:22.396
*** MODULE NAME:(DAVE_MODULE) 2005-04-10 20:06:22.396
*** SERVICE NAME:(SYS$USERS) 2005-04-10 20:06:22.396
*** SESSION ID:(148.49932) 2005-04-10 20:06:22.386
*** CLIENT ID:(HERE IS THE CLIENTID) 2005-04-11 07:57:45.135
trcsess options

output=<output file name>   output destination; default is standard output
session=<session id>        session to be traced; a session id is a combination of session index & session serial number, e.g. 8.13
clientid=<clientid>         client id to be traced
service=<service name>      service to be traced
action=<action name>        action to be traced
module=<module name>        module to be traced
<trace_file_names>          space-separated list of trace files, with wildcard '*' supported
The following command …

$ trcsess output=aoug.trc module=SQL*Plus *.trc

says to consolidate all session information found for any session connecting with a module of SQL*Plus in any .trc file (in this directory) into aoug.trc
From this point on, it's SQL Tuning 101…

With tkprof
$ tkprof aoug.trc SQLPlusSessions.txt

Or Trace Analyzer
SQL> @trcanlzr.sql UDUMP aoug.trc
Using trcsess
• Use DBMS_MONITOR package
• Leverage trcsess in shared server
environment, connection pooling
• Set session variables
– From application
– Logon triggers
– Make them meaningful (IP address,
network user name, application)
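Enabling trace for a given service/module/action combination can be sketched with DBMS_MONITOR (the names are reused from the trace-file example earlier; the matching SERV_MOD_ACT_TRACE_DISABLE call turns it off):

```sql
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name => 'SYS$USERS',
    module_name  => 'DAVE_MODULE',
    action_name  => 'DAVE_ACTION');
END;
/
```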
Verifying data files with
DBMS_DBVERIFY
DBMS_DBVERIFY

• Simple utility that performs the same task as dbv.exe
• Executed from PL/SQL
• Output difficult to decipher – the package header didn't help
Old Way
. oraenv
wlogfile=dbv.${ORACLE_SID}
SQLPLUS=${ORACLE_HOME}/bin/sqlplus
$SQLPLUS -s system/manager >> $wlogfile <<EOF
set echo off feedback off verify off pages 0 termout off linesize 150
spool dbv.cmd
select 'dbv file=' || name || ' blocksize=' || block_size ||
       ' feedback=' || round(blocks*.10,0) -- 10 dots per file
  from v\$datafile;
spool off
set feedback on verify on pages 24 echo on termout on
EOF
ksh dbv.cmd
DBV output
DBVERIFY - Verification complete

Total Pages Examined         : 82400
Total Pages Processed (Data) : 81575
Total Pages Failing (Data)   : 0
Total Pages Processed (Index): 41
Total Pages Failing (Index)  : 0
Total Pages Processed (Other): 644
Total Pages Processed (Seg)  : 0
Total Pages Failing (Seg)    : 0
Total Pages Empty            : 140
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
DBMS_DBVERIFY Example
DECLARE
  vOutput VARCHAR2(4000) := '';
  vError  VARCHAR2(4000) := '';
  vStats  VARCHAR2(4000) := '';
BEGIN
  dbms_dbverify.dbv2('c:\oracle\product\10.1.0\oradata\orcl10g\users01.dbf',
                     1, 2, 8192, vOutput, vError, vStats);

  dbms_output.put_line('Output: ' || vOutput);
  dbms_output.put_line('Error: '  || vError);
  dbms_output.put_line('Stats: '  || vStats);
END;
/
DBMS_DBVERIFY Output
Output:
Error:
Stats: 0 0 0 0 0 0 0 0 2 2 0 0 0 0 0

PL/SQL procedure successfully completed.

These appear to match up with the dbv output!

Stay tuned for details
The plan …

• Provide a wrapper for the output
• Run a PL/SQL job that checks files regularly
• If anything abnormal, send e-mail with details
Others worth investigating

• DBMS_MONITOR
• DBMS_ADVANCED_REWRITE
• DBMS_DBUPGRADE
