Oracle 10g Utilities
Session Objectives
• Utility discovery
• Utilities detailed
– Data Pump (expdp, impdp, dbms_datapump)
– dbms_file_transfer
– trcsess
What is a utility?
GENCURSORMANAGERINTERFACE LT_CTX_PKG
GENDATABASEINTERFACE LT_EXPORT_PKG
GENDATAPROVIDERINTERFACE METADATAPROVIDERINTERFACE
GENDATATYPEIDCONSTANTS
GENDEFINITIONMANAGERINTERFACE
GENFUNCTIONIDCONSTANTS
GENINTERRUPTABLEINTERFACE
Packages – OS Comparison
[Bar chart: count of PL/SQL packages vs. OS utilities across Oracle releases 7.3.4, 8.1.7, 9.2.0, and 10.1; the PL/SQL package count climbs toward 500 while the OS utility count stays roughly flat]
Why Emphasis on PL/SQL?
• Platform independence
• Speed for DB processing
• Mature Dev environment
Moving Data With Data Pump
Data Pump
DUMPFILE=myexport.dmp
DIRECTORY=DATA_PUMP_DIR
JOB_NAME=MY_EXPORT_JOB
SCHEMAS=DAVE
INCLUDE=PROCEDURE
INCLUDE=VIEW
INCLUDE=TABLE:"LIKE 'DAVE_TEST%'"
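The parameters above can be collected in a parameter file and handed to expdp in one go; a minimal sketch, where the file name and credentials are placeholders:

```shell
# Parameters above saved as my_export.par (hypothetical file name)
expdp dave/password PARFILE=my_export.par
```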
expdp options mapped to their legacy exp equivalents
dumpfile = file
logfile = log
include = grants, indexes, constraints
status = feedback
impdp consistently consumed more CPU than imp (7.7% vs. 1.3%)
Test Results
• expdp is faster than exp
• impdp is WAY faster than imp
• Size of the export file is nearly the same (exp vs. expdp)
• Dump file produced by exp cannot be used by impdp
• expdp dump file is platform independent
DBMS_DATAPUMP
• Same capability from within PL/SQL
– Uses the same Oracle background processes
– Leverage it from Oracle jobs
• Less documentation = harder to use
Step 1 – Open a Data Pump job
DECLARE
  h1 NUMBER; -- handle for the Data Pump job
BEGIN
  h1 := DBMS_DATAPUMP.OPEN(
    OPERATION   => 'EXPORT',
    JOB_MODE    => 'SCHEMA',
    REMOTE_LINK => NULL,
    JOB_NAME    => 'DAVE_TEST',
    VERSION     => 'LATEST');
Step 2 – Specify export file
DBMS_DATAPUMP.ADD_FILE(
HANDLE => h1,
FILENAME => 'twomillionrows.dmp',
DIRECTORY => 'DATA_PUMP_DIR',
FILESIZE => NULL,
FILETYPE => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
Step 3 – Specify log file
DBMS_DATAPUMP.ADD_FILE(
HANDLE => h1,
FILENAME => 'example1.log',
DIRECTORY => 'DATA_PUMP_DIR',
FILESIZE => NULL,
FILETYPE => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
Step 4 – Specify Filter
DBMS_DATAPUMP.METADATA_FILTER(
HANDLE => h1,
NAME => 'SCHEMA_EXPR',
VALUE => 'IN (''DAVE'')',
OBJECT_TYPE => NULL);
DBMS_DATAPUMP.METADATA_FILTER(
HANDLE => h1,
NAME => 'NAME_EXPR',
VALUE => 'IN (''TWO_MILLION_ROWS'')',
OBJECT_TYPE => 'TABLE');
Step 5 – Let it roll
-- Proceed: start (or restart, if it was stopped) the job
DBMS_DATAPUMP.START_JOB(HANDLE => h1);
END;
/
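After starting the job, the calling block can optionally wait for it to finish rather than returning immediately. A sketch, assuming h1 is the handle opened in Step 1 and job_state is a local variable you declare yourself:

```sql
-- Block until the export completes; job_state is a hypothetical local
DECLARE
  job_state VARCHAR2(30);
BEGIN
  DBMS_DATAPUMP.WAIT_FOR_JOB(HANDLE => h1, JOB_STATE => job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/
```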
When using DB Links
• DBA_DATAPUMP_JOBS
• DBA_DATAPUMP_SESSIONS
• V$DATAPUMP_JOB
• V$DATAPUMP_SESSION
• SCHEMA.<job name>
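A running job can be watched from a second session through these views; a minimal sketch:

```sql
-- Which Data Pump jobs exist, and their current state
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;
```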
Limited support in PL/SQL
ERROR at line 1:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2486
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2718
ORA-06512: at line 11
The real cause of the error was that the export file already existed in the target directory. Obvious?
Copying files with DBMS_FILE_TRANSFER
DBMS_FILE_TRANSFER
• Provides a way to copy data files without any OS credentials (copying is restricted to files reachable through Oracle directory objects)
• Requires the following DB privileges
– EXECUTE on dbms_file_transfer
– READ on directory source_dir
– WRITE on directory dest_dir
• Can copy local files (data files or Data Pump dumps) to a remote server, or vice versa
• Created mainly for Transportable Tablespaces
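The privileges above are granted in the usual way; a sketch, where the grantee and directory names are placeholders:

```sql
-- Hypothetical grantee and directory object names
GRANT EXECUTE ON dbms_file_transfer TO dave;
GRANT READ  ON DIRECTORY source_dir TO dave;
GRANT WRITE ON DIRECTORY dest_dir   TO dave;
```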
The Three Procedures: COPY_FILE, GET_FILE, PUT_FILE
BEGIN
  DBMS_STREAMS_TABLESPACE_ADM.PULL_SIMPLE_TABLESPACE(
    tablespace_name      => 'TS_ORDERS',
    database_link        => 'MID101SO',
    directory_object     => 'DEST_DIR',
    conversion_extension => 'dbf');
END;
/
dbms_streams_tablespace_adm uses dbms_file_transfer underneath
Example: Redistribute I/O
ALTER TABLESPACE ts_hotspot READ ONLY;
BEGIN
  dbms_file_transfer.copy_file(
    source_directory_object      => 'SOURCE_DIR',
    source_file_name             => 'hotspot.dbf',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'hotspot.dbf');
END;
/
ALTER TABLESPACE ts_hotspot OFFLINE;
ALTER TABLESPACE ts_hotspot RENAME DATAFILE
  '/usr/oracle/databases/mid101so/datafiles/hotspot.dbf' TO '/tmp/hotspot.dbf';
ALTER TABLESPACE ts_hotspot ONLINE;
ALTER TABLESPACE ts_hotspot READ WRITE;
Detected the I/O hot spot beforehand, changed the tablespace to read-only mode, copied the data file, renamed the data file to its new location, then put the tablespace back online and read-write …
dbms_application_info.set_module('DAVE_MODULE','DAVE_ACTION');
With tkprof
$ tkprof aoug.trc SQLPLusSessions.txt
Or Trace Analyzer
$ trcanlzr.sql UDUMP aoug.trc
Using trcsess
• Use the DBMS_MONITOR package
• Leverage trcsess in shared server environments and with connection pooling
• Set session variables
– From the application
– Logon triggers
– Make them meaningful (IP address, network user name, application)
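Putting these pieces together: enable SQL trace for one service/module/action combination via DBMS_MONITOR, then stitch the scattered per-process trace files back into one with trcsess. A sketch; the service name is a placeholder, and the module/action names echo the set_module call shown earlier:

```sql
-- Enable SQL trace for a service/module/action combination
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name => 'MYSERVICE',   -- hypothetical service name
    module_name  => 'DAVE_MODULE',
    action_name  => 'DAVE_ACTION');
END;
/
-- Then, at the OS prompt, combine the trace files and profile the result:
--   trcsess output=combined.trc service=MYSERVICE module=DAVE_MODULE action=DAVE_ACTION *.trc
--   tkprof combined.trc combined.txt
```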
Verifying data files with DBMS_DBVERIFY
DBMS_DBVERIFY
• DBMS_MONITOR
• DBMS_ADVANCED_REWRITE
• DBMS_DBUPGRADE