Oracle 19c - Important Feature For DBA
Oracle Database 19c offers market-leading performance, scalability, reliability, and security, both on-premises and in the cloud. It is the final release of the Oracle Database 12c family of products (which includes Oracle Database 18c) and is therefore designated the Long Term Release, providing the highest level of release stability and the longest time frame for support and bug fixes: 4 years of Premier Support and a minimum of 3 years of Extended Support.
Oracle Database 19c Supported OS (Operating Systems)
New AutoUpgrade Tool
❖ The Oracle Database AutoUpgrade utility allows DBAs to upgrade one or many databases without human intervention
See MOS Note 2485457.1 for more details
Auto-Upgrade for Oracle Database
The Oracle Database AutoUpgrade utility is a small new command-line tool that allows you
to upgrade your databases in an unattended way. I call it the Hands-Free Upgrade. The idea of
the tool is to run the prechecks against multiple databases, fix 99% of the potential issues, set a
restore point in case something goes wrong, and then upgrade your databases. And of course,
it performs the postupgrade steps, recompilation, and time zone adjustment.
It includes automatic retry and fallback, the possibility to schedule upgrades for future points in
time, and the ability to set, change or remove initialization parameters as desired.
It saves time and money by upgrading hundreds of databases with one command and replacing
bespoke high maintenance upgrade solutions.
The only thing you need to provide is a config file in text format.
Note:
You get AutoUpgrade when you install Oracle Database 19c (19.3) or newer. However, the recommended source is the most recent version, which you can download from MOS Note 2485457.1.
Example of config file
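The original slide image is not reproduced here; a minimal config file might look like the following sketch, where all database names, paths, and version directories are placeholders for illustration:

```ini
# Global settings
global.autoupg_log_dir=/home/oracle/autoupgrade/logs

# One upgrade job, prefix "upg1"
upg1.source_home=/u01/app/oracle/product/12.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.sid=ORCL
upg1.start_time=NOW
upg1.restoration=yes
```

You would then run the prechecks with `java -jar autoupgrade.jar -config config.cfg -mode analyze`, and perform the actual upgrade with `-mode deploy`.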
Automatic Indexing
The automatic indexing feature automates the index management tasks in an Oracle database. Automatic indexing automatically
creates, rebuilds, and drops indexes in a database based on the changes in application workload, thus improving database
performance. The automatically managed indexes are also known as auto indexes.
Functionality
❖ Runs the automatic indexing process in the background periodically at a predefined time interval.
❖ The index management task is now dynamically performed by the database itself via a task which executes in the background
every 15 minutes
❖ Analyzes the application workload and accordingly creates new indexes and drops existing underperforming indexes to
improve database performance.
❖ Rebuilds the indexes that are marked unusable due to table partitioning maintenance operations, such as ALTER TABLE MOVE.
❖ Provides PL/SQL APIs for configuring automatic indexing in a database and generating reports related to automatic indexing
operations.
Note:
This feature is currently restricted to Enterprise Edition on Engineered Systems.
The on/off switch for automatic indexing is controlled using the AUTO_INDEX_MODE property, which has the
following allowed values:
• IMPLEMENT : Turns on automatic indexing. New indexes that improve performance are made visible and
available for use by the optimizer, so they can be used in SQL statements.
• REPORT ONLY : Turns on automatic indexing, but new indexes remain invisible.
• OFF : Turns off automatic indexing.
The automatic indexing process runs in background every 15 minutes and performs the following operations:
1. Identifies auto index candidates based on the usage of table columns in SQL statements.
2. Creates invisible auto indexes for the auto index candidates.
3. The invisible auto indexes are validated against SQL statements.
4. The auto indexes that are not used for a long period are deleted (373 days by default).
How to Configure?
❖ EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');
Disable automatic indexing in a database, so that no new auto indexes are created, and the existing auto indexes are disabled:
❖ EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');
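To see what the feature has actually done, the documented DBMS_AUTO_INDEX API can generate an activity report. A sketch covering the last 24 hours:

```sql
-- Text report of automatic indexing activity for the last day
SET LONG 1000000 PAGESIZE 0

SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY(
         activity_start => SYSTIMESTAMP - 1,
         activity_end   => SYSTIMESTAMP,
         type           => 'TEXT',
         section        => 'ALL',
         level          => 'TYPICAL')
FROM   dual;
```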
Real-Time Statistics
Oracle database 19c introduced real-time statistics to reduce the chances that stale statistics will adversely affect optimizer
decisions when generating execution plans.
Oracle database 12.1 introduced online statistics gathering for bulk loads. This feature allowed the database to gather a subset of
statistics during CTAS and some direct path insert operations. Real-time statistics take this further, allowing statistics to be
gathered as part of some conventional path operations.
• To minimize service disruption, we gather statistics at a quiet time, for example, in the middle of the night.
• We then use those statistics during the business day whilst user activity is at its highest.
• Highest user activity will typically mean the highest frequency of data changes.
• Hence the statistics are at their peak accuracy when no-one is using them to optimize queries, and they are
at their least accurate when everyone is using them to optimize queries!
Having great statistics that are not used, and poor statistics that are used all the time seems a nonsensical way to do things,
but there really hasn’t been any alternative unless you were prepared to consume precious server resources to collect
statistics whilst your users are performing their transactions. I have known customers to do this in the past, but of course, it
takes careful monitoring and management to ensure that the slicing up of the resource “pie” is done appropriately to keep
the right balance between business needs and background database tasks.
19c brings a nifty solution to this contradiction by having the database automatically maintain details about how data is
changing in a table via a feature called Real Time Statistics.
Enable Real-Time Statistics
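Real-time statistics are enabled by default on the platforms where the feature is available (Engineered Systems and Exadata Cloud Service) and are controlled by underscore parameters, so changes should only be made under Oracle Support guidance. A sketch, assuming a hypothetical SALES table, to verify whether real-time statistics are being collected:

```sql
-- Real-time statistics are flagged in the NOTES column of the stats views
SELECT table_name, num_rows, notes
FROM   user_tab_statistics
WHERE  table_name = 'SALES';   -- hypothetical table name

-- Rows with NOTES = 'STATS_ON_CONVENTIONAL_DML' were gathered in real time
```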
SQL Quarantine
Oracle Database 19c introduced an extension of Runaway Query Management called SQL Quarantine.
Cancelling a runaway query is helpful to prevent wasting system resources, but if that problem query is run repeatedly, it could
still result in a considerable amount of wasted resources. SQL Quarantine solves this problem by quarantining cancelled SQL
statements, so they can't be run multiple times.
With Oracle Resource Manager, we already had a way to limit and regulate the use of resources like CPU and I/O, as well as
the ability to prevent the execution of long-running queries that exceeded a defined threshold.
So we could cancel or terminate a SQL query that ran longer than a defined threshold of, say, 20 minutes.
All that was good, but nothing prevented that same query from being executed again and again, running for 20 minutes each
time before being terminated, and wasting 20 minutes of resources with every execution.
New in Oracle 19c is the new concept of SQL Quarantine where if a particular SQL statement exceeds the specified resource limit
(set via Oracle Resource Manager), then the Resource Manager terminates the execution of that statement and “quarantines”
the plan.
Example→
https://oracle-base.com/articles/19c/sql-quarantine-19c#prerequisites
Note:
This feature is only available on Engineered Systems such as Exadata and Exadata Cloud Service. There is a workaround for testing by enabling the
"_exadata_feature_on" initialization parameter. Don't use this on a real instance or you will be breaking your license agreement.
You can create a quarantine configuration for an execution plan of a SQL statement and specify quarantine thresholds in that
configuration. You can also perform other operations related to quarantine configurations using the DBMS_SQLQ package
subprograms.
At a very high level, this is what you need to set up SQL Quarantine:
• Create a resource plan that limits SQL execution time to a specific number of seconds. SQL statements that exceed this limit (run longer than X seconds) are terminated.
• Run a SQL statement that exceeds this limit.
• Quarantine the SQL statement using the DBMS_SQLQ package. You can check the DBA_SQL_QUARANTINE view to see quarantined SQL statements.
• Run the SQL statement again. If the statement runs with the same execution plan, it is terminated right away with: ORA-56955 quarantined plan used.
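A sketch of the DBMS_SQLQ calls involved, assuming a problem statement with SQL_ID 'abcd1234efgh5' (a placeholder):

```sql
DECLARE
  l_quarantine_name VARCHAR2(128);
BEGIN
  -- Create a quarantine configuration for the statement's plans
  l_quarantine_name := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(
                         sql_id => 'abcd1234efgh5');

  -- Quarantine executions expected to exceed 20 seconds of elapsed time
  DBMS_SQLQ.ALTER_QUARANTINE(
    quarantine_name => l_quarantine_name,
    parameter_name  => 'ELAPSED_TIME',
    parameter_value => '20');
END;
/

-- Review quarantined statements
SELECT name, sql_text, elapsed_time FROM dba_sql_quarantine;
```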
Automatic Database Diagnostic Monitor (ADDM) Support for Pluggable Databases (PDBs)
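In 19c, ADDM analysis can be run inside a PDB once AWR snapshots are being taken at the PDB level. A minimal sketch of one common approach (the parameter and package are documented; adapt to your environment):

```sql
-- Inside the PDB: allow automatic AWR snapshots at the PDB level
ALTER SYSTEM SET awr_pdb_autoflush_enabled = TRUE;

-- Take a manual PDB-level snapshot; ADDM can then analyze
-- the interval between two such snapshots as usual
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
```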
Active Data Guard DML Redirection
You can run DML operations on Active Data Guard standby databases. This enables you to run read-mostly applications, which
occasionally execute DML, on the standby database.
DML operations on a standby can be transparently redirected to and run on the primary database. This includes DML
statements that are part of PL/SQL blocks. The Active Data Guard session waits until the corresponding changes are shipped to
and applied to the Active Data Guard standby. Read consistency is maintained during the DML operation and the standby
database on which the DML is run can view its uncommitted changes. However, all the other standby database instances can
view these changes only after the transaction is committed.
How to configure?
To configure automatic redirection of DML operations for all standby sessions in an Active Data Guard environment, set the initialization parameter:
❖ ADG_REDIRECT_DML=TRUE
To configure automatic redirection of DML operations for the current session, use the following command:
❖ ALTER SESSION ENABLE ADG_REDIRECT_DML;
Automatic Flashback of a Standby After a Primary RESETLOGS
Starting with Oracle 19c, a standby database that is in a mounted state can automatically follow the primary
database after a RESETLOGS operation on the primary. This simplifies standby management after a RESETLOGS
operation on the primary.
• The process of flashing back a physical standby to a point in time that was captured on the primary is
simplified by automatically replicating restore points from the primary to the standby.
• These restore points are called Replicated Restore Points; their names are the primary restore point name
suffixed with "_PRIMARY".
• Irrespective of whether a restore point on the primary database is a guaranteed restore point or a normal
restore point, the corresponding replicated restore point is always a normal restore point.
• Restore points are replicated only when the COMPATIBLE initialization parameter for both the primary
database and the standby database is set to 19.0.0 or higher, and the primary database is open.
• A restore point that is created on a primary database while the primary is in mount mode is not replicated.
This restriction exists because the restore point information is replicated through the redo.
• When you delete a restore point on the primary, the corresponding replicated restore point on the
standby is also deleted.
• The managed redo process (MRP) manages the creation and maintenance of replicated restore
points. If restore points are created on the primary database when MRP is not running, then these
restore points are replicated to the standby database after MRP is started.
Clear Flashback Logs Periodically for Increased Fast Recovery Area (FRA) Size
DBAs cannot manage the flashback logs in the fast recovery area directly, other than by setting the flashback retention target or using
guaranteed restore points. Nevertheless, you can manage fast recovery area space as a whole to maximize the space available for retention
of flashback logs. Previously, the only quick way to purge the flashback logs was to turn Flashback off and then on again.
Starting with Oracle Database Release 19c, Oracle Database monitors flashback logs in the fast recovery area and automatically deletes
flashback logs that are beyond the retention period (rather than waiting for an FRA space crunch). When the retention target is reduced, flashback logs that
are beyond the retention period are deleted immediately.
Standby Duplication Using Compression
In 19c you can duplicate a standby database from an active primary database using the COMPRESSION feature, which sends the
database blocks over the network in a compressed format, saving both bandwidth and time.
Command example:
run {
  allocate channel disk1 type disk;
  allocate auxiliary channel aux1 type disk;
  duplicate target database for standby from active database USING COMPRESSED BACKUPSET
    spfile
    parameter_value_convert 'orcl','orcls'
    set db_name='orcl'
    set db_unique_name='orcls';
}
New features in DBCA in the silent mode
• Ability to Create a PDB by Cloning a Remote PDB Using DBCA in Silent Mode
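A sketch of the silent-mode invocation, with all database names, hosts, and credentials as placeholders (verify the exact flags against your 19c DBCA release):

```shell
# Clone a remote PDB into the local CDB using DBCA in silent mode
dbca -silent -createPluggableDatabase \
  -createFromRemotePDB \
  -remotePDBName SRCPDB \
  -remoteDBConnString "srchost:1521/srccdb" \
  -remoteDBSYSDBAUserName SYS \
  -remoteDBSYSDBAUserPassword password \
  -sysDBAUserName SYS \
  -sysDBAPassword password \
  -pdbName CLONEPDB \
  -sourceDB LOCALCDB
```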
New Data Pump features: resource usage restrictions and test mode
Data Pump export and import resource usage can now be restricted. This is achieved
with two new initialization parameters, MAX_DATAPUMP_JOBS_PER_PDB and MAX_DATAPUMP_PARALLEL_PER_JOB.
Oracle 19c also adds a test mode for Transportable Tablespaces (TTS), which performs a metadata-only export test using TTS or full
transportable export or import:
• You can more easily determine how long an export takes, and discover unforeseen issues not reported by the
closure check.
• It also removes the requirement for the source database tablespaces to be in read-only mode.
• The resulting export dump file is not available for use by Data Pump Import.
Oracle Network Log File Segmentation
One annoying thing that happens from time to time on a database listener, especially with
client/server applications, is that it starts rejecting some connections when the listener.log file grows
larger than 2 GB. In versions 11g through 18c, you can only monitor this using a custom
shell script or scheduled jobs.
In Oracle 19c, Oracle has introduced two new parameters for listener administration. We can configure log rotation within the
Oracle scope, without any additional OS-side configuration or shell scripts.
LOG_FILE_NUM_<listener_name> ->
The LOG_FILE_NUM_listener_name diagnostic parameter of the listener.ora file specifies the number of log file
segments. If you don't define this parameter, the feature is disabled and the file grows indefinitely.
LOG_FILE_SIZE_<listener_name> ->
The LOG_FILE_SIZE_listener_name diagnostic parameter of the listener.ora file specifies the size of each log file segment. This
parameter is in MB and the default is 300 MB.
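For a listener named LISTENER, the listener.ora entries might look like this (segment count and size are illustrative values):

```
LOG_FILE_NUM_LISTENER = 3
LOG_FILE_SIZE_LISTENER = 100
```

With these settings, the listener keeps at most three log segments of 100 MB each, rotating automatically.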
Passwords Removed from Oracle Database Schema/Accounts
What are schema-only accounts? These are users (schemas) in the database that have no password.
Since they have no password, these schemas cannot connect directly to the database (this is the whole purpose of a schema-only
account).
Without the ability to connect directly to schema-only accounts, the schema itself is more secure. In general, you don't
want users or applications connecting directly with the schema account.
No more workarounds are needed to properly secure the objects and data of a schema.
• Most of the Oracle Database supplied schema-only accounts now have their passwords removed to prevent users from
authenticating to these accounts.
• This enhancement does not affect the sample schemas. Sample schemas are still installed with their default passwords.
• Administrators can still assign passwords to the default schema-only accounts. Oracle recommends changing the schemas
back to a schema-only account afterward.
• The benefit of this feature is that administrators no longer have to periodically rotate the passwords for these Oracle
Database provided schemas. This feature also reduces the security risk of attackers using default passwords to hack into
these accounts.
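A schema-only account is created with the NO AUTHENTICATION clause; an existing account can also be converted back and forth (all names here are illustrative):

```sql
-- Create a schema-only account: it owns objects but cannot log in
CREATE USER app_owner NO AUTHENTICATION
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;

-- Temporarily assign a password, e.g. for maintenance ...
ALTER USER app_owner IDENTIFIED BY "TempPassword#123";

-- ... then convert it back to schema-only, as Oracle recommends
ALTER USER app_owner NO AUTHENTICATION;
```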
Privilege Analysis Features
• Privilege Analysis (PA), dynamically analyzes privilege and role usage for database users and application service accounts at
run time.
• PA helps to further harden the application by identifying unused privileges and roles based upon the actual usage of the
roles and privileges by a user or from within the application.
Understanding the set of unused roles and privileges is key to identifying the least number of privileges the application needs to
run. Reports generated by PA are rich and reflect the actual privileges and roles used and unused by users and applications
during runtime.
Static role/privilege analysis tools can only show which roles and privileges are granted to users. Understanding the actual
usage of roles and privileges is essential to implementing a least-privilege model for all database accounts, and it reduces your
application attack surface.
❖ Create: DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE
❖ Enable: DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE
❖ Wait: leave the policy enabled for a period of time: hours, days, weeks, months
❖ Disable: DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE
❖ Generate reports: DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT
❖ Take action wherever appropriate
❖ Drop: DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE
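The steps above can be sketched as a database-wide capture using the documented DBMS_PRIVILEGE_CAPTURE API (the capture name is a placeholder):

```sql
-- 1. Create a database-wide privilege capture policy
BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'db_wide_capture',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
END;
/

-- 2. Enable it and let the workload run for a representative period
EXEC DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('db_wide_capture');

-- 3. Disable the capture and generate the results
EXEC DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('db_wide_capture');
EXEC DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('db_wide_capture');

-- 4. Review used privileges (see also DBA_UNUSED_PRIVS)
SELECT username, used_role
FROM   dba_used_privs
WHERE  capture = 'db_wide_capture';
```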
EASY CONNECT PLUS - EZCONNECT
Oracle Database 19c introduces a significantly enhanced connection string syntax called Easy Connect Plus.
By reducing the need for external configuration files, Easy Connect Plus makes it easier to use features such as TLS
connections, wallets, load balancing, connection timeouts, and tuned network buffer sizes. With this feature there is no
need to have tnsnames.ora or sqlnet.ora files present, or to set TNS_ADMIN.
Easy Connect Plus can be used in JDBC and .NET applications. It is also usable in applications and drivers built on the C
language Oracle Call Interface (OCI) such as the Oracle Database drivers for Python, Node.js, PHP, Ruby, and Go.
Easy Connect background
An Easy Connect string includes a host name, an optional port, and the database service name:
database_host[:port][/service_name]
Examples
1. Using protocol
tnsping tcp://joords1:1521/sales.us.example.com
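Easy Connect Plus also accepts name-value parameters after a `?` delimiter. A couple of illustrative strings (hosts, ports, and service names are placeholders):

```
# TCPS (TLS) with a wallet location supplied inline
sqlplus admin@"tcps://salesserver:1522/sales.us.example.com?wallet_location=/etc/oracle/wallet"

# Multiple hosts for failover, plus a connect timeout in seconds
sqlplus scott@"tcp://salesserver1:1521,salesserver2:1521/sales.us.example.com?connect_timeout=60"
```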
Hybrid Partitioned Tables
In a hybrid partitioned table, some partitions exist in the database and some partitions are hosted external to the database,
perhaps on a normal file system, ACFS, or even the Hadoop Distributed File System.
• Using this feature we can move older, non-active partitions of a table to cheaper storage hosted outside the
database, while the active data is hosted in partitions stored inside the Oracle database.
• External partitions may reside on standard file systems, HDFS, or Object Storage (coming soon).
• This model supports the placement of data for long-term archiving on lower-cost storage while allowing its continued online
access.
• It supports long-term access to infrequently accessed data for regulatory requirements.
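A sketch of the DDL, with directory, file, table, and column names as placeholders:

```sql
-- Directory object pointing at the external storage for archived partitions
CREATE DIRECTORY sales_archive_dir AS '/archive/sales';

CREATE TABLE sales_hybrid (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
EXTERNAL PARTITION ATTRIBUTES (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY sales_archive_dir
  ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
)
PARTITION BY RANGE (sale_date) (
  -- Old, read-only data lives outside the database
  PARTITION p_2018 VALUES LESS THAN (DATE '2019-01-01')
    EXTERNAL LOCATION ('sales_2018.csv'),
  -- Current data lives in ordinary internal partitions
  PARTITION p_2019 VALUES LESS THAN (DATE '2020-01-01')
);
```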
Finer granularity Supplemental Logging
Supplemental logging was designed and implemented for Logical Standby and full database replication requirements. This adds
unnecessary overhead in environments where only a subset of tables is being replicated. Fine-grained supplemental logging
provides a way for partial database replication users to disable supplemental logging for uninteresting tables, so that even
when supplemental logging is enabled at the database or schema level, there is no supplemental logging overhead for
uninteresting tables.
Use of this feature can significantly reduce the overhead in terms of resource usage and redo generation when only
some of the tables in the database require supplemental logging, such as in an Oracle GoldenGate partial replication configuration.
If both the source and mining databases are at redo compatibility 19 or higher and ENABLE_GOLDENGATE_REPLICATION is
TRUE, then Oracle Database 19c and higher provides the advantage of reduced supplemental logging overhead for Oracle
GoldenGate.
How to enable:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;
Database altered.

SQL> SELECT supplemental_log_data_min, supplemental_log_data_sr FROM v$database;

SUPPLEMENTAL_LOG_DATA_MIN SUPPLEMENTAL_LOG_DATA_SR
------------------------- ------------------------
IMPLICIT                  YES
HIGH-FREQUENCY STATISTICS
The automatic optimizer statistics collection job, which calls the DBMS_STATS package, runs in predefined maintenance windows,
which open once a day; during these windows, various jobs, including the gathering of statistics, are performed.
For volatile tables, statistics can go stale between two consecutive executions of the automatic statistics collection job. The
presence of stale statistics can cause performance problems because the optimizer may choose sub-optimal execution
plans.
The new feature introduced in Oracle 19c called High-Frequency Automatic Optimizer Statistics Collection complements the
standard automatic statistics collection job.
By default, the high-frequency statistics collection occurs every 15 minutes and as such there is less possibility of having stale
statistics even for those tables where data is changing continuously.
The DBMS_STATS.SET_GLOBAL_PREFS procedure is used to enable and disable the high-frequency statistics gathering task, as well as
change the execution interval (default 15 minutes) and the maximum run time (default 60 minutes):
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300');
Root Scripts Automation Support for Oracle Database Installation
Starting with Oracle Database 19c, the database installer, or setup wizard, provides options to set up permissions
to run the root configuration scripts automatically, as required, during a database installation.
You continue to have the option to run the root configuration scripts manually.
Setting up permissions for root configuration scripts to run without user intervention can simplify database
installation and help avoid inadvertent permission errors.
Zero-Downtime patching Oracle Grid Infrastructure 19c on Linux
In the past, when we were patching Grid Infrastructure (GI) in rolling mode, the database instances had to be shut down on
the node being patched.
Starting with Oracle Database 19c Release Update (RU) 19.8, Oracle announced that Oracle RAC database instances can keep
running and accessible to the database users during Oracle Grid Infrastructure patching.
Zero-Downtime Oracle Grid Infrastructure Patching enables patching of Oracle Grid Infrastructure without interrupting database
operations. Patches are applied out-of-place and in a rolling fashion, with one node being patched at a time, while the
database instances on the node remain operational. Zero-Downtime Oracle Grid Infrastructure Patching supports Oracle Real
Application Clusters (Oracle RAC) databases on clusters with two or more nodes. It
significantly increases database availability by allowing customers to perform a rolling patch of Oracle Grid Infrastructure
without interrupting database operations on the node being patched and without impacting capacity or performance on those
database instances.
New ALTER SYSTEM Clause FLUSH PASSWORDFILE_METADATA_CACHE
• The ALTER SYSTEM clause FLUSH PASSWORDFILE_METADATA_CACHE refreshes the metadata cache with the latest
details of the database password file.
• Querying the V$PASSWORDFILE_INFO view retrieves the latest details of the database password file.
• This functionality is useful when the database password file name or location is changed, and the metadata cache
needs to be refreshed with the details of the updated database password file.
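Usage is a single statement, followed by a check of the password file metadata:

```sql
-- Refresh the password file metadata cache after the file is moved or renamed
ALTER SYSTEM FLUSH PASSWORDFILE_METADATA_CACHE;

-- Verify the latest password file details
SELECT * FROM v$passwordfile_info;
```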
What is Desupported in Oracle 19c
• Oracle Multimedia is desupported. Oracle recommends storing multimedia content in SecureFiles LOBs
and using third-party products for image processing and conversion.
• The Oracle Streams feature is desupported. Oracle recommends using Oracle GoldenGate to replace
all replication features of Oracle Streams.
• DBMS_JOB jobs are converted to DBMS_SCHEDULER jobs in Oracle Database 19c. DBMS_JOB has been
deprecated since 12.2.
Q&A
Thank You