Redo Transport Services
Data Guard Redo Transport Services coordinate the transmission of redo from a primary database
to the standby database. At the same time that the primary database LGWR process is writing redo
to its ORL, a separate Data Guard process called the Log Network Server (LNS) is reading from
the redo buffer in the SGA and passing redo to Oracle Net Services for transmission to the standby
database.
Data Guard’s flexible architecture allows a primary database to transmit redo directly to a
maximum of nine standby databases. Data Guard is also well integrated with Oracle RAC. An
Oracle RAC database has two or more servers (nodes), each running its own Oracle instance and
all having shared access to the same Oracle database. The primary, the standby, or both can
be Oracle RAC databases. Each primary instance that is active generates its own thread of redo
and has its own LNS process to transmit redo to the standby database.
Redo records transmitted by the LNS are received at the standby database by another Data
Guard process called the Remote File Server (RFS). The RFS receives the redo at the standby
database and writes it to a sequential file called a standby redo log file (SRL). In a multi-standby
configuration, the primary database has a separate LNS process that manages redo transmissions
for each standby database. In a configuration with three standby databases, for example, three
LNS processes are active on each primary database instance.
Data Guard supports two redo transport methods using the LNS process: synchronous or
asynchronous. A high-level overview of the redo transport architecture is provided in Figure 1-2.
Synchronous Redo Transport
Synchronous transport (SYNC) is also referred to as a “zero data loss” method because the LGWR
is not allowed to acknowledge that a commit has succeeded until the LNS can confirm that the redo
needed to recover the transaction has been written to disk at the standby site.
Data Guard Protection Modes
Many DBAs are interested in the superior data protection of Data Guard SYNC redo transport, but
they are often concerned that the primary database may hang indefinitely if it does not receive
acknowledgment from its standby database because the standby is unavailable or the network is
down. The last thing that most DBAs want to report to their customers is that
while the primary database is completely healthy, it is refusing to process any more transactions
until it can guarantee that data is protected by a standby database. Then again, perhaps you have
a different set of requirements and you must absolutely guarantee that data is protected even at
the expense of primary database availability. Both of these use cases can utilize SYNC transport to
provide zero data loss protection, but the two cases require a different response to a network or
standby failure. Data Guard protection modes implement rules that govern how the configuration
will respond to failures, enabling you to achieve your specific objectives for data protection,
availability, and performance. Data Guard can support multiple standby databases in a single
Data Guard configuration, and they may all have the same, or different, protection mode setting,
depending on your requirements. The different Data Guard protection modes are Maximum
Performance, Maximum Availability, and Maximum Protection.
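The protection mode is set on the primary database with a single ALTER DATABASE statement. A minimal sketch, assuming the corresponding redo transport destination for the chosen mode is already configured:

```sql
-- Set the protection mode on the primary database.
-- MAXIMIZE AVAILABILITY and MAXIMIZE PROTECTION require a SYNC
-- redo transport destination to be configured first.
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
```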
Maximum Performance
This mode emphasizes primary database performance over data protection. It requires ASYNC
redo transport so that the LGWR process never waits for acknowledgment from the standby
database. Primary database performance and availability are not impacted by redo transport, by
the status of the network connection between primary and standby, or by the availability of the
standby database. As discussed earlier in this chapter, ASYNC enhancements in Data Guard 11g
have made it the default redo transport method for Maximum Performance. Oracle no longer
recommends the ARCH transport for Maximum Performance in Data Guard 11g given that it
provides a lower level of data protection with no performance advantage compared to ASYNC.
Maximum Availability
This mode emphasizes availability as its first priority and zero data loss protection as a very close
second priority. It requires SYNC redo transport, thus primary database performance may be
impacted by the amount of time required to receive an acknowledgment from the standby that
redo has been written to disk. SYNC transport, however, guarantees zero data loss should the
primary database fail during normal, synchronized operation.
However, events that have no impact on the availability of the primary database can impact
its ability to transmit redo to the standby. For example, a network or standby database failure will
make it impossible to transmit to the standby database, yet the primary database is still capable of
accepting new transactions. A primary database configured for Maximum Availability will wait a
maximum of NET_TIMEOUT seconds (a user configurable parameter which is discussed more
completely in Chapter 2) before giving up on the standby destination and allowing primary
database processing to proceed even though it can no longer communicate with the standby. This
prevents a failure in communication between the primary and standby databases from impacting
the availability of the primary database.
Data Guard will automatically resynchronize the standby database once the primary is able to
re-establish a connection to the standby (utilizing the gap resolution process described earlier in this
chapter). Specifically, once NET_TIMEOUT seconds expire, the LGWR process disconnects from the
LNS process, acknowledges the commit, and proceeds without the standby. Processing continues
until the current ORL is complete and the LGWR cycles into a new ORL. As the new ORL is
opened, the LGWR will terminate the previous LNS process, if necessary, and start a new LNS
process that will attempt to make a new connection to the standby database. If it succeeds, the
contents of the new ORL will be sent as usual. If the LNS does not succeed within NET_TIMEOUT
seconds, the LGWR continues as before, acknowledges the current commit, and proceeds without
the standby. This process is repeated at each log switch until LNS succeeds in making a connection
to the standby database. (How soon the LGWR retries a failed standby can be tuned using the
REOPEN attribute, which is discussed in Chapter 2.)
Meanwhile, the primary database has archived one or more ORLs that have not been
completely transmitted to the standby database. A Data Guard ARCH process continuously pings
the standby database until it can again make contact and determine which archive logs are
incomplete or missing at the standby. With this knowledge in-hand, Data Guard immediately
begins transmitting any log files needed to resynchronize the standby database. Once the ping
process makes contact with the standby, Data Guard will also force a log switch on the primary
database. This closes off the current online log file and initiates a new LNS connection to
immediately begin shipping current redo, preventing redo transport from falling any further
behind while gap resynchronization is in progress. The potential for data loss during this process
exists only if another failure impacts the primary database before the automatic resynchronization
process is complete.
Maximum Protection
As its name implies, this mode places utmost priority on data protection. It also requires SYNC
redo transport. The primary will not acknowledge a commit to the application unless it receives
acknowledgment from at least one standby database in the configuration that the data needed to
recover that transaction is safely on disk. It has the same impact on primary database performance
as Maximum Availability, except that it does not consider the NET_TIMEOUT parameter. If the
primary does not receive acknowledgment from a SYNC standby database, it will stall and
eventually abort, preventing any unprotected commits from occurring. This behavior guarantees
complete data protection even in the case of multiple failure events (for example, first the network
drops, and later the primary site fails). Note that most users who implement Maximum Protection
configure a minimum of two SYNC standby databases at different locations, so that failure of an
individual standby database does not impact the availability of the primary database.
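You can observe this behavior from the primary by comparing the configured mode against the protection level currently in effect, for example:

```sql
-- PROTECTION_MODE is what you configured; PROTECTION_LEVEL is what
-- Data Guard can actually guarantee right now (it drops, for example
-- to RESYNCHRONIZATION, while a standby is being caught back up).
SELECT protection_mode, protection_level FROM v$database;
```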
Data Guard documentation uses the term switchover to describe a planned role transition, usually for the
purpose of minimizing downtime during planned maintenance activities. The term failover is used
to describe a role transition in response to unplanned events.
Switchover
Switchover is a planned event in which Data Guard reverses the roles of the primary and a
standby database. Switchover is particularly useful for minimizing downtime during planned
maintenance. The most obvious case is when migrating to new Oracle Database releases or
patchsets using a rolling database upgrade. A Data Guard switchover also minimizes downtime
when migrating to new storage (including Exadata storage8), migrating volume managers (for
example, moving to Oracle Automatic Storage Management), migrating from single instance to
Oracle RAC, performing a technology refresh, performing operating system or hardware
maintenance, and even relocating data centers. The switchover command executes the following steps:
1. Notifies the primary database that a switchover is about to occur.
2. Disconnects all users from the primary.
3. Generates a special redo record that signals the End Of Redo (EOR).
4. Converts the primary database into a standby database.
5. Once the standby database applies the final EOR record, guaranteeing that no data has
been lost, converts the standby to the primary role.
The new primary automatically begins transmitting redo to all other standby databases in the
configuration. The transition in a multi-standby configuration is orderly: because each standby
received the identical EOR record transmitted by the original primary, each knows that the next
redo received will be from the database that has just become the new primary database.
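For a physical standby managed with SQL*Plus, the steps above boil down to one command on each side. A sketch (the Data Guard broker performs the equivalent with a single SWITCHOVER command):

```sql
-- On the primary: convert it to a physical standby.
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY
  WITH SESSION SHUTDOWN;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

-- On the old standby, after it has applied the EOR record:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;
```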
The basic principle for using switchover to reduce downtime during planned maintenance is
usually the same. The primary database runs unaffected while you implement the required
changes on your standby database (e.g., patchset upgrades, full Oracle version upgrades, and so on).
Once complete, production is switched over to the standby site running at the new release. In the
case of a data center move, you simply create a standby database in the new data center and
move production to that database using a switchover operation.
Alternatively, before performing maintenance that will impact the availability of the primary site,
you can first switch production to the standby site so that applications remain available the entire
time that site maintenance is being performed. Once the work is complete Data Guard will
resynchronize both databases and enable you to switch production back to the original primary site.
Regardless of how much time is required to perform planned maintenance, the only production
database downtime is the time required to execute a switchover—a task that can be completed in
less than 60 seconds as documented by Oracle best practices9, and in as fast as 5 seconds as
documented in collaborative validation testing performed more recently by Oracle Japan and IBM.10
Switchover operations become even more valuable given Oracle’s increasing support for
different primary/standby systems in the same Data Guard configuration. For example, Oracle
Database 11g can support a Windows primary and Linux standby, or a 32-bit Oracle primary and
a 64-bit Oracle standby, and other select mixed configurations.11 This makes it very easy to
migrate between supported platform combinations with very little risk simply by creating a
standby database on the new platform and then switching over. In most cases, you are able to
minimize your risk even more by continuing to keep the old database on the previous platform
synchronized with the new. If an unanticipated problem occurs and you need to fall back to the
previous platform, you can simply execute a second switchover and no data is lost.
Failover
Failover is the term used to describe role transitions due to unplanned events. The process is
similar to switchover except that the primary database never has the chance to write an EOR
record. From the perspective of the standby database, redo transport has suddenly gone dormant.
The standby database faithfully applies the redo from the last committed transaction that it has
received and waits for redo transport to resume. At this point, whether or not a failover results in
data loss depends upon the Data Guard protection mode in effect at the time of failure. There
will never be any data loss in Maximum Protection. There will be zero data loss in Maximum
Availability, except when a previous failure (e.g. a network failure) had interrupted redo transport
and allowed the primary database to diverge from the standby database. Any committed
transactions that have not been transmitted to the standby will be lost if a second failure destroys
the primary database. Similarly, configurations using Maximum Performance (ASYNC) will lose
any committed transactions that were not transmitted to the standby database before the primary
database failed.
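For a physical standby managed manually with SQL*Plus, a failover follows a sequence along these lines; a sketch (the broker automates this with its FAILOVER command):

```sql
-- On the standby: apply whatever redo has been received, then
-- terminate recovery.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

-- Convert the standby to the primary role and open it for business.
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;
```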
Data Guard Snapshot Standby
Data Guard Snapshot Standby in Oracle Database 11g, a feature included with the Enterprise
Edition license, has been developed to help address this problem. Using a single command, a
Data Guard 11g physical standby can be converted to a snapshot standby, independent of the
primary database, that is open read-write and able to be used for preproduction testing. Behind
the scenes, Data Guard uses Flashback Database and sets a guaranteed restore point (GRP)15 at the
SCN before the standby was open read-write. Primary database redo continues to be shipped to
a snapshot standby, and while not applied, it is archived for later use.
A second command converts the snapshot back into a synchronized physical standby
database when testing is complete. Behind the scenes the standby is flashed back to the
GRP, discarding changes made while it was open read-write. Redo Apply is started, and all
primary database redo archived while it was a snapshot standby is applied until it has caught up
with the primary database. While a snapshot standby does not impact recovery point objectives, it can
lengthen recovery time at failover due to the time required to apply the backlog of redo archived
while it was open read-write.
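The two conversions are sketched below for a physical standby managed with SQL*Plus (the database must be mounted, with Redo Apply stopped, before converting):

```sql
-- Convert the physical standby into a snapshot standby and open it
-- read-write for testing; the GRP is created behind the scenes.
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;

-- When testing is complete, discard the changes and resume as a
-- synchronized physical standby.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```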
Oracle Real Application Testing is a new option for the Oracle Database 11g Enterprise
Edition and is an ideal complement to Data Guard snapshot standby. It enables the capture of an
actual production workload, the replay of the captured workload on a test system (your Data
Guard snapshot standby), and subsequent performance analysis. You no longer have to invest time
and money writing tests that ultimately do an inadequate job of simulating actual workload. You
don’t have to try to re-create your transaction profile, load, timing, and concurrency. Using Data
Guard, the process is simple:
1. Convert a physical standby database to a snapshot standby and begin capturing workload
on your primary database.
2. Explicitly set a second guaranteed restore point on your snapshot standby database.
3. Replay the workload captured from the primary database on the snapshot standby to obtain
a base line performance sample.
4. Flash the snapshot standby back to your explicit guaranteed restore point set in step 2.
5. Implement whatever changes you want to test on the snapshot standby.
6. Replay the workload captured from the primary database on the snapshot standby and
analyze the impact of the change by comparing the results to your baseline run.
7. If you aren’t satisfied with the results and want to modify the change, simply flash the
snapshot standby back to your explicit guaranteed restore point set in step 2, make your
modifications, replay the same workload, and reassess the impact of the change.
8. When testing is complete, convert from snapshot standby back to a physical standby.
Data Guard will discard any changes made during testing and resynchronize the standby
with redo it had received from primary and archived while the snapshot standby was
open read-write.
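Steps 2, 4, and 7 use ordinary restore point and Flashback Database commands on the snapshot standby. A sketch, with an illustrative restore point name of our own:

```sql
-- Step 2: set an explicit guaranteed restore point before the
-- baseline replay (the name before_test is hypothetical).
CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;

-- Steps 4 and 7: rewind the snapshot standby to rerun the workload.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_test;
ALTER DATABASE OPEN RESETLOGS;
```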
Parameters of Interest to Data Guard
Three types of parameters exist as far as Data Guard is concerned: those that are independent of
the role of the database, those that are used only when the database is a primary, and those that
are used only when the database is a standby. While numerous parameters can be used with a
Data Guard configuration, you really need to configure only a few. And because so much of Data
Guard’s functionality is being moved into the code, many of these parameters and attributes have
been deprecated in the last few releases. It is important to note that just like your TNS names,
listeners, and SRL files, these parameters need to be defined on all databases in your configuration
in preparation for role transition. So what are these parameters?
Role-independent Parameters
■ DB_UNIQUE_NAME This parameter defines the unique name for a database. Since the
DB_NAME parameter must be the same for a physical standby database and different for
a logical standby database, this parameter was introduced in 10g to provide a way to
identify each database in a Data Guard configuration. You need to set this on all of your
databases, but it does require a bounce. If the parameter is not defined, it defaults to the
DB_NAME, which means you do not have to take an outage on production to create a
standby; you can set it there later.
db_unique_name='Matrix'
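Since the parameter is static, setting it on a running database means writing it to the spfile and letting it take effect at the next bounce, for example:

```sql
-- DB_UNIQUE_NAME is static: SCOPE=SPFILE is required, and the value
-- takes effect at the next restart.
ALTER SYSTEM SET db_unique_name='Matrix' SCOPE=SPFILE;
```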
■ LOG_ARCHIVE_CONFIG This defines the list of valid DB_UNIQUE_NAME parameters for
your Data Guard configuration. When used with the DB_UNIQUE_NAME attribute of the
destination parameter (discussed in a moment), it provides a security check for Data
Guard that the connection between the two databases is allowed. This parameter is
dynamic as long as you do not use the SEND and RECEIVE attributes. Those are leftovers
from the old REMOTE_ARCHIVE_ENABLE parameter and are no longer needed, so do not
use them.
You need to add only the database unique names of the other databases in your
configuration. The current database unique name is always added behind the scenes. But
for clarity’s sake and to have the exact same parameter defined on all the databases, add
all the names explicitly. There is no requirement as to the order of the names in this
parameter, but it is absolutely mandatory for RAC databases in a Data Guard
configuration. This parameter should be used at all times.
log_archive_config='dg_config=(Matrix,Matrix_DR0)'
■ CONTROL_FILES Of course, you all know what this parameter is for, but with a standby
database it points to the standby control file. This is a special control file that is created
for you or that you create yourself, depending on the method you use to create your
standby database.
control_files='/Oracle/oradata/Matrix/control01.ctl'
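One common way to produce that special control file is to create it from the primary; a sketch with an illustrative path:

```sql
-- On the primary: write a standby control file that the standby's
-- CONTROL_FILES parameter can then point to (path is our own).
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/Matrix_standby.ctl';
```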
■ LOG_ARCHIVE_MAX_PROCESSES We mention this parameter here because the default
setting is still 2, which is not enough. Archive processes on the primary database are
responsible for archiving the ORL files as they become full and for resolving gaps in the
redo stream to a standby database. And on a standby database, they are responsible for
archiving the SRL files and forwarding the archive logs to a cascaded standby database.
On the primary, one archive process is limited to servicing only the ORL files and is not
allowed to talk to a standby database at all. This special ARCH process is referred to as
the “Dedicated ARCH Process.” But the others are all allowed to perform both functions.
While an archive process is sending an archive log to a standby database, it is not
available to assist in archiving the ORL files. Even though the prime directive of an
archive process is “Always archive the online log files first before processing a gap,” it is
still possible in the worst case to have only that one archive process archiving the online
log files. If you do not have enough processes, then in the event of a large gap or a slow
network, you could be reduced to one archive process for the ORL files. And we are all
painfully aware that if the ORL files all get full at the same time, production stalls until
one gets archived. The multi-threaded gap resolution attribute (MAX_CONNECTIONS),
introduced in Oracle Database 10g, allows Data Guard to use more than one archive
process to send a single log file to a standby, which uses even more of the processes. So,
at a minimum, set this parameter at 4 with a maximum of 30.
log_archive_max_processes='4'
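This parameter is dynamic, so you can raise it on a running database without a bounce:

```sql
-- Takes effect immediately; no restart required.
ALTER SYSTEM SET log_archive_max_processes=4;
```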
Standby Dedicated ARCH Process
It is important to note that even a physical standby database has a "Dedicated ARCH" process,
and since the dedicated ARCH process in a physical standby is not allowed to archive the SRL
files, this just means that you have one less ARCH process available on the standby database
to archive them.
One note on using multiple archive processes: While you need quite a few of them to
ensure that you do not have stalls on production, a large number of archive processes can
slow down switchovers because they all have to be awakened and asked to exit. You can
avoid this by reducing the parameter before starting a switchover. In addition, in Oracle
Database 11g with the new streaming capability, you can saturate your network with too
many archive processes if you happen to suffer a very large redo gap.
■ DB_CREATE_FILE_DEST Although this is not a Data Guard–specific parameter, it is
worth mentioning here since you will need to define it at the standby database if you are
using ASM.
db_create_file_dest=+DATA
Primary Role Parameters
■ LOG_ARCHIVE_DEST_n This is the main parameter for Data Guard redo transport and
is usually in action only when the database is running in the primary role. The exceptions to
that rule mainly deal with cascaded standby destinations. This parameter can also be used to
specify where the archive log files from the ORL files or the SRL files are supposed to go.
But as of Oracle Database 10g Release 1 and the introduction of the flash recovery area,
the local archiving is defaulted to the flash recovery area and you no longer need to
define a local destination. We will discuss local archiving and the LOCATION attribute,
but since you should be using the flash recovery area, you will not be setting a local
destination.
This parameter has seventeen attributes, all of which you can configure when setting up
redo transport to a standby database. You need to set only seven of them to have a properly
functioning Data Guard redo transport to a standby database. We will talk about those
seven first and will then show you some examples of how to use them. Then we’ll discuss
the remaining attributes and describe where you may use them and why. We recommend
that you do not use six of them.
The following attributes are required:
■ SERVICE Specifies the TNSNAMES descriptor you created that points to your standby
database. The network tuning you performed earlier will come from here.
■ SYNC Specifies that you want the redo sent using a synchronous method, meaning
that the LGWR process will wait for acknowledgment from the LNS before telling
the client that the transaction has committed. This is required on at least one standby
destination for Maximum Availability or Maximum Protection mode.
■ ASYNC This is the default, and if you do not specify a transport type you will get
asynchronous redo transport. This is the Maximum Performance redo transport
method.
■ NET_TIMEOUT Specifies the number of seconds that the LGWR process will wait for
an LNS process to respond before abandoning the standby as failed. The default is 30
seconds, but 10 to 15 seconds would be a better value depending on the reliability
of your network. Do not set it below 10, as you will experience failed reconnects after
a standby database comes back, since it takes a few seconds to reconnect everything.
Reconnection requires the following:
■ Stopping a stale LNS process
■ Starting a new LNS process
■ Making the connection to the standby database
■ Detecting and stopping a stale RFS process
■ Starting a new RFS process
■ Selecting and opening a new SRL
■ Initializing the header of the SRL
■ Responding back to the LNS that all is ready to go
All of this occurs before the LNS process can tell the LGWR that it is ready to go. If
this process takes longer than your value for NET_TIMEOUT, the LGWR will abandon
the standby again, and this whole sequence will repeat at every log switch.
■ REOPEN Controls the wait time before Data Guard will allow the primary database
to attempt a reconnection to a failed standby database. Its default is 300 seconds
(5 minutes), and this is usually the reason people complain that Data Guard isn’t
reconnecting after they abort their standby. Generally speaking, in test mode we all
do things very fast. So the actions are SHUTDOWN ABORT the standby, watch the alert
log of the primary database to see it disconnect from the standby, restart the standby
database, and then switch logs on the primary database in hopes of seeing Data
Guard reconnect. And all of this happens in less than 300 seconds, so Data Guard
does not reconnect at the first log switch or a few more if you try them too fast. This
attribute was designed to avoid a potentially stalling reconnect attempt if a log switch
occurred immediately after a standby database destination failed. You will want to
reduce this attribute to 30 or even 15 seconds so that Data Guard gets reconnected
as fast as possible.
■ DB_UNIQUE_NAME Using this attribute in your LOG_ARCHIVE_DEST_n parameter
requires that you also set the LOG_ARCHIVE_CONFIG parameter; otherwise, Data
Guard will refuse to connect to this destination. The name you would use here for
a SERVICE destination (a remote one) is the unique name you specified for the
database at the other end of the connection—that is, the standby database.
You must also enter this unique name into the LOG_ARCHIVE_CONFIG parameter on
both databases. When a primary database makes a connection to a standby database,
it will send its own unique database name to the standby and ask for the standby’s
unique name in return. The standby will check in its configuration parameter, LOG_
ARCHIVE_CONFIG, to make sure that the primary’s unique name is present. If it is not
the connection is refused. If it is present, the standby will send its own unique name
back to the primary LNS process. If that returned value does not match the value you
specified in this attribute, the connection is terminated.
Like the LOG_ARCHIVE_CONFIG parameter, this attribute is mandatory for RAC
databases in a Data Guard configuration.
■ VALID_FOR This is the last of the required attributes. Even if you think that your
Data Guard configuration will function just fine without this attribute (and it will), it
is a very good idea to use it anyway. The main function of this attribute is to define
when the LOG_ARCHIVE_DEST_n destination parameter should be used and on what
type of redo log file it should operate.
Following are the legal values for log files:
■ ONLINE_LOGFILE Valid only when archiving ORL files
■ STANDBY_LOGFILE Valid only when archiving SRL files
■ ALL_LOGFILES Valid regardless of redo log file type
Following are the legal values for roles:
■ PRIMARY_ROLE Valid only when the database is running in the primary role
■ STANDBY_ROLE Valid only when the database is running in the standby role
■ ALL_ROLES Valid regardless of database role.
VALID_FOR will allow the destination parameter to be used only if the answer to both of its arguments
is TRUE. This attribute enables you to predefine all of your destination parameters on all databases
in your Data Guard configuration knowing that they will be used only if the VALID_FOR is TRUE.
No more enabling or disabling destinations at role transition time.
So what will your LOG_ARCHIVE_DEST_n parameter look like? Up to nine destinations are
available, meaning that you can have up to nine standby databases. In reality, ten destinations are
available, but one is reserved for the default local archiving destination, which we will discuss in a
moment. We’ll use parameter number 2 to start and add a standby database that is in Manchester
and will be our Maximum Availability standby database (edited for appearance):
log_archive_dest_2='service=Matrix_DR0
SYNC REOPEN=15 NET_TIMEOUT=15
valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)
db_unique_name=Matrix_DR0'
Now let’s add in our Newark standby as parameter number 3, which has a network latency
greater than we would like for SYNC so it will operate in asynchronous mode:
log_archive_dest_3='service=Matrix_DR1
ASYNC REOPEN=15
valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)
db_unique_name=Matrix_DR1'
And of course since we used the proper DB_UNIQUE_NAME attribute, we need to define our LOG_
ARCHIVE_CONFIG parameter, too:
log_archive_config='dg_config=(Matrix,Matrix_DR0,Matrix_DR1)'
The following attributes are optional:
■ AFFIRM Default for SYNC destinations. Requires that the LNS process wait for the RFS to
perform a direct I/O on the SRL file before returning a success message. Required for SYNC
in Maximum Availability or Maximum Protection. You do not need to set this as it will
default based on the destination. And even though you can set it for an ASYNC destination
in 10g, there is no reason to do so. In fact, it will slow down the LNS process. AFFIRM is
ignored for ASYNC destinations in Oracle Database 11g.
■ NOAFFIRM Default for ASYNC destinations if not specified. Used in Maximum
Performance destinations. Again, there’s no need to specify this as it is the default for
ASYNC destinations. And if you try to set NOAFFIRM with a SYNC destination, your
protection mode will fail to meet the rules and will be marked as being resynchronized.
If this is your only SYNC standby and you are in Maximum Availability mode, you will
not be able to perform a zero data loss failover and you will lose data. If this is your
only SYNC destination, you are running in Maximum Protection mode, and you set
NOAFFIRM, your primary database will crash!
■ COMPRESSION This attribute turns on compression using the Advanced Compression
option for this standby destination. By default, this means that any ARCH process that
is sending a gap to this destination will compress the archive as it is sending it. If you
set the hidden parameter,13 then it will also compress as the current redo stream is
being sent. For example, assuming we set the hidden parameter, with our previous two
destinations let’s add the COMPRESSION attribute:
log_archive_dest_2='service=Matrix_DR0
LGWR SYNC REOPEN=15 NET_TIMEOUT=15
COMPRESSION=ENABLE
valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)
db_unique_name=Matrix_DR0'
log_archive_dest_3='service=Matrix_DR1
LGWR ASYNC REOPEN=15
COMPRESSION=ENABLE
valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)
db_unique_name=Matrix_DR1'
Matrix_DR0 will be compressed only when an ARCH process is sending a gap (no
compression for SYNC, remember?), and Matrix_DR1 will have the redo compressed at all
times. This does not mean that the redo remains compressed on disk, as this compression
is only during transport. The data is uncompressed at the standby side before it is written
to the SRL file.
■ MAX_CONNECTIONS This attribute was introduced in 10g Release 2 to allow you to
specify the number of archive processes that should be used for the standby destination
when sending a gap; it is no longer used in 11g. But if you are using 10g, you can specify
1 to 5 (with 1 being the default). If you specify more than 1, whenever this standby
destination needs to receive a gap, that many archive processes will be assigned to send
the archive log. The file will be split up among them, sent in parallel streams across the
network, and reassembled on the standby side.
log_archive_dest_2='service=Matrix_DR0
LGWR SYNC REOPEN=15 NET_TIMEOUT=15
MAX_CONNECTIONS=5
valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)
db_unique_name=Matrix_DR0'
Now when Matrix_DR0 suffers a disconnect from the primary, the gap resolution process
on the primary will use multiple streams of redo for each missing archive log file.
CAUTION
Do not use the MAX_CONNECTIONS attribute if you are running Oracle
Database 11g as it will impede the redo transport performance.
■ DELAY Rather than delaying the shipment of the redo, which is what a lot of people
think it does, this attribute merely instructs the apply processes of the target standby
database to wait the specified number of minutes before applying the redo. With
Flashback Database, this attribute is almost obsolete, especially since we recommend
that you always enable Flashback Database on your standby databases and your primary
database. If you tend to do a lot of things that Flashback Database cannot handle, then
you might want to specify a delay. Flashback Database and Data Guard will be discussed
in Chapter 8.
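As a sketch with our hypothetical Matrix_DR0 destination, a 30-minute apply lag would look like this (note that DELAY is specified in minutes):
log_archive_dest_2='service=Matrix_DR0
LGWR SYNC REOPEN=15 NET_TIMEOUT=15
DELAY=30
valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)
db_unique_name=Matrix_DR0'
Keep in mind that a standby running real-time apply ignores the delay, so the standby must be applying from the archived logs for the lag to take effect.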
■ ALTERNATE Alternate destinations were originally used to keep a database up and
running when the local disk where you are archiving the ORL files fills up. Using an
alternate destination, you could redirect the archive processes to use an auxiliary disk
for the archive logs. This problem has basically disappeared with the flash recovery area,
which self-manages its space.
You could also use this attribute for remote standby destinations if you had multiple
network paths to a standby database. Obviously, you would use multiple paths to the
standby database with an Oracle RAC, but that is not what ALTERNATE was designed to do.
In either case, whether a single instance with multiple network interfaces or an Oracle
RAC, it is easier to use connect-time failover in your TNS descriptor for the standby database.
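For illustration, a connect-time failover entry for Matrix_DR0 might look like the following (the host names are assumptions):
Matrix_DR0 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = matrix_dr0a)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = matrix_dr0b)(PORT = 1521))
      (FAILOVER = ON)
    )
    (CONNECT_DATA = (SERVICE_NAME = Matrix_DR0))
  )
If the first address cannot be reached, Oracle Net simply tries the next one in the list.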
Standby Redo Logs (SRLs)
A Standby Redo Log (SRL) is similar to an Online Redo Log (ORL); the only difference between the two is that a Standby Redo Log is used to store redo data received from another database (the primary database).
Standby Redo Logs are used only if LGWR is the transport mode to the remote standby database. (Steps to check the transport mode are shown below.)
In what scenarios are Standby Redo Logs required?
A Standby Redo Log is required if
1) Your standby database is in Maximum Protection or Maximum Availability mode. (A physical standby database can run in one of three modes: Maximum Protection, Maximum Availability, and Maximum Performance.)
or
2) If you are using Real-Time Apply on Standby Database.
or
3) If you are using Cascaded Destinations.
Things good to know about SRLs
i) Standby Redo Logs should be the same size as the Online Redo Logs. (The RFS process will attach a Standby Redo Log only if it is the same size as the Online Redo Log.)
ii) Although the standby redo log is used only when the database is running in the standby role, Oracle recommends that you create a standby redo log on the primary database so that the primary can switch over quickly to the standby role without additional DBA intervention.
iii) Standby redo logs can be created even after the standby has been created. In this case, create the SRLs on the primary before creating them on the standby database. (A Standby Redo Log is not mandatory for the primary database, but it is good practice and useful in a role transition from primary to standby.)
iv) It is a best practice to maintain Standby Redo Logs (SRLs) on both the standby and the primary database when using the LGWR transport mode, regardless of protection mode (Maximum Protection/Performance/Availability).
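To size the SRLs to match, check the current ORL sizes first. A minimal sketch (the 50m size is an example; use your own ORL size):
SQL> select group#, thread#, bytes/1024/1024 as size_mb from v$log;
SQL> alter database add standby logfile size 50m;
A common recommendation is to create one more SRL group per thread than the number of ORL groups, so the RFS always has a free SRL to switch into.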
Q. What is the RFS process on the standby database?
RFS (Remote File Server) is the process on the standby database that receives redo data from the primary database and writes it to disk.
Q. How to identify which transport mode (Archiver or Log Writer) you are using to ship redo?
SQL> show parameter log_archive_dest_;
log_archive_dest_<n> SERVICE=visr12_standby [ARCH | LGWR]
If neither the ARCH nor the LGWR attribute is specified, the default is ARCH.
Q. How to check if you are using Real-Time Apply?
SQL> SELECT DEST_ID, RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS;
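A destination that is using real-time apply shows MANAGED REAL TIME APPLY in the RECOVERY_MODE column. Assuming standby redo logs are in place, real-time apply is started on the standby with:
SQL> alter database recover managed standby database using current logfile disconnect;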
Q. How to identify standby redo logs?
SQL> select * from v$standby_log;
Q. How to see the members of the standby redo log files?
SQL> select * from v$logfile where type='STANDBY';
Q. How to add a Standby Redo Log file group with a specific group number?
SQL> alter database add standby logfile group 4 (
'/<full_path_for_srl>/log04a.dbf',
'/<full_path_for_srl>/log04b.dbf'
) size 50m;
What is force logging?
FORCE LOGGING Option
In Oracle9i release 2, the FORCE LOGGING option was introduced. The FORCE LOGGING option can be set at the database level or the tablespace level. The precedence is from database to tablespace. If a tablespace is created or altered to have FORCE LOGGING enabled, any change in that tablespace will go into the redo log and be usable for recovery.
Similarly, if a database is created or altered to have the FORCE LOGGING enabled, any change across the database, with exception of temporary segments and temporary tablespace, will be available in redo logs for recovery.
The FORCE LOGGING option can be set at database creation time or later using the alter database command.
To set FORCE LOGGING during the database creation, specify the following:
CREATE DATABASE <dbname>…..FORCE LOGGING…
To enable FORCE LOGGING after the database is created, use the following command:
ALTER DATABASE FORCE LOGGING;
The FORCE LOGGING option is the safest method to ensure that all the changes made in the database will be captured and available for recovery in the redo logs. It extends the existing family of logging attributes.
Before the existence of FORCE LOGGING, Oracle provided logging and nologging options. These two options have higher precedence at the schema object level than the tablespace level; therefore, it was possible to override the logging settings at the tablespace level with nologging setting at schema object level.
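To illustrate that precedence, a direct-path operation against a NOLOGGING object generates almost no redo unless FORCE LOGGING overrides it (the table name here is hypothetical):
SQL> create table scratch_copy nologging as select * from dba_objects;
Without FORCE LOGGING, the rows loaded by this statement are not protected by redo and would surface as unrecoverable blocks on a standby; with FORCE LOGGING enabled, the same statement generates full redo.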
The database or tablespaces in the database should be put into FORCE LOGGING mode before creating the backup for the standby database. Either a database or all of its tablespaces should be put into this mode but not both.
The following statement will put a tablespace in FORCE LOGGING mode:
ALTER TABLESPACE <tablespace name> FORCE LOGGING;
The FORCE LOGGING mode can be cancelled at the database level using the following statement:
ALTER DATABASE NO FORCE LOGGING;
The FORCE LOGGING mode can be cancelled at the tablespace level using the following statement:
ALTER TABLESPACE <tablespace name> NO FORCE LOGGING;
Temporary tablespaces and temporary segments are not affected by FORCE LOGGING mode because these objects do not generate any redo. Undo tablespaces are fully logged by default, so there is no need to put them into FORCE LOGGING mode explicitly; Oracle will generate an error if an attempt is made to put a temporary tablespace or an undo tablespace into FORCE LOGGING mode.
The FORCE_LOGGING column of v$database view can be queried to verify that the database is in FORCE LOGGING mode. Similarly, the FORCE_LOGGING column of dba_tablespaces view provides the same logging information for each tablespace.
select force_logging from v$database;
select force_logging from dba_tablespaces;
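Both queries return YES or NO in the FORCE_LOGGING column; a Data Guard primary should show YES at the database level. To see which tablespaces are affected, include the tablespace name:
SQL> select tablespace_name, force_logging from dba_tablespaces;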
Force logging mode persists across database restarts, but it is not maintained when the control file is re-created unless the FORCE LOGGING clause is specified in the CREATE CONTROLFILE statement. Also, a tablespace in FORCE LOGGING mode does not keep this mode when transported to another database.
In these situations, the FORCE LOGGING mode would have to be re-enabled. The primary database should remain in FORCE LOGGING mode as long as there is at least one Oracle instance in use. Putting a database in FORCE LOGGING mode will have some performance impact.