db2z 12 Adminbook
Administration Guide
Last updated: 2023-07-20
IBM
SC27-8844-02
Notes
Before using this information and the product it supports, be sure to read the general information under
"Notices" at the end of this information.
Subsequent editions of this PDF will not be delivered in IBM Publications Center. Always download the
latest edition from IBM Documentation.
2023-07-20 edition
This edition applies to Db2® 12 for z/OS® (product number 5650-DB2), Db2 12 for z/OS Value Unit Edition (product
number 5770-AF3), and to any subsequent releases until otherwise indicated in new editions. Make sure you are using
the correct edition for the level of the product.
Specific changes are indicated by a vertical bar to the left of a change. A vertical bar to the left of a figure caption
indicates that the figure has changed. Editorial changes that have no technical significance are not noted.
© Copyright International Business Machines Corporation 1982, 2023.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents
Syntax and descriptions for creating non-UTS table spaces (deprecated)...................................63
EA-enabled table spaces and index spaces...................................................................................68
Implementing Db2 tables.................................................................................................................... 69
Types of tables................................................................................................................................ 70
Guidelines for table names.............................................................................................................72
Creating base tables....................................................................................................................... 73
Partitioning data in Db2 tables....................................................................................................... 74
Nullable partitioning columns........................................................................................................ 76
Creating temporary tables..............................................................................................................77
Creating temporal tables................................................................................................................ 83
Creating materialized query tables................................................................................................ 96
Creating a clone table..................................................................................... 97
Creating an archive table................................................................................................................ 99
Implementing Db2 views...................................................................................................................100
Creating Db2 views....................................................................................................................... 100
Guidelines for view names........................................................................................................... 102
Querying views that reference temporal tables.......................................................................... 102
How Db2 inserts and updates data through views......................................................................103
Dropping Db2 views......................................................................................................................103
Implementing Db2 indexes............................................................................................................... 104
Types of indexes........................................................................................................................... 104
Creating Db2 indexes................................................................................................................... 107
How indexes can help to avoid sorts............................................................................................108
Index keys.....................................................................................................................................109
Index names and guidelines........................................................................................................ 110
General index attributes...............................................................................................................111
XML index attributes.....................................................................................................................117
Indexes on partitioned tables...................................................................................................... 118
How Db2 implicitly creates an index............................................................................................123
Implementing Db2 schemas............................................................................................................. 124
Creating a schema by using the schema processor.................................................................... 124
Processing schema definitions.....................................................................................................125
Loading data into Db2 tables............................................................................................................. 125
Loading data with the LOAD utility............................................................................................... 126
Loading data by using the INSERT statement............................................................................. 128
Loading data with DRDA fast load (zLoad)................................................................................... 130
Loading data from DL/I.................................................................................................................133
Implementing Db2 stored procedures.............................................................................................. 134
Creating stored procedures..........................................................................................................134
Dropping stored procedures........................................................................................................ 135
Implementing relationships with referential constraints................................................................. 136
How Db2 enforces referential constraints................................................................................... 136
Insert rules................................................................................................................................... 137
Update rules................................................................................................................................. 138
Delete rules...................................................................................................................................138
Constructing a referential structure.............................................................................................139
Implementing Db2 triggers................................................................................................................139
Implementing Db2 user-defined functions.......................................................................................140
Creating user-defined functions...................................................................................................140
Deleting user-defined functions...................................................................................................143
Implementing Db2 system-defined routines.................................................................................... 143
Obfuscating source code of SQL procedures, SQL functions, and triggers...................................... 144
Estimating disk storage for user data................................................................................................ 146
General approach to estimating storage......................................................................................146
Calculating the space required for a table................................................... 148
Calculating the space required for an index................................................................................ 152
Identifying databases that might exceed the OBID limit..................................................................157
Chapter 3. Altering your database design...............................................................................................161
Using the catalog in database design................................................................................................ 162
Retrieving catalog information about Db2 storage groups..........................................................162
Retrieving catalog information about a table...............................................................................162
Retrieving catalog information about partition order.................................................................. 163
Retrieving catalog information about aliases.............................................................................. 163
Retrieving catalog information about columns............................................................................164
Retrieving catalog information about indexes............................................................................. 165
Retrieving catalog information about views................................................................................ 165
Retrieving catalog information about materialized query tables................................................ 166
Retrieving catalog information about authorizations.................................................................. 166
Retrieving catalog information about primary keys.....................................................................167
Retrieving catalog information about foreign keys...................................................................... 167
Retrieving catalog information about check pending..................................................................168
Retrieving catalog information about check constraints.............................................................169
Retrieving catalog information about LOBs................................................................................. 169
Retrieving catalog information about user-defined functions and stored procedures.............. 170
Retrieving catalog information about triggers............................................................................. 170
Retrieving catalog information about sequences........................................................................ 171
Adding and retrieving comments................................................................................................. 171
Verifying the accuracy of the database definition....................................................................... 172
Trailing blanks in Db2 catalog columns....................................................................................... 172
Altering Db2 databases......................................................................................................................173
Altering Db2 storage groups.............................................................................................................. 174
Letting SMS manage your Db2 storage groups............................................................................174
Adding or removing volumes from a Db2 storage group............................................................. 175
Migrating existing data sets to a solid-state drive....................................................................... 176
Altering table spaces..........................................................................................................................177
Changing the logging attribute for a table space......................................................................... 178
Changing the space allocation for user-managed data sets....................................................... 180
Dropping and re-creating a table space to change its attributes................................................180
Redistributing data in partitioned table spaces...........................................................................182
Increasing partition size............................................................................................................... 183
Altering a page set to contain Db2-defined extents....................................................................184
Converting deprecated table spaces to the UTS types............................................................... 185
Moving tables from multi-table table spaces to partition-by-growth table spaces................... 186
Converting partitioned (non-UTS) table spaces to partition-by-range universal table spaces..190
Converting table spaces to use table-controlled partitioning.....................................................191
Altering Db2 tables............................................................................................................................ 194
Adding a column to a table...........................................................................................................195
Specifying a default value when altering a column..................................................................... 197
Altering the data type of a column............................................................................................... 198
Altering a table for referential integrity........................................................................................209
Adding or dropping table check constraints................................................................................ 213
Adding partitions.......................................................................................................................... 214
Altering partitions......................................................................................................................... 218
Adding XML columns.................................................................................................................... 227
Altering the size of your hash spaces ..........................................................................................228
Adding a system period and system-period data versioning to an existing table...................... 229
Adding an application period to a table....................................................................................... 231
Manipulating data in a system-period temporal table................................................................ 232
Altering materialized query tables............................................................................................... 232
Altering the assignment of a validation routine........................................................................... 234
Altering a table to capture changed data.....................................................................................235
Changing an edit procedure or a field procedure........................................................................ 236
Altering the subtype of a string column....................................................................................... 236
Altering the attributes of an identity column............................................................................... 237
Changing data types by dropping and re-creating the table....................................................... 237
Moving a table to a table space of a different page size..............................................................241
Altering Db2 views............................................................................................................................. 242
Altering views by using the INSTEAD OF trigger......................................................................... 242
Changing data by using views that reference temporal tables................................................... 243
Altering Db2 indexes.......................................................................................................................... 243
Alternative method for altering an index..................................................................................... 244
Adding columns to an index......................................................................................................... 245
Altering how varying-length index columns are stored...............................................................247
Altering the clustering of an index............................................................................................... 248
Dropping and redefining a Db2 index...........................................................................................249
Reorganizing indexes....................................................................................................................250
Pending data definition changes........................................................................................................250
Materializing pending definition changes.................................................................................... 254
Restrictions for pending data definition changes........................................................................ 259
Pending column alterations..........................................................................................................264
Altering stored procedures................................................................................................................ 266
Altering user-defined functions......................................................................................................... 267
Altering implicitly created XML objects............................................................................................. 268
Changing the high-level qualifier for Db2 data sets..........................................................................269
Defining a new integrated catalog alias....................................................................................... 269
Changing the qualifier for system data sets................................................................................ 270
Changing qualifiers for other databases and user data sets....................................................... 273
Tools for moving Db2 data................................................................................................................. 277
Moving Db2 data........................................................................................................................... 279
Moving a Db2 data set.................................................................................................................. 280
When to regenerate Db2 database objects and routines................................................................. 282
Running application programs using RRSAF..................................................................................... 421
Controlling the IRLM.......................................................................................................................... 479
z/OS commands that operate on IRLM........................................................................................ 480
Starting the IRLM..........................................................................................................................480
Stopping the IRLM........................................................................................................................ 481
Monitoring threads............................................................................................................................. 482
Monitoring threads with DISPLAY THREAD commands.............................................................. 482
Controlling allied threads and connections.......................................................................................489
Controlling TSO connections........................................................................................................ 490
Controlling CICS connections.......................................................................................................493
Controlling IMS connections........................................................................................................ 498
Controlling RRS connections........................................................................................................ 508
Controlling distributed data connections and database access threads (DBATs)........................... 514
Starting DDF..................................................................................................................................514
Stopping DDF................................................................................................................................ 515
Suspending and resuming DDF server activity............................................................................ 516
Displaying information about DDF work...................................................................................... 517
Monitoring remote connections by using profile tables.............................................................. 521
Monitoring threads by using profile tables.................................................................................. 526
Monitoring idle threads by using profile tables........................................................................... 532
Canceling SQL from an IBM data server driver............................................................................ 537
Canceling threads......................................................................................................................... 538
Monitoring DDF problems by using NetView............................................................................... 539
Controlling traces............................................................................................................................... 541
Diagnostic traces for attachment facilities.................................................................................. 541
Controlling Db2 trace data collection.......................................................................................... 542
Diagnostic trace for the IRLM.......................................................................................................543
Setting special registers by using profile tables................................................................................543
Setting built-in global variables by using profile tables.................................................................... 548
Discarding archive log records...........................................................................................................587
Locating archive log data sets............................................................................................................587
Management of the bootstrap data set............................................................................................. 589
Restoring dual-BSDS mode.......................................................................................................... 590
BSDS copies with archive log data sets....................................................................................... 590
Recommendations for changing the BSDS log inventory............................................................ 591
Tips for maximizing data availability during backup and recovery................................................... 631
Where to find recovery information................................................................................................... 635
How to report recovery information.................................................................................................. 636
Discarding SYSCOPY and SYSLGRNX records................................................................................... 636
Preparations for disaster recovery.................................................................................................... 637
System-wide points of consistency............................................................................................. 639
Recommendations for more effective recovery from inconsistency................................................ 639
Actions to take to aid in successful recovery of inconsistent data............................................. 639
Actions to avoid in recovery of inconsistent data........................................................................ 641
How to recover multiple objects in parallel.......................................................................................641
Recovery of page sets and data sets................................................................................................. 642
Recovery of the work file database.............................................................................................. 643
Page set and data set copies........................................................................................................644
System-level backups for object-level recoveries.......................................................................647
Recovery of data to a prior point in time........................................................................................... 648
Plans for point-in-time recovery.................................................................................................. 648
Point-in-time recovery with system-level backups..................................................................... 649
Point-in-time recovery using the RECOVER utility.......................................................................651
Implications of moving data sets after a system-level backup...................................................660
Recovery of table spaces..............................................................................................................660
Recovery of indexes......................................................................................................................663
Recovery of FlashCopy image copies...........................................................................................664
Preparing to recover to a prior point of consistency....................................................................665
Preparing to recover an entire Db2 subsystem to a prior point in time using image copies or object-level backups..............................................................667
Creating essential disaster recovery elements................................................................................. 668
Resolving problems with a user-defined work file data set..............................................................669
Resolving problems with Db2-managed work file data sets............................................................ 670
Recovering error ranges for a work file table space..........................................................................670
Recovery of error ranges for a work file table space................................................................... 670
Recovering after a conditional restart of Db2................................................................................... 671
Recovery of the catalog and directory......................................................................................... 671
Regenerating missing identity column values...................................................................................672
Recovery of tables that contain identity columns....................................................................... 672
Recovering a table space and all of its indexes.................................................................................673
Recovery implications for objects that are not logged................................................................ 673
Removing various pending states from LOB and XML table spaces................................................. 677
Restoring data by using DSN1COPY.................................................................................................. 677
Backing up and restoring data with non-Db2 dump and restore......................................................677
Recovering accidentally dropped objects......................................................................................... 678
Recovering an accidentally dropped table...................................................................................678
Recovering an accidentally dropped table space........................................................................ 680
Recovering a Db2 system to a given point in time using the RESTORE SYSTEM utility................... 684
Recovering by using Db2 restart recovery.........................................................................................686
Recovering by using FlashCopy volume backups..............................................................................686
Making catalog definitions consistent with your data after recovery to a prior point in time..........687
Recovery of catalog and directory tables.....................................................................................689
Performing remote site recovery from a disaster at a local site....................................................... 689
Recovering with the BACKUP SYSTEM and RESTORE SYSTEM utilities......................................689
Recovering without using the BACKUP SYSTEM utility............................................................... 690
Backup and recovery involving clone tables..................................................................................... 690
Recovery of temporal tables with system-period data versioning................................................... 691
Data restore of an entire system........................................................................................................691
Accessing historical data from moved tables by using image copies...............................................691
Recovering from different Db2 for z/OS problems................................................................................. 283
Recovering from IRLM failure............................................................................................................ 283
Recovering from z/OS or power failure..............................................................................................284
Recovering from disk failure.............................................................................................................. 284
Recovering from application errors................................................................................................... 286
Backing out incorrect application changes (with a quiesce point)..............................................287
Backing out incorrect application changes (without a quiesce point)........................................ 287
Recovering from IMS-related failures .............................................................................................. 288
Recovering from IMS control region failure ................................................................................ 288
Recovering from IMS indoubt units of recovery.......................................................................... 289
Recovering from IMS application failure......................................................................................291
Recovering from a Db2 failure in an IMS environment................................................................291
Recovering from CICS-related failure ...............................................................................................292
Recovering from CICS application failures.................................................................................. 292
Recovering Db2 when CICS is not operational ........................................................................... 293
Recovering Db2 when the CICS attachment facility cannot connect to Db2 .............................294
Recovering CICS indoubt units of recovery..................................................................................294
Recovering from CICS attachment facility failure .......................................................................297
Recovering from a QMF query failure................................................................................................ 297
Recovering from subsystem termination ..........................................................................................298
Recovering from temporary resource failure ....................................................................................299
Recovering from active log failures ...................................................................................................299
Recovering from being out of space in active logs ......................................................................300
Recovering from a write I/O error on an active log data set .......................................................301
Recovering from a loss of dual active logging ............................................................................. 301
Recovering from I/O errors while reading the active log ............................................................ 302
Recovering from archive log failures ................................................................................................ 304
Recovering from allocation problems with the archive log ........................................................ 304
Recovering from write I/O errors during archive log offload ...................................................... 304
Recovering from read I/O errors on an archive data set during recovery .................................. 305
Recovering from insufficient disk space for offload processing .................................................305
Recovering from BSDS failures.......................................................................................................... 306
Recovering from an I/O error on the BSDS ................................................................................. 307
Recovering from an error that occurs while opening the BSDS ................................................. 307
Recovering from unequal timestamps on BSDSs ....................................................................... 308
Recovering the BSDS from a backup copy................................................................................... 309
Recovering from BSDS or log failures during restart.........................................................................311
Recovering from failure during log initialization or current status rebuild................................. 313
Recovering from a failure during forward log recovery............................................................... 324
Recovering from a failure during backward log recovery............................................................ 329
Recovering from a failure during a log RBA read request............................................................332
Recovering from unresolvable BSDS or log data set problem during restart............................. 333
Recovering from a failure resulting from total or excessive loss of log data.............................. 335
Resolving inconsistencies resulting from a conditional restart...................................................339
Recovering from Db2 database failure ............................................................................................. 344
Recovering a Db2 subsystem to a prior point in time....................................................................... 345
Recovering from a down-level page set problem .............................................................................346
Recovering from a problem with invalid LOBs...................................................................................348
Recovering from table space I/O errors ........................................................................................... 348
Recovering from Db2 catalog or directory I/O errors .......................................................................349
Recovering from integrated catalog facility failure .......................................................................... 350
Recovering VSAM volume data sets that are out of space or destroyed.................................... 351
Recovering from out-of-disk-space or extent limit problems .................................................... 352
Recovering from referential constraint violation ..............................................................................356
Recovering from distributed data facility failure .............................................................................. 356
Recovering from conversation failure ......................................................................................... 357
Recovering from communications database failure.................................................................... 357
Recovering from database access thread failure ........................................................................358
Recovering from VTAM failure ..................................................................................................... 359
Recovering from VTAM ACB OPEN problems...............................................................................359
Recovering from TCP/IP failure ................................................................................................... 360
Recovering from remote logical unit failure ................................................................................361
Recovering from an indefinite wait condition.............................................................................. 361
Recovering database access threads after security failure ........................................................362
Performing remote-site disaster recovery ........................................................................................362
Recovering from a disaster by using system-level backups........................................................363
Restoring data from image copies and archive logs.................................................................... 363
Recovering from disasters by using a tracker site....................................................................... 377
Using data mirroring for disaster recovery.................................................................................. 386
Scenarios for resolving problems with indoubt threads................................................................... 392
Scenario: Recovering from communication failure .....................................................................393
Scenario: Making a heuristic decision about whether to commit or abort an indoubt thread... 395
Scenario: Recovering from an IMS outage that results in an IMS cold start.............................. 397
Scenario: Recovering from a Db2 outage at a requester that results in a Db2 cold start.......... 398
Scenario: What happens when the wrong Db2 subsystem is cold started.................................401
Scenario: Correcting damage from an incorrect heuristic decision about an indoubt thread....403
Appendix A. Exit routines...................................................................................733
Edit procedures........................................................................................................................................733
Specifying edit procedures................................................................................................................ 734
When edit routines are taken.............................................................................................................734
Parameter list for edit procedures.....................................................................................................734
Incomplete rows and edit routines................................................................................................... 735
Expected output for edit routines......................................................................................................736
Validation routines...................................................................................................................................737
Specifying validation routines............................................................................................................737
When validation routines are taken................................................................................................... 738
Parameter list for validation routines................................................................................................ 738
Incomplete rows and validation routines..........................................................................................739
Expected output for validation routines............................................................................................ 739
Date and time routines............................................................................................................................ 739
Specifying date and time routines..................................................................................................... 740
When date and time routines are taken............................................................................................ 741
Parameter list for date and time routines......................................................................................... 741
Expected output for date and time routines..................................................................................... 742
Conversion procedures............................................................................................................................743
Specifying conversion procedures.....................................................................................................743
When conversion procedures are taken............................................................................................ 744
Parameter list for conversion procedures......................................................................................... 744
Expected output for conversion procedures..................................................................................... 745
Field procedures...................................................................................................................................... 746
Field-definition for field procedures..................................................................................................747
Specifying field procedures............................................................................................................... 747
When field procedures are taken.......................................................................................................747
Control blocks for execution of field procedures.............................................................................. 748
Field-definition (function code 8)...................................................................................................... 752
Field-encoding (function code 0).......................................................................................................754
Field-decoding (function code 4)...................................................................................................... 756
Log capture routines................................................................................................................................758
Specifying log capture routines......................................................................................................... 758
When log capture routines are invoked............................................................................................. 758
Parameter list for log capture routines..............................................................................................759
Routines for dynamic plan selection in CICS.......................................................................................... 760
General guidelines for writing exit routines............................................................................................761
Coding rules for exit routines.............................................................................................................761
Modifying exit routines.......................................................................................................................762
Execution environment for exit routines........................................................................................... 762
Registers at invocation for exit routines............................................................................................ 762
Parameter list for exit routines.......................................................................................................... 762
Row formats for edit and validation routines..........................................................................................764
Column boundaries for edit and validation procedures....................................................................764
Null values for edit procedures, field procedures, and validation routines......................................764
Fixed-length rows for edit and validation routines........................................................................... 764
Varying-length rows for edit and validation routines........................................................................ 765
Varying-length rows with nulls for edit and validation routines....................................................... 765
EDITPROCs and VALIDPROCs for handling basic and reordered row formats................................ 766
Converting basic row format table spaces with edit and validation routines to reordered row format......766
Dates, times, and timestamps for edit and validation routines........................................................ 768
Parameter list for row format descriptions....................................................................................... 768
Db2 decoding for numeric data in edit and validation routines........................................................770
Information resources for Db2 for z/OS and related products..............................779
Notices..............................................................................................................781
Programming interface information........................................................................................................ 782
Trademarks.............................................................................................................................................. 783
Terms and conditions for product documentation................................................................................. 783
Privacy policy considerations.................................................................................................................. 783
Glossary............................................................................................................ 785
Index................................................................................................................ 787
About this information
This information provides guidance that you can use to perform a variety of administrative
tasks with Db2 for z/OS (Db2).
Throughout this information, "Db2" means "Db2 12 for z/OS". References to other Db2 products use
complete names or specific abbreviations.
Important: To find the most up-to-date content for Db2 12 for z/OS, always use IBM® Documentation
or download the latest PDF file from PDF format manuals for Db2 12 for z/OS (Db2 for z/OS in IBM
Documentation).
Most documentation topics for Db2 12 for z/OS assume that the highest available function level is
activated and that your applications are running with the highest available application compatibility level,
with the following exceptions:
• The following documentation sections describe the Db2 12 migration process and how to activate new
capabilities in function levels:
– Migrating to Db2 12 (Db2 Installation and Migration)
– What's new in Db2 12 (Db2 for z/OS What's New?)
– Adopting new capabilities in Db2 12 continuous delivery (Db2 for z/OS What's New?)
• FL 501 A label like this one usually marks documentation changed for function level 500 or higher,
with a link to the description of the function level that introduces the change in Db2 12. For more
information, see How Db2 function levels are documented (Db2 for z/OS What's New?).
The availability of new function depends on the type of enhancement, the activated function level, and
the application compatibility levels of applications. In the initial Db2 12 release, most new capabilities are
enabled only after the activation of function level 500 or higher.
Virtual storage enhancements
Virtual storage enhancements become available at the activation of the function level that introduces
them or higher. Activation of function level 100 introduces all virtual storage enhancements in
the initial Db2 12 release. That is, activation of function level 500 introduces no virtual storage
enhancements.
Subsystem parameters
New subsystem parameter settings are in effect only when the function level that introduced them or
a higher function level is activated. Many subsystem parameter changes in the initial Db2 12 release
take effect in function level 500. For more information about subsystem parameter changes in Db2
12, see Subsystem parameter changes in Db2 12 (Db2 for z/OS What's New?).
Optimization enhancements
Optimization enhancements become available after the activation of the function level that introduces
them or higher, and full prepare of the SQL statements. When a full prepare occurs depends on the
statement type:
• For static SQL statements, after bind or rebind of the package
• For non-stabilized dynamic SQL statements, immediately, unless the statement is in the dynamic
statement cache
• For stabilized dynamic SQL statements, after invalidation, free, or changed application compatibility
level
Activation of function level 100 introduces all optimization enhancements in the initial Db2 12
release. That is, function level 500 introduces no optimization enhancements.
SQL capabilities
New SQL capabilities become available after the activation of the function level that introduces them
or higher, for applications that run at the equivalent application compatibility level or higher. New SQL
capabilities in the initial Db2 12 release become available in function level 500 for applications that
run at the equivalent application compatibility level or higher.
Accessibility features
The following list includes the major accessibility features in z/OS products, including Db2 for z/OS. These
features support:
• Keyboard-only operation
• Interfaces that are commonly used by screen readers and screen magnifiers
• Customization of display attributes such as color, contrast, and font size
Tip: IBM Documentation (which includes information for Db2 for z/OS) and its related publications are
accessibility-enabled for the IBM Home Page Reader. You can operate all features using the keyboard
instead of the mouse.
Keyboard navigation
For information about navigating the Db2 for z/OS ISPF panels using TSO/E or ISPF, refer to the z/OS
TSO/E Primer, the z/OS TSO/E User's Guide, and the z/OS ISPF User's Guide. These guides describe how
to navigate each interface, including the use of keyboard shortcuts or function keys (PF keys). Each guide
includes the default settings for the PF keys and explains how to modify their functions.
If an optional item appears above the main path, that item has no effect on the execution of the
statement and is used only for readability.
[Diagram: optional_item appears above the main path; required_item is on the main path]
• If you can choose from two or more items, they appear vertically, in a stack.
If you must choose one of the items, one item of the stack appears on the main path.
[Diagram: required_item followed by required_choice1 on the main path, with required_choice2 stacked below it]
If choosing one of the items is optional, the entire stack appears below the main path.
[Diagram: required_item on the main path, with optional_choice1 and optional_choice2 stacked below it]
If one of the items is the default, it appears above the main path and the remaining choices are shown
below.
[Diagram: the default choice appears above the main path; optional_choice appears below it]
• An arrow returning to the left, above the main line, indicates an item that can be repeated.
[Diagram: required_item repeatable_item, with a repeat arrow returning above repeatable_item]
If the repeat arrow contains a comma, you must separate repeated items with a comma.
[Diagram: required_item repeatable_item, with a repeat arrow containing a comma above repeatable_item]
A repeat arrow above a stack indicates that you can repeat the items in the stack.
• Sometimes a diagram must be split into fragments. The syntax fragment is shown separately from the
main syntax diagram, but the contents of the fragment should be read as if they are on the main path of
the diagram.
[Diagram: required_item followed by the fragment fragment-name; the fragment-name fragment expands to required_item with optional_name below the main path]
• For some references in syntax diagrams, you must follow any rules described in the description for that
diagram, and also rules that are described in other syntax diagrams. For example:
– For expression, you must also follow the rules described in Expressions (Db2 SQL).
– For references to fullselect, you must also follow the rules described in fullselect (Db2 SQL).
– For references to search-condition, you must also follow the rules described in Search conditions
(Db2 SQL).
• With the exception of XPath keywords, keywords appear in uppercase (for example, FROM). Keywords
must be spelled exactly as shown.
• XPath keywords are defined as lowercase names, and must be spelled exactly as shown.
• Variables appear in all lowercase letters (for example, column-name). They represent user-supplied
names or values.
• If punctuation marks, parentheses, arithmetic operators, or other such symbols are shown, you must
enter them as part of the syntax.
Related concepts
Commands in Db2 (Db2 Commands)
Db2 online utilities (Db2 Utilities)
Db2 stand-alone utilities (Db2 Utilities)
The objects in a relational database are organized into sets called schemas. A schema provides a logical
classification of objects in the database. The schema name is used as the qualifier of SQL objects such as
tables, views, indexes, and triggers.
You define, or create, objects by executing SQL statements. This information summarizes some of the
naming conventions for the various objects that you can create. Also in this information, you will see
examples of the basic SQL statements and keywords that you can use to create objects in a Db2
database. (This information does not document the complete SQL syntax.)
Tip: When you create Db2 objects (such as tables, table spaces, views, and indexes), you can precede
the object name with a qualifier to distinguish it from objects that other people create. (For example,
MYDB.TSPACE1 is a different table space than YOURDB.TSPACE1.) When you use a qualifier, avoid using
SYS as the first three characters. If you do not specify a qualifier, Db2 assigns a qualifier for the object.
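For example, qualified object creation might look like the following sketch. The database, table space, schema, and table names here are illustrative only, not from the sample tables:

```sql
-- Hypothetical names: the qualifier MYDB distinguishes this table space
-- from a TSPACE1 that someone else might create in another database.
CREATE TABLESPACE TSPACE1 IN MYDB;

-- MYID is an illustrative schema qualifier for the table; avoid
-- qualifiers that begin with SYS.
CREATE TABLE MYID.ACCOUNT
  (ACCTNO  CHAR(8)      NOT NULL,
   BALANCE DECIMAL(9,2) NOT NULL)
  IN MYDB.TSPACE1;
```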
Procedure
To model data:
1. Build critical user views.
a) Carefully examine a single business activity or function.
b) Develop a user view, which is the model or representation of critical information that the business
activity requires.
This initial stage of the data modeling process is highly interactive. Because data analysts cannot
fully understand all areas of the business that they are modeling, they work closely with the actual
One-to-one relationships
When you are doing logical database design, one-to-one relationships are bidirectional relationships,
which means that they are single-valued in both directions. For example, an employee has a single
resume; each resume belongs to only one person. The previous figure illustrates that a one-to-one
relationship exists between the two entities. In this case, the relationship reflects the rules that an
employee can have only one resume and that a resume can belong to only one employee.
Many-to-many relationships
A many-to-many relationship is a relationship that is multivalued in both directions. The following figure
illustrates this kind of relationship. An employee can work on more than one project, and a project can
have more than one employee assigned.
Try to answer the following questions using the information in the Db2 sample tables (Introduction to Db2
for z/OS):
• What does Wing Lee work on?
• Who works on project number OP2012?
Both questions yield multiple answers. Wing Lee works on project numbers OP2011 and OP2012. The
employees who work on project number OP2012 are Ramlal Mehta and Wing Lee.
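In a relational design, a many-to-many relationship like this one is typically carried by an associative table whose key combines the keys of the two entities. The following sketch uses hypothetical names patterned on, but not identical to, the sample tables:

```sql
-- One row per (employee, project) assignment; names are illustrative.
-- In Db2, the primary key also requires a unique index on these columns.
CREATE TABLE EMP_PROJECT
  (EMPLOYEE_NUMBER CHAR(6) NOT NULL,
   PROJECT_NUMBER  CHAR(6) NOT NULL,
   PRIMARY KEY (EMPLOYEE_NUMBER, PROJECT_NUMBER));

-- "Who works on project number OP2012?" becomes a simple query:
SELECT EMPLOYEE_NUMBER
  FROM EMP_PROJECT
 WHERE PROJECT_NUMBER = 'OP2012';
```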
Entity attributes
When you define attributes for the entities, you generally work with the data administrator to decide on
names, data types, and appropriate values for the attributes.
Attribute names
Naming conventions for attributes help database designers ensure consistency within an organization.
Most organizations have naming guidelines. In addition to following these guidelines, data analysts also
base attribute definitions on class words.
A class word is a single word that indicates the nature of the data that the attribute represents.
The class word NUMBER indicates an attribute that identifies the number of an entity. Therefore, attribute
names that identify the numbers of entities should include the class word NUMBER. Some examples
are EMPLOYEE_NUMBER, PROJECT_NUMBER, and DEPARTMENT_NUMBER.
When an organization does not have well-defined guidelines for attribute names, data analysts try to
determine how the database designers have historically named attributes. Problems occur when multiple
individuals invent their own naming guidelines without consulting one another.
Examples
You might use the following data types for attributes of the EMPLOYEE entity:
• EMPLOYEE_NUMBER: CHAR(6)
• EMPLOYEE_LAST_NAME: VARCHAR(15)
• EMPLOYEE_HIRE_DATE: DATE
• EMPLOYEE_SALARY_AMOUNT: DECIMAL(9,2)
The data types that you choose are business definitions of the data type. During physical database design,
you might need to change data type definitions or use a subset of these data types. The database or
the host language might not support all of these definitions, or you might make a different choice for
performance reasons.
For example, you might need to represent monetary amounts, but Db2 and many host languages do not
have a data type MONEY. In the United States, a natural choice for the SQL data type in this situation is
DECIMAL(10,2) to represent dollars. But you might also consider the INTEGER data type for fast, efficient
performance.
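That trade-off might be sketched as follows; the table and column names are hypothetical:

```sql
-- DECIMAL(10,2) stores exact dollars-and-cents values.
CREATE TABLE PAYROLL
  (EMPLOYEE_NUMBER        CHAR(6)       NOT NULL,
   EMPLOYEE_SALARY_AMOUNT DECIMAL(10,2) NOT NULL);

-- An alternative design stores the amount in cents as INTEGER,
-- trading readability for fast, efficient arithmetic.
```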
Related concepts
Data types of columns (Introduction to Db2 for z/OS)
Related reference
CREATE TABLE (Db2 SQL)
SQL data type attributes (Db2 Programming for ODBC)
Domain
A domain describes the conditions that an attribute value must meet to be a valid value. Sometimes
the domain identifies a range of valid values. By defining the domain for a particular attribute, you apply
business rules to ensure that the data will make sense.
Example 1: A domain might state that a phone number attribute must be a 10-digit value that contains
only numbers. You would not want the phone number to be incomplete, nor would you want it to contain
alphabetic or special characters and thereby be invalid. You could choose to use either a numeric data
type or a character data type. However, the domain states the business rule that the value must be a
10-digit value that consists of numbers.
Example 2: A domain might state that a month attribute must be a 2-digit value from 01 to 12. Again, you
could choose to use datetime, character, or numeric data types for this value, but the domain demands
that the value must be in the range of 01 through 12. In this case, incorporating the month into a datetime
data type is probably the best choice. This decision should be reviewed again during physical database
design.
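Db2 has no separate domain object; one common way to enforce domain rules like these during physical database design is a table check constraint. The following is a sketch with hypothetical names:

```sql
CREATE TABLE CONTACT_INFO
  (PHONE_NUMBER CHAR(10) NOT NULL,
   CALL_MONTH   CHAR(2)  NOT NULL,
   -- Domain rule from Example 2: the month must be 01 through 12.
   CONSTRAINT MONTH_OK CHECK (CALL_MONTH BETWEEN '01' AND '12'));
```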
Null values
When you are designing attributes for your entities, you will sometimes find that an attribute does not
have a value for every instance of the entity. For example, you might want an attribute for a person's
middle name, but you can't require a value because some people have no middle name. For these
occasions, you can define the attribute so that it can contain null values.
A null value is a special indicator that represents the absence of a value. The value can be absent because
it is unknown, not yet supplied, or nonexistent. The DBMS treats the null value as an actual value, not as a
zero value, a blank, or an empty string.
Just as some attributes should be allowed to contain null values, other attributes should not contain null
values.
Example: For the EMPLOYEE entity, you might not want to allow the attribute EMPLOYEE_LAST_NAME to
contain a null value.
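In a table definition, that decision surfaces as the NOT NULL clause, as in this illustrative fragment:

```sql
-- EMPLOYEE_LAST_NAME rejects nulls; EMPLOYEE_MIDDLE_NAME, which has
-- no NOT NULL clause, accepts them. Names are illustrative.
CREATE TABLE EMPLOYEE
  (EMPLOYEE_NUMBER      CHAR(6)     NOT NULL,
   EMPLOYEE_LAST_NAME   VARCHAR(15) NOT NULL,
   EMPLOYEE_MIDDLE_NAME VARCHAR(15));
```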
Default values
In some cases, you may not want a given attribute to contain a null value, but you don't want to require
that the user or program always provide a value. In this case, a default value might be appropriate.
A default value is a value that applies to an attribute if no other valid value is available.
Example: Assume that you don't want the EMPLOYEE_HIRE_DATE attribute to contain null values and
that you don't want to require users to provide this data. If data about new employees is generally added
to the database on the employee's first day of employment, you could define a default value of the current
date.
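In Db2, that rule can be sketched with a WITH DEFAULT clause on the column definition (the names are illustrative):

```sql
-- If an INSERT omits EMPLOYEE_HIRE_DATE, Db2 supplies the current date.
CREATE TABLE NEW_EMPLOYEE
  (EMPLOYEE_NUMBER    CHAR(6) NOT NULL,
   EMPLOYEE_HIRE_DATE DATE    NOT NULL WITH DEFAULT CURRENT DATE);
```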
A relational entity satisfies the requirement of first normal form if every instance of an entity contains only
one value, never multiple repeating attributes. Repeating attributes, often called a repeating group, are
different attributes that are inherently the same. In an entity that satisfies the requirement of first normal
form, each attribute is independent and unique in its meaning and its name.
Example: Assume that an entity contains the following attributes:
EMPLOYEE_NUMBER
JANUARY_SALARY_AMOUNT
FEBRUARY_SALARY_AMOUNT
MARCH_SALARY_AMOUNT
This situation violates the requirement of first normal form, because JANUARY_SALARY_AMOUNT,
FEBRUARY_SALARY_AMOUNT, and MARCH_SALARY_AMOUNT are essentially the same attribute,
EMPLOYEE_MONTHLY_SALARY_AMOUNT.
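An entity in first normal form would carry one salary row per month instead of repeating columns, as in this sketch with hypothetical names and data types:

```sql
-- One row per employee per month replaces the three repeating columns.
CREATE TABLE EMPLOYEE_MONTHLY_SALARY
  (EMPLOYEE_NUMBER                CHAR(6)      NOT NULL,
   SALARY_MONTH                   CHAR(2)      NOT NULL,  -- '01' to '12'
   EMPLOYEE_MONTHLY_SALARY_AMOUNT DECIMAL(9,2) NOT NULL,
   PRIMARY KEY (EMPLOYEE_NUMBER, SALARY_MONTH));
```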
An entity is in second normal form if each attribute that is not in the primary key provides a fact that
depends on the entire key. A violation of the second normal form occurs when a nonprimary key attribute
is a fact about a subset of a composite key.
Example: An inventory entity records quantities of specific parts that are stored at particular warehouses.
The following figure shows the attributes of the inventory entity.
Here, the primary key consists of the PART and the WAREHOUSE attributes together. Because the
attribute WAREHOUSE_ADDRESS depends only on the value of WAREHOUSE, the entity violates the rule
for second normal form. This design causes several problems:
• Each instance for a part that this warehouse stores repeats the address of the warehouse.
• If the address of the warehouse changes, every instance referring to a part that is stored in that
warehouse must be updated.
• Because of the redundancy, the data might become inconsistent. Different instances could show
different addresses for the same warehouse.
• If at any time the warehouse has no stored parts, the address of the warehouse might not exist in any
instances in the entity.
To satisfy second normal form, the information in the previous figure would be in two entities, as the
following figure shows.
[Figure: two entities. The first keeps the composite key PART, WAREHOUSE; the second has the key
WAREHOUSE and the non-key attribute WAREHOUSE_ADDRESS.]
Third normal form
An entity is in third normal form if each nonprimary key attribute provides a fact that is independent of
other non-key attributes and depends only on the key. A violation of the third normal form occurs when a
nonprimary attribute is a fact about another non-key attribute.
Example: The first entity contains the attributes EMPLOYEE_NUMBER and DEPARTMENT_NUMBER.
Suppose that a program or user adds an attribute, DEPARTMENT_NAME, to the entity. The new attribute
depends on DEPARTMENT_NUMBER, whereas the primary key is on the EMPLOYEE_NUMBER attribute.
The entity now violates third normal form.
Changing the DEPARTMENT_NAME value based on the update of a single employee, David Brown,
does not change the DEPARTMENT_NAME value for other employees in that department. The updated
version of the entity, as the following figure shows, illustrates the resulting inconsistency. Additionally,
updating the DEPARTMENT_NAME in this table does not update it in any other table that might contain a
DEPARTMENT_NAME column.
Before the update:

EMPLOYEE_NUMBER  EMPLOYEE_FIRST_NAME  EMPLOYEE_LAST_NAME  DEPARTMENT_NUMBER  DEPARTMENT_NAME
000200           DAVID                BROWN               D11                MANUFACTURING
000320           RAMAL                MEHTA               E21                SOFTWARE
000220           JENIFER              LUTZ                D11                MANUFACTURING

After the update of DEPARTMENT_NAME for employee 000200:

EMPLOYEE_NUMBER  EMPLOYEE_FIRST_NAME  EMPLOYEE_LAST_NAME  DEPARTMENT_NUMBER  DEPARTMENT_NAME
000200           DAVID                BROWN               D11                INSTALLATION
000320           RAMAL                MEHTA               E21                SOFTWARE
000220           JENIFER              LUTZ                D11                MANUFACTURING

Figure 7. Results of an update in a table that violates the third normal form
You can normalize the entity by modifying the EMPLOYEE_DEPARTMENT entity and creating two new
entities: EMPLOYEE and DEPARTMENT. The following figure shows the new entities. The DEPARTMENT
entity contains attributes for DEPARTMENT_NUMBER and DEPARTMENT_NAME. Now, an update such as
changing a department name is much easier. You need to make the update only to the DEPARTMENT
entity.
Employee table
Key: EMPLOYEE_NUMBER

EMPLOYEE_NUMBER  EMPLOYEE_FIRST_NAME  EMPLOYEE_LAST_NAME
000200           DAVID                BROWN
000320           RAMAL                MEHTA
000220           JENIFER              LUTZ

Department table
Key: DEPARTMENT_NUMBER

DEPARTMENT_NUMBER  DEPARTMENT_NAME
D11                MANUFACTURING
E21                SOFTWARE

Employee_Department table
Key: DEPARTMENT_NUMBER, EMPLOYEE_NUMBER

DEPARTMENT_NUMBER  EMPLOYEE_NUMBER
D11                000200
D11                000220
E21                000320

Figure 8. Employee and department entities that satisfy the third normal form
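The normalized design might be implemented with DDL along the following lines. The entity and
attribute names follow the figure, but the data types are assumptions for illustration only:

```sql
-- Hypothetical DDL for the normalized entities; column data types
-- are assumed, not taken from the original figures.
CREATE TABLE DEPARTMENT
  (DEPARTMENT_NUMBER CHAR(3)     NOT NULL PRIMARY KEY,
   DEPARTMENT_NAME   VARCHAR(36) NOT NULL);

CREATE TABLE EMPLOYEE
  (EMPLOYEE_NUMBER     CHAR(6)     NOT NULL PRIMARY KEY,
   EMPLOYEE_FIRST_NAME VARCHAR(12) NOT NULL,
   EMPLOYEE_LAST_NAME  VARCHAR(15) NOT NULL);

CREATE TABLE EMPLOYEE_DEPARTMENT
  (DEPARTMENT_NUMBER CHAR(3) NOT NULL REFERENCES DEPARTMENT,
   EMPLOYEE_NUMBER   CHAR(6) NOT NULL REFERENCES EMPLOYEE,
   PRIMARY KEY (DEPARTMENT_NUMBER, EMPLOYEE_NUMBER));
```

With this design, changing a department name is a single-row update of the DEPARTMENT table.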
Fourth normal form
An entity is in fourth normal form if no instance contains two or more independent, multivalued facts
about an entity.
Example: Consider the EMPLOYEE entity. Each instance of EMPLOYEE could have both SKILL_CODE and
LANGUAGE_CODE. An employee can have several skills and know several languages. Two relationships
exist, one between employees and skills, and one between employees and languages. An entity is not in
fourth normal form if it represents both relationships, as the previous figure shows.
Instead, you can avoid this violation by creating two entities that represent both relationships, as the
following figure shows.
[Figure: two entities, one associating EMPLOYEE_NUMBER with SKILL_CODE and one associating
EMPLOYEE_NUMBER with LANGUAGE_CODE, each with a composite key.]
If, however, the facts are interdependent (that is, the employee applies certain languages only to certain
skills), you should not split the entity.
You can put any data into fourth normal form. A good rule to follow when doing logical database design is
to arrange all the data in entities that are in fourth normal form. Then decide whether the result gives you
an acceptable level of performance. If the performance is not acceptable, denormalizing your design is a
good approach to improving performance.
Related concepts
Practical examples of data modeling
To better understand the key activities that are necessary for creating valid data models, investigate one
or more real-life data modeling scenarios.
Denormalization of tables
During physical design, analysts transform the entities into tables and the attributes into columns.
Resolving many-to-many relationships is a particularly important activity because doing so helps maintain
clarity and integrity in your physical database design. To resolve many-to-many relationships, you
introduce associative tables, which are intermediate tables that you use to tie, or associate, two tables to
each other.
Example: Employees work on many projects. Projects have many employees. In the logical database
design, you show this relationship as a many-to-many relationship between project and employee. To
resolve this relationship, you create a new associative table, EMPLOYEE_PROJECT. For each combination
of employee and project, the EMPLOYEE_PROJECT table contains a corresponding row. The primary key
for the table would consist of the employee number (EMPNO) and the project number (PROJNO).
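A sketch of such an associative table in DDL, with assumed data types:

```sql
-- Hypothetical associative table that resolves the many-to-many
-- relationship between employees and projects; data types are assumed.
CREATE TABLE EMPLOYEE_PROJECT
  (EMPNO  CHAR(6) NOT NULL,
   PROJNO CHAR(6) NOT NULL,
   PRIMARY KEY (EMPNO, PROJNO));
```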
Another decision that you must make relates to the use of repeating groups.
Example: Assume that a heavily used transaction requires the number of wires that are sold by month in
a given year. Performance factors might justify changing a table so that it violates the rule of first normal
form by storing repeating groups. In this case, the repeating group would be: MONTH, WIRE. The table
would contain a row for the number of sold wires for each month (January wires, February wires, March
wires, and so on).
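One possible shape for such a denormalized table is sketched below. The table and column names are
hypothetical, chosen only to illustrate a repeating group that violates first normal form:

```sql
-- Hypothetical denormalized table: one row per year, with a repeating
-- group of monthly wire-sales columns; names and types are assumed.
CREATE TABLE WIRE_SALES
  (SALES_YEAR   SMALLINT NOT NULL PRIMARY KEY,
   JAN_WIRE_QTY INTEGER,
   FEB_WIRE_QTY INTEGER,
   MAR_WIRE_QTY INTEGER
   -- ...one column for each remaining month
  );
```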
Recommendation: If you decide to denormalize your data, document your denormalization thoroughly.
Describe, in detail, the logic behind the denormalization and the steps that you took. Then, if your
organization ever needs to normalize the data in the future, an accurate record is available for those who
must do the work.
Related concepts
Entity normalization
After you define entities and decide on attributes for the entities, you normalize entities to avoid
redundancy.
Database design with denormalization (Introduction to Db2 for z/OS)
Procedure
To maintain archive data:
1. Create an archive table.
2. Turn archiving on and off as needed by using the SYSIBMADM.MOVE_TO_ARCHIVE built-in global
variable, as described in “Creating an archive table” on page 99.
When archiving is turned on, you cannot update the archive-enabled table.
3. For queries against the archive-enabled table, set them to include or exclude archive data as needed
by using the SYSIBMADM.GET_ARCHIVE built-in global variable, as described in Archive-enabled
tables and archive tables (Introduction to Db2 for z/OS).
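For example, assuming an archive-enabled table named TRANS (a hypothetical name), the sequence
might look like the following sketch:

```sql
-- Enable archiving: subsequent DELETE statements against the
-- archive-enabled table move the deleted rows to its archive table.
SET SYSIBMADM.MOVE_TO_ARCHIVE = 'Y';
DELETE FROM TRANS WHERE TRANS_DATE < '2020-01-01';

-- Include archived rows in subsequent queries.
SET SYSIBMADM.GET_ARCHIVE = 'Y';
SELECT COUNT(*) FROM TRANS;
```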
Related reference
GET_ARCHIVE (Db2 SQL)
MOVE_TO_ARCHIVE (Db2 SQL)
Tip: GUPI You can simplify your database implementation by letting Db2 implicitly create certain objects
for you. For example, if you omit the IN clause in a CREATE TABLE statement, Db2 creates a table space
and database for the table, and creates other required objects such as:
• The primary key enforcing index and the unique key index
• The ROWID index (if the ROWID column is defined as GENERATED BY DEFAULT)
• LOB table spaces and auxiliary tables and indexes for LOB columns GUPI
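For instance, a statement of the following form, with no IN clause, causes Db2 to create the database
and table space implicitly. The table and column names here are hypothetical:

```sql
-- No IN clause: Db2 implicitly creates a database and a table space,
-- plus the enforcing index for the primary key.
CREATE TABLE ORDER_ITEM
  (ORDER_NO CHAR(8)  NOT NULL,
   ITEM_NO  SMALLINT NOT NULL,
   QTY      INTEGER,
   PRIMARY KEY (ORDER_NO, ITEM_NO));
```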
Related concepts
Altering your database design
After using a relational database for a while, you might want to change some aspects of its design.
Related tasks
Designing databases for performance (Db2 Performance)
Compressing your data (Db2 Performance)
Related reference
CREATE TABLE (Db2 SQL)
Procedure
To create a database, use one of the following approaches:
• Issue a CREATE DATABASE statement.
Example
GUPI The following example CREATE DATABASE statement creates a database named MYDB:
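The statement itself is not reproduced here; based on the surrounding description, it presumably
resembles the following sketch (the storage group and buffer pool names are assumptions):

```sql
CREATE DATABASE MYDB
  STOGROUP MYSTOGRP
  BUFFERPOOL BP8K4
  INDEXBP    BP4;
```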
The STOGROUP, BUFFERPOOL, and INDEXBP clauses that this example shows establish default values.
You can override these values on the definitions of the table space or index space. GUPI
Related concepts
Db2 databases (Introduction to Db2 for z/OS)
Related tasks
Dropping Db2 databases
You can drop a Db2 database by removing the database at the current server. When you drop a database,
all of its table spaces, tables, index spaces, and indexes are dropped, too.
Related reference
CREATE DATABASE (Db2 SQL)
Procedure
Issue the DROP DATABASE statement.
Related concepts
Db2 databases (Introduction to Db2 for z/OS)
Related tasks
Creating Db2 databases
You can create a Db2 database by defining a database at the current server.
Related reference
DROP (Db2 SQL)
Procedure
GUPI To create a Db2 storage group:
1. Issue the SQL statement CREATE STOGROUP.
2. Specify the storage group name.
Db2 storage group names are unqualified identifiers of up to 128 characters. A Db2 storage group
name cannot be the same as any other storage group name in the Db2 catalog. GUPI
Results
After you define a storage group, Db2 stores information about it in the Db2 catalog. (This catalog is not
the same as the integrated catalog facility catalog that describes Db2 VSAM data sets). The catalog table
SYSIBM.SYSSTOGROUP has a row for each storage group, and SYSIBM.SYSVOLUMES has a row for each
volume. With the proper authorization, you can retrieve the catalog information about Db2 storage groups
by using SQL statements.
Related reference
CREATE STOGROUP (Db2 SQL)
Procedure
To enable SMS to control Db2 storage groups:
1. Issue a CREATE STOGROUP SQL statement to define a Db2 storage group.
You can specify SMS classes when you create a storage group.
2. Indicate how you want SMS to control the allocation of volumes in one of the following ways:
• Specify an asterisk (*) for the VOLUMES attribute.
• Specify the DATACLAS, MGMTCLAS, or STORCLAS keywords.
What to do next
If you use Db2 to allocate data to specific volumes, you must assign an SMS storage class with
guaranteed space, and you must manage free space for each volume to prevent failures during the initial
allocation and extension. Using guaranteed space reduces the benefits of SMS allocation, requires more
time for space management, and can result in more space shortages. You should only use guaranteed
space when space needs are relatively small and do not change.
Related tasks
Migrating to DFSMShsm
If you decide to use DFSMShsm for your Db2 data sets, you should develop a migration plan with your
system administrator.
Related reference
CREATE STOGROUP (Db2 SQL)
Procedure
Issue a CREATE TABLESPACE statement with the DEFINE NO clause.
The DEFINE NO clause is allowed on some Db2 objects, such as explicitly created LOB table spaces,
auxiliary indexes, and XML indexes. Additionally, the IMPDSDEF subsystem parameter specifies whether
Db2 defines the underlying data set for implicitly created table spaces and index spaces. When you
specify this subsystem parameter as NO, the data set is not defined when the table space or index space
is implicitly created.
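For example, the following sketch defers data set allocation for a table space. The database, table
space, and storage group names are hypothetical:

```sql
-- Hypothetical: the table space is defined in the catalog, but its
-- underlying VSAM data sets are not allocated until first use.
CREATE TABLESPACE MYTS IN MYDB
  USING STOGROUP MYSTOGRP
  DEFINE NO;
```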
Results
The table space is created, but Db2 does not allocate (that is, define) the associated data sets until a
row is inserted or loaded into a table in that table space. The Db2 catalog table SYSIBM.SYSTABLEPART
contains a record of the created table space and an indication that the data sets are not yet allocated.
GUPI
Nonpartitioned spaces
For a nonpartitioned table space or a nonpartitioned index space, Db2 defines the first piece of the page
set starting with a primary allocation space, and extends that piece by using secondary allocation spaces.
When the end of the first piece is reached, Db2 defines a new piece (which is a new data set) and extends
that new piece starting with a primary allocation space.
Exception: When a table space requires a new piece, the primary allocation quantity of the new piece is
determined as follows:
The primary quantity is the maximum of the following values:
• The quantity that is calculated through sliding scale methodology
• The primary quantity from rule 1
• The specified SECQTY value
Extension failures
If a data set uses all possible extents, Db2 cannot extend that data set. For a partitioned page set, the
extension fails only for the particular partition that Db2 is trying to extend. For nonpartitioned page sets,
Db2 cannot extend to a new data set piece, which means that the extension for the entire page set fails.
To avoid extension failures, allow Db2 to use the default value for primary space allocation and to use a
sliding scale algorithm for secondary extent allocations.
Db2 might not be able to extend a data set if the data set is in an SMS data class that constrains
the number of extents to less than the number that is required to reach full size. To prevent extension
failures, make sure that the SMS data class setting for the number of allowed extents is large enough to
accommodate 128 GB and 256 GB data sets.
Related concepts
Primary space allocation
Db2 uses default values for primary space allocation of Db2-managed data sets.
Secondary space allocation
Db2 can calculate the amount of space to allocate to secondary extents by using a sliding scale algorithm.
Related tasks
Avoiding excessively small extents (Db2 Performance)
GUPI Db2 uses a sliding scale for secondary extent allocations of table spaces and indexes when:
• You do not specify a value for the SECQTY option of a CREATE TABLESPACE or CREATE INDEX
statement
• You specify a value of -1 for the SECQTY option of an ALTER TABLESPACE or ALTER INDEX statement.
Otherwise, Db2 always uses a SECQTY value for secondary extent allocations, if one is explicitly
specified. GUPI
Exception: For those situations in which the calculated secondary quantity value is not large enough,
you can specify a larger value for the SECQTY option when creating or altering table spaces and indexes.
However, if you specify a value for the SECQTY option, Db2 uses the value of the SECQTY option to
allocate a secondary extent only if the value of the option is larger than the value that is derived from the
sliding scale algorithm. The calculation that Db2 uses to make this determination is:
Actual secondary extent size = max (ss_extent, min (SECQTY, MaxAlloc))
In this calculation, ss_extent represents the value that is derived from the sliding scale algorithm, and
MaxAlloc is the maximum allocation in cylinders, which depends on the maximum potential data set size,
as described in Table 3 on page 26. This approach allows you to reach the maximum page set size faster.
Otherwise, Db2 uses the value that is derived from the sliding scale algorithm.
If you do not provide a value for the secondary space allocation quantity, Db2 uses the following
calculation to determine a secondary space allocation value.
Actual secondary extent size = max (ss_extent, min ( 0.1 × PRIQTY, MaxAlloc))
That is, Db2 uses the following process to determine the secondary space allocation quantity:
1. Db2 first determines the lesser of the following two values:
• 10% of the primary space allocation (PRIQTY) value.
• The maximum allocation in cylinders (MaxAlloc), as described in Table 3 on page 26.
2. Db2 then compares the result of the preceding step to the value determined by the sliding scale
algorithm (ss_extent) and uses the greater of these two values for the actual secondary space
allocation quantity.
Secondary space allocation quantities do not exceed DSSIZE or PIECESIZE clause values.
If you do not want Db2 to extend a data set, you can specify a value of 0 for the SECQTY option. Specifying
0 is a useful way to prevent DSNDB07 work files from growing out of proportion.
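For example, an ALTER statement of the following form prevents further extension. The work file
table space name shown is hypothetical:

```sql
-- Hypothetical: prevent Db2 from extending a DSNDB07 work file
-- table space beyond its primary allocation.
ALTER TABLESPACE DSNDB07.DSN4K01 SECQTY 0;
```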
Related concepts
How Db2 extends data sets
Migrating to DFSMShsm
If you decide to use DFSMShsm for your Db2 data sets, you should develop a migration plan with your
system administrator.
Procedure
To enable DFSMS to manage your Db2 storage groups:
1. Issue either a CREATE STOGROUP or ALTER STOGROUP SQL statement.
2. Specify one or more asterisks as volume-ID in the VOLUMES option, and optionally, specify the SMS
class options.
The following example causes all database data set allocations and definitions to use nonspecific
selection through DFSMS filtering services.
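The statement is not shown in this excerpt; it presumably takes a form like the following, where the
storage group and catalog alias names are assumptions:

```sql
-- Hypothetical: an asterisk for VOLUMES lets DFSMS select the volumes.
CREATE STOGROUP SGSMS1
  VOLUMES ('*')
  VCAT DSNCAT;
```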
3. Define the SMS classes for your table space data sets and index data sets.
4. Code the SMS automatic class selection (ACS) routines to assign indexes to one SMS storage class and
to assign table spaces to a different SMS storage class.
Example
GUPI The following example shows how to create a storage group in an SMS-managed subsystem:
GUPI
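The example statement is not reproduced here; it presumably resembles the following sketch, in
which the storage group and SMS class names are assumptions:

```sql
-- Hypothetical: storage group whose allocation is controlled through
-- SMS data, management, and storage classes (class names are assumed).
CREATE STOGROUP SGSMS2
  DATACLAS SMSDCLAS
  MGMTCLAS SMSMCLAS
  STORCLAS SMSSCLAS;
```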
DSN$locn-name$cp-type
The variables that are used in this naming convention are as follows:
locn-name
The Db2 location name.
cp-type
The copy pool type. DB indicates the database copy pool, and LG indicates the log copy pool.
The Db2 BACKUP SYSTEM and RESTORE SYSTEM utilities invoke DFSMShsm to back up and restore the
copy pools. DFSMShsm interacts with DFSMSsms to determine the volumes that belong to a given copy
pool so that the volume-level backup and restore functions can be invoked.
Tip: The BACKUP SYSTEM utility can dump the copy pools to tape automatically if you specify the options
that enable that function.
Related concepts
How archive logs are recalled by DFSMShsm
DFSMShsm can automatically migrate and recall archive log data sets and image copy data sets. If Db2
needs an archive log data set or an image copy data set that DFSMShsm has migrated, a recall begins
automatically and Db2 waits for the recall to complete before continuing.
The RECOVER utility and the DFSMSdss RESTORE command
The RECOVER utility can run the DFSMSdss RESTORE command, which generally uses extents that are
larger than the primary and secondary space allocation values of a data set.
Incremental system-level backups
You can use the BACKUP SYSTEM utility to take incremental FlashCopy backups of the data of a non-data
sharing Db2 subsystem or a Db2 data sharing group. All of the Db2 data sets must reside on volumes that
are managed by DFSMSsms.
Related tasks
Migrating to DFSMShsm
If you decide to use DFSMShsm for your Db2 data sets, you should develop a migration plan with your
system administrator.
Managing DFSMShsm default settings when using the BACKUP SYSTEM, RESTORE SYSTEM, and RECOVER
utilities
Procedure
To define your own user-managed VSAM data sets, complete the following steps:
1. Issue a DEFINE CLUSTER command to create the data set and specify the following attributes:
catname.DSNDBx.dbname.psname.y0001.znnn
catalog-name
The catalog name or alias.
The data sets are VSAM linear data sets cataloged in the integrated catalog facility catalog
that catalog-name identifies. For more information about catalog-name values, see Naming
conventions (Db2 SQL).
Use the same name or alias here as in the USING VCAT clause of the CREATE TABLESPACE and
CREATE INDEX statements.
x
C (for VSAM clusters) or D (for VSAM data components).
dbname
Db2 database name. If the data set is for a table space, dbname must be the name given in the
CREATE TABLESPACE statement. If the data set is for an index, dbname must be the name of
the database containing the base table. If you are using the default database, dbname must be
DSNDB04.
psname
Table space name or index name. This name must be unique within the database.
You use this name on the CREATE TABLESPACE or CREATE INDEX statement. (You can use
a name longer than eight characters on the CREATE INDEX statement, but the first eight
characters of that name must be the same as in the psname for that data set.)
y0001
Instance qualifier for the data set.
If you plan to run any of the following utilities, define two data sets, one data set with a value of
I for y, and one with a value of J for y:
• LOAD REPLACE SHRLEVEL REFERENCE
• REORG with SHRLEVEL CHANGE or SHRLEVEL REFERENCE
• CHECK DATA with SHRLEVEL REFERENCE
• CHECK INDEX with SHRLEVEL REFERENCE
• CHECK LOB with SHRLEVEL REFERENCE
Otherwise, define one data set for the table space or index with a value of I for y.
znnn
Data set number. The first digit z of the data set number is represented by the letter A, B, C, D,
or E, which corresponds to the value 0, 1, 2, 3, or 4 as the first digit of the partition number.
For partitioned table spaces, if the partition number is less than 1000, the data set number is
Annn in the data set name (for example, A999 represents partition 999). For partitions 1000
to 1999, the data set number is Bnnn (for example, B000 represents partition 1000). For
partitions 2000 to 2999, the data set number is Cnnn. For partitions 3000 to 3999, the data set
number is Dnnn. For partitions 4000 up to a maximum of 4096, the data set number is Ennn.
The naming convention for data sets that you define for a partitioned index is the same as the
naming convention for other partitioned objects.
For simple or segmented (non-UTS) table spaces, the number is 001 (preceded by A) for the
first data set. When little space is available, Db2 issues a warning message. If the size of the
data set for a simple or a segmented (non-UTS) table space approaches the maximum limit,
define another data set with the same name as the first data set and the number 002. The next
data set will be 003, and so on.
DEFINE CLUSTER -
(NAME(DSNCAT.DSNDBC.DSNDB06.SYSUSER.I0001.A001) -
LINEAR -
REUSE -
VOLUMES(DSNV01) -
KILOBYTES(40 40) -
SHAREOPTIONS(3 3) ) -
DATA -
(NAME(DSNCAT.DSNDBD.DSNDB06.SYSUSER.I0001.A001)) -
CATALOG(DSNCAT)
The DEFINE CLUSTER command has many optional parameters that do not apply when Db2 uses
the data set. If you use the parameters SPANNED, EXCEPTIONEXIT, BUFFERSPACE, or WRITECHECK,
VSAM applies them to your data set, but Db2 ignores them when it accesses the data set.
The value of the OWNER parameter for clusters that are defined for storage groups is the first SYSADM
authorization ID specified at installation.
2. With user-managed data sets, you must pre-allocate shadow data sets before you can run the
following Db2 utilities against the table space:
• CHECK DATA with SHRLEVEL CHANGE
• CHECK INDEX with SHRLEVEL CHANGE
• CHECK LOB with SHRLEVEL CHANGE
• REORG INDEX utility with SHRLEVEL REFERENCE or SHRLEVEL CHANGE
• REORG TABLESPACE with SHRLEVEL CHANGE or SHRLEVEL REFERENCE
For example, you can specify the MODEL option for the DEFINE CLUSTER command so that the
shadow is created like the original data set, as shown in the following example code.
DEFINE CLUSTER -
(NAME('DSNCAT.DSNDBC.DSNDB06.SYSUSER.x0001.A001') -
MODEL('DSNCAT.DSNDBC.DSNDB06.SYSUSER.y0001.A001')) -
DATA -
(NAME('DSNCAT.DSNDBD.DSNDB06.SYSUSER.x0001.A001') -
MODEL('DSNCAT.DSNDBD.DSNDB06.SYSUSER.y0001.A001'))
In the example, the instance qualifiers x and y are distinct and are equal to either I or J. You can
query the Db2 catalog for the database and table space to determine the correct instance qualifier
to use.
What to do next
• Before the current volume runs out of space, you must extend the data set. See “Extending user-
managed data sets” on page 37.
• When you drop indexes or table spaces that you defined your data sets for, you must also delete the
data sets. See “Deleting user-managed data sets” on page 37.
Procedure
Issue the Access Method Services commands ALTER ADDVOLUMES or ALTER REMOVEVOLUMES for
candidate volumes.
Related information
ALTER command (DFSMS Access Method Services for Catalogs)
Procedure
Issue the DELETE CLUSTER command for candidate volumes.
Related information
DELETE command (DFSMS Access Method Services for Catalogs)
[Figure: an example of how Db2 storage groups relate to databases and volumes. Tables A1 and A2 and
their indexes occupy data sets on disk volumes of one storage group; database B contains partitioned
table space 2, whose four partitions and partitioning index occupy data sets on disk volumes 1-3 of
storage group G2.]
To create a Db2 storage group, use the SQL statement CREATE STOGROUP. Use the VOLUMES('*')
clause to let SMS select the volumes, and optionally specify the SMS data class (DATACLAS), SMS
management class (MGMTCLAS), and SMS storage class (STORCLAS) for the Db2 storage group.
After you define a storage group, Db2 stores information about it in the Db2 catalog. The catalog table
SYSIBM.SYSSTOGROUP has a row for each storage group, and SYSIBM.SYSVOLUMES has a row for each
volume in the group.
The process of installing Db2 includes the definition of a default storage group, SYSDEFLT. If you have
authorization, you can define tables, indexes, table spaces, and databases without explicitly specifying
a storage group; Db2 uses SYSDEFLT to allocate the necessary auxiliary storage. Db2 stores information
about SYSDEFLT and all other storage groups in the catalog tables SYSIBM.SYSSTOGROUP and
SYSIBM.SYSVOLUMES.
Recommendation: Use storage groups whenever you can, either explicitly or implicitly (by using the
default storage group). In some cases, organizations need to maintain closer control over the physical
storage of tables and indexes. These organizations choose to manage their own user-defined data sets
rather than using storage groups. Because this process is complex, this information does not describe the
details.
Example
GUPI
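The statement is not reproduced in this excerpt; from the description that follows, it is presumably:

```sql
CREATE STOGROUP MYSTOGRP
  VOLUMES ('*')
  VCAT ALIASICF;
```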
This statement creates storage group MYSTOGRP. The asterisk (*) on the VOLUMES clause indicates that
SMS is to manage your storage group. The VCAT catalog-name clause identifies ALIASICF as the name or
alias of the catalog of the integrated catalog facility that the storage group is to use. The catalog of the
integrated catalog facility stores entries for all data sets that Db2 creates on behalf of a storage group.
The data sets are VSAM linear data sets cataloged in the integrated catalog facility catalog that catalog-
name identifies. For more information about catalog-name values, see Naming conventions (Db2 SQL).
GUPI
Procedure
Issue a CREATE INDEX statement.
Optionally, for indexes that are not on Db2 catalog tables, include the USING clause to specify whether
you want Db2-managed or user-managed data sets. For Db2-managed data sets, you can also specify
the primary and secondary space allocation parameters for the index or partition in the USING clause. If
you do not specify USING, Db2 assigns the index data sets to the default storage groups with the default
space attributes.
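A sketch of such a statement follows; the object names and space quantities are hypothetical:

```sql
-- Hypothetical: index with Db2-managed data sets and explicit primary
-- and secondary space allocations (PRIQTY and SECQTY, in kilobytes).
CREATE INDEX MYIX
  ON MYTAB (ACCT_NO)
  USING STOGROUP MYSTOGRP
    PRIQTY 512
    SECQTY 64;
```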
For indexes on Db2 catalog tables, Db2 defines and manages the index data sets. The data sets are
defined in the same SMS environment that is used for the catalog data sets with default space attributes.
If you specify the USING clause for indexes on the catalog, Db2 ignores that clause.
GUPI
Results
Information about space allocation for the index is stored in the Db2 catalog table
SYSIBM.SYSINDEXPART. Other information about the index is in the SYSIBM.SYSINDEXES table.
Related reference
CREATE INDEX (Db2 SQL)
Procedure
To create EA-enabled page sets:
1. Use SMS to manage the data sets that are associated with the EA-enabled page sets.
2. Associate the data sets with a data class (an SMS construct) that specifies the extended format and
extended addressability options.
To make this association between data sets and the data class, use an automatic class selection
(ACS) routine to assign the Db2 data sets to the relevant SMS data class. The ACS routine does the
assignment based on the data set name. No performance penalty occurs for having non-EA-enabled
Db2 page sets assigned to this data class, too, if you would rather not have two separate data classes
for Db2.
For user-managed data sets, you can use ACS routines or specify the appropriate data class on the
DEFINE CLUSTER command when you create the data set.
3. Create the partitioned or LOB table space with a DSSIZE of 8 GB or greater. The partitioning index for
the partitioned table space takes on the EA-enabled attribute from its associated table space.
After a page set is created, you cannot use the ALTER TABLESPACE statement to change the DSSIZE.
You must drop and re-create the table space.
Also, you cannot change the data sets of the page set to turn off the extended addressability or
extended format attributes. If someone modifies the data class to turn off the extended addressability
or extended format attributes, Db2 issues an error message the next time that it opens the page set.
Related tasks
Creating table spaces explicitly
Db2 can create table spaces for you. However, you might also create table spaces explicitly by issuing
CREATE TABLESPACE statements if you manage your own data sets, among other reasons.
Notes:
1. FL 504 Non-UTS table spaces for base tables are deprecated. CREATE TABLESPACE statements that
run at application compatibility level V12R1M504 or higher always create a partition-by-growth or
partition-by-range table space, and CREATE TABLE statements that specify a non-UTS table space
(including existing multi-table segmented table spaces) return an error. However, you can use a lower
application compatibility level to create table spaces of the deprecated types if needed, such as for
recovery situations. For instructions, see “Creating non-UTS table spaces (deprecated)” on page 63.
2. Db2 12 does not support creating simple table spaces. Existing simple table spaces remain supported,
but they are likely to be unsupported in the future.
Db2 manages PBG table spaces automatically as data grows, by automatically adding a new partition
when more space is needed to satisfy an insert operation.
PBG table spaces are best used for small or medium sized tables, especially when a table does not have
a suitable partitioning key. Partition-by-growth table spaces can grow up to 128 TB, depending on the
buffer pool page size used, and the MAXPARTITIONS and DSSIZE values specified when the table space
is created.
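For example, a PBG table space might be created explicitly as follows; the names and sizing values
are hypothetical:

```sql
-- Hypothetical: partition-by-growth table space. Db2 adds partitions
-- as needed, up to MAXPARTITIONS, each up to DSSIZE in size.
CREATE TABLESPACE PBGTS IN MYDB
  MAXPARTITIONS 256
  DSSIZE 4 G
  BUFFERPOOL BP2;
```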
Any index created on a table in a PBG table space must be a non-partitioned index. That is, partitioned
indexes, including partitioning indexes and data-partitioned secondary indexes (DPSIs), are not supported
on tables in PBG table spaces. For more information, see “Indexes on partitioned tables” on page 118.
Tip: PBG table spaces are best used for small to medium-sized tables. If you expect a table to grow much
larger than 64 GB, consider using a partition-by-range (PBR) table space instead.
When a table in a PBG table space grows too large, several drawbacks can begin to arise, including the
following issues:
• Insert and query performance degradation, which is perhaps the most important factor suggesting that
conversion is required. Such performance degradation can have many causes, but for large tables in
PBG table spaces, the size of the table space is often one of the major causes.
• Difficulty regaining clustering of the data (which requires a REORG of the entire table space).
• Problems associated with very large non-partitioned indexes, because partitioned (partitioning and
DPSI) indexes are not supported for tables in PBG table spaces.
• Lack of partition parallelism support for utilities.
• Limited support for partition-level utility operations.
If you encounter these issues, consider using partition-by-range (PBR) table spaces instead.
Tip: To use a PBR table space for a table without a naturally suitable partitioning scheme, consider
creating the table with an implicitly hidden ROWID column in the partitioning key. A ROWID column
in the partitioning key guarantees a very even distribution of data across the partitions, and an
implicitly hidden ROWID column can also be transparent to applications.
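A sketch of this approach follows. All names are hypothetical, and the hexadecimal limit keys are
illustrative only:

```sql
-- Hypothetical: PBR table partitioned on an implicitly hidden ROWID
-- column, which spreads rows evenly across the four partitions.
CREATE TABLE TRANS_HIST
  (TRANS_ID   BIGINT NOT NULL,
   TRANS_DATA VARCHAR(200),
   PART_KEY   ROWID NOT NULL GENERATED ALWAYS IMPLICITLY HIDDEN)
  PARTITION BY (PART_KEY)
   (PARTITION 1 ENDING AT (X'3FFF'),
    PARTITION 2 ENDING AT (X'7FFF'),
    PARTITION 3 ENDING AT (X'BFFF'),
    PARTITION 4 ENDING AT (MAXVALUE));
```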
Table 7. Base table space types and approximate maximum size of LOB data for a LOB column

Base table space type                          Maximum (approximate) LOB data for each column
Segmented                                      16 TB
Partitioned, with NUMPARTS up to 64            1000 TB
Partitioned with DSSIZE, NUMPARTS up to 254    4000 TB
Partitioned with DSSIZE, NUMPARTS up to 4096   64000 TB
Recommendations:
• Consider defining long string columns as LOB columns when a row does not fit in a 32 KB page. Use the
following guidelines to determine if a LOB column is a good choice:
– Defining a long string column as a LOB column might be better if the following conditions are true:
- Table space scans are normally run on the table.
- The long string column is not referenced often.
- Removing the long string column from the base table is likely to improve the performance of table
space scans.
– LOBs are physically stored in another table space. Therefore, performance for inserting, updating,
and retrieving long strings might be better for non-LOB strings than for LOB strings.
• Consider specifying a separate buffer pool for large object data.
Related concepts
Creation of large objects (Introduction to Db2 for z/OS)
Related tasks
Choosing data page sizes for LOB data (Db2 Performance)
Choosing data page sizes (Db2 Performance)
Related reference
CREATE AUXILIARY TABLE (Db2 SQL)
CREATE LOB TABLESPACE (Db2 SQL)
When you use an INSERT statement, a MERGE statement, or the LOAD utility to insert records into a table,
records from the same table are stored in different segments. You can reorganize the table space to move
segments of the same table together.
If the table contains a LOB column and the SQLRULES bind option is STD, Db2 also creates the LOB table
space, the auxiliary table, and an auxiliary index. Db2 also creates all underlying XML objects. In this case,
Db2 uses the default storage group, SYSDEFLT.
Db2 also creates the following objects:
• Unique indexes for UNIQUE constraints.
• The primary key index.
• The ROWID index, if the ROWID column is defined as GENERATED BY DEFAULT.
Db2 stores the names and attributes of all table spaces in the SYSIBM.SYSTABLESPACE catalog table,
regardless of whether you define the table spaces explicitly or Db2 creates them implicitly.
Related concepts
Table space types and characteristics in Db2 for z/OS
Db2 supports several different types of table spaces. The partitioning method and segmented
organization are among the main characteristics that define the table space type.
Related reference
CREATE TABLE (Db2 SQL)
CREATE TABLESPACE (Db2 SQL)
SYSTABLESPACE catalog table (Db2 SQL)
Notes
1. The attribute is available if the SYSTABLESPACE value for the base table space is NULL. If the attribute
is not available, the XML table space inherits the attribute from the first logical partition of the base
table space.
2. The DSSIZE of the XML table space depends on the type of base table space:
The following table shows the DSSIZE for an implicitly created XML table space for a base table in
a partition-by-range (PBR) or range-partitioned (non-UTS) table space. For partition-by-range (PBR)
table spaces with relative page numbering, Db2 also rounds the DSSIZE up to the nearest power of
two before using the following table.
Table 8. Default DSSIZE for XML table spaces, given the base table space DSSIZE and buffer-pool page
size

Base table space DSSIZE   4 KB base page size   8 KB base page size   16 KB base page size   32 KB base page size
1-4 GB                    4 GB                  4 GB                  4 GB                   4 GB
5-8 GB                    32 GB                 16 GB                 16 GB                  16 GB
9-16 GB                   64 GB                 32 GB                 16 GB                  16 GB
17-32 GB                  64 GB                 64 GB                 32 GB                  16 GB
33-64 GB                  64 GB                 64 GB                 64 GB                  32 GB
65-128 GB                 256 GB                256 GB                128 GB                 64 GB
129-256 GB                256 GB                256 GB                256 GB                 128 GB
257-512 GB                512 GB                512 GB                512 GB                 256 GB
513-1024 GB               1024 GB               1024 GB               1024 GB                512 GB
3. If the base table resides in a segmented (non-UTS) or simple table space, the default value is used.
If an edit procedure is defined on the base table, the XML table inherits the edit procedure.
For more information, see Storage structure for XML data (Db2 Programming for XML).
Related reference
ALTER TABLE (Db2 SQL)
CREATE TABLE (Db2 SQL)
FL 504 You can create partition-by-range or partition-by-growth table spaces. For base tables, table
spaces of other types are deprecated, creating them is not supported, and support for such existing table
spaces might be removed in the future. For more information about the different types, see “Table space types
and characteristics in Db2 for z/OS” on page 43.
Procedure
Issue a CREATE TABLESPACE statement and specify the type of table space to create and other
attributes.
a) Specify the table space type to create.
For instructions for creating the supported types, see “Creating partition-by-range table spaces” on
page 59 and “Creating partition-by-growth table spaces” on page 61.
FL 504 The following table shows the resulting table space types.
Table 9. CREATE TABLESPACE clauses for specifying table space types, by application compatibility level.

Table space type: Partition-by-growth

APPLCOMPAT(V12R1M504) and higher — any of the following combinations:
• MAXPARTITIONS and NUMPARTS
• MAXPARTITIONS
• Omit both

APPLCOMPAT(V12R1M503) and lower — any of the following combinations:
• MAXPARTITIONS and NUMPARTS
• MAXPARTITIONS and SEGSIZE n (see “2.a.i” on page 56)
• MAXPARTITIONS
Notes:
i) Where n is a non-zero value. The DPSEGSZ subsystem parameter determines the default value. For more
information, see DEFAULT PARTITION SEGSIZE field (DPSEGSZ subsystem parameter) (Db2 Installation and
Migration).
ii) FL 504 Non-UTS table spaces for base tables are deprecated. CREATE TABLESPACE statements that run
at application compatibility level V12R1M504 or higher always create a partition-by-growth or partition-by-
range table space, and CREATE TABLE statements that specify a non-UTS table space (including existing
multi-table segmented table spaces) return an error. However, you can use a lower application compatibility
level to create table spaces of the deprecated types if needed, such as for recovery situations. For
instructions, see “Creating non-UTS table spaces (deprecated)” on page 63.
Examples
The following examples illustrate how to use SQL statements to create different types of table spaces.
Creating partition-by-growth table spaces
The following example CREATE TABLE statement implicitly creates a partition-by-growth table
space.
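A minimal sketch of such a statement (the table name and columns are illustrative, not from this guide):

```sql
-- Illustrative table; Db2 implicitly creates a PBG table space for it.
-- PARTITION BY SIZE EVERY 4 G caps each partition at 4 GB.
CREATE TABLE TB03
  (ORDER_ID   INTEGER NOT NULL,
   ORDER_DATA VARCHAR(200))
  PARTITION BY SIZE EVERY 4 G;
```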
The following SQL statement creates a partition-by-growth table space that has a maximum size of 2
GB for each partition, 4 pages per segment, and a maximum of 24 partitions for the table space.
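With those attributes, the statement might look like the following sketch (the table space, database, and
storage group names are assumptions):

```sql
-- DSSIZE 2 G: maximum size of each partition is 2 GB
-- SEGSIZE 4: 4 pages per segment
-- MAXPARTITIONS 24: the table space can grow to at most 24 partitions
CREATE TABLESPACE TEST01TS IN TEST01DB
  USING STOGROUP SG1
  DSSIZE 2 G
  SEGSIZE 4
  MAXPARTITIONS 24;
```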
What to do next
Generally, when you use the CREATE TABLESPACE statement with the USING STOGROUP clause, Db2
allocates data sets for the table space. However, if you also specify the DEFINE NO clause, you can defer
the allocation of data sets until data is inserted or loaded into a table in the table space.
Related tasks
Altering table spaces
Use the ALTER TABLESPACE statement to change the description of a table space at the current server.
Choosing data page sizes (Db2 Performance)
Choosing data page sizes for LOB data (Db2 Performance)
Creating EA-enabled table spaces and index spaces
DFSMS has an extended-addressability function, which is necessary to create data sets that are larger
than 4 GB. Therefore, the term for page sets that are enabled for extended addressability is EA-enabled.
Defining your own user-managed data sets
You can use Db2 storage groups to let Db2 manage the VSAM data sets. However, you can also define
your own user-managed data sets. With user-managed data sets, Db2 checks whether you have defined
your data sets correctly.
Related reference
CREATE TABLESPACE (Db2 SQL)
CREATE LOB TABLESPACE (Db2 SQL)
SYSTABLESPACE catalog table (Db2 SQL)
DEFAULT PARTITION SEGSIZE field (DPSEGSZ subsystem parameter) (Db2 Installation and Migration)
Procedure
To create a partition-by-range table space, use one of the following approaches:
• Issue a CREATE TABLE statement and specify the PARTITION BY RANGE clause.
The following example creates a table with partitions based on ranges of data values in the ACCTNUM
column, which resides in an implicitly created PBR table space:
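A statement along the following lines creates such a table (the table name, the other columns, and the
limit key values are illustrative):

```sql
-- Ranges of ACCTNUM values define the partitions; Db2 implicitly
-- creates a PBR table space because no IN clause names one.
CREATE TABLE ACCOUNTS
  (ACCTNUM INTEGER NOT NULL,
   BALANCE DECIMAL(15,2))
  PARTITION BY RANGE (ACCTNUM)
   (PARTITION 1 ENDING AT (199),
    PARTITION 2 ENDING AT (299),
    PARTITION 3 ENDING AT (399),
    PARTITION 4 ENDING AT (MAXVALUE));
```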
• Issue a CREATE TABLESPACE statement that specifies the NUMPARTS clause and omits the
MAXPARTITIONS clause.
The following example creates a partition-by-range table space, TS1, in database DSN8D12A using
storage group DSN8G120. The table space has 16 pages per segment and has 55 partitions.
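A sketch of that statement, using only the attributes named above:

```sql
-- SEGSIZE 16: 16 pages per segment; NUMPARTS 55: 55 partitions.
-- NUMPARTS without MAXPARTITIONS yields a partition-by-range table space.
CREATE TABLESPACE TS1 IN DSN8D12A
  USING STOGROUP DSN8G120
  SEGSIZE 16
  NUMPARTS 55;
```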
• To create a table without a naturally suitable partitioning scheme in a PBR table space, consider
creating the table with an implicitly hidden ROWID column in the partitioning key.
The ROWID column in the partitioning key guarantees a very even distribution of data across the
partitions. An implicitly-hidden ROWID column can also be transparent to applications.
CREATE TABLE TB02
  (CLIENT  VARCHAR(5) NOT NULL,  -- example columns; any column list works
   WI_ID   VARCHAR(2) NOT NULL,
   LENGTH1 SMALLINT,
   DATA1   VARCHAR(1000),
   ROW_ID  ROWID NOT NULL GENERATED ALWAYS IMPLICITLY HIDDEN)
PARTITION BY (ROW_ID)
(PARTITION 1 ENDING AT (X'0FFF'),
PARTITION 2 ENDING AT (X'1FFF'),
PARTITION 3 ENDING AT (X'2FFF'),
PARTITION 4 ENDING AT (X'3FFF'),
PARTITION 5 ENDING AT (X'4FFF'),
PARTITION 6 ENDING AT (X'5FFF'),
PARTITION 7 ENDING AT (X'6FFF'),
PARTITION 8 ENDING AT (X'7FFF'),
PARTITION 9 ENDING AT (X'8FFF'),
PARTITION 10 ENDING AT (X'9FFF'),
PARTITION 11 ENDING AT (X'AFFF'),
PARTITION 12 ENDING AT (X'BFFF'),
PARTITION 13 ENDING AT (X'CFFF'),
PARTITION 14 ENDING AT (X'DFFF'),
PARTITION 15 ENDING AT (X'EFFF'),
PARTITION 16 ENDING AT (MAXVALUE))
CCSID UNICODE;
Related concepts
Partition-by-range table spaces
A partition-by-range (PBR) table space is a universal table space (UTS) that has partitions based on ranges
of data values. It holds data pages for a single table and has segmented space management capabilities
within each partition. PBR table spaces can use absolute or relative page numbering.
ROWID data type (Introduction to Db2 for z/OS)
Related reference
CREATE TABLE (Db2 SQL)
CREATE TABLESPACE (Db2 SQL)
Db2 manages PBG table spaces automatically as data grows, by automatically adding a new partition
when more space is needed to satisfy an insert operation.
PBG table spaces are best used for small or medium sized tables, especially when a table does not have
a suitable partitioning key. Partition-by-growth table spaces can grow up to 128 TB, depending on the
buffer pool page size used, and the MAXPARTITIONS and DSSIZE values specified when the table space
is created.
Any index created on a table in a PBG table space must be a non-partitioned index. That is, partitioned
indexes, including partitioning indexes and data-partitioned secondary indexes (DPSIs), are not supported
on tables in PBG table spaces. For more information, see “Indexes on partitioned tables” on page 118.
Tip: PBG table spaces are best used for small to medium-sized tables. If you expect a table to grow much
larger than 64 GB, consider using a partition-by-range (PBR) table space instead.
Procedure
To create a partition-by-growth table space, use one of the following approaches:
• Issue a CREATE TABLE statement, and specify the PARTITION BY SIZE clause.
Db2 implicitly creates a partition-by-growth table space for the new table.
The following example creates a table with partitions based on data growth, which resides in an
implicitly created partition-by-growth table space:
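One possible form of that statement (the table and column names are illustrative):

```sql
-- Db2 adds partitions as the data grows; EVERY 8 G sets the
-- maximum size of each partition to 8 GB.
CREATE TABLE TB04
  (DOC_ID BIGINT NOT NULL,
   DOC    VARCHAR(4000))
  PARTITION BY SIZE EVERY 8 G;
```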
• Issue a CREATE TABLESPACE statement and specify any of the following combinations of the
MAXPARTITIONS and NUMPARTS clauses:
– Specify MAXPARTITIONS without NUMPARTS, for example:
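A sketch of this form (the names reuse the sample database and storage group from the earlier example
and are assumptions here):

```sql
-- MAXPARTITIONS without NUMPARTS creates a partition-by-growth
-- table space that can grow to at most 55 partitions.
CREATE TABLESPACE TS2 IN DSN8D12A
  USING STOGROUP DSN8G120
  MAXPARTITIONS 55;
```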
Procedure
1. FL 504 Change the application compatibility level of the CREATE statement to V12R1M503 or lower, by
using one of the following approaches:
• Issue the following statement first:
SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M503';
Then issue the CREATE statement as a dynamic SQL statement. For more information, see SET
CURRENT APPLICATION COMPATIBILITY (Db2 SQL).
• For remote applications, you can avoid application changes by using the DSN_PROFILE_TABLE to
set the special register value. For more information, see “Setting special registers by using profile
tables” on page 543.
• Bind the package that issues the CREATE statement with APPLCOMPAT(V12R1M503) or lower. For
more information, see APPLCOMPAT bind option (Db2 Commands).
2. Use one of the following specifications in the CREATE statement.
• To create a segmented (non-UTS) table space, issue a CREATE TABLESPACE statement and specify
a non-zero SEGSIZE value. Do not specify NUMPARTS or MAXPARTITIONS. For the syntax and
descriptions, see “segmented-non-uts-specification” on page 64.
• To create a partitioned (non-UTS) table space, issue a CREATE TABLESPACE statement and specify
NUMPARTS, omit MAXPARTITIONS, and specify SEGSIZE 0. For the syntax and descriptions, see
“partitioned-non-UTS-specification” on page 66.
• To create a new table in an existing segmented table space, specify the existing table space in the
IN clause of the CREATE TABLE statement. For the syntax and descriptions, see CREATE TABLE
(Db2 SQL).
Notes:
1. Where n is a non-zero value. The DPSEGSZ subsystem parameter determines the default value. For
more information, see DEFAULT PARTITION SEGSIZE field (DPSEGSZ subsystem parameter) (Db2
Installation and Migration).
2. Non-UTS table spaces for base tables are deprecated and likely to be unsupported in the future.
segmented-non-uts-specification
At application compatibility level V12R1M503 or lower, segmented-non-UTS-specification has the
following syntax and descriptions.
segmented-non-UTS-specification (deprecated)
Syntax: SEGSIZE integer (default: SEGSIZE 4)
If MAXPARTITIONS and NUMPARTS are both omitted, a segmented (non-UTS) table space is created.
It is not partitioned, and initially occupies one data set.
SEGSIZE integer
Specifies the size in pages for each segment of the table space. The integer value must be a
multiple of 4, in the range 4 - 64.
Because segmented (non-UTS) table spaces are not partitioned, the description of the using-block
specification also differs from the description for UTS.
using-block (for nonpartitioned table spaces)
For nonpartitioned table spaces, the USING clause indicates whether the data set for the table space
is defined by you or by Db2. If Db2 is to define the data set, the clause also gives space allocation
parameters and an erase rule.
If you omit USING, Db2 defines the data sets using the default storage group of the database and the
defaults for PRIQTY, SECQTY, and ERASE.
VCAT catalog-name
Specifies that the first data set for the table space is managed by the user, and following data sets,
if needed, are also managed by the user.
SECQTY integer
Specifies the minimum secondary space allocation for a Db2-managed data set. integer must
be a positive integer, 0, or -1. If you do not specify SECQTY, or specify SECQTY with a value of
-1, Db2 uses a formula to determine a value. For information on the actual value that is used
for secondary space allocation, whether you specify a value or not, see Rules for primary and
secondary space allocation (Introduction to Db2 for z/OS).
If you specify SECQTY, and do not specify a value of -1, Db2 specifies the secondary space
allocation to access method services using the smallest multiple of p KB not less than integer,
where p is the page size of the table space. The allocated space can be greater than the
amount of space requested by Db2. For example, it could be the smallest number of tracks
that will accommodate the request. To more closely estimate the actual amount of storage,
see DEFINE CLUSTER command (DFSMS Access Method Services for Catalogs).
ERASE
Indicates whether the Db2-managed data sets for the table space are to be erased when they
are deleted during the execution of a utility or an SQL statement that drops the table space.
NO
Does not erase the data sets. Operations involving data set deletion perform better than
with ERASE YES. However, the data is still accessible, though not through Db2. This is the
default.
YES
Erases the data sets. As a security measure, Db2 overwrites all data in the data sets with
zeros before they are deleted.
The components of the USING block are discussed separately for nonpartitioned table spaces and
partitioned table spaces. If you omit USING, the default storage group of the database must exist.
partitioned-non-UTS-specification
At application compatibility level V12R1M503 or lower, partitioned-non-UTS-specification has
the following syntax and descriptions.
The specification consists of the following clauses:
• PARTITION integer, optionally followed by using-block, free-block, and gbpcache-block clauses (see note 1)
• COMPRESS NO or COMPRESS YES
• TRACKMOD imptkmod-parameter, TRACKMOD YES, or TRACKMOD NO
• DSSIZE integer G (see note 2)
• SEGSIZE 0 (see note 3)
Notes:
1 Group multiple PARTITION clauses. Other clauses must not be specified more than one time.
2 Specify a power-of-two integer in the range 1 - 256, or accept the default value based on the NUMPARTS
value and the buffer pool page size. See the tables in "Maximum number of partitions and table space size"
in CREATE TABLESPACE (Db2 SQL).
3 SEGSIZE 0 must be specified unless the DPSEGSZ subsystem parameter value is 0. For more
information, see DEFAULT PARTITION SEGSIZE field (DPSEGSZ subsystem parameter) (Db2 Installation
and Migration).
partitioned-non-UTS-specification (deprecated)
Specify a NUMPARTS value and SEGSIZE 0 to create a partitioned (non-UTS) table space.
NUMPARTS integer
The integer value specifies the number of partition schema definitions to create. Data sets are also
allocated for this many partitions, unless DEFINE NO is also specified. integer must be a value in
the range 1 - 4096 inclusive.
The maximum number of partitions depends on the buffer pool page size and DSSIZE. The total
table space size depends on the number of partitions and DSSIZE. See the tables in "Maximum
number of partitions and table space size" in CREATE TABLESPACE (Db2 SQL).
PARTITION integer
Specifies the partition to which the following partition-level clauses apply. integer can range from
1 to the number of partitions given by NUMPARTS.
You can specify the PARTITION clause as many times as needed. If you use the same partition
number more than once, only the last specification for that partition is used.
The DSSIZE value affects the number of partitions that can be used. See the tables in "Maximum
number of partitions and table space size" in CREATE TABLESPACE (Db2 SQL).
For any DSSIZE value greater than 4 G, the data sets for the table space must be associated with a
DFSMS data class that is specified with extended format and extended addressability.
Related concepts
Table space types and characteristics in Db2 for z/OS
Db2 supports several different types of table spaces. The partitioning method and segmented
organization are among the main characteristics that define the table space type.
Related tasks
Converting deprecated table spaces to the UTS types
The non-UTS segmented and partitioned table space types are deprecated. That is, they remain
supported, but support might be removed eventually, and it is best to convert them to the non-deprecated
types.
Related reference
Function level 504 (activation enabled by APAR PH07672 - April 2019) (Db2 for z/OS What's New?)
CREATE TABLESPACE (Db2 SQL)
Deprecated function in Db2 12 (Db2 for z/OS What's New?)
Related tasks
Creating EA-enabled table spaces and index spaces
DFSMS has an extended-addressability function, which is necessary to create data sets that are larger
than 4 GB. Therefore, the term for page sets that are enabled for extended addressability is EA-enabled.
Creating table spaces explicitly
Db2 can create table spaces for you. However, you might also create table spaces explicitly by issuing
CREATE TABLESPACE statements if you manage your own data sets, among other reasons.
Related reference
CREATE TABLESPACE (Db2 SQL)
Procedure
Issue a CREATE TABLE statement that specifies the attributes of the table and its columns.
Table name
When choosing the name for the table, follow the naming conventions of your organization and the
basic requirements described in “Guidelines for table names” on page 72.
Column list
For each column, specify the name and attributes of the column, including the data type, length
attribute, and optional default values or value constraints. For more information, see Db2 table
columns (Introduction to Db2 for z/OS).
Referential or check constraints (optional)
For more information, see Check constraints (Db2 Application programming and SQL) and Referential
constraints (Db2 Application programming and SQL).
Partitioning method (optional)
Db2 uses size-based partitions by default if you do not specify how to partition the data when you
create the table. For more information, see “Partitioning data in Db2 tables” on page 74.
Table location (optional)
You can specify an existing table space and database name as the location of the new table, or you
can let Db2 create these objects for your table implicitly. For more information, see “Implementing
Db2 table spaces” on page 42.
Example
The following CREATE TABLE statement creates the EMP table, which is in a database named MYDB
and in a table space named MYTS:
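A sketch of such a statement (the column definitions are illustrative; only the table, database, and table
space names come from the text):

```sql
-- The IN clause places the table in an existing database and table space.
CREATE TABLE EMP
  (EMPNO    CHAR(6)     NOT NULL,
   LASTNAME VARCHAR(15) NOT NULL,
   DEPT     CHAR(3))
  IN MYDB.MYTS;
```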
What to do next
Creating a table does not store the application data. You can put data into the table by using several
methods, such as the LOAD utility or the INSERT statement. For more information, see “Loading data into
Db2 tables” on page 125.
Related concepts
Db2 tables (Introduction to Db2 for z/OS)
Procedure
To control how the data in a table is partitioned, use the following approaches in the CREATE TABLE
statement:
FL 504 If you omit the PARTITION BY clause, the table is also created with size-based partitions, and if
the IN clause specifies a table space name, it must identify an existing PBG table space.
• Specify a PARTITION BY RANGE clause to identify one or more columns that define the partitioning
key, and specify the limit key values in the PARTITION part-num ENDING AT clause.
If you specify the name of a table space in the IN clause, it must identify an existing PBR table space.
If you omit the table space name, Db2 implicitly creates a PBR table space for the table.
The following example creates a table with partitions based on ranges of data values in the ACCTNUM
column, which resides in an implicitly created PBR table space:
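A statement along the following lines creates such a table (the table name, the other columns, and the
limit key values are illustrative):

```sql
-- Ranges of ACCTNUM values define the partitions; Db2 implicitly
-- creates a PBR table space because no IN clause names one.
CREATE TABLE ACCOUNTS
  (ACCTNUM INTEGER NOT NULL,
   BALANCE DECIMAL(15,2))
  PARTITION BY RANGE (ACCTNUM)
   (PARTITION 1 ENDING AT (199),
    PARTITION 2 ENDING AT (299),
    PARTITION 3 ENDING AT (399),
    PARTITION 4 ENDING AT (MAXVALUE));
```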
• To create a table without a naturally suitable partitioning scheme in a PBR table space, consider
creating the table with an implicitly hidden ROWID column in the partitioning key.
The ROWID column in the partitioning key guarantees a very even distribution of data across the
partitions. An implicitly-hidden ROWID column can also be transparent to applications.
For example, the following CREATE TABLE statement creates the TB02 table in a PBR table space with
16 partitions based on the implicitly-hidden ROWID column named ROW_ID.
CREATE TABLE TB02
  (CLIENT  VARCHAR(5) NOT NULL,  -- example columns; any column list works
   WI_ID   VARCHAR(2) NOT NULL,
   LENGTH1 SMALLINT,
   DATA1   VARCHAR(1000),
   ROW_ID  ROWID NOT NULL GENERATED ALWAYS IMPLICITLY HIDDEN)
PARTITION BY (ROW_ID)
(PARTITION 1 ENDING AT (X'0FFF'),
PARTITION 2 ENDING AT (X'1FFF'),
PARTITION 3 ENDING AT (X'2FFF'),
PARTITION 4 ENDING AT (X'3FFF'),
PARTITION 5 ENDING AT (X'4FFF'),
PARTITION 6 ENDING AT (X'5FFF'),
PARTITION 7 ENDING AT (X'6FFF'),
PARTITION 8 ENDING AT (X'7FFF'),
PARTITION 9 ENDING AT (X'8FFF'),
PARTITION 10 ENDING AT (X'9FFF'),
PARTITION 11 ENDING AT (X'AFFF'),
PARTITION 12 ENDING AT (X'BFFF'),
PARTITION 13 ENDING AT (X'CFFF'),
PARTITION 14 ENDING AT (X'DFFF'),
PARTITION 15 ENDING AT (X'EFFF'),
PARTITION 16 ENDING AT (MAXVALUE))
CCSID UNICODE;
What to do next
You might eventually need to add or modify the data partitions. For more information, see “Adding
partitions” on page 214 and “Altering partitions” on page 218.
Related concepts
Partitioned (non-UTS) table spaces (deprecated)
A partitioned (non-UTS) table space stores data pages for a single table. Db2 divides the table space
into partitions. Non-UTS table spaces for base tables are deprecated and likely to be unsupported in the
future.
Related tasks
Creating base tables
When you create a table, Db2 records a definition of the table in the Db2 catalog.
Converting deprecated table spaces to the UTS types
The non-UTS segmented and partitioned table space types are deprecated. That is, they remain
supported, but support might be removed eventually, and it is best to convert them to the non-deprecated
types.
Converting partitioned (non-UTS) table spaces to partition-by-range universal table spaces
You can convert existing partitioned (non-UTS) table spaces, which are deprecated, to partition-by-range
table spaces.
Converting table spaces to use table-controlled partitioning
Before you can convert a partitioned (non-UTS) table space that uses index-controlled partitioning to a
partition-by-range table space, you must convert it to use table controlled partitioning. Table spaces that
use index-controlled partitioning, like all non-UTS table spaces are deprecated.
Examples
Example
Assume that a partitioned table space is created with the following SQL statements:
CREATE TABLESPACE TS IN DB
USING STOGROUP SG
NUMPARTS 4 BUFFERPOOL BP0;
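The companion CREATE TABLE statement is not shown in this extract. A plausible sketch, using the table
name TB and partitioning column C01 from the discussion that follows (the other column and the limit
key values are illustrative):

```sql
-- Table-controlled partitioning: the PARTITION BY clause names C01,
-- ascending by default, and no partition specifies MAXVALUE.
CREATE TABLE TB
  (C01 CHAR(5),
   C02 CHAR(5) NOT NULL)
  IN DB.TS
  PARTITION BY (C01)
   (PARTITION 1 ENDING AT ('10000'),
    PARTITION 2 ENDING AT ('20000'),
    PARTITION 3 ENDING AT ('30000'),
    PARTITION 4 ENDING AT ('40000'));
```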
Because the CREATE TABLE statement does not specify the order in which to put entries, Db2 puts
them in ascending order by default. Db2 subsequently prevents any INSERT into the TB table of a row
with a null value for partitioning column C01, because no partition specifies MAXVALUE. If the CREATE
TABLE statement had specified the key as descending and the first partition specified MINVALUE,
Db2 would subsequently have allowed an INSERT into the TB table of a row with a null value for
partitioning column C01. Db2 would have inserted the row into partition 1.
With index-controlled partitioning, Db2 does not restrict the insertion of null values into a table with
nullable partitioning columns.
Example
Assume that a partitioned table space is created with the following SQL statements:
CREATE TABLESPACE TS IN DB
USING STOGROUP SG
NUMPARTS 4 BUFFERPOOL BP0;
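Here the partitioning is index-controlled, so a plausible sketch (again with illustrative columns and limit
keys) defines the partitions on the clustering index rather than in the CREATE TABLE statement:

```sql
-- Index-controlled partitioning (deprecated): the CLUSTER index,
-- not the table definition, carries the partition limit keys.
CREATE TABLE TB
  (C01 CHAR(5),
   C02 CHAR(5) NOT NULL)
  IN DB.TS;

CREATE INDEX IX1 ON TB (C01) CLUSTER
  (PARTITION 1 ENDING AT ('10000'),
   PARTITION 2 ENDING AT ('20000'),
   PARTITION 3 ENDING AT ('30000'),
   PARTITION 4 ENDING AT ('40000'));
```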
Regardless of the entry order, Db2 allows an INSERT into the TB table of a row with a null value for
partitioning column C01. If the entry order is ascending, Db2 inserts the row into partition 4; if the
entry order is descending, Db2 inserts the row into partition 1. Only if the table space is created with
the LARGE keyword does Db2 prevent the insertion of a null value into the C01 column.
Procedure
To create a temporary table:
1. Determine the type of temporary table that you want to create.
2. Issue the appropriate SQL statement for the type of temporary table that you want to create:
• To define a created temporary table, issue the CREATE GLOBAL TEMPORARY TABLE statement.
• To define a declared temporary table, issue the DECLARE GLOBAL TEMPORARY TABLE statement.
Procedure
Issue the CREATE GLOBAL TEMPORARY TABLE statement.
Example
The following statement defines a created temporary table that is named TEMPPROD.
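A sketch of that statement (the column definitions are illustrative; only the table name TEMPPROD comes
from the text):

```sql
-- A created temporary table: the definition is stored in the catalog,
-- but each session gets its own empty instance of the table.
CREATE GLOBAL TEMPORARY TABLE TEMPPROD
  (SERIALNO    CHAR(8)     NOT NULL,
   DESCRIPTION VARCHAR(60) NOT NULL,
   MFGCOST     DECIMAL(8,2));
```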
Related tasks
Setting default statistics for created temporary tables (Db2 Performance)
Related reference
CREATE GLOBAL TEMPORARY TABLE (Db2 SQL)
Example
The following statement defines a declared temporary table, TEMP_EMP. (This example assumes
that you have already created the WORKFILE database and corresponding table space for the temporary
table.)
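A sketch of that statement (the columns are illustrative; the SESSION qualifier and table name come from
the text):

```sql
-- A declared temporary table exists only for the duration of the
-- application process; its qualifier is always SESSION.
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMP_EMP
  (EMPNO  CHAR(6) NOT NULL,
   SALARY DECIMAL(9,2),
   BONUS  DECIMAL(9,2))
  ON COMMIT PRESERVE ROWS;
```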
If specified explicitly, the qualifier for the name of a declared temporary table must be SESSION. If the
qualifier is not specified, it is implicitly defined to be SESSION.
Related reference
DECLARE GLOBAL TEMPORARY TABLE (Db2 SQL)
Table privileges and authorization
Base tables: The owner implicitly has all table privileges on the table and the authority to drop the
table. The owner's table privileges can be granted and revoked, either individually or with the ALL
clause. Another authorization ID can access the table only if it has been granted appropriate privileges
for the table.
Created temporary tables: The owner implicitly has all table privileges on the table and the authority to
drop the table. The owner's table privileges can be granted and revoked, but only with the ALL clause;
individual table privileges cannot be granted or revoked. Another authorization ID can access the table
only if it has been granted ALL privileges for the table.
Declared temporary tables: PUBLIC implicitly has all table privileges on the table without GRANT
authority and has the authority to drop the table. These table privileges cannot be granted or revoked.
Any authorization ID can access the table without a grant of any privileges for the table.

Indexes and other SQL statement support
Base tables: Indexes and SQL statements that modify data (INSERT, UPDATE, DELETE, and so on) are
supported.
Created temporary tables: Indexes, UPDATE (searched or positioned), and DELETE (positioned only)
are not supported.
Declared temporary tables: Indexes and SQL statements that modify data (INSERT, UPDATE, DELETE,
and so on) are supported.
Related concepts
Temporary tables (Db2 Application programming and SQL)
Related tasks
Creating temporary tables
Temporary tables are useful when you need to sort or query intermediate result tables that contain large
numbers of rows and identify a small subset of rows to store permanently. The two types of temporary
tables are created temporary tables and declared temporary tables.
Setting default statistics for created temporary tables (Db2 Performance)
Related reference
CREATE GLOBAL TEMPORARY TABLE (Db2 SQL)
Bitemporal tables
A bitemporal table is a table that is both a system-period temporal table and an application-period
temporal table. You can use a bitemporal table to keep application period information and system-based
historical information. Therefore, you have a lot of flexibility in how you query data, based on periods of
time.
Related concepts
Recovery of temporal tables with system-period data versioning
You must recover a system-period temporal table that is defined with system-period data versioning
and its corresponding history table, as a set, to a single point in time. You can recover the table spaces
individually only if you specify the VERIFYSET NO option in the RECOVER utility statement.
Related tasks
Adding a system period and system-period data versioning to an existing table
You can alter existing tables to use system-period data versioning.
Creating a system-period temporal table
You can create a temporal table that has a system period and define system-period data versioning on the
table, so that the data is versioned after insert, update, and delete operations.
Adding an application period to a table
You can alter a table to add an application period so that you maintain the beginning and ending values for
a row.
Creating an application-period temporal table
An application-period temporal table is a type of temporal table where you maintain the values that
indicate when a row is valid. The other type of temporal table is a system-period temporal table where Db2
maintains the values that indicate when a row is valid.
Procedure
To create a temporal table with a system period and define system-period data versioning on the table:
1. Issue a CREATE TABLE statement with a SYSTEM_TIME clause.
The created table must have the following attributes:
• A row-begin column that is defined as TIMESTAMP(12) NOT NULL with the GENERATED ALWAYS AS
ROW BEGIN attribute.
• A row-end column that is defined as TIMESTAMP(12) NOT NULL with the GENERATED ALWAYS AS
ROW END attribute.
• A system period (SYSTEM_TIME) defined on two timestamp columns. The first column is the row-
begin column and the second column is the row-end column.
• A transaction-start-ID column that is defined as TIMESTAMP(12) NOT NULL with the GENERATED
ALWAYS AS TRANSACTION START ID attribute.
• The only table in the table space
• The table definition is complete
It cannot have a clone table defined on it, and it cannot have the following attributes:
• Column masks
• Row permissions
• Security label columns
2. Issue a CREATE TABLE statement to create a history table that receives the old rows from the system-
period temporal table.
The history table must have the following attributes:
• The same number of columns as the system-period temporal table that it corresponds to
• Columns with the same names, data types, null attributes, CCSIDs, subtypes, hidden attributes,
and field procedures as the corresponding system-period temporal table. However, the history table
cannot have any GENERATED ALWAYS columns unless the system-period temporal table has a
ROWID GENERATED ALWAYS or ROWID GENERATED BY DEFAULT column. In that case, the history
table must have a corresponding ROWID GENERATED ALWAYS column.
• The only table in the table space
• The table definition is complete
A history table cannot be a materialized query table, an archive-enabled table, or an archive table,
cannot have a clone table defined on it, and cannot have the following attributes:
• Identity columns or row change timestamp columns
• ROW BEGIN, ROW END, or TRANSACTION START ID columns
• Column masks
• Row permissions
• Security label columns
• System or application periods
Example
The following examples show how you can create a temporal table with a system period, create a history
table, and then define system-period data versioning on the table. Also, a final example shows how to
insert data.
GUPI The following example shows a CREATE TABLE statement for creating a temporal table with a
SYSTEM_TIME period. In the example, the sys_start column is the row-begin column, sys_end is the
row-end column, and create_id is the transaction-start-ID column. The SYSTEM_TIME period is defined
on the ROW BEGIN and ROW END columns:
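A statement like the following sketch fits this description (the table and column names follow the surrounding text; the data types of the POLICY_ID and COVERAGE columns are illustrative):

```sql
CREATE TABLE POLICY_INFO
  (POLICY_ID CHAR(4) NOT NULL,
   COVERAGE  INT NOT NULL,
   SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   CREATE_ID TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD SYSTEM_TIME (SYS_START, SYS_END));
```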
This example shows a CREATE TABLE statement for creating a history table:
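A sketch of a matching history table, which repeats the columns of the temporal table without the generated-column attributes (names and data types follow the preceding sketch and are illustrative):

```sql
CREATE TABLE HIST_POLICY_INFO
  (POLICY_ID CHAR(4) NOT NULL,
   COVERAGE  INT NOT NULL,
   SYS_START TIMESTAMP(12) NOT NULL,
   SYS_END   TIMESTAMP(12) NOT NULL,
   CREATE_ID TIMESTAMP(12) NOT NULL);
```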
To define versioning, issue the ALTER TABLE statement with the ADD VERSIONING clause and the USE
HISTORY TABLE clause, which establishes a link between the system-period temporal table and the
history table:
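A sketch, assuming a system-period temporal table named POLICY_INFO and a history table named HIST_POLICY_INFO:

```sql
ALTER TABLE POLICY_INFO
  ADD VERSIONING USE HISTORY TABLE HIST_POLICY_INFO;
```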
The following example shows how to insert data in the POLICY_ID and COVERAGE columns of the
POLICY_INFO table:
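A sketch of such an INSERT statement; Db2 generates the SYS_START, SYS_END, and CREATE_ID values automatically, so only the business columns are supplied (the values are illustrative):

```sql
INSERT INTO POLICY_INFO (POLICY_ID, COVERAGE)
  VALUES ('A123', 12000);
```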
If you want to use temporal tables to track auditing information, see the example in “Scenario for tracking
auditing information” on page 92.
GUPI
Related concepts
Temporal tables and data versioning
A temporal table is a table that records the period of time when a row is valid.
Related reference
CREATE TABLE (Db2 SQL)
ALTER TABLE (Db2 SQL)
Related information
Managing Ever-Increasing Amounts of Data with IBM Db2 for z/OS: Using Temporal Data Management,
Archive Transparency, and the IBM Db2 Analytics Accelerator for z/OS (IBM Redbooks)
System-period temporal tables and the switch from daylight saving time to
standard time
You might get SQL errors if you update system-period temporal tables during the hour before the switch
to standard time.
If your system uses daylight saving time during a portion of the year, and your row-begin column, row-end
column, and transaction-start-ID column in a system-period temporal table are defined as TIMESTAMP
WITHOUT TIME ZONE, you might get errors with SQLCODE -20528 when you update the temporal table
between 1:00 a.m. and 1:59 a.m. before or after the time change. The following example demonstrates
how the error can occur.
1. Suppose that you create system-period temporal table POLICY_INFO:
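A sketch of such a table, with the row-begin, row-end, and transaction-start-ID columns defined as TIMESTAMP(12) WITHOUT TIME ZONE (the default); the data types of the POLICY_ID and COVERAGE columns are illustrative:

```sql
CREATE TABLE POLICY_INFO
  (POLICY_ID CHAR(4) NOT NULL,
   COVERAGE  INT NOT NULL,
   SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   CREATE_ID TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD SYSTEM_TIME (SYS_START, SYS_END));
```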
2. Next, you create history table HIST_POLICY_INFO, and alter table POLICY_INFO to associate history
table HIST_POLICY_INFO with POLICY_INFO:
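A sketch of those two statements (column data types are illustrative and mirror POLICY_INFO):

```sql
CREATE TABLE HIST_POLICY_INFO
  (POLICY_ID CHAR(4) NOT NULL,
   COVERAGE  INT NOT NULL,
   SYS_START TIMESTAMP(12) NOT NULL,
   SYS_END   TIMESTAMP(12) NOT NULL,
   CREATE_ID TIMESTAMP(12) NOT NULL);

ALTER TABLE POLICY_INFO
  ADD VERSIONING USE HISTORY TABLE HIST_POLICY_INFO;
```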
3. At 1:30 a.m. on the day on which the switch to standard time occurs, you issue this SQL statement,
which inserts a row into POLICY_INFO.
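A sketch of the INSERT statement (the policy values are illustrative):

```sql
INSERT INTO POLICY_INFO (POLICY_ID, COVERAGE)
  VALUES ('A123', 12000);
```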
The POLICY_ID, COVERAGE, SYS_START, and SYS_END columns of POLICY_INFO contain these values:
4. Your system administrator switches the system to standard time at 2:00 a.m., which changes the time
to 1:00 a.m.
5. At 1:25 a.m., after the switch to standard time occurs, you issue this SQL statement, which updates
the row that you inserted in POLICY_INFO in the previous step.
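A sketch of the UPDATE statement (the new coverage amount is illustrative):

```sql
UPDATE POLICY_INFO
  SET COVERAGE = 20000
  WHERE POLICY_ID = 'A123';
```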
If this update operation succeeded, a record like this would be written in the HIST_POLICY_INFO
table:
The row-begin column would have a greater value than the row-end column. Db2 therefore does not
allow the update operation, and issues an error with SQLCODE -20528.
To avoid SQLCODE -20528 errors because of the switch to standard time, you can take one of these
actions:
• Do not do any updates to system-period temporal tables between 1:00 a.m. and 1:59 a.m. before or
after the switch from daylight saving time to standard time.
• Define the row-begin, row-end, and transaction-start-ID columns in your system-period temporal tables
and history tables as TIMESTAMP(12) WITH TIME ZONE. When the columns are defined in that way,
their data is stored in UTC, with a time zone of +00:00, so the time change cannot result in a row-begin
column with a time that is later than the row-end column time.
Related information
-20528 (Db2 Codes)
Examples
GUPI
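For example, an application-period temporal table with the default inclusive-exclusive period might be created as follows (a sketch; the data types of the POLICY_ID and COVERAGE columns are illustrative):

```sql
CREATE TABLE POLICY_INFO
  (POLICY_ID CHAR(4) NOT NULL,
   COVERAGE  INT NOT NULL,
   BUS_START DATE NOT NULL,
   BUS_END   DATE NOT NULL,
   PERIOD BUSINESS_TIME (BUS_START, BUS_END));
```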
The specified application period means that a row is valid from bus-start, including the bus-start value,
to bus-end, but not including the bus-end value. This type of period is called an inclusive-exclusive
period and is the default behavior for application periods.
Example of creating an application-period temporal table with an inclusive-inclusive period of data
type DATE
The following example CREATE TABLE statement contains the INCLUSIVE keyword in the definition of
the application period to indicate an inclusive-inclusive period:
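A sketch of such a statement; the INCLUSIVE keyword follows the end column of the period (other details are illustrative):

```sql
CREATE TABLE POLICY_INFO
  (POLICY_ID CHAR(4) NOT NULL,
   COVERAGE  INT NOT NULL,
   BUS_START DATE NOT NULL,
   BUS_END   DATE NOT NULL,
   PERIOD BUSINESS_TIME (BUS_START, BUS_END INCLUSIVE));
```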
The inclusive-inclusive period means that a row is valid from bus-start, including the bus-start value,
to bus-end, including the bus-end value. In this case, the data type of these columns is DATE.
Suppose that you issue the following INSERT statement:
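A sketch of such an INSERT statement (the values are illustrative):

```sql
INSERT INTO POLICY_INFO
  VALUES ('A123', 12000, '2008-01-01', '2008-07-01');
```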
Suppose that you then issue the following update statement to change the coverage amount for policy
A123 between May 1, 2008 and May 31, 2008.
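With an inclusive-inclusive period, the affected portion of the row is specified with BETWEEN ... AND (a sketch; the new coverage amount is illustrative):

```sql
UPDATE POLICY_INFO
  FOR PORTION OF BUSINESS_TIME
    BETWEEN '2008-05-01' AND '2008-05-31'
  SET COVERAGE = 14000
  WHERE POLICY_ID = 'A123';
```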
The middle row shows the updated values for the specified period of time. In addition, two rows were
inserted to represent the part of the row that was not affected by the UPDATE statement.
Example of creating an application-period temporal table with an inclusive-inclusive period of data
type TIMESTAMP
The following example CREATE TABLE statement creates a table with an inclusive-inclusive
application period with type TIMESTAMP(6).
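A sketch of such a statement (the data types of the POLICY_ID and COVERAGE columns are illustrative):

```sql
CREATE TABLE POLICY_INFO
  (POLICY_ID CHAR(4) NOT NULL,
   COVERAGE  INT NOT NULL,
   BUS_START TIMESTAMP(6) NOT NULL,
   BUS_END   TIMESTAMP(6) NOT NULL,
   PERIOD BUSINESS_TIME (BUS_START, BUS_END INCLUSIVE));
```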
Suppose that you then issue the following update statement to change the coverage amount for policy
A123 between the indicated TIMESTAMP values:
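A sketch of the UPDATE statement (the timestamp values and new coverage amount are illustrative):

```sql
UPDATE POLICY_INFO
  FOR PORTION OF BUSINESS_TIME
    BETWEEN '2008-05-01 09:30:00' AND '2008-05-31 17:00:00'
  SET COVERAGE = 14000
  WHERE POLICY_ID = 'A123';
```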
GUPI
Related concepts
Temporal tables and data versioning
A temporal table is a table that records the period of time when a row is valid.
Related reference
CREATE TABLE (Db2 SQL)
Procedure
To create a bitemporal table and define system-period data versioning on the table:
1. Issue a CREATE TABLE statement with both the SYSTEM_TIME clause and the BUSINESS_TIME clause.
For more information about the requirements for the history table, see “Creating a system-period
temporal table” on page 84 and “Creating an application-period temporal table” on page 88.
2. Issue a CREATE TABLE statement to create a history table that receives the old rows from the
bitemporal table.
3. Issue the ALTER TABLE ADD VERSIONING statement with the USE HISTORY TABLE clause to define
system-period data versioning and establish a link between the bitemporal table and the history table.
Example
The following examples show how you can create a bitemporal table, create a history table, and then
define system-period data versioning.
GUPI This example shows a CREATE TABLE statement with the SYSTEM_TIME and BUSINESS_TIME
clauses for creating a bitemporal table:
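A sketch of such a statement, combining the system-period and application-period column requirements described earlier (the data types of the POLICY_ID and COVERAGE columns are illustrative):

```sql
CREATE TABLE POLICY_INFO
  (POLICY_ID CHAR(4) NOT NULL,
   COVERAGE  INT NOT NULL,
   BUS_START DATE NOT NULL,
   BUS_END   DATE NOT NULL,
   SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   CREATE_ID TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD BUSINESS_TIME (BUS_START, BUS_END),
   PERIOD SYSTEM_TIME (SYS_START, SYS_END));
```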
This example shows a CREATE TABLE statement for creating a history table:
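A sketch of the history table, which mirrors the bitemporal table's columns without the generated-column attributes (names and data types are illustrative):

```sql
CREATE TABLE HIST_POLICY_INFO
  (POLICY_ID CHAR(4) NOT NULL,
   COVERAGE  INT NOT NULL,
   BUS_START DATE NOT NULL,
   BUS_END   DATE NOT NULL,
   SYS_START TIMESTAMP(12) NOT NULL,
   SYS_END   TIMESTAMP(12) NOT NULL,
   CREATE_ID TIMESTAMP(12) NOT NULL);
```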
This example shows the ALTER TABLE ADD VERSIONING statement with the USE HISTORY TABLE clause
that establishes a link between the bitemporal table and the history table to enable system-period data
versioning. Also, a unique index is added to the bitemporal table.
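A sketch of those statements; the index name is hypothetical, and BUSINESS_TIME WITHOUT OVERLAPS prevents overlapping application periods for the same policy:

```sql
ALTER TABLE POLICY_INFO
  ADD VERSIONING USE HISTORY TABLE HIST_POLICY_INFO;

CREATE UNIQUE INDEX IX_POLICY
  ON POLICY_INFO (POLICY_ID, BUSINESS_TIME WITHOUT OVERLAPS);
```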
GUPI
Related concepts
Temporal tables and data versioning
A temporal table is a table that records the period of time when a row is valid.
Related tasks
Adding a system period and system-period data versioning to an existing table
You can alter existing tables to use system-period data versioning.
Adding an application period to a table
You can alter a table to add an application period so that you maintain the beginning and ending values for
a row.
Related reference
CREATE TABLE (Db2 SQL)
ALTER TABLE (Db2 SQL)
The user_id column is to store who modified the data. This column is defined as a non-deterministic
generated expression column that will contain the value of the SESSION_USER special register at the time
of a data change operation.
The op_code column is to store the SQL operation that modified that data. This column is also defined as a
non-deterministic generated expression column.
Suppose that you then issue the following statements to create a history table for STT and to associate
that history table with STT:
CREATE TABLE STT_HISTORY (balance INT, user_id VARCHAR(128) , op_code CHAR(1) ... );
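The association between STT and STT_HISTORY might be established with a statement like the following sketch:

```sql
ALTER TABLE STT
  ADD VERSIONING USE HISTORY TABLE STT_HISTORY
  ON DELETE ADD EXTRA ROW;
```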
In the ALTER TABLE statement, the ON DELETE ADD EXTRA ROW clause indicates that when a row is
deleted from STT, an extra row is to be inserted into the history table. This extra row in the history table
records the delete operation itself. At this point, the history table STT_HISTORY is empty.
Later, on 1 December 2011, user HAAS issues the following statement to update the row:
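A sketch of such an update (the balance change is illustrative):

```sql
UPDATE STT
  SET BALANCE = BALANCE + 10;
```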
After this update, the history table STT_HISTORY contains one row: (1, 'KWAN', 'I', 2010-06-15, 2011-12-01).
On 20 December 2013, user THOMPSON issues the following statement to delete the row:
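A sketch of such a delete:

```sql
DELETE FROM STT;
```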
Procedure
Issue a SELECT statement, such as:
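For example, a query with a period specification might look like the following sketch (the table and column names are illustrative):

```sql
SELECT POLICY_ID, COVERAGE
  FROM POLICY_INFO
  FOR SYSTEM_TIME AS OF TIMESTAMP '2013-01-10 10:00:00';
```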
GUPI
The following example shows how you can request data, based on time criteria from a system-period
temporal table.
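A sketch, assuming a system-period temporal table POLICY_INFO with POLICY_ID and COVERAGE columns:

```sql
SELECT COVERAGE
  FROM POLICY_INFO
  FOR SYSTEM_TIME FROM '2012-01-01' TO '2013-01-01'
  WHERE POLICY_ID = 'A123';
```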
Likewise, the following example shows how you can request data, based on time criteria from an
application-period temporal table.
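A sketch, under the same assumptions about the table and its columns:

```sql
SELECT COVERAGE
  FROM POLICY_INFO
  FOR BUSINESS_TIME AS OF '2008-06-01'
  WHERE POLICY_ID = 'A123';
```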
GUPI
If you are requesting historical data from a system-period temporal table that is defined with system-
period data versioning, Db2 rewrites the query to include data from the history table.
• Specify the time criteria by using special registers:
The advantage of this method is that you can change the time criteria later and not have to modify the
SQL and then rebind the application.
a) Write the SELECT statement without any time criteria specified.
b) When you bind the application, ensure that the appropriate bind options are set as follows:
– If you are querying a system-period temporal table, ensure that SYSTIMESENSITIVE is set to
YES.
– If you are querying an application-period temporal table, ensure that BUSTIMESENSITIVE is set
to YES.
c) Before you call the application, set the appropriate special registers to the timestamp value for
which you want to query data:
– If you are querying a system-period temporal table, set CURRENT TEMPORAL SYSTEM_TIME.
– If you are querying an application-period temporal table, set CURRENT TEMPORAL
BUSINESS_TIME.
GUPI For example, assume that you have system-period temporal table STT with the column
POLICY_ID and you want to retrieve data from one year ago. You can set the CURRENT TEMPORAL
SYSTEM_TIME period as follows:
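For example, the one-year offset in the scenario can be expressed with the special register, after which an ordinary SELECT returns data as of that time:

```sql
SET CURRENT TEMPORAL SYSTEM_TIME = CURRENT TIMESTAMP - 1 YEAR;

SELECT POLICY_ID FROM STT;
```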
GUPI
Related concepts
Temporal tables and data versioning
A temporal table is a table that records the period of time when a row is valid.
Related reference
from-clause (Db2 SQL)
table-reference (Db2 SQL)
BIND and REBIND options for packages, plans, and services (Db2 Commands)
CURRENT TEMPORAL BUSINESS_TIME (Db2 SQL)
CURRENT TEMPORAL SYSTEM_TIME (Db2 SQL)
Procedure
Issue the CREATE TABLE statement.
Example
GUPI The following CREATE TABLE statement defines a materialized query table named TRANSCNT.
TRANSCNT summarizes the number of transactions in table TRANS by account, location, and year.
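A sketch of such a statement, consistent with the description of TRANSCNT (the column names of TRANS are illustrative):

```sql
CREATE TABLE TRANSCNT (ACCTID, LOCID, YEAR, CNT) AS
  (SELECT ACCTID, LOCID, YEAR, COUNT(*)
     FROM TRANS
     GROUP BY ACCTID, LOCID, YEAR)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED;
```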
The fullselect, together with the DATA INITIALLY DEFERRED clause and the REFRESH DEFERRED clause,
defines the table as a materialized query table. GUPI
Related tasks
Using materialized query tables to improve SQL performance (Db2 Performance)
Creating a materialized query table (Db2 Performance)
Registering an existing table as a materialized query table (Db2 Performance)
Procedure
Issue the ALTER TABLE statement with the ADD CLONE option.
Example
The following example shows how to create a clone table by issuing the ALTER TABLE statement with the
ADD CLONE option:
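A sketch (the base table and clone table names are hypothetical):

```sql
ALTER TABLE PROD.EMP
  ADD CLONE PROD.EMP_CLONE;
```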
GUPI
Related tasks
Exchanging data between a base table and clone table
You can exchange table data and index data between the base table and clone table by using the
EXCHANGE statement.
Related reference
ALTER TABLE (Db2 SQL)
Procedure
GUPI To exchange data between the base table and clone table, complete the following steps:
1. Issue an EXCHANGE statement with the DATA BETWEEN TABLE table-name1 AND table-name2
syntax.
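A sketch of such a statement (the table names are hypothetical):

```sql
EXCHANGE DATA BETWEEN TABLE PROD.EMP AND PROD.EMP_CLONE;
```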
Results
After a data exchange, the base and clone table names remain the same as they were prior to the data
exchange. No data movement actually takes place. The instance numbers in the underlying VSAM data
sets for the objects (tables and indexes) do change, and this has the effect of changing the data that
appears in the base and clone tables and their indexes. For example, a base table exists with the data set
name *I0001.*. The table is cloned and the clone's data set is initially named *I0002.*. After an exchange,
the base objects are named *I0002.* and the clones are named *I0001.*. Each time that an exchange
happens, the instance numbers that represent the base and the clone objects change, which immediately
changes the data contained in the base and clone tables and indexes.
GUPI
What to do next
Exchanging data between the base table and the clone table does not invalidate packages. However, Db2
writes VALID='A' in the SYSIBM.SYSPACKAGE catalog table rows for packages that reference the tables to
indicate that a rebind might be needed before the package can use the exchanged data.
Related tasks
Creating a clone table
Procedure
To create an archive table:
1. Create a table with the same columns as the table for which you want to archive data.
For a complete list of requirements for archive tables, see the information about the ENABLE ARCHIVE
clause in ALTER TABLE (Db2 SQL).
2. Designate the original table as an archive-enabled table by issuing an ALTER TABLE statement with the
ENABLE ARCHIVE clause. In that clause, specify the table that you created in the previous step as the
archive table.
3. If you want rows to be automatically archived, set the built-in global variable
SYSIBMADM.MOVE_TO_ARCHIVE to Y or E.
When this built-in global variable is set to Y or E, Db2 automatically moves deleted rows to the archive
table.
4. If you want to remove the relationship between the archive-enabled table and the archive table, issue
the ALTER TABLE statement for the archive-enabled table and specify the DISABLE ARCHIVE clause.
Both tables will still exist, but the relationship is removed.
Procedure
Issue the CREATE VIEW SQL statement.
Unless you specifically list different column names after the view name, the column names of the view are
the same as those of the underlying table. GUPI
Example
Example of defining a view on a single table: Assume that you want to create a view on the DEPT table.
Of the four columns in the table, the view needs only three: DEPTNO, DEPTNAME, and MGRNO. The order
of the columns that you specify in the SELECT clause is the order in which they appear in the view:
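A sketch of such a statement (the view name is hypothetical):

```sql
CREATE VIEW MYVIEW AS
  SELECT DEPTNO, DEPTNAME, MGRNO
    FROM DEPT;
```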
GUPI
GUPI
GUPI
Example of defining a view that combines information from several tables: You can create a view that
combines information from more than one table. Db2 provides two types of joins: an outer join and an inner join.
An outer join includes rows in which the values in the join columns match, as well as rows in which they
do not match. An inner join includes only rows in which the values in the join columns match.
The following example is an inner join of columns from the DEPT and EMP tables. The WHERE clause
limits the view to just those columns in which the MGRNO in the DEPT table matches the EMPNO in the
EMP table:
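A sketch of such a view (the view name and selected columns are illustrative):

```sql
CREATE VIEW MYVIEW AS
  SELECT DEPTNO, MGRNO, LASTNAME
    FROM DEPT, EMP
    WHERE DEPT.MGRNO = EMP.EMPNO;
```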
GUPI
The result of executing this CREATE VIEW statement is an inner join view of two tables, which is shown
below:
If you want to include only those departments that report to department A00, and you want to use a
different set of column names, use the following CREATE VIEW statement:
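A sketch of such a statement; the view name, the renamed columns, and the ADMRDEPT predicate (the column of the sample DEPT table that records the department reported to) are illustrative:

```sql
CREATE VIEW MYVIEWA00
  (DEPARTMENT, MANAGER, EMPLOYEE_NAME) AS
  SELECT DEPTNO, MGRNO, LASTNAME
    FROM DEPT, EMP
    WHERE DEPT.MGRNO = EMP.EMPNO
      AND ADMRDEPT = 'A00';
```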
You can execute the following SELECT statement to see the view contents:
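For example, assuming the view is named MYVIEWA00:

```sql
SELECT * FROM MYVIEWA00;
```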
When you execute this SELECT statement, the result is a view of a subset of the same data, but with
different column names, as follows:
GUPI
Related tasks
Altering Db2 views
To alter a view, you must drop the view and create a new view with your modified specifications.
Dropping Db2 views
You can drop a Db2 view by removing the view at the current server.
Related reference
CREATE VIEW (Db2 SQL)
Procedure
To query a view that references a temporal table, use one of the following methods:
• Specify a period specification (either a SYSTEM_TIME period or BUSINESS_TIME period) following the
name of a view in the FROM clause of a query.
• Use the CURRENT TEMPORAL SYSTEM_TIME or CURRENT TEMPORAL BUSINESS_TIME special
registers. In this case, you do not need to include a period specification in the query. For instructions
on how to use these special registers instead of a period specification, see “Querying temporal tables”
on page 94.
Example
GUPI
SELECT * FROM v0
FOR SYSTEM_TIME AS OF TIMESTAMP '2013-01-10 10:00:00';
GUPI
CREATE VIEW V1 AS
SELECT * FROM EMP
WHERE DEPT LIKE 'D%';
A user with the SELECT privilege on view V1 can see the information from the EMP table for employees
in departments whose IDs begin with D. The EMP table has only one department (D11) with an ID that
satisfies the condition.
Assume that a user has the INSERT privilege on view V1. A user with both SELECT and INSERT privileges
can insert a row for department E01, perhaps erroneously, but cannot select the row that was just
inserted.
The following example shows an alternative way to define view V1.
Example 2: You can avoid the situation in which a value that does not match the view definition is
inserted into the base table. To do this, instead define view V1 to include the WITH CHECK OPTION
clause:
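The redefined view follows the definition shown earlier, with the WITH CHECK OPTION clause added:

```sql
CREATE VIEW V1 AS
  SELECT * FROM EMP
  WHERE DEPT LIKE 'D%'
  WITH CHECK OPTION;
```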
With the new definition, any insert or update to view V1 must satisfy the predicate that is contained in
the WHERE clause: DEPT LIKE 'D%'. The check can be valuable, but it also carries a processing cost; each
potential insert or update must be checked against the view definition. Therefore, you must weigh the
advantage of protecting data integrity against the disadvantage of performance degradation. GUPI
Procedure
Issue the DROP VIEW statement.
Related tasks
Altering Db2 views
To alter a view, you must drop the view and create a new view with your modified specifications.
Creating Db2 views
Types of indexes
In Db2 for z/OS, you can create a number of different types of indexes. Carefully consider which type or
types best suit your data and applications.
All of the index types are listed in the following tables. These index types are not necessarily mutually
exclusive. For example, a unique index can also be a clustering index. Restrictions are noted.
The following table lists the types of indexes that you can create on any table.
Primary index
A unique index on the primary key of the table. A primary key is a column or set of columns that
uniquely identifies one row of a table. You define a primary key when you create or alter a table;
specify PRIMARY KEY in the CREATE TABLE statement or ALTER TABLE statement. Primary keys are
optional.
If you define a primary key on a table, you must define a primary index on that key. Otherwise, if the
table does not have a primary key, it cannot have a primary index. Each table can have only one
primary index. However, the table can have additional unique indexes.
No keywords are required in the CREATE INDEX statement. An index is a primary index if the index
key that is specified in the CREATE INDEX statement matches the primary key of the table.
For more information, see Defining a parent key and unique index (Db2 Application programming
and SQL).
Secondary index
An index that is not a primary index. In the context of a partitioned table, a secondary index can also
mean an index that is not a partitioning index. See Table 18 on page 106.
No keywords are required in the CREATE INDEX statement. A secondary index is any index that is not
a primary index or partitioning index.
Expression-based index
An index that is based on a general expression. Use expression-based indexes when you want an
efficient evaluation of queries that involve a column-expression.
In the CREATE INDEX or ALTER INDEX statement, the index key is defined as an expression rather
than a column or set of columns. For more information, see “Expression-based indexes” on page 116.
The following table lists the types of indexes that you can create on partitioned tables. These indexes
apply to partition-by-range table spaces. They do not apply to partition-by-growth table spaces.
Partitioning index (PI)
An index that corresponds to the columns that partition the table. These columns are called the
partitioning key and are specified in the PARTITION BY clause of the CREATE TABLE statement.
All partitioning indexes must also be partitioned. Partitioning indexes are not required.
No keywords are required in the CREATE INDEX statement. An index is a partitioning index if the
index key matches the partitioning key. To confirm that an index is a partitioning index, check the
SYSIBM.SYSINDEXES catalog table. The INDEXTYPE column for that index contains a P if the index is
a partitioning index.
For more information, see “Indexes on partitioned tables” on page 118.
Secondary index
Depending on the context, a secondary index can mean one of the following two things:
• An index that is not a partitioning index.
• An index that is not a primary index.
No keywords are required in the CREATE INDEX statement. A secondary index is any index that is not
a primary index or partitioning index. For more information, see “Indexes on partitioned tables” on
page 118.
You can also create XML indexes by using the GENERATE KEYS USING XMLPATTERN clause of the
CREATE INDEX statement. Additionally, when you create any of these types of indexes, you can define
whether they have certain characteristics, such as index compression, which you specify with the
COMPRESS YES clause of the CREATE INDEX or ALTER INDEX statement.
Related concepts
Index keys
The usefulness of an index depends on the design of its key, which you define at the time that you create
the index.
Implementing Db2 indexes
Indexes provide efficient access to table data, but can require additional processing when you modify
data in a table.
Related reference
ALTER INDEX (Db2 SQL)
CREATE INDEX (Db2 SQL)
Example
The following example creates a unique index on the EMPPROJACT table. A composite key is defined on
two columns, PROJNO and STDATE.
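A sketch of such a statement; the index name is hypothetical, and the INCLUDE columns match the query described below:

```sql
CREATE UNIQUE INDEX XPROJSTD
  ON EMPPROJACT (PROJNO, STDATE)
  INCLUDE (EMPNO, FIRSTNAME);
```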
The INCLUDE clause, which is applicable only on unique indexes, specifies additional columns that you
want appended to the set of index key columns. Any columns that are included with this clause are not
used to enforce uniqueness. These included columns might improve the performance of some queries
through index only access. Using this option might eliminate the need to access data pages for more
queries and might eliminate redundant indexes.
If you issue a SELECT statement that retrieves only the PROJNO, STDATE, EMPNO, and FIRSTNAME
columns from the table on which this index resides, all of the required data can be retrieved from the
index without reading data pages. This option might improve performance.
GUPI
Related tasks
Dropping and redefining a Db2 index
Dropping an index does not cause Db2 to drop any other objects. The consequence of dropping indexes
is that Db2 invalidates packages that use the index and automatically rebinds them when they are next
used.
Related reference
CREATE INDEX (Db2 SQL)
Examples
GUPI
Example 1
For example, if you define an index by specifying DATE DESC, TIME ASC as the column names and
order, Db2 can use this same index to satisfy an ORDER BY clause such as the following one:
...
WHERE CODE = 'A'
ORDER BY CODE, DATE DESC, TIME ASC
Db2 can use any of the following index keys to satisfy the ordering:
• CODE, DATE DESC, TIME ASC
• CODE, DATE ASC, TIME DESC
• DATE DESC, TIME ASC
• DATE ASC, TIME DESC
Db2 can ignore the CODE column in the ORDER BY clause and the index because the value of the
CODE column in the result table of the query has no effect on the order of the data. If the CODE
column is included, it can be in any position in the ORDER BY clause and in the index.
GUPI
Related reference
order-by-clause (Db2 SQL)
Index keys
The usefulness of an index depends on the design of its key, which you define at the time that you create
the index.
An index key is a column, an ordered collection of columns, or an expression on which you define an
index. Db2 uses an index key to determine the order of index entries. Good candidates for index keys are
columns or expressions that you use frequently in operations that select, join, group, and order data.
Not all index keys need to be unique. For example, an index on the SALARY column of the sample EMP
table allows duplicates because several employees can earn the same salary.
A composite key is an index key that is built on two or more columns. An index key can contain up to 64
columns.
GUPI
For example, the following SQL statement creates a unique index on the EMPPROJACT table. A composite
key is defined on two columns, PROJNO and STDATE.
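A sketch of such a statement (the index name is hypothetical):

```sql
CREATE UNIQUE INDEX XPROJSTD
  ON EMPPROJACT (PROJNO, STDATE);
```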
This composite key is useful when you need to find project information by start date. Consider a SELECT
statement that has the following WHERE clause:
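For example (the values are illustrative):

```sql
WHERE PROJNO = 'MA2100' AND STDATE = '2004-01-01'
```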
This SELECT statement can execute more efficiently than if separate indexes are defined on PROJNO and
on STDATE.
GUPI
Index names
The name for an index is an SQL identifier of up to 128 characters. You can qualify this name with
an identifier, or schema, of up to 128 characters. An example index name is MYINDEX. For more
information, see SQL identifiers (Db2 SQL).
The following rules apply to index names:
index-name
A qualified or unqualified name that designates an index.
A qualified index name is an authorization ID or schema name followed by a period and an SQL
identifier.
An unqualified index name is an SQL identifier with an implicit qualifier. The implicit qualifier is
an authorization ID, which is determined by the context in which the unqualified name appears as
described by the rules in Qualification of unqualified object names (Db2 SQL).
For an index on a declared temporary table, the qualifier must be SESSION.
The index space name is an eight-character name, which must be unique among names of all index
spaces and table spaces in the database.
Creation of an index
If the table that you are indexing is empty, Db2 creates the index. However, Db2 does not actually create
index entries until the table is loaded or rows are inserted. If the table is not empty, you can choose to
have Db2 build the index when the CREATE INDEX statement executes, or to defer the build until later
(for example, by specifying DEFER YES on the CREATE INDEX statement and later running the REBUILD
INDEX utility).
Copies of an index
If your index is fairly large and needs the benefit of high availability, consider copying it for faster recovery.
Specify the COPY YES clause on a CREATE INDEX or ALTER INDEX statement to allow the indexes to be
copied. Db2 can then track the ranges of log records to apply during recovery, after the image copy of the
index is restored. (The alternative to copying the index is to use the REBUILD INDEX utility, which might
increase the amount of time that the index is unavailable to applications.)
This topic explains the types of indexes that apply to all tables. Indexes that apply to partitioned tables
only are covered separately.
Unique indexes
Db2 uses unique indexes to ensure that no identical key values are stored in a table.
When you create a table that contains a primary key or a unique constraint, you must create a unique
index for the primary key and for each unique constraint. Db2 marks the table definition as incomplete
until the required enforcing indexes are created. Depending on whether the table space was created
implicitly, whether the schema processor is used, and the setting of the CURRENT RULES special
register, these indexes can be created implicitly. If the required indexes are created implicitly, the table
definition is not marked as incomplete.
Example
A good candidate for a unique index is the EMPNO column of the EMP table. The following figure shows a
small set of rows from the EMP table and illustrates the unique index on EMPNO.
[Figure: an index on the EMPNO column of the EMP table. Each EMPNO value in the index (000030,
000060, 000140, 000200, 000220, 000330, 200140, 000320, 200340) points to the page and row in the
EMP table where the corresponding employee row is stored.]
Db2 uses this index to prevent the insertion of a row to the EMP table if its EMPNO value matches that of
an existing row. The preceding figure illustrates the relationship between each EMPNO value in the index
and the corresponding page number and row. Db2 uses the index to locate the row for employee 000030,
for example, in row 3 of page 1.
If you do not want duplicate values in the key column, create a unique index by using the UNIQUE clause
of the CREATE INDEX statement.
Example
GUPI The DEPT table does not allow duplicate department IDs. Creating a unique index, as the following
example shows, prevents duplicate values.
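A sketch of such a statement (the index name is hypothetical):

```sql
CREATE UNIQUE INDEX XDEPT1
  ON DEPT (DEPTNO);
```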
Before you create a unique index on a table that already contains data, ensure that no pair of rows has the
same key value. If Db2 finds a duplicate value in a set of key columns for a unique index, Db2 issues an
error message and does not create the index.
If an index key allows nulls for some of its column values, you can use the WHERE NOT NULL clause to
ensure that the non-null values of the index key are unique.
Unique indexes are an important part of implementing referential constraints among the tables in your
Db2 database. You cannot define a foreign key unless the corresponding primary key already exists and
has a unique index defined on it.
GUPI
INCLUDE columns
Unique indexes can include additional columns that are not part of a unique constraint. Those columns
are called INCLUDE columns. When you specify INCLUDE columns in a unique index, queries can use the
unique index for index-only access. Including these columns can eliminate the need to maintain extra
indexes that are used solely to enable index-only access.
Related reference
CREATE INDEX (Db2 SQL)
Nonunique indexes
You can use nonunique indexes to improve the performance of data access when the values of the
columns in the index are not necessarily unique.
Recommendation: Do not create nonunique indexes on very small tables, because scans of the tables are
more efficient than using indexes.
To create nonunique indexes, use the SQL CREATE INDEX statement. For nonunique indexes, Db2 allows
users and programs to enter duplicate values in a key column.
Example
GUPI Assume that more than one employee is named David Brown. Consider an index that is defined on
the FIRSTNME and LASTNAME columns of the EMP table.
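Such an index might be created as follows (a sketch; the index name is hypothetical):

```sql
CREATE INDEX XNAME
  ON EMP (FIRSTNME, LASTNAME);
```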
This index is an example of a nonunique index that can contain duplicate entries.
Clustering indexes
A clustering index determines how rows are physically ordered (clustered) in a table space. Clustering
indexes provide significant performance advantages in some operations, particularly those that involve
many records. Examples of operations that benefit from clustering indexes include grouping operations,
ordering operations, and comparisons other than equal.
Any index, except for an expression-based index or an XML index, can be a clustering index. You can
define only one clustering index on a table.
You can define a clustering index on a partitioned table space or on a segmented table space. On a
partitioned table space, a clustering index can be a partitioning index or a secondary index. If a clustering
index on a partitioned table is not a partitioning index, the rows are ordered in cluster sequence within
each data partition instead of spanning partitions. (Prior to Version 8 of Db2 UDB for z/OS, the partitioning
index was required to be the clustering index.)
Restriction: An expression-based index or an XML index cannot be a clustering index.
When a table has a clustering index, an INSERT statement causes Db2 to insert the records as nearly as
possible in the order of their index values. The first index that you define on the table serves implicitly
as the clustering index unless you explicitly specify CLUSTER when you create or alter another index. For
example, if you first define a unique index on the EMPNO column of the EMP table, Db2 inserts rows into
the EMP table in the order of the employee identification number unless you explicitly define another
index to be the clustering index.
Although a table can have several indexes, only one index can be a clustering index. If you do not define
a clustering index for a table, Db2 recognizes the first index that is created on the table as the implicit
clustering index when it orders data rows.
Tip:
• Always define a clustering index. Otherwise, Db2 might not choose the key that you would prefer for the
index.
• Define the sequence of a clustering index to support high-volume processing of data.
You use the CLUSTER clause of the CREATE INDEX or ALTER INDEX statement to define a clustering
index.
Example
Assume that you often need to gather employee information by department. In the EMP table, you
can create a clustering index on the DEPTNO column.
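For example, a statement like the following sketch could define that clustering index (the index name is illustrative):

```sql
CREATE INDEX XEMP_DEPT
  ON EMP (DEPTNO ASC)
  CLUSTER;
```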
As a result, all rows for the same department are probably close together. Db2 can generally access all
the rows for that department in a single read. (Using a clustering index does not guarantee that all rows
for the same department are stored on the same page. The actual storage of rows depends on the size of
the rows, the number of rows, and the amount of available free space. Likewise, some pages may contain
rows for more than one department.)
[Figure: A clustering index on the DEPT column of the EMP table. The index entries for each department (C01, D11, E21, ...) point to pages in which the rows for that department, such as employees KWAN and NICHOLLS in C01, are stored together.]
Suppose that you subsequently create a clustering index on the same table. In this case, Db2 identifies
it as the clustering index but does not rearrange the data that is already in the table. The organization
of the data remains as it was with the original nonclustering index that you created. However, when the
REORG utility reorganizes the table space, Db2 clusters the data according to the sequence of the new
clustering index. Therefore, if you know that you want a clustering index, you should define the clustering
index before you load the table. If that is not possible, you must define the index and then reorganize the
table. If you create or drop and re-create a clustering index after loading the table, those changes take
effect after a subsequent reorganization.
Related reference
Employee table (DSN8C10.EMP) (Introduction to Db2 for z/OS)
CREATE INDEX (Db2 SQL)
Expression-based indexes
By using the expression-based index capability of Db2, you can create an index that is based on a general
expression. You can enhance query performance if Db2 chooses the expression-based index.
Use expression-based indexes when you want an efficient evaluation of queries that involve a column-
expression. In contrast to simple indexes, where index keys consist of a concatenation of one or more
table columns that you specify, the index key values are not the same as values in the table columns. The
values have been transformed by the expressions that you specify.
You can create the index by using the CREATE INDEX statement, and specifying an expression, rather
than a column name. If an index is created with the UNIQUE option, the uniqueness is enforced against
the values that are stored in the index, not against the original column values.
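For example, the following sketch (index, table, and column names are illustrative) creates an index on an expression rather than on a plain column:

```sql
CREATE INDEX XUPLN
  ON EMP (UPPER(LASTNAME, 'En_US'));
```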
Db2 does not use expression-based indexes for queries that use sensitive static scrollable cursors.
Related concepts
Expressions (Db2 SQL)
Index keys
The usefulness of an index depends on the design of its key, which you define at the time that you create
the index.
Related reference
CREATE INDEX (Db2 SQL)
Compression of indexes
You can reduce the amount of space that an index occupies on disk by compressing the index.
The COMPRESS YES/NO clause of the ALTER INDEX and CREATE INDEX statements allows you to
compress the data in an index and reduce the size of the index on disk. However, index compression
is heavily data-dependent, and some indexes might contain data that does not yield significant space
savings. Compressed indexes might also use more real and virtual storage than non-compressed indexes.
After the EMPINDEX index is created successfully, several entries are populated in the catalog tables.
Example 2
You can create two XML indexes with the same pattern expression by using different data types
for each. You can use the different indexes to choose how you want to interpret the result of the
expression as multiple data types. For example, the value '12345' has a character representation but
it can also be interpreted as the number 12,345. Suppose, for instance, that you want to index the path
'/department/emp/@id' as both a character string and a number. You must create two indexes,
one for the VARCHAR data type and one for the DECFLOAT data type. The values in the document are
cast to the specified data type for each index.
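The two index definitions might look like the following sketch (the index names, the table DEPARTMENT, and its XML column DEPTDOC are assumptions for illustration):

```sql
CREATE INDEX DEPT_ID_VIDX
  ON DEPARTMENT (DEPTDOC)
  GENERATE KEY USING XMLPATTERN '/department/emp/@id'
  AS SQL VARCHAR(10);

CREATE INDEX DEPT_ID_NIDX
  ON DEPARTMENT (DEPTDOC)
  GENERATE KEY USING XMLPATTERN '/department/emp/@id'
  AS SQL DECFLOAT;
```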
Related concepts
Storage structure for XML data (Db2 Programming for XML)
Processing XML data with Db2 pureXML (Introduction to Db2 for z/OS)
XML data indexing (Db2 Programming for XML)
Pattern expressions (Db2 Programming for XML)
Best practices for XML performance in Db2 (Db2 Performance)
Related reference
XMLEXISTS predicate (Db2 SQL)
CREATE INDEX (Db2 SQL)
Partitioned index
A partitioned index is an index that is physically partitioned. Any index on a partitioned table, except for an
XML index, can be physically partitioned.
To create a partitioned index, specify PARTITIONED in the CREATE INDEX statement.
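For example, the following sketch (names are illustrative, and the table is assumed to be partitioned) creates a partitioned index:

```sql
CREATE INDEX XEMP_LN
  ON EMP (LASTNAME)
  PARTITIONED;
```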
A partitioned index consists of multiple data sets. Each data set corresponds to a table partition. The
following figure illustrates the difference between a partitioned index and a nonpartitioned index.
[Figure: A partitioned index consists of one index data set per table partition (P2, P3, P4, ...), each containing only the keys for its own partition; a nonpartitioned index holds the keys for all partitions in a single index structure.]
Partitioning index
A partitioning index is an index on the column or columns that partition the table. Partitioning indexes are
generally not required because Db2 uses table-controlled partitioning, where the partitioning scheme (the
partitioning key and limit key values) is already defined in the table definition.
The CREATE INDEX statement does not have a specific SQL keyword that designates an index as a
partitioning index. Instead, an index is a partitioning index if the index key that is specified in the CREATE
INDEX statement matches the partitioning key. The partitioning key is the column or columns that are
specified in the PARTITION BY clause of the CREATE TABLE statement. Those columns partition the table.
An index key matches the partitioning key if it has the same leftmost columns and collating sequence
(ASC/DESC) as the columns in the partitioning key.
A partitioning key is different from the limit key values. A partitioning key defines the columns on which
the table is partitioned. The limit key values define which values belong in each partition. Specifically, a
limit key value is the value of the partitioning key that defines the partition boundary. It is the highest
value of the partitioning key for an ascending index, or the lowest value for a descending index. Limit
key values are specified in the PARTITION... ENDING AT clause of a CREATE TABLE statement or ALTER
TABLE statement. The specified ranges partition the table space and the corresponding partitioning index
space.
Remember: Partitioning is different from clustering. Partitioning guarantees that rows are grouped into
particular partitions based on the value ranges that the partition limit keys define, whereas clustering
controls how rows are physically ordered in a partition or table space. Clustering is controlled by a
clustering index and can apply to any type of table space. For more information, see “Clustering indexes”
on page 114.
Tables created in earlier Db2 releases might still use index-controlled partitioning, where the partitioning
scheme was not defined as part of the table definition. In this case, a partitioning index is required to
specify the partitioning scheme. (The partitioning key and the limit key values were specified in the PART
VALUES clause of the CREATE INDEX statement.)
Deprecated function: Db2 12 can still process range-partitioned tables and indexes that use index-
controlled partitioning. However, such tables and indexes are deprecated. For best results, convert them
to use table-controlled partitioning (and a PBR table space) as soon as possible. For more information,
see “Converting table spaces to use table-controlled partitioning” on page 191.
Optionally, you can issue the following CREATE INDEX statement to create a partitioning index on
the example AREA_CODES table. This index is not required because the partitioning scheme of the
AREA_CODES table is defined in its CREATE TABLE statement, and AREA_CODES uses table-controlled
partitioning.
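Such a statement might look like the following sketch (the index name is illustrative, and the sketch assumes that AREA_CODE is the partitioning column):

```sql
CREATE INDEX AREACODES_IX
  ON AREA_CODES (AREA_CODE);
```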
Tip: If you use a partitioning index for clustering, the data rows can be physically ordered across the
entire table space.
The following figure illustrates the partitioning index on the AREA_CODES table.
[Figure: The partitioning index on the AREA_CODES table. Each index partition (P3, P4, ...) contains the AREA_CODE keys, such as 407, 408, 430 in P3 and 415, 510, 512, 530, 561 in P4, for the rows in the corresponding table partition.]
Related information
“Partitioning data in Db2 tables” on page 74
Creation of a table with table-controlled partitioning (Introduction to Db2 for z/OS)
“Changing the boundary between partitions” on page 219
“Clustering indexes” on page 114
“Converting table spaces to use table-controlled partitioning” on page 191
To understand the advantages of using data-partitioned secondary indexes (DPSIs) and nonpartitioned
indexes (NPIs), consider the following example indexes on the AREA_CODES table:
A data partitioned secondary index (DPSI) on the STATE column
Assuming that the AREA_CODES table is not partitioned on the STATE column, the following CREATE
INDEX statement creates a DPSI on the AREA_CODES table.
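A sketch of such a statement (the index name is illustrative):

```sql
CREATE INDEX DPSI_STATE
  ON AREA_CODES (STATE)
  PARTITIONED;
```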
The following example query can make efficient use of the example DPSI. The number of key values
that need to be searched is limited to just the key values of the qualifying partitions, which are only
those with partitioning key values that are less than or equal to 300.
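For instance (the predicate values are illustrative):

```sql
SELECT *
  FROM AREA_CODES
  WHERE AREA_CODE <= 300
    AND STATE = 'CA';
```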
The following query can make efficient use of the example NPI. The number of key values that need to
be searched is limited to scanning the index key values that are greater than 'CA'.
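For instance, assuming an NPI on the STATE column (the same key column without the PARTITIONED keyword):

```sql
SELECT *
  FROM AREA_CODES
  WHERE STATE > 'CA';
```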
[Figure: A nonpartitioned index on the STATE column of the AREA_CODES table. A single index structure contains the STATE keys (CA, FL, TX) for rows in all table partitions.]
Where:
• xxx is the name of the index that Db2 generates.
• table-name is the name of the table that is specified in the CREATE TABLE statement.
• (column1,...) is the list of column names that were specified in the UNIQUE or PRIMARY KEY clause
of the CREATE TABLE statement, or the column is a ROWID column that is defined as GENERATED BY
DEFAULT.
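Taken together, the generated statement has a form similar to the following sketch, using the placeholders that are described above:

```sql
CREATE UNIQUE INDEX xxx
  ON table-name (column1, ...);
```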
In addition, if the table space that contains the table is implicitly created, Db2 will check the DEFINE
DATA SET subsystem parameter to determine whether to define the underlying data set for the index
space of the implicitly created index on the base table.
If DEFINE DATA SET is NO, the index is created as if the following CREATE INDEX statement is issued:
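That is, a form like the following sketch, in which the DEFINE NO option defers the definition of the underlying data set:

```sql
CREATE UNIQUE INDEX xxx
  ON table-name (column1, ...)
  DEFINE NO;
```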
When you create a table and specify the organization-clause of the CREATE TABLE statement, Db2
implicitly creates an index for hash overflow rows. This index contains index entries for overflow rows
that do not fit in the fixed hash space. If the hash space that is specified in the organization-clause is
adequate, the hash overflow index should have no entries, or very few entries. The hash overflow index for
a table in a partition-by-range table space is a partitioned index. The hash overflow index for a table in a
partition-by-growth table space is a non-partitioned index.
Db2 determines how much space to allocate for the hash overflow index. Because this index will be
sparsely populated, the size is relatively small compared to a normal index.
Procedure
To create a schema:
1. Write a CREATE SCHEMA statement.
2. Use the schema processor to execute the statement.
Example
The following example shows schema processor input that includes the definition of a schema.
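Such input might look like the following sketch (the object names and the particular statements are illustrative):

```sql
CREATE SCHEMA AUTHORIZATION SMITH

  CREATE TABLE TESTSTUFF
    (TESTNO   CHAR(4),
     RESULT   CHAR(4),
     TESTTYPE CHAR(3))

  CREATE VIEW TESTV1 AS
    SELECT * FROM TESTSTUFF
    WHERE TESTTYPE = 'RUN'

  GRANT SELECT ON TESTV1 TO PUBLIC
```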
Procedure
To process schema definitions:
1. Run the schema processor (DSNHSP) as a batch job. Use the sample JCL provided in member
DSNTEJ1S of the SDSNSAMP library.
The schema processor accepts only one schema definition in a single job. No statements that are
outside the schema definition are accepted. Only SQL comments can precede the CREATE SCHEMA
statement; the end of input ends the schema definition. SQL comments can also be used within and
between SQL statements.
The processor takes the SQL from CREATE SCHEMA (the SYSIN data set), dynamically executes it, and
prints the results in the SYSPRINT data set.
2. If a statement in the schema definition has an error, the schema processor processes the remaining
statements but rolls back all the work at the end. In this case, you need to fix the statement in error
and resubmit the entire schema definition.
Procedure
Run the LOAD utility control statement with the options that you need.
What to do next
Reset the restricted status of the table space that contains the loaded data.
Related concepts
Before running LOAD (Db2 Utilities)
Row format conversion for table spaces
The row format of a table space is converted when you run the LOAD REPLACE or REORG TABLESPACE
utilities.
Related tasks
Collecting statistics by using Db2 utilities (Db2 Performance)
Related reference
LOAD (Db2 Utilities)
INCURSOR option
The INCURSOR option of the LOAD utility specifies a cursor for the input data set. Use the EXEC SQL utility
control statement to declare the cursor before running the LOAD utility. You define the cursor so that it
selects data from another Db2 table. The column names in the SELECT statement must be identical to
the column names of the table that is being loaded. The INCURSOR option uses the Db2 cross-loader
function.
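The pattern might look like the following sketch (the cursor, table, and column names are illustrative):

```sql
EXEC SQL
  DECLARE C1 CURSOR FOR
    SELECT EMPNO, LASTNAME
    FROM DSN8C10.EMP
ENDEXEC

LOAD DATA
  INCURSOR(C1)
  REPLACE
  INTO TABLE MYID.EMP_COPY
```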
XML columns
You can load XML documents from input records if the total input record length is less than 32 KB. If the
input record length is greater than 32 KB, you must load the data from a separate file. (You can also use a
separate file when the input record length is less than 32 KB.)
When the XML data is to be loaded from the input record, specify XML as the input field type. The
target column must be an XML column. The LOAD utility treats XML columns as varying-length data when
loading XML directly from input records and expects a two-byte length field preceding the actual XML
value.
The XML tables are loaded when the base table is loaded. You cannot specify the name of the auxiliary
XML table to load.
XML documents must be well formed in order to be loaded.
LOB columns
The LOAD utility treats LOB columns as varying-length data. The length value for a LOB column must be
4 bytes. The LOAD utility can be used to load LOB data if the length of the row, including the length of
the LOB data, does not exceed 32 KB. The auxiliary tables are loaded when the base table is loaded. You
cannot specify the name of the auxiliary table to load.
Procedure
Issue an INSERT statement, and insert single or multiple rows.
What to do next
You can issue the statement interactively or embed it in an application program.
Related tasks
Inserting multiple rows
You can use a form of INSERT that copies rows from another table.
Inserting a single row
The simplest form of the INSERT statement inserts a single row of data. In this form of the statement, you
specify the table name, the columns into which the data is to be inserted, and the data itself.
Changing the logging attribute for a table space
You can use the ALTER TABLESPACE statement to set the logging attribute of a table space.
Related reference
INSERT (Db2 SQL)
Procedure
To insert a single row:
1. Issue an INSERT INTO statement.
2. Specify the table name, the columns into which the data is to be inserted, and the data itself.
Example
For example, suppose that you create a test table, TEMPDEPT, with the same characteristics as the
department table:
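The definition might look like the following sketch (modeled on the sample department table; the table space clause is omitted):

```sql
CREATE TABLE SMITH.TEMPDEPT
  (DEPTNO   CHAR(3)     NOT NULL,
   DEPTNAME VARCHAR(36) NOT NULL,
   MGRNO    CHAR(6)     NOT NULL,
   ADMRDEPT CHAR(3)     NOT NULL);
```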
Procedure
To add multiple rows to a table:
1. Issue an INSERT INTO statement.
For example, the following statement loads TEMPDEPT with data from the department table about all
departments that report to department D01.
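A sketch of such a statement (the qualified name of the sample department table is an assumption):

```sql
INSERT INTO SMITH.TEMPDEPT
  SELECT DEPTNO, DEPTNAME, MGRNO, ADMRDEPT
  FROM DSN8C10.DEPT
  WHERE ADMRDEPT = 'D01';
```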
2. Optional: Embed the INSERT statement in an application program to insert multiple rows into a table
from the values that are provided in host-variable arrays.
a) Specify the table name, the columns into which the data is to be inserted, and the arrays that
contain the data.
Each array corresponds to a column.
For example, you can load TEMPDEPT with the number of rows in the host variable num-rows by using
the following embedded INSERT statement:
EXEC SQL
INSERT INTO SMITH.TEMPDEPT
FOR :num-rows ROWS
VALUES (:hva1, :hva2, :hva3, :hva4);
Assume that the host-variable arrays hva1, hva2, hva3, and hva4 are populated with the values that
are to be inserted. The number of rows to insert must be less than or equal to the dimension of each
host-variable array.
Related concepts
Implications of using an INSERT statement to load tables
If you plan to use the INSERT statement to load tables, you should consider some of the implications.
Related tasks
Inserting a single row
The simplest form of the INSERT statement inserts a single row of data. In this form of the statement, you
specify the table name, the columns into which the data is to be inserted, and the data itself.
BIND PACKAGE(location.DSNUT121)
MEMBER(DSNUGSQL) -
ACTION(ADD) ISOLATION(CS) ENCODING(EBCDIC) -
VALIDATE(BIND) CURRENTDATA(NO) -
LIBRARY('prefix.SDSNDBRM')
Procedure
To load a Db2 for z/OS table from a client CLI application, follow these steps:
1. Invoke the SQLSetStmtAttr function to set values for the following attributes:
SQL_ATTR_DB2ZLOAD_LOADSTMT
The text of the LOAD control statement.
SQL_ATTR_DB2ZLOAD_UTILITYID
The utility ID, which is a unique identifier that you can set so that you can identify a particular LOAD
statement invocation. Setting this attribute is optional.
2. Allocate a buffer for the data that is to be loaded.
3. Invoke the SQLSetStmtAttr function to set the SQL_ATTR_DB2ZLOAD_BEGIN attribute to indicate to
the CLI driver that the LOAD operation is to begin.
4. Invoke the SQLPutData function one or more times to send the data that is to be loaded to the CLI
driver.
5. When all the data has been sent to the driver, invoke the SQLSetStmtAttr function to set the
SQL_ATTR_DB2ZLOAD_END attribute. That attribute indicates to the CLI driver that the LOAD
operation is complete.
Example
The following code fragment demonstrates using the DRDA fast load process to load data from file
customer.data into table MYID.CUSTOMER_DATA.
This code fragment uses the STMT_HANDLE_CHECK macro. STMT_HANDLE_CHECK is in utilcli.h, which is
shipped with Db2 for Linux, UNIX, and Windows. For information on STMT_HANDLE_CHECK and the utility
functions that it invokes, see the following topics:
Declarations of utility functions used by DB2 CLI samples
Utility functions used by DB2 CLI samples
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sqlcli1.h>
#include "utilcli.h" /* Header file for CLI sample code */
#include <sqlca.h>
Procedure
Use the DataRefresher licensed program.
Related concepts
Tools for moving Db2 data
Procedure
See Implementing Db2 stored procedures.
Related tasks
Creating external stored procedures (Db2 Application programming and SQL)
Creating external SQL procedures (deprecated) (Db2 Application programming and SQL)
Creating native SQL procedures (Db2 Application programming and SQL)
Related reference
Procedures that are supplied with Db2 (Db2 SQL)
Related reference
CREATE PROCEDURE (Db2 SQL)
Related information
Db2 for z/OS Stored Procedures: Through the CALL and Beyond (IBM Redbooks)
Procedure
Issue the DROP PROCEDURE statement, and specify the name of the stored procedure that you want to
drop.
Example
For example, to drop the stored procedure MYPROC in schema SYSPROC, issue the following statement:
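A sketch of the statement:

```sql
DROP PROCEDURE SYSPROC.MYPROC;
```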
Related tasks
Migrating an external SQL procedure to a native SQL procedure (Db2 Application programming and SQL)
Related reference
DROP (Db2 SQL)
Insert rules
The insert rules for referential integrity apply to parent and dependent tables.
The following insert rules for referential integrity apply to parent and dependent tables:
Parent table rules
You can insert a row at any time into a parent table without taking any action in the dependent table.
For example, you can create a new department in the DEPT table without making any change to the
EMP table. If you are inserting rows into a parent table that is involved in a referential constraint, the
following restrictions apply:
• A unique index must exist on the parent key.
• You cannot enter duplicate values for the parent key.
• You cannot insert a null value for any column of the parent key.
Dependent table rules
You cannot insert a row into a dependent table unless a row in the parent table has a parent key value
that equals the foreign key value that you want to insert. You can insert a foreign key with a null value
into a dependent table (if the referential constraint allows this), but no logical connection exists if you
do so. If you insert rows into a dependent table, the following restrictions apply:
• Each nonnull value that you insert into a foreign key column must be equal to some value in the
parent key.
• If any field in the foreign key is null, the entire foreign key is null.
Example
For example, assume that your company does not want to have a row in the PARTS table unless the PROD#
column value in that row matches a valid PROD# in the PRODUCTS table. The PRODUCTS table has a
primary key on PROD#. The PARTS table has a foreign key on PROD#. The constraint definition specifies
a RESTRICT constraint. Every inserted row of the PARTS table must have a PROD# that matches a PROD#
in the PRODUCTS table.
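The dependent table definition might include a constraint like the following sketch (the column names other than PROD#, the lengths, and the constraint name are assumptions):

```sql
CREATE TABLE PARTS
  (PART    CHAR(8)  NOT NULL,
   PROD#   CHAR(8)  NOT NULL,
   PRIMARY KEY (PART),
   CONSTRAINT PRODFK FOREIGN KEY (PROD#)
     REFERENCES PRODUCTS (PROD#)
     ON DELETE RESTRICT);
```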
Update rules
The update rules for referential integrity apply to parent and dependent tables.
The following update rules for referential integrity apply to parent and dependent tables:
Parent table rules
You cannot change a parent key column of a row that has a dependent row. If you do, the dependent
row no longer satisfies the referential constraint, so Db2 prohibits the operation.
Dependent table rules
You cannot change the value of a foreign key column in a dependent table unless the new value exists
in the parent key of the parent table.
Example
When an employee transfers from one department to another, the department number for that employee
must change. The new value must be the number of an existing department, or it must be null. You
should not be able to assign an employee to a department that does not exist. However, in the event of
a company reorganization, employees might temporarily not report to a valid department. In this case, a
null value is a possibility.
If an update to a table with a referential constraint fails, Db2 rolls back all changes that were made during
the update.
Delete rules
Delete rules, which are applied to parent and dependent tables, are an important part of Db2 referential
integrity.
The following delete rules for referential integrity apply to parent and dependent tables:
For parent tables
For any particular relationship, Db2 enforces delete rules that are based on the choices that you
specify when you define the referential constraint.
For dependent tables
At any time, you can delete rows from a dependent table without acting on the parent table.
To delete a row from a table that has a parent key and dependent tables, you must obey the delete
rules for that table. To succeed, the DELETE must satisfy all delete rules of all affected relationships. The
DELETE fails if it violates any referential constraint.
Examples
Example 1
Consider the parent table in the department-employee relationship. Suppose that you delete the row
for department C01 from the DEPT table. That deletion affects the information in the EMP table about
Sally Kwan, Heather Nicholls, and Kim Natz, who work in department C01.
Example
To resolve the many-to-many relationship between employees (in the EMP table) and projects (in the
PROJ table), designers create a new associative table, EMP_PROJ, during physical design. EMP and PROJ
are both parent tables to the child table, EMP_PROJ.
When you establish referential constraints, you must create parent tables with at least one unique key
and corresponding indexes before you can define any corresponding foreign keys on dependent tables.
Related concepts
Database design with denormalization (Introduction to Db2 for z/OS)
Entities for different types of relationships (Introduction to Db2 for z/OS)
Related tasks
Using referential integrity for data consistency (Managing Security)
Procedure
Issue the CREATE FUNCTION statement, and specify the type of function that you want to create.
You can create the following types of user-defined functions:
External scalar
The function is written in a programming language and returns a scalar value. The external executable
routine (package) is registered with a database server along with various attributes of the function.
Each time that the function is invoked, the package executes one or more times. See CREATE
FUNCTION (external scalar) (Db2 SQL).
External table
The function is written in a programming language. It returns a table to the subselect from which
it was started by returning one row at a time, each time that the function is started. The external
executable routine (package) is registered with a database server along with various attributes of the
function. Each time that the function is invoked, the package executes one or more times. See CREATE
FUNCTION (external table) (Db2 SQL).
Example
The following two examples demonstrate how to define and use both a user-defined function and a
distinct type.
Example 1
Suppose that you define a table called EUROEMP. One column of this table, EUROSAL, has a distinct
type of EURO, which is based on DECIMAL(9,2). You cannot use the built-in AVG function to find the
average value of EUROSAL because AVG operates on built-in data types only. You can, however, define
an AVG function that is sourced on the built-in AVG function and accepts arguments of type EURO:
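The sourced function definition might look like the following sketch (the signature follows from the EURO distinct type, which is based on DECIMAL(9,2)):

```sql
CREATE FUNCTION AVG(EURO)
  RETURNS EURO
  SOURCE SYSIBM.AVG(DECIMAL(9,2));
```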
Example 2
You can then use this function to find the average value of the EUROSAL column:
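For example:

```sql
SELECT AVG(EUROSAL)
  FROM EUROEMP;
```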
The next two examples demonstrate how to define and use an external user-defined function.
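The definition of such an external scalar function might look like the following sketch (the parameter data type and the options shown are assumptions; the external program REVERSE is assumed to exist):

```sql
CREATE FUNCTION REVERSE(VARCHAR(100))
  RETURNS VARCHAR(100)
  EXTERNAL NAME 'REVERSE'
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC;
```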
Example 4
You can then use the REVERSE function in an SQL statement wherever you would use any built-in
function that accepts a character argument, as shown in the following example:
SELECT REVERSE(:CHARSTR)
FROM SYSIBM.SYSDUMMY1;
Although you cannot write user-defined aggregate functions, you can define sourced user-defined
aggregate functions that are based on built-in aggregate functions. This capability is useful in cases
where you want to refer to an existing user-defined function by another name or where you want to
pass a distinct type.
The next two examples demonstrate how to define and use a user-defined table function.
Example 5
You can define and write a user-defined table function that users can invoke in the FROM clause of
a SELECT statement. For example, suppose that you define and write a function called BOOKS. This
function returns a table of information about books on a specified subject. The definition looks like the
following example:
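A sketch of such a definition (the result column names and the options shown are assumptions):

```sql
CREATE FUNCTION BOOKS(SUBJECT VARCHAR(40))
  RETURNS TABLE (TITLE  VARCHAR(50),
                 AUTHOR VARCHAR(30),
                 ISBN   CHAR(13))
  EXTERNAL NAME 'BOOKS'
  LANGUAGE C
  PARAMETER STYLE SQL
  READS SQL DATA
  DETERMINISTIC;
```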
Example 6
You can then include the BOOKS function in the FROM clause of a SELECT statement to retrieve the
book information, as shown in the following example:
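For instance (the subject value and the result column names are illustrative):

```sql
SELECT T.TITLE, T.AUTHOR
  FROM TABLE(BOOKS('DATABASE DESIGN')) AS T;
```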
Related concepts
User-defined functions (Db2 SQL)
Related tasks
Deleting user-defined functions
Use the DROP statement to delete a user-defined function at the current server.
Creating a user-defined function (Db2 Application programming and SQL)
Altering user-defined functions
You can use the ALTER FUNCTION statement to update the description of user-defined functions.
Obfuscating source code of SQL procedures, SQL functions, and triggers
You can protect the intellectual property of source code for certain types of SQL procedures, SQL
functions, and triggers by obfuscating the data definition statements that create them. The obfuscation
renders the source code bodies of the SQL functions, SQL procedures, and triggers unreadable, except
when decoded by a database server that supports obfuscated statements.
Procedure
To delete a user-defined function:
1. Issue the DROP statement.
2. Specify FUNCTION or SPECIFIC FUNCTION.
Example
For example, drop the user-defined function ATOMIC_WEIGHT from the schema CHEM:
Related concepts
User-defined functions (Db2 SQL)
Related tasks
Creating user-defined functions
The CREATE FUNCTION statement registers a user-defined function with a database server.
Related reference
DROP (Db2 SQL)
To verify that function ADMF001.TAN is not system-defined, issue a SELECT statement like this one.
To verify that function ADMF001.TAN is system-defined, issue a SELECT statement like this one.
Procedure
To obfuscate program logic in the body of a routine or trigger in a data definition statement, use any of the
following approaches:
• Call the CREATE_WRAPPED stored procedure to deploy the routine or trigger.
For example, the following CALL statement produces an obfuscated version of a function that
computes a yearly salary from an hourly wage, given a 40-hour work week.
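A sketch of such a call (the function name and data types are assumptions; the procedure is assumed to be invoked from its SYSIBMADM schema):

```sql
CALL SYSIBMADM.CREATE_WRAPPED(
  'CREATE FUNCTION YEARLY_SALARY(WAGE DECIMAL(9,2))
     RETURNS DECIMAL(9,2)
     RETURN WAGE * 40 * 52');
```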
• Invoke the WRAP built-in function to create the obfuscated form of a statement for use in your
application source code.
For example, the following statement returns the obfuscated form of a procedure:
Notice that the string delimiters inside the wrapped CREATE PROCEDURE statement must be escaped,
such as by doubling the single quotation marks in P1=''A''.
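A sketch of such an invocation (the procedure name and body are illustrative; note the doubled quotation marks around A):

```sql
SELECT WRAP(
  'CREATE PROCEDURE UPDATE_P1(IN A CHAR(1))
     BEGIN
       UPDATE T1 SET P1 = ''A'';
     END')
  FROM SYSIBM.SYSDUMMY1;
```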
The result of the statement is similar to the following form:
What to do next
The obfuscated statements can be used in scripts or batch jobs that can be distributed to users, without
exposing the intellectual property that the statements contain.
Related reference
WRAP (Db2 SQL)
CREATE_WRAPPED stored procedure (Db2 SQL)
CREATE FUNCTION (compiled SQL scalar) (Db2 SQL)
CREATE FUNCTION (inlined SQL scalar) (Db2 SQL)
Procedure
Ensure that the statistics history is current by using the MODIFY STATISTICS utility to delete outdated
statistical data from the catalog history tables.
Related concepts
General approach to estimating storage
Estimating the space requirements for Db2 objects is easier if you collect and maintain a statistical history
of those objects.
Related tasks
Collecting history statistics (Db2 Performance)
Collecting statistics history (Db2 Utilities)
Improving disk storage (Db2 Performance)
Related reference
MODIFY STATISTICS (Db2 Utilities)
The multiplier M depends on your circumstances. It includes factors that are common to all data sets on
disk, as well as others that are particular to Db2. It can vary significantly, from a low of about 1.25 to 4.0
or more. For a first approximation, set M=2.
Whether you use extended address volumes (EAV) is also a factor in estimating storage. Although the
EAV factor is not a multiplier, you need to add 10 cylinders for each object in the cylinder-managed
space of an EAV. Db2 data sets might take more space or grow faster on EAV compared to non-extended
address volumes. The reason is that the allocation unit in the extended addressing space (EAS) of EAV is
a multiple of 21 cylinders, and every allocation is rounded up to this multiple. If you use EAV, the data set
space estimation for an installation must take this factor into account. The effect is more pronounced for
smaller data sets.
For more accuracy, you can calculate M as the product of the following factors:
In addition to the space for your data, external storage devices are required for:
Creating a table using CREATE TABLE LIKE in a table space of a larger page size changes the
specification of LONG VARCHAR to VARCHAR and LONG VARGRAPHIC to VARGRAPHIC. You can also use
CREATE TABLE LIKE to create a table with a smaller page size in a table space if the maximum record size
is within the allowable record size of the new table space.
Related concepts
XML versions (Db2 Programming for XML)
General approach to estimating storage
Procedure
To estimate the storage required for LOB table spaces, complete the following steps:
1. Begin with your estimates from other table spaces
2. Round the figure up to the next page size
3. Multiply the figure by 1.1
What to do next
An auxiliary table resides in a LOB table space. There can be only one auxiliary table in a LOB table space.
An auxiliary table can store only one LOB column of a base table and there must be one and only one
index on this column.
One page never contains more than one LOB. When a LOB value is deleted, the space occupied by that
value remains allocated as long as any application might access that value.
When a LOB table space grows to its maximum size, no more data can be inserted into the table space or
its associated base table.
Enabling LOB data compression might reduce the size of the data to allow the LOB table space to contain
more data.
Procedure
To calculate the storage required when using the LOAD utility, complete the following steps:
1. Calculate the usable page size.
Usable page size is the page size minus a number of bytes of overhead (that is, 4 KB - 40 for 4 KB
pages, 8 KB - 54 for 8 KB pages, 16 KB - 54 for 16 KB pages, or 32 KB - 54 for 32 KB pages) multiplied
by (100-p) / 100, where p is the value of PCTFREE.
If your average record size is less than 16, then usable page size is 255 (maximum records per page)
multiplied by average record size multiplied by (100-p) / 100.
2. Calculate the records per page.
Records per page is MIN(MAXROWS, FLOOR(usable page size / average record size)), but cannot
exceed 255 and cannot exceed the value you specify for MAXROWS.
3. Calculate the pages used.
Pages used is 2+CEILING(number of records / records per page).
4. Calculate the total pages used.
Total pages is FLOOR(pages used × (1 + fp) / fp), where fp is the (nonzero) value of FREEPAGE. If
FREEPAGE is 0, then total pages is equal to pages used.
If you are using data compression, you need additional pages to store the dictionary.
5. Estimate the number of kilobytes required for a table.
• If you do not use data compression, the estimated number of kilobytes is total pages × page size (4
KB, 8 KB, 16 KB, or 32 KB).
• If you use data compression, the estimated number of kilobytes is total pages × page size (4 KB, 8
KB, 16 KB, or 32 KB) × (1 - compression ratio).
Example
For example, consider a table space containing a single table with the following characteristics:
• Number of records = 100000
• Average record size = 80 bytes
• Page size = 4 KB
• PCTFREE = 5 (5% of space is left free on each page)
• FREEPAGE = 20 (one page is left free for each 20 pages used)
• MAXROWS = 255
If the data is not compressed, you get the following results:
• Usable page size = 4056 × 0.95 = 3853 bytes
• Records per page = MIN(MAXROWS, FLOOR(3853 / 80)) = 48
• Pages used = 2 + CEILING(100000 / 48) = 2 + 2084 = 2086
• Total pages = FLOOR(2086 × 21 / 20) = 2190
• Estimated number of kilobytes = 2190 × 4 = 8760
If the data is compressed, multiply the estimated number of kilobytes for an uncompressed table by (1 -
compression ratio) for the estimated number of kilobytes required for the compressed table.
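The arithmetic of steps 1 through 5 can be checked with a short script that applies the formulas literally; the page-overhead values come from step 1, and everything else is the calculation shown above:

```python
import math

def estimate_table_kb(records, avg_rec, page_kb, pctfree, freepage,
                      maxrows=255, compression_ratio=0.0):
    """Apply steps 1-5 of the LOAD storage estimate; result is in KB."""
    overhead = 40 if page_kb == 4 else 54                  # step 1: page overhead
    usable = math.floor((page_kb * 1024 - overhead) * (100 - pctfree) / 100)
    per_page = min(maxrows, math.floor(usable / avg_rec))  # step 2
    pages_used = 2 + math.ceil(records / per_page)         # step 3
    if freepage:                                           # step 4
        total = math.floor(pages_used * (1 + freepage) / freepage)
    else:
        total = pages_used
    # step 5: kilobytes, reduced by the compression ratio if any
    return math.floor(total * page_kb * (1 - compression_ratio))

# The worked example: 100,000 records of 80 bytes, 4 KB pages,
# PCTFREE 5, FREEPAGE 20, no compression
print(estimate_table_kb(100_000, 80, 4, 5, 20))
```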
Related tasks
Calculating the space that is required for a dictionary (Db2 Performance)
[Figure: a one-level index structure. A level 0 index page contains key and record-ID pairs; each record ID points to a row in the table.]
If you insert data with a constantly increasing key, Db2 adds the new highest key to the top of a new page.
Be aware, however, that Db2 treats nulls as the highest value. When the existing high key contains a null
value in the first column that differentiates it from the new key that is inserted, the inserted non-null index
entries cannot take advantage of the highest-value split.
S
The value of the page size minus the length of the page header and page tail.
FLOOR
The operation of discarding the decimal portion of a real number.
CEILING
The operation of rounding a real number up to the next highest integer.
Procedure
To estimate index storage size, complete the following calculations:
1. Calculate the pages for a unique index.
a) Calculate the total leaf pages
i) Calculate the space per key
space per key is approximately k + r + 3
ii) Calculate the usable space per page
usable space per page is approximately FLOOR((100 - f) × S / 100)
iii) Calculate the entries per page
entries per page is approximately FLOOR(usable space per page / space per key)
iv) Calculate the total leaf pages
total leaf pages is approximately CEILING(number of table rows / entries per page)
b) Calculate the total nonleaf pages
i) Calculate the space per key
space per key is approximately k + 7
ii) Calculate the usable space per page
usable space per page is approximately FLOOR(MAX(90, (100 - f)) × S / 100)
iii) Calculate the entries per page
entries per page is approximately FLOOR(usable space per page / space per key)
iv) Calculate the minimum child pages
minimum child pages is approximately MAX(2, (entries per page + 1))
v) Calculate the level 2 pages
level 2 pages is approximately CEILING(total leaf pages / minimum child pages)
vi) Calculate the level 3 pages
level 3 pages is approximately CEILING(level 2 pages / minimum child pages)
vii) Calculate the level x pages
level x pages is approximately CEILING(previous level pages / minimum child pages)
viii) Calculate the total nonleaf pages
total nonleaf pages is approximately (level 2 pages + level 3 pages + ...+ level x pages until the
number of level x pages = 1)
2. Calculate the pages for a nonunique index.
a) Calculate the total leaf pages
i) Calculate the space per key
space per key is approximately 4 + k + (n × (r+1))
ii) Calculate the usable space per page
usable space per page is approximately FLOOR((100 - f) × S / 100)
iii) Calculate the key entries per page
key entries per page is approximately n × (usable space per page / space per key)
iv) Calculate the remaining space per page
Example
In the following example of the entire calculation, assume that an index is defined with these
characteristics:
• The index is unique.
• The table it indexes has 100000 rows.
• The key is a single column defined as CHAR(10) NOT NULL.
• The value of PCTFREE is 5.
• The value of FREEPAGE is 4.
• The page size is 4 KB.
Length of key (k): 10
Average number of duplicate keys (n): 1
PCTFREE (f): 5
FREEPAGE (p): 4
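A sketch of the unique-index calculation for this example. The record-ID length r = 5 bytes and S = 4050 bytes are illustrative assumptions (the procedure leaves both to be determined for your page size), so the resulting page counts are approximate:

```python
import math

def unique_index_pages(rows, k, f, r=5, S=4050):
    """Steps 1a and 1b: leaf and nonleaf page counts for a unique index.
    r (record-ID length) and S (page size minus header and tail) are
    illustrative assumptions, not values given in the procedure."""
    # Step 1a: total leaf pages
    space_per_key = k + r + 3
    usable = math.floor((100 - f) * S / 100)
    entries = math.floor(usable / space_per_key)
    leaf = math.ceil(rows / entries)
    # Step 1b: total nonleaf pages
    space_per_key_nl = k + 7
    usable_nl = math.floor(max(90, 100 - f) * S / 100)
    entries_nl = math.floor(usable_nl / space_per_key_nl)
    min_child = max(2, entries_nl + 1)
    nonleaf, level = 0, leaf
    while level > 1:                      # sum level 2, level 3, ... until 1 page
        level = math.ceil(level / min_child)
        nonleaf += level
    return leaf, nonleaf

print(unique_index_pages(rows=100_000, k=10, f=5))
```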
Procedure
To identify databases that might exceed the OBID limit, run the following query:
WITH MTTS_DB AS
( SELECT DBNAME,
DBID,
COUNT(DBNAME) AS NUMTS,
SUM(NTABLES) AS NUMTB
FROM SYSIBM.SYSTABLESPACE
WHERE NTABLES > 1
GROUP BY DBNAME, DBID
)
,
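The remainder of the query is not preserved in this edition. As an illustrative continuation (a sketch, not the complete IBM-supplied query), the common table expression can be closed and queried to rank databases by the number of objects they contain:

```sql
WITH MTTS_DB AS
( SELECT DBNAME,
         DBID,
         COUNT(DBNAME) AS NUMTS,
         SUM(NTABLES) AS NUMTB
    FROM SYSIBM.SYSTABLESPACE
   WHERE NTABLES > 1
   GROUP BY DBNAME, DBID
)
SELECT DBNAME, DBID, NUMTS, NUMTB
  FROM MTTS_DB
 ORDER BY NUMTB DESC;
```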
What to do next
If not enough OBIDs are available to accommodate an operation, take one of the following actions:
• Specify a different database.
Attention: The DROP statement has a cascading effect. Objects that are dependent on the
dropped object are also dropped. For example, all authorities for those objects disappear, and
packages that reference deleted objects are marked invalid by Db2.
2. Use the COMMIT statement to commit the changes to the object.
3. Use the CREATE statement to re-create the object.
The following table provides links to task and reference information for altering specific types of Db2
objects.
Table 26. Where to find information about altering Db2 database objects

Object type | Task information | SQL statement reference
Databases | “Altering Db2 databases” on page 173 | ALTER DATABASE (Db2 SQL)
Table spaces | “Altering table spaces” on page 177 | ALTER TABLESPACE (Db2 SQL)
Tables | “Altering Db2 tables” on page 194 | ALTER TABLE (Db2 SQL)
Views | “Altering Db2 views” on page 242 | ALTER VIEW (Db2 SQL)
Indexes | “Altering Db2 indexes” on page 243 | ALTER INDEX (Db2 SQL)
Storage groups | “Altering Db2 storage groups” on page 174 | ALTER STOGROUP (Db2 SQL)
Stored procedures | “Altering stored procedures” on page 266 | ALTER PROCEDURE (external) (Db2 SQL); ALTER PROCEDURE (SQL - external) (deprecated) (Db2 SQL); ALTER PROCEDURE (SQL - native) (Db2 SQL)
Related concepts
Implementing your database design
For a list of Db2 catalog tables and descriptions of the information that they contain, see Db2 catalog
tables (Db2 SQL).
The information in the catalog is vital to normal Db2 operation. You can retrieve catalog information, but
changing it can have serious consequences. Therefore, you cannot execute insert or delete operations that
affect the catalog, and you can update only a limited number of columns. Exceptions to these
restrictions are the SYSIBM.SYSSTRINGS, SYSIBM.SYSCOLDIST, and SYSIBM.SYSCOLDISTSTATS catalog
tables, in which you can insert, update, and delete rows.
To retrieve information from the catalog, you need at least the SELECT privilege on the appropriate
catalog tables.
Note: Some catalog queries can result in long table space scans.
GUPI
Procedure
Query the SYSIBM.SYSSTOGROUP and SYSIBM.SYSVOLUMES tables.
The following query shows what volumes are in a Db2 storage group, how much space is used, and when
that space was last calculated.
SELECT SGNAME,VOLID,SPACE,SPCDATE
FROM SYSIBM.SYSVOLUMES,SYSIBM.SYSSTOGROUP
WHERE SGNAME=NAME
ORDER BY SGNAME;
GUPI
Related reference
SYSSTOGROUP catalog table (Db2 SQL)
SYSVOLUMES catalog table (Db2 SQL)
Procedure
Query the SYSIBM.SYSTABLES table.
The following example query displays all the information for the project activity sample table:
SELECT *
FROM SYSIBM.SYSTABLES
WHERE NAME = 'PROJACT'
AND CREATOR = 'DSN8C10';
GUPI
Related concepts
Adding and retrieving comments
After you create an object, you can provide explanatory information about it for future reference. For
example, you can provide information about the purpose of the object, who uses it, and anything unusual
about it.
Related reference
SYSTABLES catalog table (Db2 SQL)
Procedure
Query the SYSIBM.SYSTABLEPART table.
The following statement displays information on partition order in ascending limit value order:
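The statement itself is not preserved in this edition; as a sketch, such a query against SYSIBM.SYSTABLEPART might look like the following (the database and table space names are illustrative):

```sql
SELECT PARTITION, TSNAME, DBNAME, LIMITKEY
  FROM SYSIBM.SYSTABLEPART
 WHERE DBNAME = 'DSN8D12A'
   AND TSNAME = 'DSN8S12E'
 ORDER BY LIMITKEY;
```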
GUPI
Related reference
SYSTABLEPART catalog table (Db2 SQL)
You can use the SYSIBM.SYSTABLES table to find information about aliases by referencing the following
three columns:
• LOCATION contains your subsystem's location name for the remote system, if the object on which the
alias is defined resides at a remote subsystem.
• TBCREATOR contains the schema of the table or view.
• TBNAME contains the name of the table or the view.
You can also find information about aliases by using the following user-defined functions:
GUPI
Related reference
SYSTABLES catalog table (Db2 SQL)
TABLE_NAME (Db2 SQL)
TABLE_SCHEMA (Db2 SQL)
TABLE_LOCATION (Db2 SQL)
Procedure
Query the SYSIBM.SYSCOLUMNS table.
The following statement retrieves information about columns in the sample department table:
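The statement is not preserved in this edition; a sketch of such a query, selecting the catalog columns that correspond to the list below:

```sql
SELECT NAME, TBNAME, COLTYPE, LENGTH, NULLS, DEFAULT
  FROM SYSIBM.SYSCOLUMNS
 WHERE TBNAME = 'DEPT'
   AND TBCREATOR = 'DSN8C10';
```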
The result shows the following information about each column:
• The column name
• The name of the table that contains it
• Its data type
• Its length attribute. For LOB columns, the LENGTH column shows the length of the pointer to the LOB.
• Whether it allows nulls
• Whether it allows default values
GUPI
Procedure
Query the SYSIBM.SYSINDEXES table.
For example, to retrieve a row about an index named XEMPL2:
SELECT *
FROM SYSIBM.SYSINDEXES
WHERE NAME = 'XEMPL2'
AND CREATOR = 'DSN8C10';
A table can have more than one index. To display information about all the indexes of a table:
SELECT *
FROM SYSIBM.SYSINDEXES
WHERE TBNAME = 'EMP'
AND TBCREATOR = 'DSN8C10';
GUPI
Related reference
SYSINDEXES catalog table (Db2 SQL)
The following actions occur in the catalog after the execution of CREATE VIEW:
• A row is inserted into the SYSIBM.SYSTABLES table.
• A row is inserted into the SYSIBM.SYSTABAUTH table to record the owner's privileges on the view.
• For each column of the view, a row is inserted into the SYSIBM.SYSCOLUMNS table.
• One or more rows are inserted into the SYSIBM.SYSVIEWS table to record the text of the CREATE VIEW
statement.
• A row is inserted into the SYSIBM.SYSVIEWDEP table for each database object on which the view is
dependent.
• A row is inserted into the SYSIBM.SYSDEPENDENCIES table for each database object on which the view
is dependent.
GUPI
Procedure
Query one or more catalog tables.
The following actions occur in the catalog after the execution of CREATE TABLE for a materialized query
table:
• A row is inserted into the SYSIBM.SYSTABLES table.
• A row is inserted into the SYSIBM.SYSTABAUTH table to record the owner's privileges on the
materialized query table.
• For each column of the materialized query table, a row is inserted into the SYSIBM.SYSCOLUMNS table.
• A row is inserted into the SYSIBM.SYSVIEWS table to record the text of the CREATE TABLE statement
that defines the materialized query table, and the attributes of the materialized query table.
• A row is inserted into the SYSIBM.SYSVIEWDEP table for each database object on which the
materialized query table is dependent.
• A row is inserted into the SYSIBM.SYSDEPENDENCIES table for each database object on which the
materialized query table is dependent.
GUPI
Procedure
Query one or more catalog tables.
Related reference
SYSTABLES catalog table (Db2 SQL)
SYSTABAUTH catalog table (Db2 SQL)
SYSCOLUMNS catalog table (Db2 SQL)
SYSVIEWS catalog table (Db2 SQL)
SYSVIEWDEP catalog table (Db2 SQL)
Procedure
GUPI Query the SYSIBM.SYSTABAUTH table.
The following query retrieves the names of all users who have been granted access to the DSN8C10.DEPT
table.
SELECT GRANTEE
FROM SYSIBM.SYSTABAUTH
WHERE TTNAME = 'DEPT'
AND TCREATOR = 'DSN8C10'
AND GRANTEETYPE <> 'P';
GRANTEE is the name of the column that contains authorization IDs for users of tables. The TTNAME and
TCREATOR columns specify the DSN8C10.DEPT table. The clause GRANTEETYPE <> 'P' ensures that you
retrieve the names only of users (not application plans or packages) that have authority to access the
table.
GUPI
Related reference
SYSTABAUTH catalog table (Db2 SQL)
Procedure
Query the SYSIBM.SYSCOLUMNS table.
To retrieve the creator, database, and names of the columns in the primary key of the sample project
activity table using SQL statements, execute:
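The statement is not preserved here; a sketch that uses the KEYSEQ column of SYSIBM.SYSCOLUMNS, which is nonzero for columns that participate in the primary key:

```sql
SELECT TBCREATOR, TBNAME, NAME, KEYSEQ
  FROM SYSIBM.SYSCOLUMNS
 WHERE TBNAME = 'PROJACT'
   AND TBCREATOR = 'DSN8C10'
   AND KEYSEQ > 0
 ORDER BY KEYSEQ;
```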
The SYSIBM.SYSINDEXES table identifies the primary index of a table by the value P in column
UNIQUERULE. To find the name, creator, database, and index space of the primary index on the project
activity table, execute:
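The statement is not preserved here; a sketch based on the UNIQUERULE value described above:

```sql
SELECT NAME, CREATOR, DBNAME, INDEXSPACE
  FROM SYSIBM.SYSINDEXES
 WHERE TBNAME = 'PROJACT'
   AND TBCREATOR = 'DSN8C10'
   AND UNIQUERULE = 'P';
```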
GUPI
Related reference
SYSCOLUMNS catalog table (Db2 SQL)
SYSINDEXES catalog table (Db2 SQL)
The SYSIBM.SYSRELS table contains information about referential constraints, and each constraint is
uniquely identified by the schema and name of the dependent table and the constraint name (RELNAME).
The SYSIBM.SYSFOREIGNKEYS table contains information about the columns of the foreign key that
defines the constraint.
To find information about the foreign keys of tables to which the project table is a parent:
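The query is not preserved in this edition; a sketch that joins the two tables described above (the sample table names are assumptions):

```sql
SELECT R.CREATOR, R.TBNAME, R.RELNAME, F.COLNAME, F.COLSEQ
  FROM SYSIBM.SYSRELS R, SYSIBM.SYSFOREIGNKEYS F
 WHERE R.REFTBNAME = 'PROJ'
   AND R.REFTBCREATOR = 'DSN8C10'
   AND R.CREATOR = F.CREATOR
   AND R.TBNAME = F.TBNAME
   AND R.RELNAME = F.RELNAME
 ORDER BY F.COLSEQ;
```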
GUPI
Related reference
SYSRELS catalog table (Db2 SQL)
SYSFOREIGNKEYS catalog table (Db2 SQL)
The SYSIBM.SYSTABLESPACE table indicates that a table space is in check-pending status by a value in
column STATUS: P if the entire table space has that status, S if the status has a scope of less than the
entire space.
Procedure
Query the SYSIBM.SYSTABLESPACE table.
To list all table spaces whose use is restricted for any reason, issue this command:
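The command itself is not reproduced in this edition; the usual form is the DISPLAY DATABASE command with the RESTRICT option:

```
-DISPLAY DATABASE(*) SPACENAM(*) RESTRICT
```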
To retrieve the names of table spaces in check-pending status only, with the names of the tables they
contain, execute:
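The statement is not preserved here; a sketch that applies the STATUS values described above:

```sql
SELECT A.DBNAME, A.NAME, B.CREATOR, B.NAME
  FROM SYSIBM.SYSTABLESPACE A, SYSIBM.SYSTABLES B
 WHERE A.DBNAME = B.DBNAME
   AND A.NAME = B.TSNAME
   AND A.STATUS IN ('P', 'S');
```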
GUPI
Procedure
Query the SYSIBM.SYSCHECKS and SYSIBM.SYSCHECKDEP tables.
The following query shows all check constraints on all tables named SIMPDEPT and SIMPEMPL in order
by column name within table schema. It shows the name, authorization ID of the creator, and text for
each constraint. A constraint that uses more than one column name appears more than once in the result.
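The query is not preserved in this edition; a sketch that joins the two tables (the column names follow the standard catalog definitions):

```sql
SELECT CD.TBOWNER, CD.TBNAME, CD.COLNAME,
       CK.CHECKNAME, CK.CREATOR, CK.CHECKCONDITION
  FROM SYSIBM.SYSCHECKS CK, SYSIBM.SYSCHECKDEP CD
 WHERE CK.TBOWNER = CD.TBOWNER
   AND CK.TBNAME = CD.TBNAME
   AND CK.CHECKNAME = CD.CHECKNAME
   AND CD.TBNAME IN ('SIMPDEPT', 'SIMPEMPL')
 ORDER BY CD.TBOWNER, CD.TBNAME, CD.COLNAME;
```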
GUPI
Related reference
SYSCHECKS catalog table (Db2 SQL)
SYSCHECKDEP catalog table (Db2 SQL)
Procedure
Query the SYSIBM.SYSAUXRELS table.
For example, this query returns information about the name of the LOB columns for the employee table
and its associated auxiliary table schema and name:
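The query is not preserved here; a sketch against SYSIBM.SYSAUXRELS (the employee-table names are the usual sample-table assumptions):

```sql
SELECT COLNAME, PARTITION, AUXTBOWNER, AUXTBNAME
  FROM SYSIBM.SYSAUXRELS
 WHERE TBNAME = 'EMP'
   AND TBOWNER = 'DSN8C10';
```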
Information about the length of a LOB is in the LENGTH2 column of the SYSCOLUMNS table. You can
query information about the length of the column as it is returned to an application with the following
query:
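A sketch of such a query; the explicit list of LOB column types is an illustration, not the statement from the original:

```sql
SELECT NAME, TBNAME, COLTYPE, LENGTH2
  FROM SYSIBM.SYSCOLUMNS
 WHERE TBNAME = 'EMP'
   AND TBCREATOR = 'DSN8C10'
   AND COLTYPE IN ('BLOB', 'CLOB', 'DBCLOB');
```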
GUPI
Related reference
SYSAUXRELS catalog table (Db2 SQL)
SYSCOLUMNS catalog table (Db2 SQL)
Procedure
GUPI Query the SYSIBM.SYSROUTINES table to obtain information about user-defined functions and
stored procedures.
You can use this example to find packages with stored procedures that were created prior to Version 6
and then migrated to the SYSIBM.SYSROUTINES table:
You can use this query to retrieve information about user-defined functions:
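The query is not preserved in this edition; a sketch that restricts SYSIBM.SYSROUTINES to user-defined functions:

```sql
SELECT SCHEMA, NAME, SPECIFICNAME, FUNCTION_TYPE, LANGUAGE
  FROM SYSIBM.SYSROUTINES
 WHERE ROUTINETYPE = 'F';
```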
GUPI
Related tasks
Preparing a client program that calls a remote stored procedure (Db2 Application programming and SQL)
Related reference
SYSROUTINES catalog table (Db2 SQL)
Procedure
GUPI Query the SYSIBM.SYSTRIGGERS table to obtain information about the triggers defined in your
databases.
You can issue this query to find all the triggers that are defined on a particular table, their characteristics,
and the order in which they are activated:
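The query is not preserved here; a sketch that orders by creation timestamp, because triggers on the same table are activated in the order in which they were created (the sample table names are assumptions):

```sql
SELECT SCHEMA, NAME, TRIGTIME, TRIGEVENT, GRANULARITY, CREATEDTS
  FROM SYSIBM.SYSTRIGGERS
 WHERE TBNAME = 'EMP'
   AND TBOWNER = 'DSN8C10'
 ORDER BY CREATEDTS;
```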
GUPI
Related reference
SYSTRIGGERS catalog table (Db2 SQL)
Procedure
Query the SYSIBM.SYSSEQUENCES or SYSIBM.SYSSEQUENCEAUTH table.
To retrieve the attributes of a sequence, issue this query:
SELECT *
FROM SYSIBM.SYSSEQUENCES
WHERE NAME = 'MYSEQ' AND SCHEMA = 'USER1B';
Issue this query to determine the privileges that user USER1B has on sequences:
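The query is not preserved in this edition; a sketch against SYSIBM.SYSSEQUENCEAUTH (the authority columns shown are assumed from the standard catalog definition):

```sql
SELECT GRANTEE, SCHEMA, NAME, ALTERAUTH, USEAUTH
  FROM SYSIBM.SYSSEQUENCEAUTH
 WHERE GRANTEE = 'USER1B';
```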
GUPI
Related reference
SYSSEQUENCES catalog table (Db2 SQL)
SYSSEQUENCEAUTH catalog table (Db2 SQL)
You can create comments about tables, views, indexes, aliases, packages, plans, distinct types, triggers,
stored procedures, and user-defined functions. You can store a comment about the table or the view as a
whole, and you can also include a comment for each column. A comment must not exceed 762 bytes.
A comment is especially useful if your names do not clearly indicate the contents of columns or tables. In
that case, use a comment to describe the specific contents of the column or table.
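Comments are stored with the COMMENT statement; a sketch of two such statements for the sample objects queried below (the comment text is illustrative):

```sql
COMMENT ON TABLE DSN8C10.EMP
   IS 'Sample employee table';

COMMENT ON COLUMN DSN8C10.PROJ.PRSTDATE
   IS 'Estimated project start date';
```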
The following queries retrieve comments that were added with the COMMENT statement:
SELECT REMARKS
FROM SYSIBM.SYSTABLES
WHERE NAME = 'EMP'
AND CREATOR = 'DSN8C10';
SELECT REMARKS
FROM SYSIBM.SYSCOLUMNS
WHERE NAME = 'PRSTDATE' AND TBNAME = 'PROJ'
AND TBCREATOR = 'DSN8C10';
GUPI
Procedure
Query the catalog tables to verify that your tables are in the correct table space, your table spaces are in
the correct storage group, and so on. GUPI
Related reference
Db2 catalog tables (Db2 SQL)
Example
For example, the data type of the CREATOR column in the SYSTABLES catalog table was converted from
CHAR(8) to VARCHAR(128) in DB2 version 8.
Assume that you issued the following CREATE TABLE statement in DB2 version 7:
However, assume that you issue the following CREATE TABLE statement in Db2 12:
• The following query returns the value 4 because T2 was created in Db2 12, when CREATOR was a
VARCHAR column.
• The following query also returns the value 4 because the STRIP function removes any trailing blanks
from the CREATOR column before the length is determined.
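The statements themselves are not preserved in this edition. As a sketch, assuming tables T1 (created when CREATOR was a CHAR(8) column) and T2 (created in Db2 12) and a four-character creator ID:

```sql
-- Returns 4: T2 was created when CREATOR was VARCHAR, so no padding
SELECT LENGTH(CREATOR)
  FROM SYSIBM.SYSTABLES
 WHERE NAME = 'T2';

-- Also returns 4: STRIP removes the trailing blanks from the padded value
SELECT LENGTH(STRIP(CREATOR))
  FROM SYSIBM.SYSTABLES
 WHERE NAME = 'T1';
```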
Related concepts
Storage assignment (Db2 SQL)
Fixed-length character strings (Db2 SQL)
Related reference
SYSTABLES catalog table (Db2 SQL)
Procedure
Issue the ALTER DATABASE statement.
It supports changing the following attributes of a database:
STOGROUP
Use this option to change the name of the default storage group to support disk space requirements
for table spaces and indexes within the database. The new default Db2 storage group is used only for
new table spaces and indexes; existing definitions do not change.
BUFFERPOOL
Use this option to change the name of the default buffer pool for table spaces within the
database. Again, it applies only to new table spaces; existing definitions do not change.
INDEXBP
Use this option to change the name of the default buffer pool for the indexes within the database. The
new default buffer pool is used only for new indexes; existing definitions do not change.
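A sketch that changes all three defaults at once; the database, storage group, and buffer pool names are illustrative:

```sql
ALTER DATABASE MYDB
  STOGROUP SG2
  BUFFERPOOL BP2
  INDEXBP BP3;
```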
Related concepts
Db2 databases (Introduction to Db2 for z/OS)
Procedure
To alter a storage group:
1. Issue an ALTER STOGROUP statement.
2. Specify whether you want SMS to manage your Db2 storage groups, or to add or remove volumes from
a storage group.
What to do next
If you want to migrate to another device type or change the catalog name of the integrated catalog facility,
you need to move the data.
Related concepts
Moving Db2 data
Db2 provides several tools and options to make moving data easier.
Moving a Db2 data set
You can move Db2 data by using the RECOVER, REORG, or DSN1COPY utilities, or by using non-Db2
facilities, such as DFSMSdss.
Related reference
ALTER STOGROUP (Db2 SQL)
Related information
Implementing Db2 storage groups
A storage group is a set of storage objects on which Db2 for z/OS data can be stored. Db2 uses storage
groups to allocate storage for table spaces and indexes, and to define, extend, alter, and delete VSAM
data sets.
Procedure
GUPI To let SMS manage the storage needed for the objects that the storage group supports:
1. Issue an ALTER STOGROUP statement.
You can specify SMS classes when you alter a storage group.
2. Specify ADD VOLUMES ('*') and REMOVE VOLUMES (current-vols) where current-vols is the list of the
volumes that are currently assigned to the storage group.
For example,
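A sketch of such a statement (the storage group name and current volume serials are illustrative):

```sql
ALTER STOGROUP DSN8G910
  REMOVE VOLUMES (VOL1, VOL2)
  ADD VOLUMES ('*');
```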
Example
The following example shows how to alter a storage group to SMS-managed using the DATACLAS,
MGMTCLAS, or STORCLAS keywords.
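The example is not preserved in this edition; a sketch (the storage group and SMS class names are illustrative):

```sql
ALTER STOGROUP SGDATA
  DATACLAS DCLASX
  MGMTCLAS MCLASX
  STORCLAS SCLASX;
```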
What to do next
SMS manages every new data set that is created after the ALTER STOGROUP statement is executed. SMS
does not manage data sets that are created before the execution of the statement. GUPI
Related tasks
Adding or removing volumes from a Db2 storage group
When you add or remove volumes from a storage group, all the volumes in that storage group must be of
the same type.
Migrating existing data sets to a solid-state drive
You can migrate Db2-managed data sets from a hard disk drive (HDD) to a solid-state drive (SSD).
Migrating to DFSMShsm
If you decide to use DFSMShsm for your Db2 data sets, you should develop a migration plan with your
system administrator.
Related reference
ALTER STOGROUP (Db2 SQL)
Procedure
To add a new volume to a storage group:
1. Use the SYSIBM.SYSTABLEPART catalog table to determine which table spaces are associated with the
storage group.
GUPI For example, the following query indicates which table spaces use storage group DSN8G910:
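The query is not preserved here; a sketch that uses the STORNAME column of SYSIBM.SYSTABLEPART:

```sql
SELECT TSNAME, DBNAME
  FROM SYSIBM.SYSTABLEPART
 WHERE STORNAME = 'DSN8G910'
 ORDER BY DBNAME, TSNAME;
```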
GUPI
Restriction: When a new volume is added, or when a storage group is used to extend a data set,
the volumes must have the same device type as the volumes that were used when the data set was
defined.
5. Start the table space with utility-only processing by using the Db2 command START DATABASE
(dbname) SPACENAM (tsname) ACCESS(UT).
6. Use the RECOVER utility or the REORG utility to move the data in each table space.
For example, issue RECOVER dbname.tsname.
7. Start the table space with the Db2 command START DATABASE (dbname) SPACENAM (tsname).
Related tasks
Letting SMS manage your Db2 storage groups
Using the SMS product Data Facility Storage Management Subsystem (DFSMS) to manage your data sets
can result in a reduced workload for Db2 database and storage administrators.
Migrating existing data sets to a solid-state drive
You can migrate Db2-managed data sets from a hard disk drive (HDD) to a solid-state drive (SSD).
Procedure
Use one of the following options.
• Use DSN1COPY to move data sets from an HDD to an SSD. Db2 detects the new drive type the next
time that the data set is opened.
• Issue the ALTER STOGROUP statement. For data sets that are managed by SMS, use the ALTER
STOGROUP statement to change DATACLAS, MGMTCLAS, or STORCLAS to identify the new SSD
volumes. For data sets that are not managed by SMS, use the ALTER STOGROUP statement to add
volumes that contain SSD and drop volumes that contain HDD.
The storage group should be homogenous and contain either all SSDs or all HDDs. The data set is
moved to the new SSD volume at the next REORG after the alter operation.
Using the ALTER STOGROUP statement has an availability advantage over creating a new storage
group, because switching a table space to a new storage group with the ALTER TABLESPACE USING
STOGROUP statement requires the object to be stopped before the alter operation can succeed. If you
cannot make an existing storage group homogenous, you must use the CREATE STOGROUP statement
to define storage groups that contain only SSD volumes.
For partition-by-range and partition-by-growth table spaces, most attributes can be changed with ALTER
TABLESPACE statements, often through pending definition changes.
Pending definition changes are changes that are not immediately materialized. For detailed information
about pending definition changes, how to materialize them, and related restrictions, see “Pending data
definition changes” on page 250.
Immediate definition changes are changes that are materialized immediately. Most immediate definition
changes are restricted while pending definition changes exist for an object. For a list of such restrictions,
see “Restrictions for pending data definition changes” on page 259.
However, depending on the type of table space and the attributes that you want to change, you might
instead need to drop the table space and create it again with the new attributes. Far fewer types of
changes are supported by ALTER TABLESPACE statements for the deprecated non-UTS table space types.
In such cases, it is best to first convert the table space to a partition-by-range or partition-by-growth table
space and then use ALTER TABLESPACE statements with pending definition changes to make the
changes.
Procedure
To change the attributes of a table space, use any of the following approaches:
• Use the ALTER TABLESPACE statements to change the table space type and attributes, or to enable or
disable MEMBER CLUSTER.
For example, you might make the following changes:
• Use the MAXPARTITIONS clause to change the maximum number of partitions for partition-by-
growth table spaces. You can also use this attribute to convert a simple table space, or a single-table
segmented (non-UTS) table space, to a partition-by-growth table space.
• Use the SEGSIZE clause to convert a partitioned (non-UTS) table space to a partition-by-range
table space. For more information, see “Converting partitioned (non-UTS) table spaces to partition-
by-range universal table spaces” on page 190.
• FL 508 Use the MOVE TABLE clause to move tables from a multi-table segmented (non-UTS)
table space to partition-by-growth table spaces. For more information, see “Moving tables from
multi-table table spaces to partition-by-growth table spaces” on page 186.
• Drop the table space and create it again with the new attributes, as described in “Dropping and
re-creating a table space to change its attributes” on page 180.
For example, some changes are not supported by ALTER TABLESPACE statements, such as the
following changes:
– Changing the CCSID to an incompatible value
– Moving the table space to a different database
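The first approach above can be sketched as two ALTER TABLESPACE statements; MYDB.MYTS and the values are illustrative, and both changes are typically pending definition changes that are materialized by a subsequent REORG:

```sql
ALTER TABLESPACE MYDB.MYTS
  MAXPARTITIONS 256;

ALTER TABLESPACE MYDB.MYTS
  SEGSIZE 32;
```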
What to do next
When ready, materialize any pending definition changes, as described in “Materializing pending definition
changes” on page 254.
You can also use the DROP PENDING CHANGES clause to drop all pending definition changes for the table
space and for any of the objects in the table space.
Related concepts
Table space types and characteristics in Db2 for z/OS
Db2 supports several different types of table spaces. The partitioning method and segmented
organization are among the main characteristics that define the table space type.
Pending data definition changes
Pending data definition changes are data definition changes that do not take effect immediately because
the object must be reorganized to apply the change. When you are ready to materialize pending data
definition changes, you run the REORG utility to apply the pending changes to the definition and data.
Objects that have pending definition changes remain available for use until it is convenient to apply the
changes.
Member affinity clustering (Db2 Data Sharing Planning and Administration)
Related tasks
Changing the logging attribute for a table space
You can use the ALTER TABLESPACE statement to set the logging attribute of a table space.
Related reference
ALTER TABLESPACE (Db2 SQL)
SYSPENDINGDDL catalog table (Db2 SQL)
Related information
Implementing Db2 table spaces
Db2 table spaces are storage structures that store one or more data sets, which store one or more tables.
Procedure
To change the logging attribute of a table space:
1. Issue an ALTER TABLESPACE statement.
2. Specify the LOGGED or NOT LOGGED attribute.
LOGGED
Specifies that changes made to data in this table space are to be recorded on the log.
NOT LOGGED
Specifies that changes made to data in this table space are not to be recorded on the log. The NOT
LOGGED attribute suppresses the logging of undo and redo information.
Results
The change in logging applies to all tables in this table space and also applies to all indexes on those
tables, as well as associated LOB and XML table spaces.
Altering the logging attribute of a table space from LOGGED to NOT LOGGED establishes a recoverable
point for the table space. Indexes automatically inherit the logging attribute of their table spaces. For
the index, the change establishes a recoverable point that can be used by the RECOVER utility. Each
subsequent image copy establishes another recoverable point for the table space and its associated
indexes if the image copy is taken as a set.
Altering the logging attribute of a table space from NOT LOGGED to LOGGED marks the table space as
COPY-pending (a recoverable point must be established before logging resumes). The indexes on the
tables in the table space that have the COPY YES attribute are unchanged.
Procedure
To change the space allocation for user-managed data sets, complete the following steps:
1. Run the REORG TABLESPACE utility, and specify the UNLOAD PAUSE option.
2. Make the table space unavailable with the STOP DATABASE command and the SPACENAM option after
the utility completes the unload and stops.
3. Delete and redefine the data sets.
4. Resubmit the utility job with the RESTART(PHASE) parameter specified on the EXEC statement.
What to do next
The job now uses the new data sets when reloading.
Use of the REORG utility to extend data sets causes the newly acquired free space to be distributed
throughout the table space rather than to be clustered at the end.
Procedure
To drop and re-create a table space:
1. Locate the original CREATE TABLE statement and all authorization statements for all tables in the
table space (for example, TA1, TA2, TA3, … in TS1).
Another way of unloading data from your old tables and loading the data into new tables is by using
the INCURSOR option of the LOAD utility. This option uses the Db2 cross-loader function.
4. Optional: Alternatively, instead of unloading the data, you can insert the data from your old tables into
the new tables by issuing an INSERT statement for each table.
For example:
If a table contains a ROWID column or an identity column and you want to keep the existing column
values, you must define that column as GENERATED BY DEFAULT. If the ROWID column or identity
column is defined with GENERATED ALWAYS, and you want Db2 to generate new values for that
column, specify OVERRIDING USER VALUE on the INSERT statement with the subselect.
5. Drop the table space.
For example, use a statement such as:
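A sketch of such a statement (the database name is illustrative):

```sql
DROP TABLESPACE MYDB.TS1;
```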
The compression dictionary for the table space is dropped, if one exists. All tables in TS1 are dropped
automatically.
6. Commit the DROP statement.
You must commit the DROP TABLESPACE statement before creating a table space or index with
the same name. When you drop a table space, all entries for that table space are dropped from
SYSIBM.SYSCOPY. This makes recovery for that table space impossible from previous image copies.
7. Create the new table space, TS1, and grant the appropriate user privileges. You can also create a
partitioned table space.
For example, use a statement such as:
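A sketch of such a statement, creating TS1 as a partition-by-growth universal table space (the database name and values are illustrative):

```sql
CREATE TABLESPACE TS1
  IN MYDB
  MAXPARTITIONS 10
  SEGSIZE 32;
```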
If a table contains a ROWID column or an identity column and you want to keep the existing column
values, you must define that column as GENERATED BY DEFAULT. If the ROWID column or identity
column is defined with GENERATED ALWAYS, and you want Db2 to generate new values for that
column, specify OVERRIDING USER VALUE on the INSERT statement with the subselect. GUPI
11. Drop table space TS2.
If a table in the table space has been created with RESTRICT ON DROP, you must alter that table to
remove the restriction before you can drop the table space.
12. Re-create any dependent objects on the new tables TA1, TA2, TA3, ….
13. REBIND any packages that were invalidated as a result of dropping the table space.
Related concepts
Implications of dropping a table
Dropping a table has several implications that you should be aware of.
Procedure
To redistribute data in partitioned table spaces, use one of the following two methods:
• Change the partition boundaries.
• Redistribute the data across partitions by using the REORG TABLESPACE utility.
Example
Assume that a table space contains product data that is partitioned by product ID as follows: The first
partition contains rows for product ID values 1 through 99. The second partition contains rows for values
100 to 199. The third partition contains rows for values 200 through 299. And the subsequent partitions
are empty.
Suppose that after some time, because of the popularity of certain products, you want to redistribute the
data across certain partitions. You want the third partition to contain values 200 through 249, the fourth
partition to contain values 250 through 279, and the fifth partition to contain values 280 through 299.
To change the boundary for these partitions, issue the following statements:
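A plausible form of these statements, assuming the table is named PRODUCTS and is partitioned on the product ID column:

```sql
ALTER TABLE PRODUCTS ALTER PARTITION 3 ENDING AT (249);
ALTER TABLE PRODUCTS ALTER PARTITION 4 ENDING AT (279);
ALTER TABLE PRODUCTS ALTER PARTITION 5 ENDING AT (299);
```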
In this case, Db2 determines the appropriate limit key changes and redistributes the data accordingly.
Related tasks
Increasing partition size
If a partition is full and redistributing the data across partitions is not practical, you might need to
increase the partition size.
Partitioning data in Db2 tables
All Db2 base tables that are created in universal table spaces use either partition-by-growth or
partition-by-range data partitioning.
Related reference
Syntax and options of the REORG TABLESPACE control statement (Db2 Utilities)
ALTER INDEX (Db2 SQL)
ALTER TABLE (Db2 SQL)
Advisory or restrictive states (Db2 Utilities)
Procedure
To increase the maximum partition size of a partitioned table space:
1. If the table space uses index-based partitioning, convert it to table-based partitioning, as described in
“Converting table spaces to use table-controlled partitioning” on page 191.
2. If the table space is not a partition-by-range universal table space, convert it as described in
“Converting partitioned (non-UTS) table spaces to partition-by-range universal table spaces” on page
190.
3. Issue the ALTER TABLESPACE statement with the DSSIZE option to increase the maximum partition
size to 128 GB or 256 GB.
For table spaces that use relative page numbering, you can also specify the DSSIZE value at the
partition level.
4. Issue the ALTER TABLESPACE statement with the PRIQTY and SECQTY options to modify the primary
and secondary space allocation for each partition.
This change allows the partition to grow to its anticipated maximum size.
5. Run the REORG TABLESPACE utility with SHRLEVEL CHANGE or SHRLEVEL REFERENCE to materialize
the pending definition changes and convert the table space.
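Steps 3 and 4 might look as follows; the object names and space quantities are illustrative:

```sql
-- Step 3: increase the maximum partition size (a pending change).
ALTER TABLESPACE DBASE1.TS1 DSSIZE 256 G;

-- Step 4: enlarge the space allocations for a partition (values in KB).
ALTER TABLESPACE DBASE1.TS1
  ALTER PARTITION 1 PRIQTY 7200000 SECQTY 720000;
```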
Procedure
To alter a page set to contain extents that are defined by Db2:
1. Issue the ALTER TABLESPACE SQL statement.
After you use the ALTER TABLESPACE statement, the new values take effect only when you use REORG
or LOAD REPLACE.
2. Enlarge the primary and secondary space allocation values for Db2-managed data sets.
What to do next
Using the RECOVER utility again does not resolve the extent definition.
For user-defined data sets, define the data sets with larger primary and secondary space allocation
values.
Related concepts
The RECOVER utility and the DFSMSdss RESTORE command
The RECOVER utility can run the DFSMSdss RESTORE command, which generally uses extensions that are
larger than the primary and secondary space allocation values of a data set.
Related reference
ALTER TABLESPACE (Db2 SQL)
Procedure
The procedure to use for the conversion depends on the type of existing table space:
• Segmented (multi-table): FL 508 Move the tables to new PBG table spaces by following
the procedure in "Moving tables from multi-table table spaces to
partition-by-growth table spaces" on page 186.
• Segmented (single-table): Convert the existing table space to PBG by issuing an ALTER
TABLESPACE statement with the MAXPARTITIONS clause.
• Range-partitioned: Convert the existing table space to PBR by following the procedure in
"Converting partitioned (non-UTS) table spaces to partition-by-range
universal table spaces" on page 190.
• Simple: See the procedures for segmented table spaces.
Related tasks
Partitioning data in Db2 tables
All Db2 base tables that are created in universal table spaces use either partition-by-growth or
partition-by-range data partitioning.
Creating partition-by-range table spaces
You can create a partition-by-range (PBR) table space to create partitions based on data value ranges and
use segmented space management capabilities within each partition.
Creating partition-by-growth table spaces
You can create a partition-by-growth table space so that Db2 manages partitions based on data growth
and uses segmented space management capabilities within each partition.
Related reference
ALTER TABLE (Db2 SQL)
Then use the following query to identify all tables in a multi-table table space:
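The query might take a form like the following, with illustrative database and table space names:

```sql
SELECT CREATOR, NAME
  FROM SYSIBM.SYSTABLES
  WHERE DBNAME = 'DBASE1'
    AND TSNAME = 'TS1'
    AND TYPE = 'T';
```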
In this formula, table-size-in-KB indicates the amount of space occupied by the data in the tables
that are being moved, in kilobytes. Choose the closest DSSIZE value that is greater than table-size-
in-KB. The DSSIZE value must be a power-of-two value that is within the range 1 GB - 256 GB.
If you create the target table space with different space attributes than the source table space, the
recommended DSSIZE values must be modified accordingly.
Determine the number of moved tables to materialize in a single REORG job
You can move only one table per ALTER TABLESPACE MOVE TABLE statement, but you can materialize
multiple pending MOVE TABLE operations in a single REORG. To help you decide how many tables to
move and materialize in a single REORG, consider the following issues:
• Generally, the total processing time for a data definition statement that is a pending definition
change for a table space includes the following constituent processing times:
– The processing time for that statement
– The time that it takes to process unmaterialized pending definition changes for the table space.
All unmaterialized pending definition changes for the table space are processed, regardless of
whether the changes were executed in a single unit of work or across multiple units of work.
Processing includes parsing of the statement, semantic validation, and updating and restoring
catalog rows. You can use the following formula to calculate the time that it takes to process a
subsequent ALTER TABLESPACE MOVE TABLE statement:
(n + 1) × t
Pending definition changes are reapplied during both the REORG UTILINIT phase and the REORG
SWITCH phase to become materialized. You can use the following formula to calculate the time that
it takes to reprocess all of the pending statements during the REORG:
2 × (n + 1) × t
where:
n
Is the number of unmaterialized pending MOVE TABLE operations
t
Is the processing time for a single ALTER TABLESPACE MOVE TABLE statement
Therefore, each table that is moved linearly increases the time that is required to process the ALTER
TABLESPACE statement and the time that is required for the REORG SWITCH phase to complete.
The SWITCH phase directly affects the total outage duration during materialization.
• Compared to a REORG on the non-partitioned source table space, this REORG creates one
additional shadow data set for each moved table. Generally, moving a large number of tables in
Table 27. Values to use in the query to identify invalidated packages from materialization of a MOVE
TABLE operation
• object_qualifier: The schema of the table that is being moved
• object_name: The name of the table that is being moved
• object_type: Use one of the following values:
– T for a normal table
– M for a materialized query table
Tip: Db2 supports access to currently committed data in UTS, but not in non-UTS table spaces.
Applications that use a sequence of FETCH, DELETE, and INSERT statements in the same commit scope
instead of an UPDATE statement might need to be modified to use UPDATE statements before the
conversion to UTS. The reason is that a row that was logically updated by using DELETE and INSERT can
reappear in the FETCH result set before the application commits.
Procedure
Attention: Before you run the REORG utility to materialize one or more pending MOVE TABLE
operations, the following actions will cause the REORG to fail during the UTILINIT phase:
– Altering the target table space so that its attributes become invalid for a MOVE TABLE operation.
In this case, use the ALTER TABLESPACE statement to alter any invalid attributes to be valid
again.
– Dropping and re-creating the target table space, regardless of whether the table space
attributes are valid. In this case, complete the following steps to re-execute the MOVE TABLE
operation:
1. Issue the ALTER TABLESPACE statement with the DROP PENDING CHANGES option to drop
all pending definition changes for the target table space.
2. Re-create the target table space with the desired attributes.
3. Issue the ALTER TABLESPACE MOVE TABLE statement again.
To move tables from a source multi-table table space, take one of the following actions:
• Move all tables from the source table space to partition-by-growth table spaces, and then drop the
source table space.
a) Issue the ALTER TABLESPACE statement with the MOVE TABLE option to move each table to a
partition-by-growth table space.
b) Run the REORG TABLESPACE utility with the SHRLEVEL REFERENCE or SHRLEVEL CHANGE option
on the source table space to materialize the MOVE TABLE operations. All pending definition
changes for the source table space are materialized in the same REORG. The source table space
then becomes an empty table space.
c) Issue the DROP statement to drop the source table space.
Note: Db2 removes any associated history and recovery information for the source table space.
References to the source table space must be modified accordingly.
• Move all tables except the last table from the source table space to partition-by-growth table spaces,
and then convert the source table space to a partition-by-growth table space. This approach does not
require creating a target table space for the last table and saves two OBIDs.
a) Issue the ALTER TABLESPACE statement with the MOVE TABLE option to move each table to a
partition-by-growth table space. Leave the last table in the source table space.
b) Issue the ALTER TABLESPACE statement with a MAXPARTITIONS value of 1 to convert the source
table space to a partition-by-growth table space. This conversion is a pending definition change
that can be materialized with any previous pending definition changes for the source table space.
c) Run the REORG TABLESPACE utility with the SHRLEVEL REFERENCE or SHRLEVEL CHANGE option
on the source table space to materialize the MOVE TABLE operations. All pending definition
changes for the source table space are materialized in the same REORG.
Note: Db2 retains all associated history and recovery information for the source table space.
References to the source table space must be modified to consider that the table space is now a
partition-by-growth table space.
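The two pending changes in this second approach can be sketched as follows; the object names are illustrative, and both changes are materialized by the subsequent REORG:

```sql
-- Move each table except the last one (a pending definition change).
ALTER TABLESPACE DBASE1.TSMULTI
  MOVE TABLE ADMF001.TB1 TO TABLESPACE DBASE1.TSNEW1;

-- Convert the source table space, with its remaining table,
-- to partition-by-growth (also a pending definition change).
ALTER TABLESPACE DBASE1.TSMULTI MAXPARTITIONS 1;
```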
Results
If the moved tables remain empty after materialization, the REORG leaves the target table space as
DEFINE NO (that is, no VSAM data sets defined) but it updates the metadata definition of the target table
space to reflect the linkage to the moved tables. REORG does not insert any SYSCOPY records for target
table spaces that remain as DEFINE NO after materialization of pending MOVE TABLE operations.
What to do next
Packages that depend on the moved tables are invalidated by materialization of the MOVE TABLE
operations. Existing table-level statistics still persist after you run the REORG, which results in minimal
risk for access path regression from rebinds and autobinds of the invalidated packages. Nevertheless,
you can take one of the following actions after you run the REORG to further mitigate risk of access path
regression:
• Run a stand-alone RUNSTATS job.
• When you rebind the invalidated packages, issue the REBIND command with the APREUSE(WARN) bind
option to minimize access path change. If enabled, an autobind also issues APREUSE(WARN).
After the MOVE TABLE operations are materialized, you can increase the MAXPARTITIONS value of
the target table space to any valid value as an immediate change by issuing an ALTER TABLESPACE
statement.
If you create partition-level image copies of the target table spaces for backup and recovery, and you use
a LISTDEF list with the PARTLEVEL option for the COPY utility, you must also use a LISTDEF list with the
PARTLEVEL option for the RECOVER utility.
Related tasks
Creating partition-by-growth table spaces
You can create a partition-by-growth table space so that Db2 manages partitions based on data growth
and uses segmented space management capabilities within each partition.
Related reference
ALTER TABLESPACE (Db2 SQL)
DROP (Db2 SQL)
Procedure
To convert a partitioned (non-UTS) table space to a partition-by-range table space, complete the following
steps:
1. If the table space uses index-controlled partitioning, convert it to use table-controlled partitioning, as
described in “Converting table spaces to use table-controlled partitioning” on page 191.
2. Convert the table space to a partition-by-range universal table space by issuing an ALTER TABLESPACE
statement that specifies segment size for the table space.
For example, the following statement converts the ts-name table space to a partition-by-range
universal table space with a segment size of 16 pages per segment.
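Such a statement would take the following form, using the document's ts-name placeholder (the database qualifier is illustrative):

```sql
ALTER TABLESPACE db-name.ts-name SEGSIZE 16;
```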
The conversion is also required before you can convert a partitioned (non-UTS) table space to a
partition-by-range table space.
Learn more: For comprehensive background, how-to information, and examples for various paths for
converting your deprecated "classic" partitioned (non-UTS) table spaces to partition-by-range table
spaces, see the white paper Conversion from index-controlled partitioning to Universal Table Space (UTS).
Procedure
Certain index operations always cause Db2 to automatically convert an index-controlled partitioned
table space to a table-controlled partitioned table space. To convert a table that uses index-controlled
partitioning to use table-controlled partitioning, use one of the following actions:
• Issue an ALTER INDEX statement with the NOT CLUSTER clause on the partitioning index.
For example, you can issue the following two ALTER INDEX statements in the same commit scope to
convert a table to use table-controlled partitioning and reactivate clustering for the index.
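A sketch of the two statements, assuming the partitioning index is named IX1:

```sql
-- Dropping the CLUSTER attribute triggers the conversion to
-- table-controlled partitioning.
ALTER INDEX IX1 NOT CLUSTER;

-- Reactivate clustering for the same index in the same commit scope.
ALTER INDEX IX1 CLUSTER;
COMMIT;
```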
Db2 issues SQLCODE +20272 to indicate that the associated table space no longer uses
index-controlled partitioning.
Results
After the conversion to table-controlled partitioning, Db2 changes the existing high-limit key value for
non-large table spaces to the highest value for the key. Db2 also starts enforcing the high-limit key values,
which is not the case for tables that use index-controlled partitioning.
Db2 also invalidates any packages that depend on the table in the converted table space.
Db2 places the last partition of the table space into a REORG-pending (REORP) state in the following
situations:
Adding or rotating a new partition
Db2 stores the original high-limit key value instead of the default high-limit key value. Db2 puts the
last partition into a REORP state, unless the high-limit key value is already being enforced, or the last
partition is empty.
Altering a partition results in the conversion to table-controlled partitioning
Db2 changes the existing high-limit key to the highest value that is possible for the data types of the
limit key columns. After the conversion to table-controlled partitioning, Db2 changes the high-limit
key value back to the user-specified value and puts the last partition into a REORP state.
Example
For example, assume that you have a very large transaction table named TRANS that contains one row for
each transaction. The table includes the following columns:
• ACCTID is the customer account ID
• POSTED is the date of the transaction
The table space that contains TRANS is divided into 13 partitions, each containing one month of data. Two
existing indexes are defined as follows:
• A partitioning index is defined on the transaction date by the following CREATE INDEX statement with a
PARTITION ENDING AT clause: The partitioning index is the clustering index, and the data rows in the
table are in order by the transaction date. The partitioning index controls the partitioning of the data in
the table space.
• A nonpartitioning index is defined on the customer account ID:
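The two index definitions might look as follows; the partition boundary dates are assumptions based on the 13 monthly partitions described above:

```sql
-- Partitioning (clustering) index on the transaction date:
CREATE INDEX IX1 ON TRANS(POSTED)
  CLUSTER
  (PARTITION 1 ENDING AT ('2002-01-31'),
   PARTITION 2 ENDING AT ('2002-02-28'),
   -- ... one partition per month ...
   PARTITION 13 ENDING AT ('2003-01-31'));

-- Nonpartitioning index on the customer account ID:
CREATE INDEX IX2 ON TRANS(ACCTID);
```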
Db2 usually accesses the transaction table through the customer account ID by using the nonpartitioning
index IX2. The partitioning index IX1 is not used for data access and is wasting space. In addition, you
have a critical requirement for availability on the table, and you want to be able to run an online REORG
job at the partition level with minimal disruption to data availability.
1. Drop the partitioning index IX1.
When you drop the partitioning index IX1, Db2 converts the table space from index-controlled
partitioning to table-controlled partitioning. Db2 changes the high limit key value that was originally
specified to the highest value for the key column.
2. Create a partitioned clustering index IX3 that matches the 13 data partitions in the table, as a
replacement for IX2.
When you create the index IX3, Db2 creates a partitioned index with 13 partitions that match the 13
data partitions in the table. Each index partition contains the account numbers for the transactions
during that month, and those account numbers are ordered within each partition. For example,
partition 11 of the index matches the table partition that contains the transactions for November,
2002, and it contains the ordered account numbers of those transactions.
3. Drop index IX2, which is replaced by IX3.
What to do next
The result of this procedure is a partitioned (non-UTS) table space, which is also a deprecated table
space type. For best results, also convert the table space to a partition-by-range table space, as described
in “Converting partitioned (non-UTS) table spaces to partition-by-range universal table spaces” on page
190.
Related tasks
Converting table spaces to use table-controlled partitioning
Before you can convert a partitioned (non-UTS) table space that uses index-controlled partitioning to a
partition-by-range table space, you must convert it to use table-controlled partitioning. Table spaces that
use index-controlled partitioning, like all non-UTS table spaces, are deprecated.
Related reference
CREATE INDEX (Db2 SQL)
CREATE TABLE (Db2 SQL)
Related information
+20272 (Db2 Codes)
Procedure
Issue the ALTER TABLE statement.
With the ALTER TABLE statement, you can:
• Add a new column
• Rename a column
• Drop a column
• Change the data type of a column, with certain restrictions
• Add or drop a parent key or a foreign key
• Add or drop a table check constraint
• Add a new partition to a table space, including adding a new partition to a partition-by-growth table
space, by using the ADD PARTITION clause
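A few of these operations sketched as statements (the table, column, and constraint names are illustrative):

```sql
ALTER TABLE MYTABLE RENAME COLUMN COMMENTS TO REMARKS;
ALTER TABLE MYTABLE ADD CONSTRAINT CK_QTY CHECK (QTY >= 0);

-- Adds a partition to a partition-by-growth table space:
ALTER TABLE MYTABLE ADD PARTITION;
```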
What to do next
You might need to rebind packages that depend on the altered table, and possibly other related objects
through cascading effects. For more information, see Changes that invalidate packages (Db2 Application
programming and SQL) and Changes that might require package rebinds (Db2 Application programming
and SQL).
Related concepts
Row and column access control (Managing Security)
Related reference
ALTER TABLE (Db2 SQL)
Procedure
To add a new column to a table, complete the following steps:
1. Issue an ALTER TABLE statement and specify the ADD COLUMN clause with the attributes for the new
column.
2. Consider running the REORG utility for the table space to materialize the values for the new column in
the physical data records. If the table space is placed in restrictive REORG-pending (REORP) status,
this step is required.
The REORG utility generates any required values for the new column in each existing row, physically
stores the generated values in the database, and removes the REORP status.
3. Check your applications and update static SQL statements to accept the new column. Then recompile
the programs and rebind the packages.
For example, the following situations might require application updates:
• Statements that use SELECT * start returning the value of the new column after the package is
rebound.
• INSERT statements with no list of column names imply that the statement specifies a list of every
column (unless defined as implicitly hidden) in left-to-right order. Such statements can continue to
run after the new column is added, but Db2 returns an error for this situation when the package is
rebound. To avoid this problem, it is best to always list the column names in INSERT statements.
4. If the new column is a DATE, TIME, or TIMESTAMP column with a default based on certain system
defaults, rebind any invalidated dependent packages.
For more information, see Changes that invalidate packages (Db2 Application programming and SQL).
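Step 1 might look like the following; the table and column names are illustrative:

```sql
ALTER TABLE MYTABLE
  ADD COLUMN CREATED_DT DATE NOT NULL WITH DEFAULT;
```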
What to do next
If the table previously contained fixed-length records, adding a new column causes Db2 to treat them
as variable-length records, and access time can be affected immediately. To change the records back to
fixed-length, complete the following steps:
1. Run the REORG utility with the COPY option on the table space, using the inline copy.
2. Run the MODIFY utility with the DELETE option to delete records of all image copies that were made
before the REORG in the previous step.
Related concepts
Changes that invalidate packages (Db2 Application programming and SQL)
Row format conversion for table spaces
The row format of a table space is converted when you run the LOAD REPLACE or REORG TABLESPACE
utilities.
Related reference
ALTER TABLE (Db2 SQL)
INSERT (Db2 SQL)
Procedure
To alter the default value for a column, use one of the following approaches:
• To set the default value, issue the following statement:
You can use this statement to add a default value for a column that does not already have one, or to
change the existing default value.
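A sketch of such a statement, with an illustrative table, column, and default value:

```sql
ALTER TABLE MYTABLE
  ALTER COLUMN STATUS SET DEFAULT 'NEW';
```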
Examples
For example, suppose that table MYEMP is defined as follows:
Procedure
To alter the data type of a column:
1. Issue an ALTER TABLE statement.
2. Specify the data type change that you would like to make. Potential changes include:
• Altering the length of fixed-length or varying-length character data types, and the length of fixed-
length or varying-length graphic data types.
• Switching between fixed-length and varying-length types for character and graphic data.
• Switching between compatible numeric data types.
Results
When you change the data type of a column by using the ALTER TABLE statement, the new definition
of the column is stored in the catalog. When the changes are materialized depends on the value of the
DDL_MATERIALIZATION subsystem parameter.
Example
Assume that a table contains basic account information for a small bank. The initial account table
was created many years ago in the following manner:
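The original definition is not shown in this edition; a plausible reconstruction follows. The DECIMAL(4,0) type for ACCTID matches the later index example, while the other column lengths are assumptions:

```sql
CREATE TABLE ACCOUNTS
  (ACCTID   DECIMAL(4,0)  NOT NULL,
   NAME     CHAR(20)      NOT NULL,
   ADDRESS  CHAR(40)      NOT NULL,
   BALANCE  DECIMAL(11,2) NOT NULL)
  IN DBASE1.TS1;
```

The ALTER statements that follow then widen these columns and change ACCTID to INTEGER.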
ALTER TABLE ACCOUNTS ALTER COLUMN NAME SET DATA TYPE VARCHAR(40);
ALTER TABLE ACCOUNTS ALTER COLUMN ADDRESS SET DATA TYPE VARCHAR(60);
ALTER TABLE ACCOUNTS ALTER COLUMN BALANCE SET DATA TYPE DECIMAL(15,2);
ALTER TABLE ACCOUNTS ALTER COLUMN ACCTID SET DATA TYPE INTEGER;
COMMIT;
The NAME and ADDRESS columns can now handle longer values without truncation, and the shorter
values are no longer padded. The BALANCE column is extended to allow for larger dollar amounts. Db2
saves these new formats in the catalog and stores the inserted row in the new formats.
Related concepts
Table space versions
Example
Assume that the following indexes are defined on the ACCOUNTS table:
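The index definitions are not shown in this edition. Because altering ACCTID places IX1 in a REBUILD-pending state, IX1 presumably includes the ACCTID column, for example:

```sql
CREATE INDEX IX1 ON ACCOUNTS(ACCTID);
```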
When the data type of the ACCTID column is altered from DECIMAL(4,0) to INTEGER, the IX1 index is
placed in a REBUILD-pending (RBDP) state.
Related concepts
Table space versions
Db2 creates a table space version each time that you commit one or more specific schema changes by
using the ALTER TABLE statement.
Index versions
Db2 uses index versions to maximize data availability. Index versions enable Db2 to keep track of schema
changes and provide users with access to data in altered columns that are contained in indexes.
Indexes that are padded or not padded
The NOT PADDED and PADDED options of the CREATE INDEX and ALTER INDEX statements specify how
varying-length string columns are stored in an index.
Related tasks
Reorganizing table spaces for schema changes
After you commit a schema change, Db2 puts the affected table space in an advisory REORG-pending
(AREO*) state. The table space stays in this state until you reorganize the table space and apply the
schema changes.
Removing in-use table space versions
To prevent Db2 from running out of table space version numbers, and to prevent subsequent ALTER
statements from failing, you must remove unneeded, in-use table space versions regularly.
Recycling index version numbers
To prevent Db2 from running out of index version numbers (and to prevent subsequent ALTER statements
from failing), you must recycle unused index version numbers regularly.
Saving disk space by using non-padded indexes (Db2 Performance)
Procedure
Run the REORG TABLESPACE utility. If the table space contains one table, REORG TABLESPACE updates
the data format for the table to the format of the current table space version. If the table space contains
more than one table, REORG TABLESPACE updates the data format for all tables that are not in version
0 format to the format of the current table space version. The current table space version is the value of
CURRENT_VERSION in the SYSIBM.SYSTABLESPACE catalog table.
Db2 uses table space versions to maximize data availability. Table space versions enable Db2 to keep
track of schema changes, and simultaneously, provide users with access to data in altered table spaces.
When users retrieve rows from an altered table, the data is displayed in the format that is described by
the most recent schema definition, even though the data is not currently stored in this format. The most
recent schema definition is associated with the current table space version.
Although data availability is maximized by the use of table space versions, performance might suffer
because Db2 does not automatically reformat the data in the table space to conform to the most recent
schema definition. Db2 defers any reformatting of existing data until you reorganize the table space with
the REORG TABLESPACE utility. The more ALTER statements that you commit between reorganizations,
the more table space versions Db2 must track, and the more performance can suffer.
Recommendation: Run the REORG TABLESPACE utility as soon as possible after a schema change to
correct any performance degradation that might occur and to keep performance at its highest level.
Related concepts
What happens to an index on altered columns when immediate column alteration is in effect
When immediate column alteration is in effect, altering the data type of a column that is contained in an
index has implications for the index.
Table space versions
Db2 creates a table space version each time that you commit one or more specific schema changes by
using the ALTER TABLE statement.
Index versions
Db2 uses index versions to maximize data availability. Index versions enable Db2 to keep track of schema
changes and provide users with access to data in altered columns that are contained in indexes.
Row format conversion for table spaces
The row format of a table space is converted when you run the LOAD REPLACE or REORG TABLESPACE
utilities.
Related tasks
Removing in-use table space versions
To prevent Db2 from running out of table space version numbers, and to prevent subsequent ALTER
statements from failing, you must remove unneeded, in-use table space versions regularly.
Recycling index version numbers
To prevent Db2 from running out of index version numbers (and to prevent subsequent ALTER statements
from failing), you must recycle unused index version numbers regularly.
Related reference
REORG TABLESPACE (Db2 Utilities)
ALTER TABLE ACCOUNTS ALTER COLUMN NAME SET DATA TYPE VARCHAR(40);
ALTER TABLE ACCOUNTS ALTER COLUMN ADDRESS SET DATA TYPE VARCHAR(60);
ALTER TABLE ACCOUNTS ALTER COLUMN BALANCE SET DATA TYPE DECIMAL(15,2);
COMMIT;
ALTER TABLE ACCOUNTS ALTER COLUMN ACCTID SET DATA TYPE INTEGER;
COMMIT;
Procedure
To remove in-use table space versions:
1. Determine the range of version numbers that are currently in use for a table space by querying the
OLDEST_VERSION and CURRENT_VERSION columns of the SYSIBM.SYSTABLESPACE catalog table.
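Such a query might look as follows, with illustrative database and table space names:

```sql
SELECT DBNAME, NAME, OLDEST_VERSION, CURRENT_VERSION
  FROM SYSIBM.SYSTABLESPACE
  WHERE DBNAME = 'DBASE1'
    AND NAME = 'TS1';
```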
2. Run the MODIFY RECOVERY utility to delete the SYSCOPY records for image copies that use the
versions that you want to remove. Doing this ensures that there are no image copies with in-use versions.
Related concepts
What happens to an index on altered columns when immediate column alteration is in effect
When immediate column alteration is in effect, altering the data type of a column that is contained in an
index has implications for the index.
Table space versions
Db2 creates a table space version each time that you commit one or more specific schema changes by
using the ALTER TABLE statement.
Index versions
Db2 uses index versions to maximize data availability. Index versions enable Db2 to keep track of schema
changes and provide users with access to data in altered columns that are contained in indexes.
The effect of MODIFY RECOVERY on version numbers (Db2 Utilities)
Effects of running REORG TABLESPACE (Db2 Utilities)
Related tasks
Reorganizing table spaces for schema changes
After you commit a schema change, Db2 puts the affected table space in an advisory REORG-pending
(AREO*) state. The table space stays in this state until you reorganize the table space and apply the
schema changes.
Recycling index version numbers
To prevent Db2 from running out of index version numbers (and to prevent subsequent ALTER statements
from failing), you must recycle unused index version numbers regularly.
Related reference
Syntax and options of the REPAIR control statement (Db2 Utilities)
Index versions
Db2 uses index versions to maximize data availability. Index versions enable Db2 to keep track of schema
changes and provide users with access to data in altered columns that are contained in indexes.
When users retrieve rows from a table with an altered column, the data is displayed in the format that
is described by the most recent schema definition, even though the data is not currently stored in this
format. The most recent schema definition is associated with the current index version.
Db2 creates an index version each time you commit one of the following schema changes:
• Add a new column to both a table and an index in the same commit operation. This change creates a
new index version for each index that is affected by the operation.
Exceptions: Db2 does not create an index version under the following circumstances:
• When the index was created with DEFINE NO
• When you extend the length of a varying-length character (VARCHAR data type) or varying-length
graphic (VARGRAPHIC data type) column that is contained in one or more indexes that are defined with
the NOT PADDED option
• When you specify the same data type and length that a column (which is contained in one or more
indexes) currently has, such that its definition does not actually change
Db2 creates only one index version if, in the same unit of work, you make multiple schema changes to
columns that are contained in the same index. If you make these same schema changes in separate units
of work, each change results in a new index version.
Related concepts
What happens to an index on altered columns when immediate column alteration is in effect
When immediate column alteration is in effect, altering the data type of a column that is contained in an
index has implications for the index.
Table space versions
Db2 creates a table space version each time that you commit one or more specific schema changes by
using the ALTER TABLE statement.
Related tasks
Reorganizing table spaces for schema changes
After you commit a schema change, Db2 puts the affected table space in an advisory REORG-pending
(AREO*) state. The table space stays in this state until you reorganize the table space and apply the
schema changes.
Removing in-use table space versions
To prevent Db2 from running out of table space version numbers, and to prevent subsequent ALTER
statements from failing, you must remove unneeded, in-use table space versions regularly.
Recycling index version numbers
To prevent Db2 from running out of index version numbers (and to prevent subsequent ALTER statements
from failing), you must recycle unused index version numbers regularly.
Reorganizing indexes
Procedure
To recycle unused index version numbers:
1. Determine the range of version numbers that are currently in use for an index by querying the
OLDEST_VERSION and CURRENT_VERSION columns of the SYSIBM.SYSINDEXES catalog table.
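The catalog query for step 1 might look like the following sketch; the index name XEMP1 is a hypothetical placeholder:

```sql
-- Check the in-use version range for a hypothetical index XEMP1
SELECT NAME, OLDEST_VERSION, CURRENT_VERSION
  FROM SYSIBM.SYSINDEXES
  WHERE NAME = 'XEMP1';
```

If OLDEST_VERSION and CURRENT_VERSION span most of the reusable range (1 - 15), recycle version numbers as described in step 2.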
2. Run the appropriate utility to recycle unused index version numbers.
• For indexes that are defined as COPY YES, run the MODIFY RECOVERY utility.
If all reusable version numbers (1 - 15) are currently in use, reorganize the index by running REORG
INDEX or REORG TABLESPACE before you recycle the version numbers.
• For indexes that are defined as COPY NO, run the REORG TABLESPACE, REORG INDEX, LOAD
REPLACE, or REBUILD INDEX utility. These utilities recycle the version numbers as they perform
their primary functions.
Related concepts
What happens to an index on altered columns when immediate column alteration is in effect
When immediate column alteration is in effect, altering the data type of a column that is contained in an
index has implications for the index.
Table space versions
Db2 creates a table space version each time that you commit one or more specific schema changes by
using the ALTER TABLE statement.
Index versions
Db2 uses index versions to maximize data availability. Index versions enable Db2 to keep track of schema
changes and provide users with access to data in altered columns that are contained in indexes.
Related tasks
Reorganizing table spaces for schema changes
After you commit a schema change, Db2 puts the affected table space in an advisory REORG-pending
(AREO*) state. The table space stays in this state until you reorganize the table space and apply the
schema changes.
Removing in-use table space versions
Procedure
To add a referential constraint to an existing table:
1. Create a unique index on the primary key columns for any table that does not already have one.
2. For each table, issue the ALTER TABLE statement to add its primary key.
In the next steps, you issue the ALTER TABLE statement to add foreign keys for each table, except for
the activity table. The table space remains in CHECK-pending status, which you can reset by running
the CHECK DATA utility with the DELETE(YES) option.
Deletions by the CHECK DATA utility are not bound by delete rules. The deletions cascade to all
descendents of a deleted row, which can be disastrous. For example, if you delete the row for
Examples
1. Create the DEPT table and define its primary key on the DEPTNO column. The PRIMARY KEY clause of
the CREATE TABLE statement defines the primary key.
2. Create the EMP table and define its primary key as EMPNO and its foreign key as DEPT. The FOREIGN
KEY clause of the CREATE TABLE statement defines the foreign key.
3. Alter the DEPT table to add the definition of its foreign key, MGRNO.
Related tasks
Adding parent keys and foreign keys
You can add primary parent keys, unique parent keys, and foreign keys to an existing table.
Dropping parent keys and foreign keys
Procedure
To add a key to a table:
1. Choose the type of key that you want to add.
2. Add the key by using the ALTER TABLE statement.
Adding a primary key
To add a primary key to an existing table, use the PRIMARY KEY clause in an ALTER TABLE
statement. For example, if the department table and its index XDEPT1 already exist, create its
primary key by issuing the following statement:
Adding a unique key
To add a unique key to an existing table, use the UNIQUE clause of the ALTER TABLE statement.
For example, if the department table has a unique index defined on column DEPTNAME, you can
add a unique key constraint, KEY_DEPTNAME, consisting of column DEPTNAME, by issuing the
following statement:
Adding a foreign key
To add a foreign key to an existing table, use the FOREIGN KEY clause of the ALTER TABLE
statement. The parent key must exist in the parent table before you add the foreign key. For
example, if the department table has a primary key defined on the DEPTNO column, you can add a
referential constraint, REFKEY_DEPTNO, on the DEPTNO column of the project table by issuing the
following statement:
Related tasks
Adding referential constraints to existing tables
You can use the ALTER TABLE statement to add referential constraints to existing tables.
Dropping parent keys and foreign keys
You can drop primary parent keys, unique parent keys, and foreign keys from an existing table.
Creating indexes to improve referential integrity performance for foreign keys (Db2 Performance)
Related reference
ALTER TABLE (Db2 SQL)
Procedure
To drop a key, complete the following steps:
1. Choose the type of key that you want to drop.
2. Drop the key by using the ALTER TABLE statement.
Dropping a foreign key
When you drop a foreign key by using the DROP FOREIGN KEY clause of the ALTER TABLE
statement, Db2 drops the corresponding referential relationships. (You must have the ALTER
privilege on the dependent table and either the ALTER or REFERENCES privilege on the parent
table.) If the referential constraint references a unique key that was created implicitly, and no
other relationships are dependent on that unique key, the implicit unique key is also dropped.
Dropping a unique key
When you drop a unique key by using the DROP UNIQUE clause of the ALTER TABLE statement,
Db2 drops all the referential relationships in which the unique key is a parent key. The dependent
tables no longer have foreign keys. (You must have the ALTER privilege on any dependent tables.)
The table's unique index that enforced
Related tasks
Adding referential constraints to existing tables
You can use the ALTER TABLE statement to add referential constraints to existing tables.
Adding parent keys and foreign keys
You can add primary parent keys, unique parent keys, and foreign keys to an existing table.
Procedure
Issue the ALTER TABLE statement.
Adding check constraints
You can define a check constraint on a table by using the ADD CHECK clause of the ALTER TABLE
statement. If the table is empty, the check constraint is added to the description of the table.
If the table is not empty, what happens when you define the check constraint depends on the value
of the CURRENT RULES special register, which can be either STD or DB2.
• If the value is STD, the check constraint is enforced immediately when it is defined. If a row does
not conform, the table check constraint is not added to the table and an error occurs.
• If the value is DB2, the check constraint is added to the table description but its enforcement is
deferred. Because some rows in the table might violate the check constraint, the table is placed
in check-pending status.
The ALTER TABLE statement that is used to define a check constraint always fails if the table
space or partition that contains the table is in a CHECK-pending status, the CURRENT RULES
special register value is STD, and the table is not empty.
Dropping check constraints
To remove a check constraint from a table, use the DROP CONSTRAINT or DROP CHECK clause of
the ALTER TABLE statement. You must not use DROP CONSTRAINT on the same ALTER TABLE
statement as DROP FOREIGN KEY, DROP CHECK, DROP PRIMARY KEY, or DROP UNIQUE.
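A minimal sketch of both clauses; the table name, column name, and constraint name are hypothetical:

```sql
-- Add a check constraint
ALTER TABLE EMP
  ADD CONSTRAINT CK_SALARY CHECK (SALARY >= 0);

-- Drop it again
ALTER TABLE EMP
  DROP CONSTRAINT CK_SALARY;
```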
Related concepts
Check constraints (Introduction to Db2 for z/OS)
Related reference
ALTER TABLE (Db2 SQL)
DROP (Db2 SQL)
Procedure
To add partitions, use the following approaches:
• Add a partition after the last existing logical partition by issuing an ALTER TABLE statement. In the ADD
PARTITION clause, specify an ENDING AT value beyond the existing limit of the last logical partition.
In most cases, you can use the new partition immediately after the ALTER statement completes.
In this case, the partition is not placed in REORG-pending (REORP) status because it extends the
high-range values that were not previously used.
However, for non-large table spaces, the partition is placed in REORP status because the last partition
boundary was not previously enforced.
• Add a partition between existing logical partitions, by completing the following steps:
a) Issue an ALTER TABLE statement with an ADD PARTITION clause that specifies an ENDING AT
value between existing partition limits.
This method is supported only for partition-by-range table spaces.
Adding partitions between existing partitions results in a pending data definition change if the data
sets for the added partition already exist. The existing logical partition that previously contained the
range of the new partition is also placed in advisory REORG-pending (AREOR) status.
b) Resolve the pending data definition change by taking one of the following actions:
– Run a partition-level REORG for each affected partition, including newly added partitions
and adjacent affected partitions. You can identify the affected partitions by querying the
SYSIBM.SYSPENDINGDDL catalog table for rows inserted when the ALTER TABLE statement
was run. The REORG_SCOPE_LOWPART and REORG_SCOPE_HIGHPART columns indicate the
existing boundaries of the affected partitions.
– Run a table space-level REORG and specify the SCOPE PENDING option.
Examples
For example, consider a table space that contains a transaction table named TRANS. The table
is divided into 10 partitions, and each partition contains one year of data. Partitioning is defined on
the transaction date, and the limit key value is the end of each year. The following table shows a
representation of the table space.
The following table shows a representative excerpt of the table space after the partition for the year
2020 is added.
Table 31. An excerpt of the table space, showing the added partition 11
Limit value    Physical partition number    Data set name that backs the partition
12/31/2018 9 catname.DSNDBx.dbname.psname.I0001.A009
12/31/2019 10 catname.DSNDBx.dbname.psname.I0001.A010
12/31/2020 11 catname.DSNDBx.dbname.psname.I0001.A011
Table 32. An excerpt of the table space, showing the added partition 12
Limit value    Physical partition number    Data set name that backs the partition
12/31/2018 9 catname.DSNDBx.dbname.psname.I0001.A009
12/31/2019 10 catname.DSNDBx.dbname.psname.I0001.A010
06/30/2020 12 catname.DSNDBx.dbname.psname.I0001.A012
12/31/2020 11 catname.DSNDBx.dbname.psname.I0001.A011
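The ALTER statements that produce the layouts in Tables 31 and 32 are omitted from this excerpt. A sketch follows; the table name TRANS and the MM/DD/YYYY limit-key constants are assumptions carried over from the earlier example:

```sql
-- Add a new last partition for 2020 (Table 31, physical partition 11)
ALTER TABLE TRANS
  ADD PARTITION ENDING AT ('12/31/2020');

-- Insert a partition between existing partitions (Table 32, physical partition 12)
ALTER TABLE TRANS
  ADD PARTITION ENDING AT ('06/30/2020');
```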
What to do next
After you add partitions, you might need to complete any of the following actions.
Alter the attributes of added partitions
You might need to alter the attributes of the added partition. The attributes of the new partition are
either inherited or calculated. If it is necessary to change specific attributes for the new partition, you
must issue separate ALTER TABLESPACE and ALTER INDEX statements after you add the partition.
Examine the catalog to determine whether the inherited values require changes.
The source used for each inherited attribute depends on the position of the new partition and other
factors. However, with certain exceptions, the following general pattern applies:
• A partition that is added between existing logical partitions inherits most attribute values from the
table space.
• A partition that is added as the new last logical partition inherits most attribute values from the
previous last logical partition.
For a detailed description of how Db2 determines the attributes of the added partitions, see “How
Db2 determines attributes for added partitions” on page 217.
For example, to specify the space attributes for a new partition, use the ALTER TABLESPACE
and ALTER INDEX statements. Suppose that the new partition is PARTITION 11 for the table
space and the index. Issue the following statements to specify quantities for the PRIQTY, SECQTY,
FREEPAGE, and PCTFREE attributes:
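The statements themselves are not included in this excerpt; a sketch with hypothetical object names and quantities follows:

```sql
-- Space attributes for new partition 11 of the table space
ALTER TABLESPACE DSN8D12A.DSN8S12E
  ALTER PARTITION 11
  PRIQTY 400 SECQTY 120
  FREEPAGE 0 PCTFREE 5;

-- Matching attributes for partition 11 of the partitioned index
ALTER INDEX DSN8C10.XTRANS1
  ALTER PARTITION 11
  PRIQTY 400 SECQTY 120
  FREEPAGE 0 PCTFREE 5;
```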
STORNAME    Table space or index space ("1" on page 218)    Last logical partition
VCATNAME    Table space or index space ("1" on page 218)    Last logical partition
PCTFREE     Table space or index space ("1" on page 218)    Last logical partition
GBPCACHE    Table space or index space ("1" on page 218)    Last logical partition
Notes:
1. If the corresponding SYSTABLESPACE or SYSINDEXES value is NULL, for objects created or last altered
in Db2 12 at function level 100 or earlier, Db2 updates the space-level value based on the existing last
logical partition, and the new partition inherits this value.
2. If the corresponding SYSTABLESPACE or SYSINDEXES value is NULL, for objects created or last altered
in Db2 12 at function level 100 or earlier, the value for the new partition is NULL.
3. If DSSIZE=0 in the SYSTABLESPACE table and the table space is not large, the value for the table
partition is 4G.
4. If partitions are added by partitioning scheme alteration from PBG to PBR.
Related tasks
Adding partitions
You can use ALTER TABLE statements to add partitions to all types of partitioned table spaces.
Related reference
ALTER TABLE (Db2 SQL)
ALTER TABLESPACE (Db2 SQL)
SYSTABLESPACE catalog table (Db2 SQL)
SYSINDEXES catalog table (Db2 SQL)
Altering partitions
You can use the ALTER TABLE statement to alter the partitions of table spaces.
Procedure
To change the boundary between partitions:
1. Use an ALTER statement to modify the limit key value for each partition boundary that you want to
change.
If the partitioned table space uses table-controlled partitioning, use an ALTER TABLE statement with
the ALTER PARTITION clause to alter the limit key. If the partitioned table space uses index-controlled
partitioning, use an ALTER INDEX statement with the ALTER PARTITION clause.
Recommendation: If the table space uses index-controlled partitioning, alter it to use table-controlled
partitioning before you alter the limit key. You can follow the example in “Converting table spaces to
use table-controlled partitioning” on page 191.
If you attempt to alter a limit key by using ALTER TABLE, the statement fails if both of the following
conditions are true:
• The table space uses index-controlled partitioning.
• The PREVENT_ALTERTB_LIMITKEY subsystem parameter is set to YES.
You can change the limit key values of all or most of the partitions. You can apply the changes to one or
more partitions at a time.
The effect of altering the limit keys depends on the type of table space:
• For partition-by-range table spaces and partitioned (non-UTS) table spaces with table-controlled
partitioning, the data remains available after the limit keys are altered.
In most cases, altering the limit keys for those table spaces is a pending definition change that
causes the partitions on either side of the boundary to be placed in advisory REORG-pending
(AREOR) status.
In some cases, a change to a limit key value is immediately materialized, and AREOR status is not
set. Immediate materialization occurs when Db2 determines that all of the following conditions
are true:
– The alteration does not move any data between partitions.
– No other pending definition change exists on the identified partition.
– No pending definition changes exist on the adjacent logical partition toward which the alteration
moves the limit key of the identified partition.
• For partitioned (non-UTS) table spaces with index-controlled partitioning, altering the limit keys is an
immediate definition change. In these cases, the partitions on either side of the boundary are placed
in REORG-pending (REORP) status, and the data is unavailable until the affected range of partitions
is reorganized.
2. Run the REORG TABLESPACE utility to redistribute data in the partitioned table space based on the
new limit key values.
This example reorganizes a range of partitions and includes the STATISTICS keyword, which means
that REORG collects statistics about the specified range of partitions.
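The utility control statement that this example refers to is not reproduced in this excerpt. A sketch, with a hypothetical database and table space name and partition range, is:

```sql
REORG TABLESPACE DSN8D12A.DSN8S12E PART 2:4
  STATISTICS TABLE(ALL) INDEX(ALL)
  SHRLEVEL REFERENCE
```

The STATISTICS keyword causes REORG to collect statistics for the reorganized partitions, as the surrounding text describes.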
You can reorganize a range of partitions, even if the partitions are not in AREOR or REORP status.
However, you cannot reorganize only a subset of the range of partitions that are in AREOR or REORP
status. You must reorganize the entire range to reset the restrictive status and materialize any pending
limit key changes.
If you run REORG on partitions that are in REORP or advisory REORG-pending (AREOR) status,
consider the values that you set for the following options:
SHRLEVEL
You can specify SHRLEVEL REFERENCE or SHRLEVEL CHANGE when objects are in AREOR or
REORP status. REORG materializes any pending definition changes. If you specify SHRLEVEL
NONE, REORG does not materialize any pending limit key changes and any restrictive states are
not reset.
KEEPDICTIONARY
REORG ignores the KEEPDICTIONARY option for any partition that is in REORP or AREOR status.
REORG automatically rebuilds the dictionaries for the affected partitions. However, if you specify a
range of partitions that includes some partitions that are not in REORP status, REORG accepts the
KEEPDICTIONARY option for those nonrestricted partitions.
DISCARDDN and PUNCHDDN
Specify the DISCARDDN and PUNCHDDN data sets when the limit key for the last partition was
reduced for a table space that is defined with the LARGE or DSSIZE option. Otherwise, REORG
terminates and issues message DSNU035I with return code 8.
REORG writes SYSCOPY records as follows:
• If any partition is in REORP status when REORG runs, Db2 writes a SYSCOPY record with STYPE=A
for each partition that is specified on the REORG job.
• If you take an inline image copy of a range of partitions, Db2 writes one SYSCOPY record with
ICTYPE=F for each partition. Each record has the same data set name.
If REORG materialized any pending limit key changes, the related plans and packages are invalidated.
Related tasks
Rotating partitions
You can use the ALTER TABLE statement to rotate any logical partition to become the last partition.
Rotating partitions is supported for partitioned (non-UTS) table spaces and partition-by-range table
spaces, but not for partition-by-growth table spaces.
Extending the boundary of the last partition
You can extend the boundary of the last partition of a table that uses table-controlled partitioning without
impacting data availability.
Splitting the last partition into two
To allow for future growth, you can truncate the last partition of a table space and move some of the data
into a new partition.
Inserting rows at the end of a partition
To specify how you want Db2 to insert rows at the end of a partition, you can use the CREATE TABLE or
ALTER TABLE statement.
Partitioning data in Db2 tables
Rotating partitions
You can use the ALTER TABLE statement to rotate any logical partition to become the last partition.
Rotating partitions is supported for partitioned (non-UTS) table spaces and partition-by-range table
spaces, but not for partition-by-growth table spaces.
Procedure
To rotate a partition to be the last partition:
1. Issue the ALTER TABLE statement and specify the ROTATE PARTITION option.
2. Optional: Run the RUNSTATS utility.
Example
For example, assume that the partition structure of the table space is sufficient through the year 2006.
The following table shows a representation of the table space through the year 2006. When another
partition is needed for the year 2007, suppose that you determine that the data for 1996 is no longer
needed, and you want to recycle the partition for the year 1996 to hold the transactions for the year
2007.
To rotate the first partition for table TRANS to be the last partition, issue the following statement:
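The statement is missing from this excerpt; it would be along the following lines. The RESET keyword, which discards the existing 1996 rows, and the MM/DD/YYYY date format are assumptions:

```sql
ALTER TABLE TRANS
  ROTATE PARTITION FIRST TO LAST
  ENDING AT ('12/31/2007') RESET;
```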
Related tasks
Changing the boundary between partitions
You can change the boundary of a partition by explicitly specifying a new value for the limit key. The
limit key is the highest value of the partitioning key for a partition. The partitioning key is the column or
columns that are used to determine the partitions.
Extending the boundary of the last partition
You can extend the boundary of the last partition of a table that uses table-controlled partitioning without
impacting data availability.
Splitting the last partition into two
To allow for future growth, you can truncate the last partition of a table space and move some of the data
into a new partition.
Inserting rows at the end of a partition
Procedure
Issue the ALTER TABLE statement with the ALTER PARTITION clause to specify a new boundary for the
last partition.
For more details on this process, see “Changing the boundary between partitions” on page 219.
Example
The following table shows a representation of a table space through the year 2007. You rotated the first
partition to be the last partition. Now, you want to extend the last partition so that it includes the year
2008.
To extend the boundary of the last partition to include the year 2008, issue the following statement:
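The statement is omitted from this excerpt; a sketch, assuming the rotated last partition is physical partition 1 of table TRANS, is:

```sql
ALTER TABLE TRANS
  ALTER PARTITION 1
  ENDING AT ('12/31/2008');
```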
You can use the partition immediately after the ALTER statement completes. The partition is not placed in
any restrictive status, because it extends the high-range values that were not previously used.
Related tasks
Changing the boundary between partitions
You can change the boundary of a partition by explicitly specifying a new value for the limit key. The
limit key is the highest value of the partitioning key for a partition. The partitioning key is the column or
columns that are used to determine the partitions.
Rotating partitions
You can use the ALTER TABLE statement to rotate any logical partition to become the last partition.
You can reset the advisory REORG-pending or REORG-pending status in one of the following ways:
• Run REORG with the DISCARD option to reset the advisory REORG-pending status or REORG-pending
status, set the new partition boundary, and discard the data rows that fall outside of the new boundary.
Procedure
• To split a partition into two when the limit key of the last partition is less than MAXVALUE:
a) Suppose that p1 is the limit key for the last partition. Issue the ALTER TABLE statement with the
ADD PARTITION clause to add a partition with a limit key that is greater than p1.
b) Issue the ALTER TABLE statement with the ALTER PARTITION clause to specify a limit key that is
less than p1 for the partition that is now the second-to-last partition.
For more details on this process, see “Changing the boundary between partitions” on page 219.
c) Issue the ALTER TABLE statement with the ALTER PARTITION clause to specify p1 for the limit key
of the new last partition.
d) Issue the REORG TABLESPACE utility on the new second-to-last and last partitions to remove the
REORG-pending status on the last partition, and materialize the changes and remove the advisory
REORG-pending status on the second-to-last partition.
• To split a partition into two when the limit key of the last partition is MAXVALUE, and the last partition
and the previous partition have no pending definition changes:
a) Issue the ALTER TABLE statement with the ALTER PARTITION clause to specify limit key p1, which
is less than MAXVALUE, for the last partition.
For more details on this process, see “Changing the boundary between partitions” on page 219.
b) Issue the ALTER TABLE statement with the ADD PARTITION clause to add a new last partition, with
a limit key that is greater than p1.
c) Issue the REORG TABLESPACE utility on the new second-to-last and last partitions to remove the
REORG-pending status.
Example
For example, the following table shows a representation of a table space through the year 2015, where
each year of data is saved in separate partitions. Assume that you want to split the data for 2015 into two
partitions.
You want to create a partition to include the data for the last six months of 2015 (from 07/01/2015
to 12/31/2015). You also want partition P001 to include only the data for the first six months of 2015
(through 06/30/2015).
Table 38. Table space with each year of data in a separate partition
Partition Limit value Data set name that backs the partition
P002 12/31/2005 catname.DSNDBx.dbname.psname.I0001.A002
P003 12/31/2006 catname.DSNDBx.dbname.psname.I0001.A003
P004 12/31/2007 catname.DSNDBx.dbname.psname.I0001.A004
P005 12/31/2008 catname.DSNDBx.dbname.psname.I0001.A005
P006 12/31/2009 catname.DSNDBx.dbname.psname.I0001.A006
P007 12/31/2010 catname.DSNDBx.dbname.psname.I0001.A007
P008 12/31/2011 catname.DSNDBx.dbname.psname.I0001.A008
To truncate partition P001 to include data only through 06/30/2015, issue the following statement:
To preserve the last partition key limit of 12/31/2015, issue the following statement:
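The two statements referenced above are omitted from this excerpt. A sketch follows; the table name TRANS and the assumption that P001 is physical partition 1 are placeholders:

```sql
-- Truncate partition P001 so that it includes data only through mid-2015
ALTER TABLE TRANS
  ALTER PARTITION 1
  ENDING AT ('06/30/2015');

-- Add a new last partition that preserves the 12/31/2015 limit key
ALTER TABLE TRANS
  ADD PARTITION ENDING AT ('12/31/2015');
```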
The following table shows a portion of the table space and the modified partitions:
Table 39. Table space with one year split into two partitions
Partition Limit value Data set name that backs the partition
P011 12/31/2014 catname.DSNDBx.dbname.psname.I0001.A011
P001 06/30/2015 catname.DSNDBx.dbname.psname.I0001.A001
P012 12/31/2015 catname.DSNDBx.dbname.psname.I0001.A012
Related tasks
Changing the boundary between partitions
You can change the boundary of a partition by explicitly specifying a new value for the limit key. The
limit key is the highest value of the partitioning key for a partition. The partitioning key is the column or
columns that are used to determine the partitions.
Rotating partitions
You can use the ALTER TABLE statement to rotate any logical partition to become the last partition.
Rotating partitions is supported for partitioned (non-UTS) table spaces and partition-by-range table
spaces, but not for partition-by-growth table spaces.
Extending the boundary of the last partition
You can extend the boundary of the last partition of a table that uses table-controlled partitioning without
impacting data availability.
Inserting rows at the end of a partition
To specify how you want Db2 to insert rows at the end of a partition, you can use the CREATE TABLE or
ALTER TABLE statement.
Related reference
ALTER TABLE (Db2 SQL)
Advisory or restrictive states (Db2 Utilities)
Procedure
Issue a CREATE TABLE or ALTER TABLE statement and specify the APPEND option.
The APPEND option has the following settings:
YES
Requests that data rows be placed into the table without regard to clustering during SQL INSERT
and online LOAD operations. Rather than attempting to insert rows in cluster-preserving order, Db2
appends rows at the end of the table or the appropriate partition.
NO
Requests the standard behavior of SQL INSERT and online LOAD operations; that is, they attempt to
place data rows in a well-clustered manner with respect to the values in the row's cluster key columns.
NO is the default.
After populating a table with the APPEND option in effect, you can achieve clustering by running the
REORG utility.
Restriction: You cannot specify the APPEND option for tables created in XML or work file table spaces.
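A minimal sketch of both forms of the option; the table names are hypothetical:

```sql
-- Turn on append-only inserts for an existing table
ALTER TABLE TRANS APPEND YES;

-- Or define the behavior at creation time
CREATE TABLE TRANS_LOG
  (TRANSID   INTEGER NOT NULL,
   TRANSDATE DATE    NOT NULL)
  APPEND YES;
```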
Related tasks
Changing the boundary between partitions
You can change the boundary of a partition by explicitly specifying a new value for the limit key. The
limit key is the highest value of the partitioning key for a partition. The partitioning key is the column or
columns that are used to determine the partitions.
Rotating partitions
You can use the ALTER TABLE statement to rotate any logical partition to become the last partition.
Rotating partitions is supported for partitioned (non-UTS) table spaces and partition-by-range table
spaces, but not for partition-by-growth table spaces.
Extending the boundary of the last partition
You can extend the boundary of the last partition of a table that uses table-controlled partitioning without
impacting data availability.
Splitting the last partition into two
To allow for future growth, you can truncate the last partition of a table space and move some of the data
into a new partition.
Procedure
Issue the ALTER TABLE statement and specify the ADD column-name XML option.
Example
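The example statement is omitted from this excerpt; a sketch with hypothetical table and column names is:

```sql
-- Add an XML column to a purchase-order table
ALTER TABLE PURCHASE_ORDERS
  ADD ORDER_DETAIL XML;
```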
Related tasks
Altering implicitly created XML objects
You can alter implicitly created XML objects; however, you can change only some of the properties for an
XML object.
Related reference
ALTER TABLE (Db2 SQL)
Procedure
To alter the size of the hash space for a table, use one of the following approaches:
• Run the REORG TABLESPACE utility on the table space and specify AUTOESTSPACE YES in the REORG
TABLESPACE statement.
Db2 automatically estimates a size for the hash space based on information from the real-time
statistics tables. If you specify AUTOESTSPACE NO in the REORG TABLESPACE statement, Db2 uses
the hash space that you explicitly specified for the table space.
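The REORG approach can be sketched as follows; the database and table space names are hypothetical:

```sql
-- Let Db2 size the hash space from real-time statistics
REORG TABLESPACE DSN8D12A.DSN8S12E
  AUTOESTSPACE YES SHRLEVEL REFERENCE
```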
What to do next
Monitor the real-time-statistics information about your table to ensure that the hash access path is used
regularly and that your disk space is used efficiently.
Related tasks
Managing space and page size for hash-organized tables (deprecated) (Db2 Performance)
Monitoring hash access (deprecated) (Db2 Performance)
Related reference
ALTER TABLE (Db2 SQL)
REORG TABLESPACE (Db2 Utilities)
Procedure
To add a system period to a table and define system-period data versioning:
1. Issue the ALTER TABLE statement on the base table to alter or add row-begin, row-end, and
transaction-start-ID columns, and to define the system period.
After you alter the table, it must have the following attributes:
• A row-begin column that is defined as TIMESTAMP(12) NOT NULL with the GENERATED ALWAYS AS
ROW BEGIN attribute.
• A row-end column that is defined as TIMESTAMP(12) NOT NULL with the GENERATED ALWAYS AS
ROW END attribute.
• A transaction-start-ID column that is defined as TIMESTAMP(12) with the GENERATED ALWAYS AS
TRANSACTION START ID attribute.
• A SYSTEM_TIME period that is defined on the row-begin and row-end columns.
Example
For example, consider that you created a table named policy_info by issuing the following CREATE
TABLE statement:
Issue the following ALTER TABLE statements to add the begin and end columns and a system period to
the table:
To define system-period data versioning between the system-period temporal table and the history table,
issue the following ALTER TABLE statement:
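The CREATE TABLE and ALTER TABLE statements referenced above are not reproduced in this excerpt. The following sketch shows the whole sequence; the column names, data types of the non-generated columns, and the history table name hist_policy_info are assumptions:

```sql
-- Hypothetical base table
CREATE TABLE policy_info
  (policy_id CHAR(4)  NOT NULL,
   coverage  INTEGER  NOT NULL);

-- Add the generated columns and the SYSTEM_TIME period
ALTER TABLE policy_info
  ADD COLUMN sys_start TIMESTAMP(12) NOT NULL
      GENERATED ALWAYS AS ROW BEGIN;
ALTER TABLE policy_info
  ADD COLUMN sys_end TIMESTAMP(12) NOT NULL
      GENERATED ALWAYS AS ROW END;
ALTER TABLE policy_info
  ADD COLUMN trans_id TIMESTAMP(12)
      GENERATED ALWAYS AS TRANSACTION START ID;
ALTER TABLE policy_info
  ADD PERIOD SYSTEM_TIME (sys_start, sys_end);

-- Enable versioning against a history table that is assumed to exist
-- with the same column layout
ALTER TABLE policy_info
  ADD VERSIONING USE HISTORY TABLE hist_policy_info;
```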
Related concepts
Temporal tables and data versioning
A temporal table is a table that records the period of time when a row is valid.
Related information
Managing Ever-Increasing Amounts of Data with IBM Db2 for z/OS: Using Temporal Data Management,
Archive Transparency, and the IBM Db2 Analytics Accelerator for z/OS (IBM Redbooks)
Procedure
Issue the ALTER TABLE statement with the ADD PERIOD BUSINESS_TIME clause.
The table becomes an application-period temporal table.
Example
For example, consider that you created a table named policy_info by issuing the following CREATE TABLE
statement:
You can add an application period to this table by issuing the following ALTER TABLE statement:
You also can add a unique index to the table by issuing the following CREATE INDEX statement:
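The statements referenced above are omitted from this excerpt. A sketch follows; the column names and the index name are assumptions:

```sql
-- Hypothetical table with business-period columns
CREATE TABLE policy_info
  (policy_id CHAR(4) NOT NULL,
   coverage  INTEGER NOT NULL,
   bus_start DATE    NOT NULL,
   bus_end   DATE    NOT NULL);

-- Add the application period
ALTER TABLE policy_info
  ADD PERIOD BUSINESS_TIME (bus_start, bus_end);

-- Optional unique index that prevents overlapping periods
CREATE UNIQUE INDEX ix_policy
  ON policy_info (policy_id, BUSINESS_TIME WITHOUT OVERLAPS);
```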
Restriction: You cannot issue the ALTER INDEX statement with ADD BUSINESS_TIME WITHOUT
OVERLAPS. Db2 issues SQL error code -104 with SQLSTATE 20522.
Procedure
Issue INSERT, UPDATE, DELETE, or MERGE statements to make the changes that you want.
Timestamp information is stored in the timestamp columns, and historical rows are moved to the history
table.
Restriction: You cannot issue SELECT FROM DELETE or SELECT FROM UPDATE statements when the FOR
PORTION OF option is specified for either the UPDATE statement or the DELETE statement. Db2 issues an
error in both of these cases (SQL error code -104 with SQLSTATE 20522).
Example
The following example shows how you can insert data in the POLICY_INFO table by specifying the
DEFAULT keyword in the VALUES clause for each of the generated columns:
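The INSERT statement itself is not shown in this excerpt; a sketch, with column names carried over from the hypothetical policy_info definition, is:

```sql
-- The generated columns take the DEFAULT keyword
INSERT INTO policy_info
  (policy_id, coverage, sys_start, sys_end, trans_id)
  VALUES ('A123', 12000, DEFAULT, DEFAULT, DEFAULT);
```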
Related concepts
Temporal tables and data versioning
A temporal table is a table that records the period of time when a row is valid.
Related reference
CURRENT TEMPORAL SYSTEM_TIME (Db2 SQL)
SYSTIMESENSITIVE bind option (Db2 Commands)
Procedure
Issue an ALTER TABLE statement and specify the DROP MATERIALIZED QUERY option.
For example,
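A sketch of the statement, assuming a materialized query table named TRANSCOUNT:

```sql
-- Hypothetical sketch: change the MQT into a base table.
ALTER TABLE TRANSCOUNT DROP MATERIALIZED QUERY;
```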
What to do next
After you issue this statement, Db2 can no longer use the table for query optimization, and you cannot
populate the table by using the REFRESH TABLE statement.
Related tasks
Changing the attributes of a materialized query table
You can use the ALTER TABLE statement to change the attributes of an existing materialized query table.
Changing the definition of a materialized query table
After you create a materialized query table, you can change the definition in one of two ways.
Procedure
To change the attributes of an existing materialized query table:
1. Issue the ALTER TABLE statement.
2. Decide which attributes to alter.
Option: Enable or disable automatic query rewrite.
Description: By default, when you create or register a materialized query table, Db2 enables it for
automatic query rewrite. To disable automatic query rewrite, issue the following statement:
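A sketch of the statement, assuming a materialized query table named TRANSCOUNT:

```sql
-- Hypothetical sketch: disable automatic query rewrite for an MQT.
ALTER TABLE TRANSCOUNT DISABLE QUERY OPTIMIZATION;
```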
Related tasks
Changing a materialized query table to a base table
You can use the ALTER TABLE statement to change a materialized query table into a base table.
Changing the definition of a materialized query table
Procedure
To change the definition of an existing materialized query table, use one of the following approaches:
• Optional: Drop and re-create the materialized query table with a different definition.
• Optional: Use the ALTER TABLE statement to change the materialized query table into a base table.
Then, change it back to a materialized query table with a different but equivalent definition (that is,
with a different but equivalent SELECT for the query).
Related tasks
Changing a materialized query table to a base table
You can use the ALTER TABLE statement to change a materialized query table into a base table.
Changing the attributes of a materialized query table
You can use the ALTER TABLE statement to change the attributes of an existing materialized query table.
• Assign a new validation routine to the table using the VALIDPROC clause. (Only one validation routine
can be connected to a table at a time, so if a validation routine already exists, Db2 disconnects the old
one and connects the new routine.) Rows that existed before the connection of a new validation routine
are not validated. In this example, the previous validation routine is disconnected and a new routine is
connected with the program name EMPLNEWE:
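A sketch of the statement; the table name DSN8910.EMP is an assumption based on the sample database naming used elsewhere in this information:

```sql
-- Sketch: connect validation routine EMPLNEWE to an assumed table.
ALTER TABLE DSN8910.EMP
  VALIDPROC EMPLNEWE;
```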
To ensure that the rows of a table conform to a new validation routine, you must run the validation routine
against the old rows. One way to accomplish this is to use the REORG and LOAD utilities.
Procedure
To ensure that the rows of a table conform to a new validation routine by using the REORG and LOAD
utilities:
1. Use REORG to reorganize the table space that contains the table with the new validation routine.
Specify UNLOAD ONLY, as in this example:
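A sketch of the utility control statement; the database and table space names are assumptions:

```
REORG TABLESPACE DSN8D91A.DSN8S91E
  UNLOAD ONLY
```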
The EMPLNEWE validation routine validates all rows after the LOAD step has completed. Db2 copies
any invalid rows into the SYSDISC data set.
Procedure
• To enable replication products to capture data changes, complete the following steps:
a) Issue an ALTER TABLE statement with the DATA CAPTURE CHANGES clause.
b) Activate replication of the source table to the target table.
The data replication tool checks the Db2 catalog field SYSTABLES.DATACAPTURE to ensure that
DATA CAPTURE is enabled; otherwise, the activation fails.
c) The data replication tool starts consuming the full log records of the source table through IFCID
0306 or some other method.
• To disable the capture of data changes by replication products, complete the following steps:
a) Deactivate replication of the source table to the target table.
b) Ensure that the data replication tool stops consuming Db2 log records for the source table.
c) Issue an ALTER TABLE statement with the DATA CAPTURE NONE clause.
Results
As part of the DATA CAPTURE alteration processing, Db2 completes the following actions:
Procedure
To change an edit procedure or a field procedure for a table space in which the maximum record length is
less than 32 KB, use the following procedure:
1. Run the UNLOAD utility or run the REORG TABLESPACE utility with the UNLOAD EXTERNAL option to
unload the data and decode it using the existing edit procedure or field procedure.
These utilities generate a LOAD statement in the data set (specified by the PUNCHDDN option of the
REORG TABLESPACE utility) that you can use to reload the data into the original table space.
If you are using the same edit procedure or field procedure for many tables, unload the data from all
the table spaces that have tables that use the procedure.
2. Modify the code of the edit procedure or the field procedure.
3. After the unload operation is completed, stop Db2.
4. Link-edit the modified procedure, using its original name.
5. Start Db2.
6. Use the LOAD utility to reload the data. LOAD then uses the modified procedure or field procedure to
encode the data.
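Step 1 above might be sketched as follows; the database and table space names are assumptions:

```
REORG TABLESPACE DSN8D91A.DSN8S91E
  UNLOAD EXTERNAL
```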
What to do next
To change an edit procedure or a field procedure for a table space in which the maximum record length is
greater than 32 KB, use the DSNTIAUL sample program to unload the data.
Related reference
ALTER TABLE (Db2 SQL)
Procedure
To change the attributes of an identity column:
1. Issue an ALTER TABLE statement.
2. Specify the ALTER COLUMN option.
This clause changes all of the attributes of an identity column except the data type. However, if the
ALTER TABLE statement is rolled back, a gap in the sequence of identity column values can occur
because of unassigned cache values.
What to do next
Changing the data type of an identity column, like changing some other data types, requires that you drop
and then re-create the table.
Related concepts
Identity columns (Db2 Application programming and SQL)
Table space versions
Db2 creates a table space version each time that you commit one or more specific schema changes by
using the ALTER TABLE statement.
Related tasks
Altering the data type of a column
You can use the ALTER TABLE statement to change the data types of columns in existing tables in several
ways.
Changing data types by dropping and re-creating the table
Some changes to a table cannot be made with the ALTER TABLE statement.
Related reference
ALTER TABLE (Db2 SQL)
Procedure
To change data types:
1. Unload the table.
2. Drop the table.
Attention: Be very careful about dropping a table. In most cases, recovering a dropped table is
nearly impossible. If you decide to drop a table, remember that such changes might invalidate a
package.
You must alter tables that have been created with RESTRICT ON DROP to remove the restriction before
you can drop them.
3. Commit the changes.
4. Re-create the table.
If the table has an identity column:
• Choose carefully the new value for the START WITH attribute of the identity column in the CREATE
TABLE statement if you want the first generated value for the identity column of the new table to
resume the sequence after the last generated value for the table that was saved by the unload in
step 1.
• Define the identity column as GENERATED BY DEFAULT so that the previously generated identity
values can be reloaded into the new table.
5. Reload the table.
Related tasks
Altering the attributes of an identity column
You can change the attributes of an identity column by using the ALTER TABLE statement.
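The discussion that follows describes the effects of dropping the sample project table. Based on the catalog query shown later (BNAME = 'PROJ', BCREATOR = 'DSN8910'), the statement in question is presumably:

```sql
-- Presumed statement for the discussion that follows.
DROP TABLE DSN8910.PROJ;
```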
The statement deletes the row in the SYSIBM.SYSTABLES catalog table that contains information about
DSN8910.PROJ. This statement also drops any other objects that depend on the project table. This action
results in the following implications:
• The column names of the table are dropped from SYSIBM.SYSCOLUMNS.
• If the dropped table has an identity column, the sequence attributes of the identity column are removed
from SYSIBM.SYSSEQUENCES.
• If triggers are defined on the table, they are dropped, and the corresponding rows are removed from
SYSIBM.SYSTRIGGERS and SYSIBM.SYSPACKAGES.
• Any views based on the table are dropped.
• Packages that involve the use of the table are invalidated.
• Cached dynamic statements that involve the use of the table are removed from the cache.
• Synonyms for the table are dropped from SYSIBM.SYSSYNONYMS.
• Indexes created on any columns of the table are dropped, along with any pending changes that are
associated with the index.
SELECT DNAME
FROM SYSIBM.SYSPLANDEP
WHERE BNAME = 'PROJ'
AND BCREATOR = 'DSN8910'
AND BTYPE = 'T';
Re-creating a table
You can re-create a Db2 table to decrease the length attribute of a string column or the precision of a
numeric column.
Procedure
To re-create a Db2 table:
1. If you do not have the original CREATE TABLE statement and all authorization statements for the table
(for example, call the table T1), query the catalog to determine its description, the description of all
indexes and views on it, and all users with privileges on it.
2. Create a new table (for example, call the table T2) with the attributes that you want.
3. Copy the data from the old table T1 into the new table T2 by using one of the following methods:
a) Issue the following INSERT statement:
INSERT INTO T2
SELECT * FROM T1;
b) Load data from your old table into the new table by using the INCURSOR option of the LOAD utility.
This option uses the Db2 UDB family cross-loader function.
4. Issue the statement DROP TABLE T1. If T1 is the only table in an explicitly created table space, and
you do not mind losing the compression dictionary, if one exists, you can drop the table space instead.
By dropping the table space, the space is reclaimed.
5. Commit the DROP statement.
6. Use the statement RENAME TABLE to rename table T2 to T1.
7. Run the REORG utility on the table space that contains table T1.
8. Notify users to re-create any synonyms, indexes, views, and authorizations they had on T1.
What to do next
If you want to change a data type from string to numeric or from numeric to string (for example, INTEGER
to CHAR or CHAR to INTEGER), use the CHAR and DECIMAL scalar functions in the SELECT statement to
do the conversion. Another alternative is to use the following method:
1. Use UNLOAD or REORG UNLOAD EXTERNAL (if the data to unload is less than 32 KB) to save the data
in a sequential file, and then
2. Use the LOAD utility to repopulate the table after re-creating it. When you reload the table, make sure
you edit the LOAD statement to match the new column definition.
This method is particularly appealing when you are trying to re-create a large table.
Related concepts
Implications of dropping a table
Dropping a table has several implications that you should be aware of.
Objects that depend on the dropped table
Before dropping a table, check to see what objects are dependent on the table. The Db2 catalog tables
SYSIBM.SYSVIEWDEP, SYSIBM.SYSPLANDEP, and SYSIBM.SYSPACKDEP indicate what views, application
plans, and packages are dependent on different Db2 objects.
Procedure
To move a table to a table space of a different page size:
1. Unload the table using UNLOAD FROM TABLE or REORG UNLOAD EXTERNAL FROM TABLE.
2. Use CREATE TABLE LIKE on the table to re-create it in the table space of the new page size.
3. Use Db2 Control Center, Db2 Administration Tool for z/OS, or catalog queries to determine the
dependent objects: views, authorization, plans, packages, synonyms, triggers, referential integrity, and
indexes.
4. Drop the original table.
5. Rename the new table to the name of the old table using RENAME TABLE.
6. Re-create all dependent objects.
7. Rebind plans and packages.
Procedure
To drop and re-create a view:
1. Issue the DROP VIEW SQL statement.
2. Commit the drop.
When you drop a view, Db2 also drops the dependent views.
3. Re-create the modified view using the CREATE VIEW SQL statement.
What to do next
Attention: When you drop a view, Db2 invalidates packages that are dependent on the view and
revokes the privileges of users who are authorized to use it. Db2 attempts to rebind the package
the next time it is executed, and you receive an error if you do not re-create the view.
To tell how much rebinding and reauthorizing is needed if you drop a view, see the following table.
Related tasks
Creating Db2 views
You can create a view on tables or on other views at the current server.
Dropping Db2 views
You can drop a Db2 view by removing the view at the current server.
Related reference
DROP (Db2 SQL)
COMMIT (Db2 SQL)
CREATE VIEW (Db2 SQL)
Procedure
Issue the CREATE TRIGGER statement and specify the INSTEAD OF trigger for insert, update, and delete
operations on the view.
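A sketch of an INSTEAD OF trigger for the update case; the view, table, and column names are assumptions:

```sql
-- Hypothetical sketch: route updates against view EMPV to table EMP.
CREATE TRIGGER EMPV_UPD
  INSTEAD OF UPDATE ON EMPV
  REFERENCING NEW AS N OLD AS O
  FOR EACH ROW MODE DB2SQL
  UPDATE EMP
     SET LASTNAME = N.LASTNAME
   WHERE EMPNO = O.EMPNO;
```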
Procedure
Specify a period clause for a BUSINESS_TIME period (FOR PORTION OF BUSINESS_TIME) following the
name of the target view in an UPDATE or DELETE statement.
Example
The following example shows how you can create a view that references an application-period temporal
table (att), and then specify a period clause for an update operation on the view.
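A sketch of the view definition; the column selected from the table att is an assumption:

```sql
-- Hypothetical sketch: a view over an application-period temporal table.
CREATE VIEW v7 (col1)
  AS SELECT coverage FROM att;
```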
UPDATE v7
FOR PORTION OF BUSINESS_TIME FROM '2013-01-01' TO '2013-06-01'
SET col1 = col1 + 1.10;
Procedure
Issue the ALTER INDEX statement.
The ALTER INDEX statement can be embedded in an application program or issued interactively.
Related concepts
Indexes that are padded or not padded
The NOT PADDED and PADDED options of the CREATE INDEX and ALTER INDEX statements specify how
varying-length string columns are stored in an index.
Implementing Db2 indexes
Indexes provide efficient access to table data, but can require additional processing when you modify
data in a table.
Related tasks
Designing indexes for performance (Db2 Performance)
Related reference
ALTER INDEX (Db2 SQL)
Procedure
To add a column to an existing index:
1. Issue the ALTER INDEX ADD COLUMN SQL statement when you add a column to a table.
2. Commit the alter procedure.
Results
If the column that is being added to the index is already part of the table on which the index is defined,
the index is left in a REBUILD-pending (RBDP) status. However, if you add a new column to a table and to
an existing index on that table within the same unit of work, the index is left in advisory REORG-pending
(AREO*) status and can be used immediately for data access.
If you add a column to an index and to a table within the same unit of work, table and index
versioning occur.
To add a ZIPCODE column to the table and the index, issue the following statements:
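The statements might look like the following sketch; the table and index names are assumptions:

```sql
-- Hypothetical sketch: add the column to the table and to the index
-- in the same unit of work.
ALTER TABLE ACCOUNTS
  ADD ZIPCODE CHAR(5);
ALTER INDEX STATE_IX
  ADD COLUMN (ZIPCODE);
COMMIT;
```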
Because the ALTER TABLE and ALTER INDEX statements are executed within the same unit of work, Db2
immediately can use the new index with the key STATE, ZIPCODE for data access.
Related reference
ALTER INDEX (Db2 SQL)
Procedure
To specify that additional columns be appended to the set of index key columns of a unique index:
1. Issue the ALTER INDEX statement with the INCLUDE clause.
Any column that is included with the INCLUDE clause is not used to enforce uniqueness. These
included columns might improve the performance of some queries through index-only access. Using
this option might eliminate the need to access data pages for more queries and might eliminate
redundant indexes.
2. Commit the alter procedure.
As a result of this alter procedure, the index is placed into page set REBUILD-pending (PSRBD) status,
because the additional columns preexisted in the table.
3. To remove the PSRBD status from the index, complete one of the following options:
• Run the REBUILD INDEX utility on the index that you ran the alter procedure on.
• Run the REORG TABLESPACE utility on the table space that contains the index, or wait to run the
alter procedure until just before the REORG TABLESPACE utility is scheduled to run.
4. Run the RUNSTATS utility.
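Step 1 above might look like the following sketch; the index and column names are assumptions:

```sql
-- Hypothetical sketch: append a non-key column to a unique index.
ALTER INDEX IX_EMP
  ADD INCLUDE (SALARY);
```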
Procedure
To alter how varying-length column values are stored in an index, complete the following steps:
1. Choose the padding attribute for the columns.
2. Issue the ALTER INDEX SQL statement.
• Specify the NOT PADDED clause if you do not want column values to be padded to their maximum
length. This clause specifies that VARCHAR and VARGRAPHIC columns of an existing index are
stored as varying-length columns.
• Specify the PADDED clause if you want column values to be padded to the maximum lengths of the
columns. This clause specifies that VARCHAR and VARGRAPHIC columns of an existing index are
stored as fixed-length columns.
3. Commit the alter procedure.
Results
The ALTER INDEX statement is successful only if the index has at least one varying-length column.
What to do next
When you alter the padding attribute of an index, the index is placed into a restricted REBUILD-pending
(RBDP) state. When you alter the padding attribute of a nonpartitioned secondary index (NPSI), the index
is placed into a page set REBUILD-pending (PSRBD) state. In both cases, the indexes cannot be accessed
until they are rebuilt from the data.
Related concepts
Alternative method for altering an index
You can minimize the potential for data outages by using the ALTER INDEX statement with the
BUFFERPOOL option.
Indexes that are padded or not padded
The NOT PADDED and PADDED options of the CREATE INDEX and ALTER INDEX statements specify how
varying-length string columns are stored in an index.
Related tasks
Adding columns to an index
You can add columns to an index in two ways. You can add a column to an index when you add the column
to a table, or you can specify that additional columns be appended to the set of index key columns of a
unique index.
Altering the clustering of an index
Procedure
To change the clustering option of an index:
1. Issue the ALTER INDEX statement.
2. Specify the clustering option.
Restriction: You can only specify CLUSTER if there is not already another clustering index. In addition,
an index on a table that is organized by hash cannot be altered to a clustering index.
• CLUSTER indicates that the index is to be used as the clustering index of the table. The change
takes effect immediately. Any subsequently inserted rows use the new clustering index. Existing
data remains clustered by the previous clustering index until the table space is reorganized.
• NOT CLUSTER indicates that the index is not to be used as the clustering index of the table.
However, if the index was previously defined as the clustering index, it continues to be used as the
clustering index until you explicitly specify CLUSTER for a different index.
If you specify NOT CLUSTER for an index that is not a clustering index, that specification is ignored.
3. Commit the alter procedure.
Related concepts
Alternative method for altering an index
You can minimize the potential for data outages by using the ALTER INDEX statement with the
BUFFERPOOL option.
Related tasks
Adding columns to an index
You can add columns to an index in two ways. You can add a column to an index when you add the column
to a table, or you can specify that additional columns be appended to the set of index key columns of a
unique index.
Altering how varying-length index columns are stored
You can use the ALTER INDEX statement to change how varying-length column values are stored in an
index.
Dropping and redefining a Db2 index
Dropping an index does not cause Db2 to drop any other objects. The consequence of dropping indexes
is that Db2 invalidates packages that use the index and automatically rebinds them when they are next
used.
Reorganizing indexes
Procedure
To drop and re-create an index:
1. Issue a DROP INDEX statement.
2. Commit the drop procedure.
The index space associated with the index is also dropped.
3. Re-create the modified index by issuing a CREATE INDEX statement.
4. Rebind any application programs that use the dropped index.
If you drop an index and then run an application program that used it (causing an automatic
rebind), that application program no longer uses the old index. If, at a later time, you re-create the
index and the application program is not rebound, the application program cannot take advantage of
the new index.
Related concepts
Alternative method for altering an index
You can minimize the potential for data outages by using the ALTER INDEX statement with the
BUFFERPOOL option.
Related tasks
Adding columns to an index
You can add columns to an index in two ways. You can add a column to an index when you add the column
to a table, or you can specify that additional columns be appended to the set of index key columns of a
unique index.
Altering how varying-length index columns are stored
You can use the ALTER INDEX statement to change how varying-length column values are stored in an
index.
Altering the clustering of an index
You can use the ALTER INDEX SQL statement to change the clustering index for a table.
Reorganizing indexes
A schema change that affects an index might cause performance degradation. In this case, you might
need to reorganize indexes to correct any performance degradation.
Creating Db2 indexes
Reorganizing indexes
A schema change that affects an index might cause performance degradation. In this case, you might
need to reorganize indexes to correct any performance degradation.
Procedure
Run the REORG INDEX utility as soon as possible after a schema change that affects an index.
You can also run the REORG TABLESPACE utility.
Related concepts
Alternative method for altering an index
You can minimize the potential for data outages by using the ALTER INDEX statement with the
BUFFERPOOL option.
Index versions
Db2 uses index versions to maximize data availability. Index versions enable Db2 to keep track of schema
changes and provide users with access to data in altered columns that are contained in indexes.
Related tasks
Adding columns to an index
You can add columns to an index in two ways. You can add a column to an index when you add the column
to a table, or you can specify that additional columns be appended to the set of index key columns of a
unique index.
Altering how varying-length index columns are stored
You can use the ALTER INDEX statement to change how varying-length column values are stored in an
index.
Altering the clustering of an index
You can use the ALTER INDEX SQL statement to change the clustering index for a table.
Dropping and redefining a Db2 index
Dropping an index does not cause Db2 to drop any other objects. The consequence of dropping indexes
is that Db2 invalidates packages that use the index and automatically rebinds them when they are next
used.
Related reference
REORG INDEX (Db2 Utilities)
REORG TABLESPACE (Db2 Utilities)
DSSIZE The data sets of the table space are already created, and any of the
following conditions are true:
• Pending definition changes already exist for the table space or for any
objects in the table space.
• The table space uses relative page numbering, and the DSSIZE value
that is specified at the table space level is smaller than the value that is
currently being used for one or more of the partitions in the table space.
• The table space uses absolute page numbering, and the specified
DSSIZE value is different than the value that is currently being used for
the table space.
FL 508 MOVE TABLE The data sets of the altered table space are already created.
PAGENUM The change to the PAGENUM attribute is a pending change to the definition
of the table space if the data sets of the table space are already created
and if one of the following conditions is true:
• Pending definition changes already exist for the table space or any
associated indexes.
• The specified PAGENUM attribute is different from the value that is
currently being used for the table space.
SEGSIZE The data sets of the table space are already created, and any of the
following conditions are true:
• Pending definition changes already exist for the definition of the table
space or any objects in the table space.
• The specified SEGSIZE value for a universal table space is different than
the existing value.
• The table space is converted from a partitioned (non-UTS) table space to
a partition-by-range table space.
When pending definition changes are specified for the BUFFERPOOL, DSSIZE, MAXPARTITIONS, or
SEGSIZE attributes of partition-by-growth (PBG) table spaces, the number of partitions is determined
based on the amount of existing data at the time the pending change is applied, and partition growth
can occur. If LOB columns exist, additional LOB table spaces and auxiliary objects are implicitly
created for the newly-created partitions independent of whether SQLRULES (DB2) or SQLRULES (STD)
is in effect or whether the table space was explicitly or implicitly created. The new LOB objects inherit
the buffer pool attribute and authorization from the existing LOB objects.
ALTER TABLE
The following table lists clauses and specific conditions that cause an ALTER TABLE statement to be
processed as a pending definition change, which is not reflected in the definition or data at the time
that the ALTER TABLE statement is issued. Instead, the table space or specific partitions are placed
in an advisory REORG-pending state (AREOR). A subsequent reorganization of the table space, or the
specific affected partitions, applies the pending definition changes to the definition and data of the
table. The definition of the containing table space must not be in an incomplete state.
ALTER PARTITION The statement changes the limit keys for the following types of partitioned
table spaces:
• Partition-by-range table spaces
• Partitioned (non-UTS) table spaces with table-controlled partitioning.
The alteration is normally a pending change, and the altered partition
is placed in advisory REORG-pending (AREOR) status. Unless the specified integer value identifies
the last logical partition, the next logical partition is also placed in
AREOR status. However, if no other pending definition changes exist on the
affected partitions, an immediate change can sometimes be used, possibly
with a restrictive status.
The change is immediate with no restrictive status if any of the following
conditions are true:
• The affected partition data sets never contained any data.
• There is no possibility of any data being discarded or moved between
partitions based only on the range of possible data values (not on the
actual data values). This situation can occur if the statement specifies
the same existing values for the limit key, or if the new limit key for the
last logical partition expands the range of possible data values.
ALTER INDEX
The following table lists clauses and specific conditions that cause an ALTER INDEX statement to be
processed as a pending definition change, which is not reflected in the current definition or data at the
time that the ALTER statement is issued. Instead, the index is placed in an advisory REORG-pending
(AREOR) state. A subsequent reorganization of the entire index with an appropriate utility materializes
the changes and applies the pending definition changes to the catalog and data.
If there are no pending definition changes for the table space, you can run the REORG INDEX utility
with SHRLEVEL CHANGE or the REORG TABLESPACE utility with SHRLEVEL CHANGE or REFERENCE to
materialize the changes to the definition of the index. If pending definition changes also exist for the
table space, you must run the REORG TABLESPACE utility with SHRLEVEL CHANGE or REFERENCE to
enable the changes to the definition of the index (and the pending table space definition).
COMPRESS The data sets of the index are created, and all of the following conditions
are true:
• The index is defined on a base table, or an associated XML table or
auxiliary table, where the table space for the base table is a universal
table space (UTS) or is being converted to a UTS by a pending definition
change.
• The compress attribute is changed, or the table space or objects in the
table space have pending definition changes.
Related concepts
Table space types and characteristics in Db2 for z/OS
Db2 supports several different types of table spaces. The partitioning method and segmented
organization are among the main characteristics that define the table space type.
Related tasks
Altering table spaces
Use the ALTER TABLESPACE statement to change the description of a table space at the current server.
Related reference
SYSPENDINGDDL catalog table (Db2 SQL)
Examples
Example
The following example provides a scenario that shows how you can use the ALTER TABLESPACE
statement to generate pending definition changes, and then use the REORG TABLESPACE utility with
SHRLEVEL REFERENCE to materialize pending definition changes at the table space level.
Consider the following scenario:
1. In Version 8, you created the simple table space TS1 in database DB1, such as:
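A sketch of the original definition; in Version 8, omitting SEGSIZE and NUMPARTS created a simple table space, and the buffer pool BP0 matches the catalog value mentioned below:

```sql
-- Hypothetical sketch of the Version 8 definition.
CREATE TABLESPACE TS1 IN DB1
  BUFFERPOOL BP0;
```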
2. In the current release of Db2, you issue the following ALTER TABLESPACE statement to convert
the simple table space to a partition-by-growth table space, and to change the buffer pool page
size. Those changes are pending definition changes. Suppose that the changes take place at time
2012-10-04-07.14.20.204010:
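A sketch of the statement, consistent with the catalog output shown later (MAXPARTITIONS 20) and the BP8K0 buffer pool mentioned in step 8:

```sql
-- Sketch: two pending definition changes in one statement.
ALTER TABLESPACE DB1.TS1
  MAXPARTITIONS 20
  BUFFERPOOL BP8K0;
```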
For each pending option in an ALTER statement, there is a corresponding entry in the
SYSPENDINGDDL table. If you specify multiple pending options in one ALTER statement, each
change has its own SYSPENDINGDDL entry, but the changes have the same create timestamp.
In addition, the same ALTER statement text is stored repeatedly with each pending option entry
that is specified with the ALTER statement. Therefore, issuing this ALTER TABLESPACE statement
results in the table space being placed in AREOR state, and two pending option entries are inserted
into the SYSPENDINGDDL table with OBJTYPE = 'S' for table space. This ALTER statement has not
changed the current definition or data, so the buffer pool in SYSTABLESPACE still indicates BP0,
and the table space is still a simple table space.
3. Later at the time of 2012-10-09-07.15.22.216020, you issue the following ALTER TABLESPACE
statement that has one pending option:
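A sketch of the statement, consistent with the SEGSIZE 64 entry shown in the catalog output later:

```sql
-- Sketch: one pending definition change.
ALTER TABLESPACE DB1.TS1
  SEGSIZE 64;
```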
This statement results in another entry being inserted into the SYSPENDINGDDL table with
OBJTYPE = 'S', for table space.
4. You then issue an ALTER INDEX statement that has one pending option. This statement results in
the index being placed in AREOR state, and an entry is inserted into the SYSPENDINGDDL table with
OBJTYPE = 'I', for index. This ALTER statement has not changed the current definition or data, so the
buffer pool in SYSINDEXES still indicates BP0 for the index.
5. You issue another ALTER statement that is exactly the same as the previous one, at the time of
2012-12-20-04.10.10.605058. This statement results in another entry being inserted into the
SYSPENDINGDDL table with OBJTYPE = 'I', for index.
6. You run the following SELECT statement to query the SYSPENDINGDDL catalog table:
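A sketch of the query; the column list matches the output shown in Tables 41 and 42:

```sql
-- Hypothetical sketch of the catalog query.
SELECT DBNAME, TSNAME, OBJSCHEMA, OBJNAME, OBJTYPE,
       OPTION_SEQNO, OPTION_KEYWORD, OPTION_VALUE, CREATEDTS
  FROM SYSIBM.SYSPENDINGDDL
 WHERE DBNAME = 'DB1'
 ORDER BY CREATEDTS;
```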
Table 41. Output from the SELECT statement for the SYSPENDINGDDL catalog
DBNAME TSNAME OBJSCHEMA OBJNAME OBJTYPE
DB1 TS1 DB1 TS1 S
DB1 TS1 DB1 TS1 S
DB1 TS1 DB1 TS1 S
DB1 TS1 USER1 IX1 I
DB1 TS1 USER1 IX1 I
Table 42. Continuation of output from the SELECT statement for the SYSPENDINGDDL catalog
OPTION_SEQNO OPTION_KEYWORD OPTION_VALUE CREATEDTS
2 MAXPARTITIONS 20 2012-10-04-07.14.20.204010
1 SEGSIZE 64 2012-10-09-07.15.22.216020
7. Next, you run the REORG INDEX utility with SHRLEVEL CHANGE on the index. For example:
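A sketch of the utility control statement, using the index USER1.IX1 from the catalog output:

```
REORG INDEX USER1.IX1 SHRLEVEL CHANGE
```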
However, because pending definition changes exist for the table space, the REORG utility
proceeds without materializing the pending definition changes for the index, and issues warning
DSNU275I with RC = 4 to indicate that no materialization has been done on the index, because
there are pending definition changes for the table space. After the REORG utility runs, all the
SYSPENDINGDDL entries still exist, and the AREOR state remains the same.
8. Now, you run the REORG TABLESPACE utility with SHRLEVEL REFERENCE on the entire table
space. For example:
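A sketch of the utility control statement:

```
REORG TABLESPACE DB1.TS1 SHRLEVEL REFERENCE
```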
The REORG utility materializes all of the pending definition changes for the table space and the
associated index, applying the changes in the catalog and data. After the REORG utility runs,
the AREOR state is cleared and all entries in the SYSPENDINGDDL table for the table space and
the associated index are removed. The catalog and data now reflect a buffer pool of BP8K0,
MAXPARTITIONS of 20, and SEGSIZE of 64.
Example
The following example provides a scenario that shows how you can use the ALTER TABLE statement
to generate pending definition changes, and then use the REORG TABLESPACE utility with SHRLEVEL
REFERENCE to materialize pending definition changes in the table space that contains the table.
Consider the following scenario:
1. A table, the objects that contain the table, and an index on the table were previously defined as
follows:
Table 44. Output from the SELECT statement for the SYSPENDINGDDL catalog
DBNAME TSNAME OBJSCHEMA OBJNAME OBJTYPE
DB1 TS1 SC TB1 T
Table 45. Continuation of output from the SELECT statement for the SYSPENDINGDDL catalog
OPTION_SEQNO OPTION_KEYWORD OPTION_VALUE CREATEDTS
Table 46. Statement text output for the SELECT statement for the SYSPENDINGDDL catalog
STATEMENT_TEXT
ALTER TABLE SC.TB1 ALTER COLUMN COLUMN1 SET DATA TYPE BIGINT
4. Assume that there are no other pending definition changes on table space TS1. You run the REORG
TABLESPACE utility with SHRLEVEL REFERENCE on the table space that contains table TB1. For
example:
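The utility control statement is not shown above; for the table space in this scenario, it might be:

```
REORG TABLESPACE DB1.TS1 SHRLEVEL REFERENCE
```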
The REORG utility materializes the pending definition change for the table, applying the changes
in the catalog and data. After the REORG utility runs, the AREOR state is cleared, and the entry
in the SYSPENDINGDDL table for table space TS1 is removed. The catalog and data now reflect a
COLUMN1 data type of BIGINT.
Related concepts
Reorganization with pending definition changes (Db2 Utilities)
Table space types and characteristics in Db2 for z/OS
Db2 supports several different types of table spaces. The partitioning method and segmented
organization are among the main characteristics that define the table space type.
Related reference
ALTER TABLE (Db2 SQL)
ALTER TABLESPACE (Db2 SQL)
ALTER INDEX (Db2 SQL)
Partition
ALTER TABLE statements with any of the following options:
• ALTER PARTITION
• ADD PARTITION
Restrictions: All restrictions that apply when the scope of the change is the table, plus
ALTER TABLE ADD PARTITION to insert a partition if the last logical partition has pending
definition changes to alter the limit key value.
Additional restrictions: ALTER TABLE ALTER PARTITION if the table has pending definition
changes due to insertion of a partition.
Index
ALTER INDEX statements with any of the following options:
• BUFFERPOOL
• COMPRESS
Restrictions:
• ALTER INDEX with ADD COLUMN option
• ALTER INDEX with COMPRESS YES option
• ALTER INDEX with DSSIZE option at index level
• ALTER INDEX with PIECESIZE option
• ALTER INDEX with REGENERATE option
• ALTER INDEX with VCAT option
• ALTER TABLE with immediate option(s) that are not KEYLABEL
• CREATE INDEX on a table in the table space
• DROP INDEX of an index that enforces a ROWID GENERATED BY DEFAULT column in an
explicitly created table space
• DROP TABLE of an empty auxiliary table if pending changes exist for the base table
space, the table it contains, or indexes on the table
• RENAME INDEX
Additional restrictions: None
Situation: An ALTER TABLE statement contains some operations that can be executed as
pending or immediate changes, and some operations that can be executed only as immediate
changes.
Behavior when immediate alteration is in effect: All operations in the ALTER statement are
executed as immediate changes.
Behavior when pending alteration is in effect: The ALTER statement fails.
Situation: The table on which the ALTER is executed, or objects that are related to the
table, have unmaterialized pending definition changes.
Behavior when immediate alteration is in effect: The ALTER statement fails.
Behavior when pending alteration is in effect: The ALTER statement is executed as a
pending definition change.
Situation: Which utility can be used to materialize column alterations.
Behavior when immediate alteration is in effect: REORG or LOAD REPLACE can be used.
Behavior when pending alteration is in effect: Only REORG with SHRLEVEL REFERENCE or
CHANGE can be used.
Consider setting your subsystem to always use pending column alterations if you commonly encounter
these situations:
• Columns with indexes defined on them are altered in a way that causes the indexes to be placed in
restrictive states.
After an alter operation, non-unique indexes are unavailable until REBUILD INDEX is run. For unique
indexes, the tables are unavailable for insert or update operations until REBUILD INDEX is run.
In this situation, executing the column alterations as pending changes eliminates the need to run
REBUILD INDEX. The indexes are not in a restrictive state after the pending column alterations. When
REORG is run on the entire table space to materialize the column alterations, the containing table space
and indexes are unavailable for only a short time, during the SWITCH phase.
• Tables in which column alterations are performed are in table spaces on which pending alterations are
needed.
An immediate column alteration cannot be executed if other alterations to the containing table space
are pending. The table and table space operations must be done in one of the following ways:
– The immediate column alteration must be performed before pending table space alterations.
– The pending table space operations must be materialized before the immediate column alteration
can be done. This process requires that REORG TABLESPACE is run once to materialize the pending
table space operations, and once to convert the data in the altered column to the new format.
In this situation, executing the column alterations as pending changes allows you to group
materialization of all changes into the same REORG.
Related tasks
Altering the data type of a column
Procedure
To alter an existing stored procedure:
1. Follow the process for the type of change that you want to make:
• To alter the host language code for an external stored procedure, modify the source and prepare
the code again. (Precompile, compile, and link-edit the application, and then bind the DBRM into a
package.)
• FL 507 To alter the body of a native SQL procedure, issue the ALTER PROCEDURE statement with
the REPLACE clause or the CREATE PROCEDURE statement with the OR REPLACE clause.
• To alter the procedure options of any type of stored procedure, issue the ALTER PROCEDURE
statement with the options that you want. FL 507 Or, for a native SQL procedure or an
external procedure, you can issue the CREATE PROCEDURE statement with the OR REPLACE clause.
2. Refresh the WLM environment if either of the following situations applies:
• For external SQL procedures or external procedures, you changed the stored procedure logic or
parameters.
• You changed the startup JCL for the stored procedures address space.
Restriction: In some cases, refreshing the WLM environment might not be enough. For example, if
the change to the JCL is to the NUMTCB value, refreshing the WLM environment is not enough. The
refresh fails because it cannot start a new WLM address space that has a different NUMTCB from the
existing one. In this case, you need to do a WLM quiesce, followed by a WLM resume.
Tip: To refresh the WLM environment, use the Db2-supplied WLM_REFRESH stored procedure rather
than the REFRESH command. (The REFRESH command starts a new WLM address space and stops the
existing one.)
3. If you disabled automatic rebinds, rebind any plans or packages that refer to the stored procedure that
you altered.
Related tasks
Implementing Db2 stored procedures
You might choose to use stored procedures for code that is used repeatedly. Other benefits of using
stored procedures include reducing network traffic, returning result sets to an application, or allowing
access to data without granting the privileges to the applications.
Related reference
WLM_REFRESH stored procedure (Db2 SQL)
ALTER PROCEDURE (SQL - native) (Db2 SQL)
ALTER PROCEDURE (external) (Db2 SQL)
ALTER PROCEDURE (SQL - external) (deprecated) (Db2 SQL)
Procedure
Issue the ALTER FUNCTION SQL statement.
Results
Changes to the user-defined function take effect immediately.
Example
Example 1: In the following example, two functions named CENTER exist in the SMITH schema. The first
function has two input parameters with INTEGER and FLOAT data types, respectively. The specific name
for the first function is FOCUS1. The second function has three parameters with CHAR(25), DEC(5,2), and
INTEGER data types.
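The statement for Example 1 is not reproduced here; one hedged possibility, identifying the first function by its specific name and changing a routine option (the WLM environment name WLMENVNAME is hypothetical), is:

```sql
ALTER SPECIFIC FUNCTION SMITH.FOCUS1
  WLM ENVIRONMENT WLMENVNAME;
```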
Example 2: The following example changes the second function when any arguments are null:
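The statement for Example 2 is not reproduced here; a hedged sketch that identifies the second function by its signature, reading "when any arguments are null" as the RETURNS NULL ON NULL INPUT option, is:

```sql
ALTER FUNCTION SMITH.CENTER (CHAR(25), DEC(5,2), INTEGER)
  RETURNS NULL ON NULL INPUT;
```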
Related concepts
User-defined functions (Db2 SQL)
Related tasks
Creating user-defined functions
The CREATE FUNCTION statement registers a user-defined function with a database server.
Creating a user-defined function (Db2 Application programming and SQL)
Related reference
ALTER FUNCTION (external) (Db2 SQL)
ALTER FUNCTION (compiled SQL scalar) (Db2 SQL)
ALTER FUNCTION (SQL table) (Db2 SQL)
Procedure
Determine the restrictions on the XML object that you want to change.
The following table provides information about the properties that you can or cannot change for a
particular XML object.
Option Description
XML table space  You can alter the following properties:
• BUFFERPOOL (16 KB buffer pools only)
• COMPRESS
• PRIQTY
• SECQTY
• GBPCACHE
• USING STOGROUP
• ERASE
• LOCKSIZE (The only possible values are XML and TABLESPACE.)
• SEGSIZE
• DSSIZE
• MAXPARTITIONS
XML table space attributes that are inherited from the base table space, such as LOG, are
implicitly altered if the base table space is altered.
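For instance, a minimal sketch of altering one of the listed properties (the table space name DB1.XTS1 is hypothetical; BP16K0 is a 16 KB buffer pool) might be:

```sql
ALTER TABLESPACE DB1.XTS1 BUFFERPOOL BP16K0;
```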
Related tasks
Adding XML columns
You can add XML columns to regular relational tables by using the ALTER TABLE statement.
Procedure
Issue the following access method services command:
Procedure
To specify the new qualifier:
1. Run the installation CLIST, and specify INSTALL TYPE=INSTALL and DATA SHARING
FUNCTION=NONE.
2. Enter new values for the fields shown in the following table.
Table 48. CLIST panels and fields to change to reflect new qualifier
Panel name Field name Comments
DSNTIPA1 INSTALL TYPE Specify INSTALL. Do not specify a new default
prefix for the input data sets listed on this panel.
DSNTIPA1 OUTPUT MEMBER
NAME
DSNTIPA2 CATALOG ALIAS
DSNTIPH COPY 1 NAME and These are the bootstrap data set names.
COPY 2 NAME
DSNTIPH COPY 1 PREFIX and These fields appear for both active and archive
COPY 2 PREFIX log prefixes.
DSNTIPT SAMPLE LIBRARY This field allows you to specify a data set name for edited
output of the installation CLIST. Avoid overlaying existing data sets by changing the
middle node, NEW, to something else. The only members you use in this procedure are
xxxxMSTR and DSNTIJUZ in the sample library.
DSNTIPO PARAMETER MODULE Change this value only if you want to preserve the
existing member through the CLIST.
The output from the CLIST is a new set of tailored JCL with new cataloged procedures and a DSNTIJUZ
job, which produces a new member.
3. Run the first two job steps of DSNTIJUZ to update the subsystem parameter load module.
Unless you have specified a new name for the load module, make sure the output load module does
not go to the SDSNEXIT or SDSNLOAD library used by the active Db2 subsystem.
If you are changing the subsystem ID in addition to the system data set name qualifier, you should
run job steps DSNTIZP and DSNTIZQ to update the DSNHDECP or a user-specified application
Procedure
To stop Db2 when no activity is outstanding:
1. Stop Db2 by entering the following command:
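The command itself is not shown above; a sketch (the MODE keyword shown here is an assumption, not necessarily what the original used):

```
-STOP DB2 MODE (QUIESCE)
```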
3. Use the following commands to make sure the subsystem is in a consistent state.
-DISPLAY THREAD(*) TYPE(*)
-DISPLAY UTILITY (*)
-TERM UTILITY(*)
-DISPLAY DATABASE(*) RESTRICT
-DISPLAY DATABASE(*) SPACENAM(*) RESTRICT
-RECOVER INDOUBT
Correct any problems before continuing.
4. Stop Db2 by entering the following command:
5. Run the print log map utility (DSNJU004) to identify the current active log data set and the last
checkpoint RBA.
6. Run DSN1LOGP with the SUMMARY (YES) option, using the last checkpoint RBA from the output of the
print log map utility you ran in the previous step.
The report headed DSN1157I RESTART SUMMARY identifies active units of recovery or pending
writes. If either situation exists, do not attempt to continue. Start Db2 with ACCESS(MAINT), use
the necessary commands to correct the problem, and repeat steps 4 through 6 until all activity is
complete.
Procedure
To rename the system data sets:
1. Using IDCAMS, change the names of the catalog and directory table spaces. Be sure to specify the
instance qualifier of your data set, y, which can be either I or J.
For example,
ALTER oldcat.DSNDBC.DSNDB01.*.y0001.A001 -
NEWNAME (newcat.DSNDBC.DSNDB01.*.y0001.A001)
ALTER oldcat.DSNDBD.DSNDB01.*.y0001.A001 -
NEWNAME (newcat.DSNDBD.DSNDB01.*.y0001.A001)
ALTER oldcat.DSNDBC.DSNDB06.*.y0001.A001 -
NEWNAME (newcat.DSNDBC.DSNDB06.*.y0001.A001)
ALTER oldcat.DSNDBD.DSNDB06.*.y0001.A001 -
NEWNAME (newcat.DSNDBD.DSNDB06.*.y0001.A001)
ALTER oldcat.LOGCOPY1.* -
NEWNAME (newcat.LOGCOPY1.*)
ALTER oldcat.LOGCOPY1.*.DATA -
NEWNAME (newcat.LOGCOPY1.*.DATA)
ALTER oldcat.LOGCOPY2.* -
NEWNAME (newcat.LOGCOPY2.*)
ALTER oldcat.LOGCOPY2.*.DATA -
NEWNAME (newcat.LOGCOPY2.*.DATA)
ALTER oldcat.BSDS01 -
NEWNAME (newcat.BSDS01)
ALTER oldcat.BSDS01.* -
NEWNAME (newcat.BSDS01.*)
ALTER oldcat.BSDS02 -
NEWNAME (newcat.BSDS02)
ALTER oldcat.BSDS02.* -
NEWNAME (newcat.BSDS02.*)
Procedure
To update the BSDS:
1. Run the change log inventory utility (DSNJU003).
Use the new qualifier for the BSDS because it has now been renamed. The following example
illustrates the control statements that are required for three logs when dual copy is
specified for the logs.
NEWCAT VSAMCAT=newcat
DELETE DSNAME=oldcat.LOGCOPY1.DS01
DELETE DSNAME=oldcat.LOGCOPY1.DS02
DELETE DSNAME=oldcat.LOGCOPY1.DS03
DELETE DSNAME=oldcat.LOGCOPY2.DS01
DELETE DSNAME=oldcat.LOGCOPY2.DS02
DELETE DSNAME=oldcat.LOGCOPY2.DS03
NEWLOG DSNAME=newcat.LOGCOPY1.DS01,COPY1,STARTRBA=strtrba,ENDRBA=endrba
NEWLOG DSNAME=newcat.LOGCOPY1.DS02,COPY1,STARTRBA=strtrba,ENDRBA=endrba
NEWLOG DSNAME=newcat.LOGCOPY1.DS03,COPY1,STARTRBA=strtrba,ENDRBA=endrba
NEWLOG DSNAME=newcat.LOGCOPY2.DS01,COPY2,STARTRBA=strtrba,ENDRBA=endrba
NEWLOG DSNAME=newcat.LOGCOPY2.DS02,COPY2,STARTRBA=strtrba,ENDRBA=endrba
NEWLOG DSNAME=newcat.LOGCOPY2.DS03,COPY2,STARTRBA=strtrba,ENDRBA=endrba
During startup, Db2 compares the newcat value with the value in the system parameter load module,
and they must be the same.
2. Using the IDCAMS REPRO command, replace the contents of BSDS02 with the contents of BSDS01.
3. Run the print log map utility (DSNJU004) to verify your changes to the BSDS.
4. At a convenient time, change the DD statements for the BSDS in any of your offline utilities to use the
new qualifier.
Procedure
To establish a new ssnmMSTR cataloged procedure:
1. Update ssnmMSTR in SYS1.PROCLIB with the new BSDS data set names.
2. Copy the new system parameter load module to the active SDSNEXIT/SDSNLOAD library.
Procedure
To start Db2 with the new xxxxMSTR cataloged procedure and load module:
1. Issue a START DB2 command with the module name as shown in the following example.
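The command is not reproduced here; a sketch, where DSNZNEW is a hypothetical name for the new subsystem parameter load module:

```
-START DB2 PARM (DSNZNEW)
```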
2. Optional: If you stopped DSNDB01 or DSNDB06 in “Stopping Db2 when no activity is outstanding” on
page 271, you must explicitly start them in this step.
Procedure
To change your work database:
1. Reallocate the database by using the installation job DSNTIJTM from prefix.SDSNSAMP.
2. Modify your existing job by changing the job to remove the BIND step for DSNTIAD and renaming the
data set names in the DSNTTMP step to your new names.
Make sure that you include your current allocations.
Related tasks
Changing your work database for a migrated installation of Db2
You can change the high-level qualifier for your work database if you have a migrated installation of Db2
for z/OS.
Procedure
To change your work database:
1. Stop the database by using the following command (for a database named DSNDB07):
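The command itself is not shown above; for the database named in this step, it would look like:

```
-STOP DATABASE (DSNDB07)
```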
4. Define the clusters by using the following access method services commands. You must specify the
instance qualifier of your data set, y, which can be either I or J.
ALTER oldcat.DSNDBC.DSNDB07.DSN4K01.y0001.A001
NEWNAME newcat.DSNDBC.DSNDB07.DSN4K01.y0001.A001
ALTER oldcat.DSNDBC.DSNDB07.DSN32K01.y0001.A001
NEWNAME newcat.DSNDBC.DSNDB07.DSN32K01.y0001.A001
Repeat the preceding statements (with the appropriate table space name) for as many table spaces as
you use.
5. Create the table spaces in DSNDB07 by using the following commands:
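The CREATE TABLESPACE statements are not reproduced here; a hedged sketch, reusing the table space names from step 4 (the buffer pool choices are assumptions):

```sql
CREATE TABLESPACE DSN4K01 IN DSNDB07
  BUFFERPOOL BP0
  USING VCAT newcat;

CREATE TABLESPACE DSN32K01 IN DSNDB07
  BUFFERPOOL BP32K
  USING VCAT newcat;
```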
Related tasks
Changing your work database for a new installation of Db2
You can change the high-level qualifier for your work database if you have a new installation of Db2 for
z/OS.
Procedure
To change user-managed objects:
1. Stop the table spaces and index spaces by using the following command:
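The command is not shown above; a sketch with a placeholder database name:

```
-STOP DATABASE (dbname) SPACENAM (*)
```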
2. Use the following SQL ALTER TABLESPACE and ALTER INDEX statements with the USING clause to
specify the new qualifier:
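The statements are not shown above; a hedged sketch with placeholder object names:

```sql
ALTER TABLESPACE dbname.tsname
  USING VCAT newcat;

ALTER INDEX creator.index-name
  USING VCAT newcat;
```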
3. Using IDCAMS, rename the data sets to use the new high-level qualifier. Be sure to
specify the instance qualifier of your data set, y, which can be either I or J:
ALTER oldcat.DSNDBC.dbname.*.y0001.A001 -
NEWNAME (newcat.DSNDBC.dbname.*.y0001.A001)
4. Start the table spaces and index spaces, using the following command:
-DISPLAY DATABASE(dbname)
What to do next
You can rename the data sets while Db2 is down. These steps are included here because the
names must be generated for each database, table space, and index space that is to change.
Procedure
To change Db2-managed objects:
1. Remove all table spaces and index spaces from the storage group by converting the data sets
temporarily to user-managed data sets.
a) Stop each database that has data sets you are going to convert, using the following command:
Restriction: Some databases must be explicitly stopped to allow any alterations. For these
databases, use the following command:
-STOP DATABASE(dbname)
b) Convert to user-managed data sets with the USING VCAT clause of the SQL ALTER TABLESPACE
and ALTER INDEX statements, as shown in the following statements.
The data sets are VSAM linear data sets cataloged in the integrated catalog facility catalog that
catalog-name identifies. For more information about catalog-name values, see Naming conventions
(Db2 SQL).
2. Drop the storage group by using the DROP STOGROUP statement. The DROP succeeds only if
all the objects that referenced this STOGROUP are dropped or converted to user-managed
(USING VCAT clause).
3. Re-create the storage group using the correct volumes and the new alias, using the following
statement:
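The statement is not reproduced here; a hedged sketch (the storage group name and volume serials are placeholders):

```sql
CREATE STOGROUP stogroup-name
  VOLUMES (vol1, vol2)
  VCAT newcat;
```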
4. Using IDCAMS, rename the data sets for the index spaces and table spaces to use the new high-level
qualifier. Also, be sure to specify the instance qualifier of your data set, y, which can be either I or J:
ALTER oldcat.DSNDBC.dbname.*.y0001.A001 -
NEWNAME(newcat.DSNDBC.dbname.*.y0001.A001)
ALTER oldcat.DSNDBD.dbname.*.y0001.A001 -
NEWNAME(newcat.DSNDBD.dbname.*.y0001.A001)
If your table space or index space spans more than one data set, be sure to rename those data sets
also.
5. Convert the data sets back to Db2-managed data sets by using the new Db2 storage group. Use the
following SQL ALTER TABLESPACE and ALTER INDEX statements:
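The statements themselves are not reproduced here; a hedged sketch with placeholder object names:

```sql
ALTER TABLESPACE dbname.tsname
  USING STOGROUP stogroup-name;

ALTER INDEX creator.index-name
  USING STOGROUP stogroup-name;
```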
If you specify USING STOGROUP without specifying the PRIQTY and SECQTY clauses, Db2 uses the
default values.
6. Start each database, using the following command:
-DISPLAY DATABASE(dbname)
Copying your relational database involves not only copying data, but also finding or generating, and
executing, SQL statements to create storage groups, databases, table spaces, tables, indexes, views,
synonyms, and aliases.
You can copy a database by using the DSN1COPY utility. As with the other operations, DSN1COPY is
likely to execute faster than the other applicable tools. It copies directly from one data set to another,
while the other tools extract input for LOAD, which then loads table spaces and builds indexes. But again,
DSN1COPY is more difficult to set up. In particular, you must know the internal Db2 object identifiers,
which other tools translate automatically.
Copying a Db2 subsystem from one z/OS system to another involves the following:
• All the user data and object definitions
• The Db2 system data sets:
– The log
– The bootstrap data set
– Image copy data sets
– The Db2 catalog
– The integrated catalog that records all the Db2 data sets
Procedure
To move data without using the REORG or RECOVER utilities:
1. Stop the database by issuing a STOP DATABASE command.
3. Issue the ALTER INDEX or ALTER TABLESPACE statement to use the new integrated catalog
facility catalog name or Db2 storage group name.
4. Start the database by issuing a START DATABASE command.
Related tasks
Moving Db2-managed data with REORG, RECOVER, or REBUILD
You can create a storage group (possibly using a new catalog alias) and move the data to that new storage
group.
Related reference
DSN1COPY (Db2 Utilities)
Procedure
To create a new storage group that uses the correct volumes and the new alias:
1. Execute the CREATE STOGROUP SQL statement to create the new storage group.
For example:
2. Issue the STOP DATABASE command on the database that contains the table spaces or index spaces
whose data sets you plan to move, to prevent access to those data sets.
3. Execute ALTER TABLESPACE or ALTER INDEX SQL statements to assign the table spaces or indexes to
the new storage group.
4. Using IDCAMS, rename the data sets for the index spaces and table spaces to use the high-level
qualifier for the new storage group. Also, be sure to specify the instance qualifier of your data set, y,
which can be either I or J. If you have run REORG with SHRLEVEL CHANGE or SHRLEVEL REFERENCE
on any table spaces or index spaces, the fifth-level qualifier might be J0001.
ALTER oldcat.DSNDBC.dbname.*.y0001.A001 -
NEWNAME newcat.DSNDBC.dbname.*.y0001.A001
ALTER oldcat.DSNDBD.dbname.*.y0001.A001 -
NEWNAME newcat.DSNDBD.dbname.*.y0001.A001
5. Issue the START DATABASE command to start the database for utility processing only.
6. Run the REORG utility or the RECOVER utility on the table space or index space, or run the REBUILD
utility on the index space.
7. Issue the START DATABASE command to start the database for full processing.
Related tasks
Moving data without REORG or RECOVER
You can move data that you do not want to reorganize or recover.
Symptoms
The IRLM waits, loops, or abends. The following message might be issued:
DXR122E irlmnm ABEND UNDER IRLM TCB/SRB IN MODULE xxxxxxxx
ABEND CODE zzzz
Environment
If the IRLM abends, Db2 terminates. If the IRLM waits or loops, the IRLM terminates, and Db2 terminates
automatically.
DSNC STRT
Related tasks
Connecting from CICS
You can start a connection to Db2 at any time after CICS initialization by using the CICS attachment
facility. The CICS attachment facility is a set of Db2-provided modules that are loaded into the CICS
address space.
Starting Db2
You must start Db2 to make it active and available to TSO applications and to other
subsystems, such as IMS and CICS.
Starting the IRLM
Symptoms
No processing is occurring.
Symptoms
No I/O activity occurs for the affected disk address. Databases and tables that reside on the affected unit
are unavailable.
VARY xxx,OFFLINE,FORCE
D U,DASD,ONLINE
After you force the volume offline, you can set the missing interrupt handler (MIH) I/O
timing interval for the device by issuing the following z/OS command:
SETIOS MIH,DEV=devnum,IOTIMING=mm:ss
2. Issue (or request that an authorized operator issue) the following Db2 command to stop all databases
and table spaces that reside on the affected volume:
If the disk unit must be disconnected for repair, stop all databases and table spaces on all volumes in
the disk unit.
3. Select a spare disk pack, and use ICKDSF to initialize from scratch a disk unit with a different unit
address (yyy) and the same volume serial number (VOLSER).
// Job
//ICKDSF EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
REVAL UNITADDRESS(yyy) VERIFY(volser)
If you initialize a 3380 or 3390 volume, use REVAL with the VERIFY parameter to ensure that
you initialize the intended volume, or to revalidate the home address of the volume and record 0.
Alternatively, use ISMF to initialize the disk unit.
4. Issue the following z/OS console command, where yyy is the new unit address:
VARY yyy,ONLINE
D U,DASD,ONLINE
catnam.DSNDBx.dbname.tsname.y0001.znnn
In this name, y is I or J, x is C (for VSAM clusters) or D (for VSAM data components),
and znnn is the data set or partition number, left-padded by 0 (zero). For more
information, see "Data set naming conventions" on page 37.
8. For a user-defined table space, define the new data set before an attempt to recover it. You can
recover table spaces that are defined in storage groups without prior definition.
9. Issue the following Db2 command to start all the appropriate databases and table spaces that were
previously stopped:
10. Recover the table spaces by using the Db2 RECOVER utility.
Related reference
RECOVER (Db2 Utilities)
Related information
DFSMS Access Method Services Commands
Device Support Facilities (ICKDSF) Device Support Facilities (ICKDSF) User's Guide and Reference
z/OS MVS System Commands
z/OS MVS Initialization and Tuning Reference
Symptoms
Unexpected data is returned from an SQL SELECT statement, even though the SQLCODE that is associated
with the statement is 0.
Causes
An SQLCODE of 0 indicates that Db2 and SQL did not cause the problem, so the cause of the incorrect
data in the table is the application.
Procedure
To back out the incorrect changes:
1. Run the REPORT utility twice, once using the RECOVERY option and once using the TABLESPACESET
option. On each run, specify the table space that contains the inaccurate data.
If you want to recover to the last quiesce point, specify the option CURRENT when running REPORT
RECOVERY.
2. Examine the REPORT output to determine the RBA of the quiesce point.
3. Run RECOVER TOLOGPOINT with the RBA that you found, specifying the names of all related table
spaces.
Results
Recovering all related table spaces to the same quiesce point prevents violations of referential
constraints.
Procedure
To back out the incorrect changes:
1. Run the DSN1LOGP stand-alone utility on the log scope that is available at Db2 restart, using the
SUMMARY(ONLY) option.
2. Determine the RBA of the most recent checkpoint before the first bad update occurred, from one of the
following sources:
• Message DSNR003I on the operator's console, which looks similar to this message:
DSNR003I RESTART ..... PRIOR CHECKPOINT RBA=000007425468
Symptoms
Problems that occur in a Db2-IMS environment can result in a variety of symptoms:
• An IMS wait, loop, or abend is accompanied by a Db2 message that goes to the IMS console. This
symptom indicates an IMS control region failure.
• When IMS connects to Db2, Db2 detects one or more units of recovery that are indoubt.
• When IMS connects to Db2, Db2 detects that it has committed one or more units of recovery that IMS
indicates should be rolled back.
• Messages are issued to the IMS master terminal, to the logical terminal, or both to indicate that some
sort of IMS or Db2 abend has occurred.
Environment
Db2 can be used in an XRF (Extended Recovery Facility) recovery environment with IMS.
To resolve IMS-related problems, follow the appropriate procedure.
Related concepts
Plans for extended recovery facility toleration
Db2 can be used in an extended recovery facility (XRF) recovery environment with CICS or IMS.
Symptoms
• IMS waits, loops, or abends.
• Db2 attempts to send the following message to the IMS master terminal during an abend:
DSNM002 IMS/TM xxxx DISCONNECTED FROM SUBSYSTEM yyyy RC=RC
This message cannot be sent if the failure prevents messages from being displayed.
• Db2 does not send any messages for this problem to the z/OS console.
Environment
• Db2 detects that IMS has failed.
• Db2 either backs out or commits work that is in process.
• Db2 saves indoubt units of recovery, which need to be resolved at reconnection time.
Symptoms
If Db2 has indoubt units of recovery that IMS did not resolve, the following message is issued at the IMS
master terminal, where xxxx is the subsystem identifier:
DSNM004I RESOLVE INDOUBT ENTRY(S) ARE OUTSTANDING FOR SUBSYSTEM xxxx
Causes
When this message is issued, IMS was either cold started, or it was started with an incomplete log tape.
Message DSNM004I might also be issued if Db2 or IMS abnormally terminated in response to a software
error or other subsystem failure.
Environment
• The connection remains active.
• IMS applications can still access Db2 databases.
• Some Db2 resources remain locked out.
If the indoubt thread is not resolved, the IMS message queues might start to back up. If the IMS queues
fill to capacity, IMS terminates. Be aware of this potential difficulty, and monitor IMS until the indoubt
units of work are fully resolved.
If the command is rejected because of associated network IDs, use the same command again,
substituting the recovery ID for the network ID.
Related concepts
Duplicate IMS correlation IDs
Under certain circumstances, two threads can have the same correlation ID.
Symptoms
The following messages are issued after a Db2 restart:
Causes
The reason that these messages are issued is that indoubt units of work exist for a Db2-IMS application,
and the way that Db2 and IMS handle these units of work differs.
At restart time, Db2 attempts to resolve any units of work that are indoubt. Db2 might commit some
units and roll back others. Db2 records the actions that it takes for the indoubt units of work. At the
next connect time, Db2 verifies that the actions that it took are consistent with the IMS decisions. If the
Db2 RECOVER INDOUBT command is issued prior to an IMS attempt to reconnect, Db2 might decide
to commit the indoubt units of recovery, whereas IMS might decide to roll back the units of recovery.
This inconsistency results in the DSNM005I message being issued. Because Db2 tells IMS to retain the
inconsistent entries, the DFS3602I message is issued when the attempt to resolve the indoubt units of
recovery ends.
Environment
• The connection between Db2 and IMS remains active.
• Db2 and IMS continue processing.
• No Db2 locks are held.
• No units of work are in an incomplete state.
Symptoms
The following messages are issued at the IMS master terminal and at the LTERM that entered the
transaction that is involved:
DFS555 - TRAN tttttttt ABEND (SYSIDssss);
MSG IN PROCESS: xxxx (up to 78 bytes of data) timestamp
DFS555A - SUBSYSTEM xxxx OASN yyyyyyyyyyyyyyyy STATUS COMMIT|ABORT
Causes
The problem might be caused by a usage error in the application or by a Db2 problem.
Environment
• The failing unit of recovery is backed out by both DL/I and Db2.
• The connection between IMS and Db2 remains active.
Symptoms
Db2 fails or is not running, and one of the following status situations exists:
• If you specified error option Q, the program terminates with a U3051 user abend completion code.
• If you specified error option A, the program terminates with a U3047 user abend completion code.
Symptoms
Problems that occur in a Db2-CICS environment can result in a variety of symptoms, such as:
• Messages that indicate an abend in CICS or the CICS attachment facility
• A CICS wait or a loop
• Indoubt units of recovery between CICS and Db2
Environment
Db2 can be used in an XRF (Extended Recovery Facility) recovery environment with CICS.
Symptoms
The following message is issued at the user's terminal:
DFH2206 TRANSACTION tranid ABEND abcode BACKOUT SUCCESSFUL
In this message, tranid represents the transaction that abnormally terminated, and abcode represents the
specific abend code.
Environment
• The failing unit of recovery is backed out in both CICS and Db2.
• The connection between CICS and Db2 remains active.
Symptoms
Any of the following symptoms might occur:
• CICS waits or loops.
• CICS abends, as indicated by messages or dump output.
Environment
Db2 performs each of the following actions:
• Detects the CICS failure.
• Backs out inflight work.
• Saves indoubt units of recovery that need to be resolved when CICS is reconnected.
Symptoms
Any of the following symptoms might occur:
• CICS remains operational, but the CICS attachment facility abends.
• The CICS attachment facility issues a message that indicates the reason for the connection failure, or it
requests a X'04E' dump.
• The reason code in the X'04E' dump indicates the reason for failure.
• CICS issues message DFH2206 that indicates that the CICS attachment facility has terminated
abnormally with the DSNC abend code.
• CICS application programs that try to access Db2 while the CICS attachment facility is inactive are
abnormally terminated. The code AEY9 is issued.
Environment
CICS backs out the abnormally terminated transaction and treats it like an application abend.
Symptoms
One of the following messages is sent to the user-named CICS destination that is specified for the
MSGQUEUEn(name) attribute in the RDO (resource definition online): DSN2001I, DSN2034I, DSN2035I,
or DSN2036I.
Causes
For CICS, a Db2 unit of recovery might be indoubt if the forget entry (X'FD59') of the task-related
installation exit routine is absent from the CICS system journal. The indoubt condition applies only to
the Db2 unit of recovery in this case because CICS already committed or backed out any changes to its
resources.
A Db2 unit of recovery is indoubt for Db2 if an End Phase 1 is present and the Begin Phase 2 is absent.
Environment
The following table summarizes the situations that can exist when CICS units of recovery are indoubt.
CICS retains details of indoubt units of recovery that were not resolved during connection startup. An
entry is purged when it no longer appears in the list that Db2 presents or, if it does appear in the list,
when Db2 resolves it.
1. Obtain a list of the indoubt units of recovery from Db2 by issuing the following command:
Two threads can sometimes have the same correlation ID when the connection has been broken
several times and the indoubt units of recovery have not been resolved. In this case, use the network
ID (NID) instead of the correlation ID to uniquely identify indoubt units of recovery.
The network ID consists of the CICS connection name and a unique number that is provided by CICS at
the time that the syncpoint log entries are written. This unique number is an 8-byte store clock value.
If the transaction is a pool thread, use the value of the correlation ID (corr_id) that is returned by
DISPLAY THREAD for thread#.tranid in the RECOVER INDOUBT command. In this case, the first letter
of the correlation ID is P. The transaction ID is in characters five through eight of the correlation ID.
If the transaction is assigned to a group (group is a result of using an entry thread), use
thread#.groupname instead of thread#.tranid. In this case, the first letter of the correlation ID is a
G, and the group name is in characters five through eight of the correlation ID. The groupname is the
first transaction that is listed in a group.
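The correlation-ID conventions above (first letter P for a pool thread, G for a group; the transaction or group name in characters five through eight) can be sketched as a small helper. This is an illustrative sketch only; the function name and sample IDs are ours, not part of the product interface.

```python
def classify_corr_id(corr_id: str) -> dict:
    """Classify a CICS correlation ID per the conventions described above.

    First character 'P' indicates a pool thread; 'G' indicates a group
    (entry) thread. Characters five through eight carry the transaction
    ID or group name.
    """
    kind = {"P": "pool", "G": "group"}.get(corr_id[0], "unknown")
    # Characters 5 through 8 (1-based) are indexes 4:8 in Python.
    name = corr_id[4:8]
    return {"kind": kind, "name": name}
```

For example, a hypothetical pool-thread correlation ID such as `PT01TRN1` would classify as a pool thread for transaction `TRN1`.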
Where the correlation ID is not unique, use the following command:
When two threads have the same correlation ID, use the NID keyword instead of the ID keyword. The
NID value uniquely identifies the work unit.
To recover all threads that are associated with connection-name, omit the ID option.
The command results, which appear in either of the following messages, indicate whether the thread is
committed or rolled back:
When you resolve indoubt units of work, note that CICS and the CICS attachment facility are not aware
of the commands to Db2 to commit or abort indoubt units of recovery because only Db2 resources are
affected. However, CICS keeps details about the indoubt threads that could not be resolved by Db2.
This information is purged either when the presented list is empty or when the list does not include a
unit of recovery that CICS remembers.
Investigate any inconsistencies that you found in the preceding steps.
Related reference
DSNJU003 (change log inventory) (Db2 Utilities)
Related information
Reading log streams using batch jobs (for example, DFHJUP) (CICS Transaction Server for z/OS)
Symptoms
The symptoms depend on whether the CICS attachment facility or one of its thread subtasks terminated:
• If the main CICS attachment facility subtask abends, an abend dump is requested. The contents of the
dump indicate the cause of the abend. When the dump is issued, shutdown of the CICS attachment
facility begins.
• If a thread subtask terminates abnormally, a X'04E' dump is issued, and the CICS application abends
with a DSNC dump code. The X'04E' dump generally indicates the cause of the abend. The CICS
attachment facility remains active.
Symptoms
One of the following QMF messages is issued:
• DSQ10202
• DSQ10205
• DSQ11205
• DSQ12105
• DSQ13005
• DSQ14152
• DSQ14153
• DSQ14154
• DSQ15805
• DSQ16805
• DSQ17805
• DSQ22889
• DSQ30805
• DSQ31805
• DSQ32029
• DSQ35805
• DSQ36805
Causes
Key QMF installation jobs were not run.
Symptoms
When a Db2 subsystem terminates, the specific failure is identified in one or more messages. The
following messages might be issued at the z/OS console:
The following message might be issued to the CICS transient data error destination, which is defined in
the RDO:
Environment
• IMS and CICS continue.
• In-process IMS and CICS applications receive SQLCODE -923 (SQLSTATE '57015') when accessing Db2.
In most cases, if an IMS or CICS application program is running when a -923 SQLCODE is returned, an
abend occurs. This is because the application program generally terminates when it receives a -923
SQLCODE. To terminate, some synchronization processing occurs (such as a commit). If Db2 is not
operational when synchronization processing is attempted by an application program, the application
program abends. In-process applications can abend with an abend code X'04F'.
• IMS applications that begin to run after subsystem termination begins are handled according to the
error options.
– For option R, SQL return code -923 is sent to the application, and IMS pseudo abends.
– For option Q, the message is enqueued again, and the transaction abends.
– For option A, the message is discarded, and the transaction abends.
• CICS applications that begin to run after subsystem termination begins are handled as follows:
– If the CICS attachment facility has not terminated, the application receives a -923 SQLCODE.
Symptoms
Db2 issues messages for the access failure for each log data set. These messages provide information
that is needed to resolve the access error. For example:
Causes
Db2 might experience a problem when it attempts to allocate or open archive log data sets during the
rollback of a long-running unit of recovery. These temporary failures can be caused by:
• A temporary problem with DFHSM recall
• A temporary problem with the tape subsystem
• Uncataloged archive logs
• Archive tape mount requests being canceled
Symptoms
Most active log failures are accompanied by or preceded by error messages to inform you of out-of-space
conditions, write or read I/O errors, or loss of dual active logging.
Symptoms
The following warning message is issued when the last available active log data set is 5% full:
DSNJ110E - LAST COPY n ACTIVE LOG DATA SET IS nnn PERCENT FULL
The Db2 subsystem reissues the message after each additional 5% of the data set space is filled. Each
time the message is issued, the offload process is started. IFCID trace record 0330 is also issued if
statistics class 3 is active.
If the active log fills to capacity, after having switched to single logging, Db2 issues the following message,
and an offload is started.
DSNJ111E - OUT OF SPACE IN ACTIVE LOG DATA SETS
Causes
The active log is out of space.
Environment
An out-of-space condition on the active log has very serious consequences. Corrective action is required
before Db2 can continue processing. When the active log becomes full, the Db2 subsystem cannot do any
work that requires writing to the log until an offload is completed. Until that offload is completed, Db2
waits for an available active log data set before resuming normal Db2 processing. Normal shutdown, with
either a QUIESCE or FORCE command, is not possible because the shutdown sequence requires log space
to record system events that are related to shutdown (for example, checkpoint records).
This command causes Db2 to restart the offload task. Issuing this command might solve the problem.
3. If issuing this command does not solve the problem, determine and resolve the cause of the
problem, and then reissue the command. If the problem cannot be resolved quickly, have the system
programmer define additional active logs until you can resolve the problem.
System programmer response: Define additional active log data sets so that Db2 can continue its normal
operation while the problem that is causing the offload failures is corrected.
1. Use the z/OS command CANCEL to stop Db2.
2. Use the access method services DEFINE command to define new active log data sets.
3. Run utility DSNJLOGF to initialize the new active log data sets.
Symptoms
The following message is issued:
Causes
Although this problem can have several causes, one possible cause is a CATUPDT failure.
Environment
When a write error occurs on an active log data set, the following characteristics apply:
• Db2 marks the failing Db2 log data set TRUNCATED in the BSDS.
• Db2 goes on to the next available data set.
• If dual active logging is used, Db2 truncates the other copy at the same point.
• The data in the truncated data set is offloaded later, as usual.
• The data set is not stopped; it is reused on the next cycle. However, if a DSNJ104 message indicates a
CATUPDT failure, the data set is marked STOPPED.
Symptoms
The following message is issued:
Causes
This problem occurs when Db2 completes one active log data set and then finds that the subsequent copy
(COPY n) data sets have not been offloaded and are marked STOPPED.
Environment
Db2 continues in single mode until offloading completes and then returns to dual mode. If the data set is
marked STOPPED, however, intervention is required.
Symptoms
The following message is issued:
Environment
• If the error occurs during offload, offload tries to identify the RBA range from a second copy of the
active log.
– If no second copy of the active log exists, the data set is stopped.
– If the second copy of the active log also has an error, only the original data set that triggered the
offload is stopped. Then the archive log data set is terminated, leaving a discontinuity in the archived
log RBA range.
– The following message is issued:
Symptoms
Archive log failures can result in a variety of Db2 and z/OS messages that identify problems with archive
log data sets.
One specific symptom that might occur is message DSNJ104I, which indicates an open-close problem on
the archive log.
Symptoms
The following message is issued:
z/OS dynamic allocation provides the ERROR STATUS information. If the allocation is for offload
processing, the following message is also issued:
Causes
Archive log allocation problems can occur when various Db2 operations fail; for example:
• The RECOVER utility executes and requires an archive log. If neither archive log can be found or used,
recovery fails.
• The active log becomes full, and an offload is scheduled. Offload tries again the next time it is triggered.
The active log does not wrap around; therefore, if no more active logs are available, the offload fails, but
data is not lost.
• The input is needed for restart, which fails. If this is the situation that you are experiencing, see
“Recovering from BSDS or log failures during restart” on page 311
Symptoms
No specific Db2 message is issued for write I/O errors. Only a z/OS error recovery program message is
issued.
If Db2 message DSNJ128I is issued, an abend in the offload task occurred, in which case you should
follow the instructions for this message.
Recovering from read I/O errors on an archive data set during recovery
You can recover from read I/O errors that occur on an archive log during recovery.
Symptoms
No specific Db2 message is issued; only the z/OS error recovery program message is issued.
Environment
• If a second copy of the archive log exists, the second copy is allocated and used.
• If a second copy of the archive log does not exist, recovery fails.
Symptoms
Prior to the failure, z/OS issues abend message IEC030I, IEC031I, or IEC032I. Offload processing
terminates unexpectedly. Db2 issues the following message:
Environment
The archive data sets that are allocated to the offload task in which the error occurred are deallocated
and deleted. Another attempt to offload the RBA range of the active log data sets is made the next time
offload is invoked.
Symptoms
If a BSDS is damaged, Db2 issues one of the following message numbers: DSNJ126I, DSNJ100I, or
DSNJ120I.
Related concepts
Management of the bootstrap data set
Symptoms
The following message is issued:
Causes
A write I/O error occurred on a BSDS.
Environment
If Db2 is in a dual-BSDS mode and one copy of the BSDS is damaged by an I/O error, the BSDS mode
changes from dual-BSDS mode to single-BSDS mode. If Db2 is in a single-BSDS mode when the BSDS is
damaged by an I/O error, Db2 terminates until the BSDS is recovered.
Symptoms
The following message is issued:
Symptoms
The following message is issued:
Causes
Unequal timestamps can occur for the following reasons:
• One of the volumes that contains the BSDS has been restored. All information of the restored volume is
outdated. If the volume contains any active log data sets or Db2 data, their contents are also outdated.
The outdated volume has the lower timestamp.
• Dual BSDS mode has degraded to single BSDS mode, and you are trying to start without recovering the
bad copy of the BSDS.
• The Db2 subsystem abended after updating one copy of the BSDS, but prior to updating the second
copy.
Procedure
To recover the BSDS from a backup copy:
1. Locate the BSDS that is associated with the most recent archive log data set.
The data set name of the most recent archive log is displayed on the z/OS console in the last
occurrence of message DSNJ003I, which indicates that offloading has successfully completed. In
preparation for the rest of this procedure, keep a log of all successful archives that are noted by that
message.
• If archive logs are on disk, the BSDS is allocated on any available disk. The BSDS name is like the
corresponding archive log data set name; change only the first letter of the last qualifier, from A to B,
as in the following example:
Archive log name
DSN.ARCHLOG1.A0000001
BSDS copy name
DSN.ARCHLOG1.B0000001
• If archive logs are on tape, the BSDS is the first data set of the first archive log volume. The BSDS is
not repeated on later volumes.
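The disk naming rule above (the BSDS copy name matches the archive log data set name except that the first letter of the last qualifier changes from A to B) can be sketched as follows; the helper name is ours, and the data set names are the examples from the text.

```python
def bsds_copy_name(archive_log_name: str) -> str:
    """Derive the BSDS copy name from a disk archive log data set name.

    Only the first letter of the last qualifier changes, from A to B,
    e.g. DSN.ARCHLOG1.A0000001 -> DSN.ARCHLOG1.B0000001.
    """
    *head, last = archive_log_name.split(".")
    if not last.startswith("A"):
        raise ValueError("last qualifier does not begin with A")
    return ".".join(head + ["B" + last[1:]])
```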
2. If the most recent archive log data set has no copy of the BSDS (presumably because an error occurred
during its offload), locate an earlier copy of the BSDS from an earlier offload.
3. Rename or delete any damaged BSDS.
• To rename a damaged BSDS, use the access method services ALTER command with the NEWNAME
option.
• To delete a damaged BSDS, use the access method services DELETE command.
For each damaged BSDS, use access method services to define a new BSDS as a replacement data set.
Job DSNTIJIN contains access method services control statements to define a new BSDS. The BSDS is
a VSAM key-sequenced data set (KSDS) that has three components: cluster, index, and data. You must
rename all components of the data set. Avoid changing the high-level qualifier.
4. Use the access method services REPRO command to copy the BSDS from the archive log to one of the
replacement BSDSs that you defined in the prior step. Do not copy any data to the second replacement
BSDS; data is placed in the second replacement BSDS in a later step in this procedure.
a) Use the print log map utility (DSNJU004) to print the contents of the replacement BSDS.
You can then review the contents of the replacement BSDS before continuing your recovery work.
b) Update the archive log data set inventory in the replacement BSDS.
Examine the print log map output, and note that the replacement BSDS does not obtain a record of
the archive log from which the BSDS was copied. If the replacement BSDS is a particularly old copy,
it is missing all archive log data sets that were created later than the BSDS backup copy. Therefore,
you need to update the BSDS inventory of the archive log data sets to reflect the current subsystem
inventory.
Use the change log inventory utility (DSNJU003) NEWLOG statement to update the replacement
BSDS, adding a record of the archive log from which the BSDS was copied. Ensure that the
Even if the log is damaged, and Db2 is started by circumventing the damaged portion, the log is the most
important source for determining what work was lost and what data is inconsistent.
Bypassing a damaged portion of the log generally proceeds with the following steps:
[Figure: time line of the log, with the inaccessible portion between RBA X and RBA Y]
Where to start
The specific procedure depends on the phase of restart that was in control when the log problem was
detected. On completion, each phase of restart writes a message to the console. You must find the last of
those messages in the console log. The next phase after the one that is identified is the one that was in
control when the log problem was detected. Accordingly, start at:
• “Recovering from failure during log initialization or current status rebuild” on page 313
Another procedure (“Recovering from a failure resulting from total or excessive loss of log data” on
page 335) provides information to use if you determine (by using “Recovering from failure during log
initialization or current status rebuild” on page 313) that an excessive amount (or all) of Db2 log
information (BSDS, active, and archive logs) has been lost.
The last procedure, “Resolving inconsistencies resulting from a conditional restart” on page 339, can be
used to resolve inconsistencies introduced while using one of the restart procedures in this information.
If you decide to use “Recovering from unresolvable BSDS or log data set problem during restart” on page
333, you do not need to use “Resolving inconsistencies resulting from a conditional restart” on page 339.
Because of the severity of the situations described, the procedures identify "Operations management
action", rather than "Operator action". Operations management might not be performing all the steps in
the procedures, but they must be involved in making the decisions about the steps to be performed.
Related reference
DSN1LOGP (Db2 Utilities)
Symptoms
An abend was issued, indicating that restart failed. In addition, either the last restart message that was
received was a DSNJ001I message that indicates a failure during current status rebuild, or none of the
following messages was issued:
• DSNJ001I
• DSNR004I
• DSNR005I
Environment
What happens in the environment depends on whether the failure occurred during log initialization or
current status rebuild.
Failure during log initialization
Db2 terminates because a portion of the log is inaccessible, and Db2 cannot locate the end of the log
during restart.
Failure during current status rebuild
Db2 terminates because a portion of the log is inaccessible, and Db2 cannot determine the state of
the subsystem at the prior Db2 termination. Possible states include: outstanding units of recovery,
outstanding database writes, and exception database conditions.
The portion of the log between log RBAs X and Y is inaccessible. For failures that occur during the log
initialization phase, the following activities occur:
1. Db2 allocates and opens each active log data set that is not in a stopped state.
2. Db2 reads the log until the last log record is located.
3. During this process, a problem with the log is encountered, preventing Db2 from locating the end of
the log. Db2 terminates and issues an abend reason code. Some of the abend reason codes that might
be issued include:
• 00D10261
• 00D10262
Because this field is updated frequently in the BSDS, the "highest RBA written" can be interpreted as
an approximation of the end of the log. The field is updated in the BSDS when any one of a variety of
internal events occurs. In the absence of these internal events, the field is updated each time a complete
cycle of log buffers is written. A complete cycle of log buffers occurs when the number of log buffers that
are written equals the value of the OUTPUT BUFFER field of installation panel DSNTIPL. The value in the
BSDS is, therefore, relatively close to the end of the log.
To find the actual end of the log at restart, Db2 reads the log forward sequentially, starting at the log RBA
that approximates the end of the log and continuing until the actual end of the log is located.
Because the end of the log is inaccessible in this case, some information is lost:
• Units of recovery might have successfully committed or modified additional page sets past point X.
• Additional data might have been written, including data that is identified with pending writes in the
accessible portion of the log.
• New units of recovery might have been created, and these might have modified data.
Because of the log error, Db2 cannot perceive these events.
A restart of Db2 in this situation requires truncation of the log.
Related tasks
Restarting Db2 by truncating the log
A portion of the log is inaccessible during the log initialization or current status rebuild phases of restart.
When the log is inaccessible, Db2 cannot identify precisely what units of recovery failed to complete,
what page sets had been modified, and what page sets have writes pending. You need to gather that
information, and restart Db2.
Procedure
To find the RBA after the inaccessible part of the log, take the action that is associated with the message
number that you received (DSNJ007I, DSNJ012I, DSNJ103I, DSNJ104I, DSNJ106I, and DSNJ113E):
• When message DSNJ007I is issued:
The problem is that an operator canceled a request for archive mount. Reason code 00D1032B is
associated with this situation and indicates that an entire data set is inaccessible.
For example, the following message indicates that the archive log data set
DSNCAT.ARCHLOG1.A0000009 is not accessible. The operator canceled a request for archive mount,
resulting in the following message:
To determine the value of X, run the print log map utility (DSNJU004) to list the log inventory
information. The output of this utility provides each log data set name and its associated log RBA
range, the values of X and Y.
• When message DSNJ012I is issued:
The problem is that a log record is logically damaged. Message DSNJ012I identifies the log RBA of
the first inaccessible log record that Db2 detects. The following reason codes are associated with this
situation:
– 00D10261
– 00D10262
– 00D10263
– 00D10264
– 00D10265
– 00D10266
– 00D10267
– 00D10268
– 00D10348
For example, the following message indicates a logical error in the log record at log RBA X'7429ABA'.
A given physical log record is actually a set of logical log records (the log records that are generally
spoken of) and the log control interval definition (LCID). Db2 stores logical records in blocks of physical
records to improve efficiency. When this type of an error on the log occurs during log initialization or
current status rebuild, all log records within the physical log record are inaccessible. Therefore, the
value of X is the log RBA that was reported in the message, rounded down to a 4-KB boundary. (For the
example message above, the rounded 4-KB boundary value would be X'7429000'.)
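Rounding the reported log RBA down to a 4-KB boundary is simple bit arithmetic: clear the low 12 bits. A minimal sketch, using the example values from the text (the function name is ours):

```python
def round_rba_to_4kb(rba: int) -> int:
    """Round a log RBA down to a 4-KB (X'1000') boundary.

    All logical log records within the containing physical log record
    are inaccessible, so the value of X is the start of that 4-KB block.
    """
    return rba & ~0xFFF  # clear the low 12 bits
```

For the message in the text, `round_rba_to_4kb(0x7429ABA)` yields `0x7429000`, matching the rounded boundary value X'7429000'.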
• When message DSNJ103I or DSNJ104I is issued:
To determine the value of X, run the print log map utility (DSNJU004) to list the log inventory
information. The output of the utility provides each log data set name and its associated log RBA
range, the values of X and Y.
Verify the accuracy of the information in the print log map utility output for the active log data set
with the lowest RBA range. For this active log data set only, the information in the BSDS is potentially
inaccurate for the following reasons:
– When an active log data set is full, archiving is started. Db2 then selects another active log data set,
usually the data set with the lowest RBA. This selection is made so that units of recovery do not
need to wait for the archive operation to complete before logging can continue. However, if a data
set has not been archived, nothing beyond it has been archived, and the procedure is ended.
– When logging begins on a reusable data set, Db2 updates the BSDS with the new log RBA range for
the active log data set and marks it as "Not Reusable." The process of writing the new information
to the BSDS might be delayed by other processing. Therefore, a possible outcome is for a failure to
occur between the time that logging to a new active log data set begins and the time that the BSDS
is updated. In this case, the BSDS information is not correct.
If the data set is marked "Not Reusable," the log RBA that appears for the active log data set with the
lowest RBA range in the print log map utility output is valid. If the data set is marked "Reusable," you
can assume for the purposes of this restart that the starting log RBA (X) for this data set is one greater
than the highest log RBA that is listed in the BSDS for all other active log data sets.
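The rule for a data set marked "Reusable" — assume its starting log RBA (X) is one greater than the highest log RBA listed in the BSDS for all other active log data sets — can be sketched as follows. The function name and sample RBA values are ours, for illustration only.

```python
def assumed_start_rba(other_end_rbas: list) -> int:
    """Assumed starting log RBA (X) for the active log data set with the
    lowest RBA range when it is marked "Reusable": one greater than the
    highest log RBA listed in the BSDS for all other active log data sets.
    """
    return max(other_end_rbas) + 1
```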
• When message DSNJ106I is issued:
The problem is that an I/O error occurred while a log record was being read. The message identifies
the log RBA of the first inaccessible log record that Db2 detects. Reason code 00D10329 is associated
with this situation.
For example, the following message indicates an I/O error in the log at RBA X'7429ABA'.
A given physical log record is actually a set of logical log records (the log records that are generally
spoken of) and the log control interval definition (LCID). When this type of an error on the log occurs
during log initialization or current status rebuild, all log records within the physical log record, and
beyond it to the end of the log data set, are inaccessible. This is due to the log initialization or current
status rebuild phase of restart. Therefore, the value of X is the log RBA that was reported in the
message, rounded down to a 4-KB boundary. (For the example message above, the rounded 4-KB
boundary value would be X'7429000'.)
• When message DSNJ113E is issued:
Use the print log map utility (DSNJU004) to list the contents of the BSDS.
A given physical log record is actually a set of logical log records (the log records that are generally
spoken of) and the log control interval definition (LCID). When this type of an error on the log occurs
during log initialization or current status rebuild, all log records within the physical log record are
inaccessible.
Using the print log map output, locate the RBA that is closest to, but less than, X'7429ABA' for the
value of X. If you do not find an RBA that is less than X'7429ABA', a considerable amount of log
information has been lost. If this is the case, continue with “Recovering from a failure resulting from
total or excessive loss of log data” on page 335. Otherwise, continue with the next topic.
Related concepts
Description of failure during current status rebuild
When a failure occurs during current status rebuild, certain characteristics of the situation are evident.
Failure during log initialization phase
When a failure occurs during the log initialization phase, certain characteristics of the situation are
evident.
Related reference
DSNJU004 (print log map) (Db2 Utilities)
Related information
DSNJ007I (Db2 Messages)
DSNJ012I (Db2 Messages)
DSNJ103I (Db2 Messages)
DSNJ104I (Db2 Messages)
DSNJ106I (Db2 Messages)
DSNJ113E (Db2 Messages)
Procedure
To identify lost work and inconsistent data:
1. Obtain available information to help you determine the extent of the loss.
Db2 cannot determine what units of recovery are not completed, what database state information
is lost, or what data is inconsistent in this situation. The log contains all such information, but the
information is not available. The steps below explain what to do to obtain the information that is
available within Db2 to help you determine the extent of the loss. The steps also explain how to start
Db2 in this situation.
After restart, data is inconsistent. Results of queries and any other operations on such data vary
from incorrect results to abends. Abends that occur either identify an inconsistency in the data or
incorrectly assume the existence of a problem in the Db2 internal algorithms.
Figure 55. Sample JCL for obtaining DSN1LOGP summary output for restart
3. Analyze the DSN1LOGP utility output.
Related reference
DSN1LOGP (Db2 Utilities)
The following message acts as a heading, which is followed by messages that identify the units of
recovery that have not yet completed and the page sets that they modified:
Following the summary of outstanding units of recovery is a summary of page sets that have database
writes that are pending.
In each case (units of recovery or databases with pending writes), the earliest required log record is
identified by the START information. In this context, START information is the log RBA of the earliest log
record that is required in order to complete outstanding writes for this page set.
Those units of recovery with a START log RBA equal to, or prior to, the point Y cannot be completed at
restart. All page sets that were modified by these units of recovery are inconsistent after completion of
restart when you attempt to identify lost work and inconsistent data.
All page sets that are identified in message DSN1160I with a START log RBA value equal to, or prior to,
the point Y have database changes that cannot be written to disk. As in the previously described case, all
of these page sets are inconsistent after completion of restart when you attempt to identify lost work and
inconsistent data.
At this point, you need to identify only the page sets in preparation for restart. After restart, you need to
resolve the problems in the page sets that are inconsistent.
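The selection rule described above — units of recovery and page sets whose START log RBA is equal to, or prior to, point Y cannot be completed at restart and are left inconsistent — might be sketched as a simple filter. The function name and the shape of the input are ours, standing in for values read from the DSN1LOGP summary output.

```python
def inconsistent_items(items: list, y_rba: int) -> list:
    """Return the names of units of recovery or page sets whose earliest
    required (START) log RBA is equal to, or prior to, point Y; these
    cannot be completed at restart and remain inconsistent.

    items: list of (name, start_rba) pairs taken from the summary report.
    """
    return [name for name, start_rba in items if start_rba <= y_rba]
```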
Because the end of the log is inaccessible, some information is lost; therefore, the information is
inaccurate. Some of the units of recovery that appear to be inflight might have successfully committed, or
they might have modified additional page sets beyond point X. Additional data might have been written,
including those page sets that are identified as having pending writes in the accessible portion of the log.
New units of recovery might have been created, and these might have modified data. Db2 cannot detect
that these events occurred.
From this and other information (such as system accounting information and console messages), you
might be able to determine what work was actually outstanding and which page sets are likely to
be inconsistent after you start Db2. This is because the record of each event contains the date and
Procedure
To determine what system status information is lost:
1. If you already know what system status information is lost (such as in the case in which utilities are in
progress), you do not need to do anything. Continue with the next topic.
2. If you do not already know what system status information is lost, examine all relevant messages that
provide details about the loss of status information (such as in the cases of deferred restart pending or
write error ranges).
If the messages provide adequate information about what information is lost, you do not need to do
anything more. Continue with the next step.
3. If you find that all system status information is lost, try to reconstruct this information from recent
console displays, messages, and abends that alerted you to these conditions.
These page sets contain inconsistencies that you must resolve.
Procedure
Create a conditional restart control record (CRCR) in the BSDS by using the change log inventory utility.
Specify the following options:
ENDRBA=endrba
The endrba value is the RBA at which Db2 begins writing new log records. If point X is X'7429000',
specify ENDRBA=7429000 on the CRESTART control statement.
At restart, Db2 discards the portion of the log beyond X'7429000' before processing the log for
completing work (such as units of recovery and database writes). Unless otherwise directed, Db2
performs normal restart processing within the scope of the log. Because log information is lost, Db2
errors might occur. For example, a unit of recovery that has actually been committed might be rolled
back. Also, some changes that were made by that unit of recovery might not be rolled back because
information about data changes is lost.
FORWARD=NO
Terminates forward-log recovery before log records are processed. This option and the BACKOUT=NO
option minimize errors that might result from normal restart processing.
BACKOUT=NO
Terminates backward-log recovery before log records are processed. This option and the
FORWARD=NO option minimize errors that might result from normal restart processing.
Example
The following example is a CRESTART control statement for the ENDRBA value of X'7429000':
CRESTART CREATE,ENDRBA=7429000,FORWARD=NO,BACKOUT=NO
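The ENDRBA arithmetic can be sanity-checked in a few lines. The helper below is an illustrative sketch, not a Db2 interface: it assumes, consistent with the 4-KB rounding described for this scenario, that the truncation point must fall on a 4-KB (X'1000') boundary, and that the operand is written without the X'' notation.

```python
def crestart_endrba(rba_hex: str) -> str:
    """Build a CRESTART CREATE statement that truncates the log at rba_hex.

    rba_hex is a hex RBA string such as "7429000" (no X'' notation).
    Raises ValueError if the RBA is not on a 4-KB boundary, because the
    truncation point must align with a 4-KB log control interval.
    """
    rba = int(rba_hex, 16)
    if rba % 0x1000 != 0:
        raise ValueError(f"ENDRBA X'{rba_hex}' is not on a X'1000' boundary")
    return f"CRESTART CREATE,ENDRBA={rba_hex.upper()},FORWARD=NO,BACKOUT=NO"

# Matches the example control statement for point X = X'7429000':
print(crestart_endrba("7429000"))
```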
Procedure
To start Db2 and resolve data inconsistencies:
1. Start Db2 with the following command:
Symptoms
A Db2 abend occurred, indicating that restart had failed. In addition, the last restart message that was
received was a DSNR004I message, which indicates that log initialization completed; therefore, the
failure occurred during forward log recovery.
Environment
Db2 terminates because a portion of the log is inaccessible, and Db2 is therefore unable to guarantee the
consistency of the data after restart.
(Figure: time line of the log, showing the inaccessible portion between log RBA X and log RBA Y)
The portion of the log between log RBA X and Y is inaccessible. The log initialization and current status
rebuild phases of restart completed successfully. Restart processing was reading the log in a forward
direction, beginning at some point prior to X and continuing to the end of the log. Because of the
inaccessibility of log data (between points X and Y), restart processing cannot guarantee the completion
of any work that was outstanding at restart prior to point Y.
Assume that the following work was outstanding at restart:
• The unit of recovery that is identified as URID1 was in-commit.
• The unit of recovery that is identified as URID2 was inflight.
• The unit of recovery that is identified as URID3 was in-commit.
• The unit of recovery that is identified as URID4 was inflight.
• Page set A had writes that were pending prior to the error on the log, continuing to the end of the log.
• Page set B had writes that were pending after the error on the log, continuing to the end of the log.
The earliest log record for each unit of recovery is identified on the log line in Figure 57 on page 325. In
order for Db2 to complete each unit of recovery, Db2 requires access to all log records from the beginning
point for each unit of recovery to the end of the log.
The error on the log prevents Db2 from guaranteeing the completion of any outstanding work that began
prior to point Y on the log. Consequently, database changes that are made by URID1 and URID2 might not
be fully committed or backed out. Writes that were pending for page set A (from points in the log prior to
Y) are lost.
Task 1: Find the log RBA after the inaccessible part of the log
The first task in restarting Db2 by limiting restart processing is to locate the log RBA that is after the
inaccessible part of the log.
• When message DSNJ012I is issued:
The problem is that a log record is logically damaged. Message DSNJ012I identifies the log RBA of
the first inaccessible log record that Db2 detects. The following reason codes are associated with this
situation:
– 00D10261
– 00D10262
– 00D10263
– 00D10264
– 00D10265
– 00D10266
– 00D10267
– 00D10268
– 00D10348
For example, the following message indicates a logical error in the log record at log RBA X'7429ABA'.
A given physical log record is actually a set of logical log records (the log records that are generally
spoken of) and the log control interval definition (LCID). Db2 stores logical records in blocks of physical
records to improve efficiency. When this type of an error on the log occurs during forward log recovery,
all log records within the physical log record are inaccessible. Therefore, the value of X is the log
RBA that was reported in the message, rounded down to a 4-KB boundary. (For the example message
above, the rounded 4-KB boundary value would be X'7429000'.)
• When message DSNJ103I or DSNJ104I is issued:
For message DSNJ103I, the underlying problem depends on the reason code that is issued:
– For reason code 00D1032B, an allocation error occurred for an archive log data set.
– For reason code 00E80084, an active log data set that is named in the BSDS could not be allocated
during log initialization.
For message DSNJ104I, the underlying problem is that an open error occurred for an archive and
active log data set.
To determine the value of X, run the print log map utility (DSNJU004) to list the log inventory
information. The output of the utility provides each log data set name and its associated log RBA
range, the values of X and Y.
• When message DSNJ106I is issued:
The problem is that an I/O error occurred while a log record was being read. The message identifies
the log RBA of the first inaccessible log record that Db2 detects. Reason code 00D10329 is associated
with this situation.
For example, the following message indicates an I/O error in the log at RBA X'7429ABA'.
A given physical log record is actually a set of logical log records (the log records that are generally
spoken of) and the log control interval definition (LCID). When this type of an error on the log occurs
during forward log recovery, all log records within the physical log record, and beyond it to the end of
the log data set, are inaccessible to the forward log recovery phase of restart. This is due to the log
initialization or current status rebuild phase of restart. Therefore, the value of X is the log RBA that
was reported in the message, rounded down to a 4-KB boundary. (For the example message above, the
rounded 4-KB boundary value would be X'7429000'.)
• When message DSNJ113E is issued:
The problem is that the log RBA could not be found in the BSDS. Message DSNJ113E identifies the log
RBA of the inaccessible log record. This log RBA is not registered in the BSDS. Reason code 00D1032B
is associated with this situation.
For example, the following message indicates that the log RBA X'7429ABA' is not registered in the
BSDS:
Use the print log map utility (DSNJU004) to list the contents of the BSDS.
A given physical log record is actually a set of logical log records (the log records that are generally
spoken of) and the log control interval definition (LCID). When this type of an error on the log occurs
during forward log recovery, all log records within the physical log record are inaccessible.
Using the print log map output, locate the RBA that is closest to, but less than, X'7429ABA' for the
value of X. If you do not find an RBA that is less than X'7429ABA', the value of X is zero. Locate the RBA
that is closest to, but greater than, X'7429ABA'. This is the value of Y.
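The bullet cases above reduce to two small computations: rounding a reported RBA down to a 4-KB boundary (the DSNJ012I and DSNJ106I cases) and scanning the registered RBA values for those that bracket the failing RBA (the DSNJ113E case). A sketch, with a made-up inventory for illustration:

```python
def round_down_4kb(rba_hex: str) -> str:
    """Round a log RBA down to a 4-KB (X'1000') boundary, as for DSNJ012I/DSNJ106I."""
    return format(int(rba_hex, 16) & ~0xFFF, 'X')

def bracket_rba(rba_hex: str, registered: list[str]) -> tuple[str, str]:
    """For the DSNJ113E case: X is the registered RBA closest to, but less than,
    the failing RBA (zero if none is less); Y is the closest registered RBA
    greater than it."""
    rba = int(rba_hex, 16)
    lower = [int(r, 16) for r in registered if int(r, 16) < rba]
    upper = [int(r, 16) for r in registered if int(r, 16) > rba]
    x = max(lower) if lower else 0
    return format(x, 'X'), format(min(upper), 'X')

print(round_down_4kb("7429ABA"))  # the RBA from the example in the text
print(bracket_rba("7429ABA", ["7000000", "742A000", "8000000"]))
```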
Related concepts
Forward-log recovery failure
When a failure occurs during the forward-log recovery phase of Db2 restart, certain characteristics of the
situation are evident.
Related reference
DSNJU004 (print log map) (Db2 Utilities)
Related information
DSNJ007I (Db2 Messages)
DSNJ012I (Db2 Messages)
Task 2: Identify lost work and inconsistent data
The second task in restarting Db2 by limiting restart processing is to identify the lost work and the
inconsistent data.
Procedure
To identify incomplete units of recovery and inconsistent page sets:
1. Determine the location of the latest checkpoint on the log by looking at one of the following sources,
whichever is more convenient:
• The operator's console contains message DSNR003I, which identifies the location of the start of the
last checkpoint on the log at log RBA X'876B355'.
• The print log map utility output identifies the last checkpoint, including its BEGIN CHECKPOINT RBA.
2. Obtain a report of the outstanding work that is to be completed at the next restart of Db2 by running
the DSN1LOGP utility.
When you run the DSN1LOGP utility, specify the checkpoint RBA as the STARTRBA, and specify the
SUMMARY(ONLY) option. To obtain complete information, be sure that the range that DSN1LOGP scans
includes the last complete checkpoint.
3. Analyze the output of the DSN1LOGP utility.
The summary report that is placed in the SYSSUMRY file contains two sections of information: a
complete summary of completed events and a restart summary.
Related concepts
DSN1LOGP summary report
The DSN1LOGP utility generates a summary report, which is placed in the SYSSUMRY file. The report
includes a summary of completed events and a restart summary. You can use the information in this
report to identify lost work and inconsistent data that needs to be resolved.
Related reference
DSN1LOGP (Db2 Utilities)
Task 3: Restrict restart processing to the part of the log after the damage
The third task in restarting Db2 by limiting restart processing is to restrict restart processing to the part of
the log that is after the damage.
Procedure
To restrict restart processing to the part of the log after the damage:
1. Create a conditional restart control record (CRCR) in the BSDS by using the change log inventory utility.
2. Identify the accessible portion of the log beyond the damage by using the STARTRBA specification,
which is used at the next restart.
3. Specify the value Y+1 (that is, if Y is X'7429FFF', specify STARTRBA=742A000).
CRESTART CREATE,STARTRBA=742A000
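The STARTRBA arithmetic in steps 2 and 3 is a simple increment in hexadecimal. The helper below is an illustrative sketch (its name is not a Db2 interface):

```python
def crestart_startrba(y_hex: str) -> str:
    """Given Y, the last inaccessible log RBA, build the conditional restart
    statement that begins processing at Y+1."""
    start = int(y_hex, 16) + 1
    return f"CRESTART CREATE,STARTRBA={start:X}"

# Y = X'7429FFF' gives STARTRBA=742A000, as in the control statement above.
print(crestart_startrba("7429FFF"))
```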
Procedure
To start Db2 and resolve data inconsistencies:
1. Start Db2 with the following command:
At the end of restart, the conditional restart control record (CRCR) is marked "Deactivated" to prevent
its use on a later restart. Until the restart completes successfully, the CRCR is in effect. Start Db2 with
the ACCESS(MAINT) option until data is consistent or page sets are stopped.
2. Resolve all data inconsistency problems.
Related tasks
Resolving inconsistencies resulting from a conditional restart
When a conditional restart of the Db2 subsystem is done, several problems might occur. Recovery from
these problems is possible and varies based on the specific situation.
Symptoms
An abend is issued to indicate that restart failed because of a log problem. In addition, the last restart
message that is received is a DSNR005I message, indicating that forward log recovery completed and
that the failure occurred during backward log recovery.
Environment
Because a portion of the log is inaccessible, Db2 needs to roll back some database changes during
restart.
(Figure: time line of the log, showing log RBA X, log RBA Y, and the checkpoint)
The portion of the log between log RBA X and Y is inaccessible. The restart process was reading the log
in a backward direction, beginning at the end of the log and continuing backward to the point marked by
Begin URID5 in order to back out the changes that were made by URID5, URID6, and URID7. You can
assume that Db2 determined that these units of recovery were inflight or in-abort. The portion of the
log from point Y to the end of the log has been processed. However, the portion of the log from Begin
URID5 to point Y has not been processed and cannot be processed by restart. Consequently, database
changes that were made by URID5 and URID6 might not be fully backed out. All database changes made
by URID7 have been fully backed out, but these database changes might not have been written to disk. A
subsequent restart of Db2 causes these changes to be written to disk during forward recovery.
Related concepts
Recommendations for changing the BSDS log inventory
You do not need to take special steps to keep the BSDS updated with records of logging events. Db2 does
that automatically.
Procedure
To bypass backout before recovery:
1. Determine the units of recovery that cannot be backed out and the page sets that will be inconsistent
after the completion of restart.
a) Determine the location of the latest checkpoint on the log by looking at one of the following
sources, whichever is more convenient:
• The operator's console contains message DSNR003I, which identifies the location of the start of
the last checkpoint on the log at log RBA X'7425468'.
• The print log map utility output identifies the last checkpoint, including its BEGIN CHECKPOINT
RBA.
That message is followed by other messages that identify completed events, such as completed
units of recovery. That section of the output does not apply to this procedure.
The heading of the second section of the output is the following message:
That message is followed by others that identify units of recovery that are not yet completed and
the page sets that they modified. After the summary of outstanding units of recovery is a summary
of page sets with database writes that are pending.
The restart processing that failed was able to complete all units of recovery processing within the
accessible scope of the log after point Y. Database writes for these units of recovery are completed
during the forward recovery phase of restart on the next restart. Therefore, do not bypass the
forward recovery phase. All units of recovery that can be backed out have been backed out.
All remaining units of recovery that are to be backed out (DISP=INFLIGHT or DISP=IN-ABORT) are
bypassed on the next restart because their STARTRBA values are less than the RBA of point Y.
Therefore, all page sets that were modified by those units of recovery are inconsistent after restart.
This means that some changes to data might not be backed out. At this point, you only need to
identify the page sets in preparation for restart.
2. Use the change log inventory utility to create a conditional restart control record (CRCR) in the BSDS,
and direct restart to bypass backward recovery processing during the subsequent restart by using the
BACKOUT specification.
At restart, all units of recovery that require backout are declared complete by Db2, and log records are
generated to note the end of the unit of recovery.
The change log inventory utility control statement is:
CRESTART CREATE,BACKOUT=NO
3. Start Db2.
At the end of restart, the CRCR is marked "Deactivated" to prevent its use on a subsequent restart.
Until the restart is complete, the CRCR is in effect. Use START DB2 ACCESS(MAINT) until data is
consistent or page sets are stopped.
4. Resolve all inconsistent data problems. After the successful start of Db2, resolve all data inconsistency
problems. “Resolving inconsistencies resulting from a conditional restart” on page 339 describes how
to do this. At this time, make all other data available for use.
Related concepts
DSN1LOGP summary report
Symptoms
Abend code 00D1032A is issued, and message DSNJ113E is displayed:
DSNJ113E RBA log-rba NOT IN ANY ACTIVE OR ARCHIVE
LOG DATA SET. CONNECTION-ID=aaaaaaaa, CORRELATION-ID=aaaaaaaa
Causes
The BSDS wraps around too frequently for the log RBA read requests that are submitted: when the
latest archive log data sets were added to the BSDS, the maximum allowable number of log data sets in
the BSDS was exceeded, which caused the earliest data set entries in the BSDS to be displaced by the
new entries. After the wrap occurs, a read request for an RBA that resides in one of the displaced log
data sets cannot be satisfied.
Symptoms
The following messages are issued:
• DSNJ100I
• DSNJ107I
• DSNJ119I
Causes
Any of the following conditions might cause problems with the BSDS or log data sets during restart:
• A log data set is physically damaged.
• Both copies of a log data set are physically damaged in the case of dual logging mode.
• A log data set is lost.
• An archive log volume was reused even though it was still needed.
• A log data set contains records that are not recognized by Db2 because they are logically broken.
Environment
Db2 cannot be started until this procedure is performed.
Procedure
To fall back to a prior shutdown point:
1. Use the print log map utility on the most current copy of the BSDS.
Even if you are not able to do this, continue with the next step. (If you are unable to do this, an error
message is issued.)
2. Use the access method services IMPORT command to restore the backed-up versions of the BSDS
and active log data sets.
3. Use the print log map utility on the copy of the BSDS with which Db2 is to be restarted.
Recovering from a failure resulting from total or excessive loss of log data
If a situation occurs that causes the entire log or an excessive amount of log data to be lost or destroyed,
operations management needs to recover from that situation.
Symptoms
This situation is generally accompanied by messages or abend reason codes that indicate that an
excessive amount of log information, or the entire log, has been lost.
Procedure
To restart Db2 when the entire log is lost:
1. Define and initialize the BSDSs by recovering the BSDS from a backup copy.
2. Define the active log data sets by using the access method services DEFINE command. Run utility
DSNJLOGF to initialize the new active log data sets.
3. Prepare to restart Db2 with no log data, by using one of the following approaches:
• Determine the highest possible log RBA of the prior log. From previous console logs that were
written when Db2 was operational, locate the last DSNJ001I message. When Db2 switches to a
new active log data set, this message is written to the console, identifying the data set name and
the highest potential log RBA that can be written for that data set. Assume that this is the value
X'8BFFF'. Add one to this value (X'8C000'), and create a conditional restart control record that
specifies the following change log inventory control statement:
CRESTART CREATE,STARTRBA=8C000,ENDRBA=8C000
When Db2 starts, all phases of restart are bypassed, and logging begins at log RBA X'8C000'. If you
choose this method, you do not need to use the RESET option of the DSN1COPY utility, and you can
save a lot of time.
• Run the DSNJU003 utility, specifying the DELETE and NEWLOG options to delete and create new
logs for all active log data sets.
• Run the DSN1COPY utility, specifying the RESET option to reset the log RBA in every data and index
page. Depending on the amount of data in the subsystem, this process might take quite a long
time. Because the BSDS has been redefined and reinitialized, logging begins at log RBA 0 when Db2
starts.
If the BSDS is not reinitialized, you can force logging to begin at log RBA 0 by constructing a
conditional restart control record (CRCR) that specifies a STARTRBA and ENDRBA that are both
equal to 0, as the following command shows:
CRESTART CREATE,STARTRBA=0,ENDRBA=0
4. Start Db2. Use the START DB2 ACCESS(MAINT) command until data is consistent or page sets are
stopped.
5. After restart, resolve all inconsistent data as described in “Resolving inconsistencies resulting from a
conditional restart” on page 339.
Related tasks
Deferring restart processing
When a specific object is causing problems, you can defer its restart processing by starting Db2 and
preventing the problem object from going through restart processing.
Recovering the BSDS from a backup copy
In some situations, the bootstrap data set (BSDS) becomes damaged, and you need to recover the BSDS
from a backup copy.
Related reference
DSNJLOGF (preformat active log) (Db2 Utilities)
Procedure
To recover by creating a gap in the active log:
1. Use the print log map utility (DSNJU004) on the copy of the BSDS with which Db2 is to be restarted.
2. Use the print log map output to obtain the data set names of all active log data sets. Use the access
method services LISTCAT command to determine which active log data sets are no longer available or
usable.
3. Use the access method services DELETE command to delete all active log data sets that are no longer
usable.
4. Use the access method services DEFINE command to define new active log data sets. Run the
DSNJLOGF utility to initialize the new active log data sets. Define one active log data set for each one
that is found to be no longer available or usable in step “2” on page 337. Use the active log data
set name that is found in the BSDS as the data set name for the access method services DEFINE
command.
5. Refer to the print log map utility (DSNJU004) output, and note whether an archive log data set exists
that contains the RBA range of the redefined active log data set.
To do this, note the starting and ending RBA values for the active log data set that was recently
redefined, and look for an archive log data set with the same starting and ending RBA values.
If no such archive log data sets exist:
a) Use the change log inventory utility (DSNJU003) DELETE statement to delete the recently redefined
active log data sets from the BSDS active log data set inventory.
b) Use the change log inventory utility (DSNJU003) NEWLOG statement to add the active log data set
to the BSDS active log data set inventory. Do not specify RBA ranges on the NEWLOG statement.
If the corresponding archive log data sets exist, two courses of action are available:
• If you want to minimize the number of potential read operations on the archive log data sets, use
the access method services REPRO command to copy the data from each archive log data set into
the corresponding active log data set. Ensure that you copy the proper RBA range into the active log
data set.
Ensure that the active log data set is large enough to hold all the data from the archive log data
set. When Db2 does an archive operation, it copies the log data from the active log data set to the
archive log data set, and then pads the archive log data set with binary zeros to fill a block. In order
for the access method services REPRO command to be able to copy all of the data from the archive
log data set to a recently defined active log data set, the new active log data set might need to be
larger than the original one.
For example, if the block size of the archive log data set is 28 KB, and the active log data set
contains 80 KB of data, Db2 copies the 80 KB and pads the archive log data set with 4 KB of nulls
to fill the last block. Thus, the archive log data set now contains 84 KB of data instead of 80 KB. In
order for the access method services REPRO command to complete successfully, the active log data
set must be able to hold 84 KB, rather than just 80 KB of data.
• If you are not concerned about read operations against the archive log data sets, complete steps
“5.a” on page 337 and “5.b” on page 337 (as though the archive log data sets did not exist).
6. Choose the appropriate point for Db2 to start logging.
To do this, determine the highest possible log RBA of the prior log. From previous console logs that
were written when Db2 was operational, locate the last DSNJ001I message. When Db2 switches to a
new active log data set, this message is written to the console, identifying the data set name and the
highest potential log RBA that can be written for that data set. Assume that this is the value X'8BFFF'.
Add one to this value (X'8C000'), and create a conditional restart control record that specifies the
following change log inventory control statement:
CRESTART CREATE,STARTRBA=8C000,ENDRBA=8C000
When Db2 starts, all phases of restart are bypassed, and logging begins at log RBA X'8C000'. If you
choose this method, you do not need to use the RESET option of the DSN1COPY utility, and you can
save a lot of time.
7. To restart Db2 without using any log data, create a conditional restart control record for the change log
inventory utility (DSNJU003).
8. Start Db2. Use the START DB2 ACCESS(MAINT) command until data is consistent or page sets are
stopped.
9. After restart, resolve all inconsistent data as described in “Resolving inconsistencies resulting from a
conditional restart” on page 339.
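The sizing concern in step 5 (Db2 pads the final partial block of an archive log data set with binary zeros, so a REPRO target must hold slightly more than the original log data) comes down to rounding the data size up to a whole number of blocks. A sketch of that arithmetic, not a Db2 interface:

```python
def padded_archive_kb(data_kb: int, block_kb: int) -> int:
    """Size in KB that an archive log data set occupies after Db2 pads the
    final partial block with binary zeros: data_kb rounded up to a multiple
    of the block size."""
    blocks = -(-data_kb // block_kb)   # ceiling division
    return blocks * block_kb

# The example in the text: 80 KB of log data with a 28-KB block size
# occupies 84 KB, so the new active log data set must hold 84 KB, not 80 KB.
print(padded_archive_kb(80, 28))
```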
Results
This procedure causes all phases of restart to be bypassed and logging to begin at the point in the log
RBA that you identified in step “6” on page 337 (X'8C000' in the example given in this procedure). This
procedure creates a gap in the log between the highest RBA kept in the BSDS and, in this example,
X'8C000', and that portion of the log is inaccessible.
What to do next
Because no Db2 process, including the RECOVER utility, can tolerate a gap in the log, you need to take
image copies of all data after a cold start, even data that you know is consistent.
Related reference
DSNJU003 (change log inventory) (Db2 Utilities)
Procedure
To recover without creating a gap in the active log:
1. Locate the last valid log record by using the DSN1LOGP utility to scan the log.
Message DSN1213I identifies the last valid log RBA.
2. Identify the last RBA that is known to be valid by examining message DSN1213I.
For example, if message DSN1213I indicates that the last valid log RBA is X'89158', round this value
up to the next 4-KB boundary, which in this example is X'8A000'.
3. Create a conditional restart control record (CRCR).
For example:
CRESTART CREATE,STARTRBA=8A000,ENDRBA=8A000
4. Start Db2 with the START DB2 ACCESS(MAINT) command until data is consistent or page sets are
stopped.
5. Take image copies of all data for which data modifications were recorded beyond the log RBA that
was used in the CRESTART statement (in this example, X'8A000'). If you do not know what data was
modified, take image copies of all data.
If you do not take image copies of data that has been modified beyond the log RBA that was used in
the CRESTART statement, future RECOVER utility operations might fail or result in inconsistent data.
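The round-up in step 2 is the mirror of the round-down used when locating a damaged log record: the 4-KB (X'1000') boundary at or above the last valid RBA. The sketch below, an illustration rather than a Db2 interface, produces the CRESTART statement of step 3:

```python
def cold_start_crcr(last_valid_hex: str) -> str:
    """Round the last valid log RBA up to the next 4-KB boundary and build the
    conditional restart statement with STARTRBA and ENDRBA equal to that point."""
    point = (int(last_valid_hex, 16) + 0xFFF) & ~0xFFF
    return f"CRESTART CREATE,STARTRBA={point:X},ENDRBA={point:X}"

# Last valid RBA X'89158' (from DSN1213I in the example) rounds up to X'8A000'.
print(cold_start_crcr("89158"))
```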
Resolving inconsistencies
In some problem situations, you need to determine what you must do in order to resolve any data
inconsistencies that exist.
Procedure
To resolve inconsistencies:
1. Determine the scope of any inconsistencies that are introduced by the situation.
a) If the situation is either a cold start that is beyond the current end of the log or a conditional restart
that skips backout or forward log recovery, use the DSN1LOGP utility to determine what units of
work have not been backed out and which objects are involved.
For a cold start that is beyond the end of the log, you can also use DSN1LOGP to help identify any
restrictive object states that have been lost.
Procedure
To restore the table space:
1. Decide whether you can reload the table space or must drop and re-create it.
• If you can reload the table space, run the appropriate LOAD utility jobs to do so; specify the
REPLACE option. After you load the content of the table space, skip to step “6” on page 341.
• If you cannot reload the table space, continue with step “2” on page 341.
2. Issue an SQL DROP TABLESPACE statement for the table space that is suspected of being involved in
the problem.
3. Re-create the table space, tables, indexes, synonyms, and views by using SQL CREATE statements.
4. Grant access to these objects the same way that access was granted prior to the time of the error.
5. Reconstruct the data in the tables.
6. Run the RUNSTATS utility on the data.
7. Use the COPY utility to acquire a full image copy of all data.
8. Use the REBIND command on all plans that use the tables or views that are involved in this activity.
Procedure
To use the REPAIR utility to resolve the inconsistency:
1. Issue the following command to start Db2 and allow access to data:
Symptoms
The symptoms vary based on whether the failure was an allocation or an open problem:
Allocation problem
The following message indicates an allocation problem:
Environment
When this type of problem occurs:
• The table space is automatically stopped.
• Programs receive a -904 SQLCODE (SQLSTATE '57011').
• If the problem occurs during restart, the table space is marked for deferred restart, and restart
continues. The changes are applied later when the table space is started.
Procedure
To recover the Db2 subsystem to a prior point in time, determine the point in time that you want to
recover to, and then complete the following steps:
1. Issue the START DB2 command to start Db2 and all quiesced members of the data sharing group.
Quiesced members are ones that you removed from the data sharing group either temporarily or
permanently. Quiesced members remain dormant until you restart them.
2. Issue SQL statements to create a database, a table space, and two tables with one index for each
table.
3. Issue the BACKUP SYSTEM DATA ONLY utility control statement to create a backup copy of only the
database copy pool for a Db2 subsystem or data sharing group.
4. Issue an SQL statement to first insert rows into one of the tables, and then update some of the rows.
5. Use the LOAD utility with the LOG NO attribute to load the second table.
6. Issue SQL statements to create an additional table space, table, and index in an existing database.
Db2 re-creates the additional table space and table during the log-apply phase of the restore
process.
7. Issue the SET LOG SUSPEND command or the SET LOG RESUME command to obtain a log truncation
point, logpoint1, which is the point you want to recover to.
For a non-data sharing group, use the RBA value. For a data sharing group, use the lowest log record
sequence number (LRSN) from the active members.
The following example shows sample output for the SET LOG SUSPEND command:
8. Issue an SQL statement to first insert rows into one of the tables and then to update and delete some
rows.
9. Issue the STOP DB2 command to stop Db2 and all active members of the data sharing group.
10. Run the DSNJU003 change log inventory utility to create a SYSPITR CRCR record (CRESTART
CREATE SYSPITR=logpoint1).
16. Issue the following command to display any active utilities:
-DIS UTIL(*)
17. Stop all of the active utilities that you identified in the previous step.
18. Recover any objects that are in RECOVER-pending status or REBUILD-pending status from the table
that you created in step “6” on page 345.
Symptoms
The following message is issued:
The message also contains the level ID of the data set, the level ID that Db2 expects, and the name of the
data set.
Causes
A down-level page set can be caused by:
• A Db2 data set is inadvertently replaced by an incorrect or outdated copy. Usually this happens in
conjunction with use of a stand-alone or non-Db2 utility, such as DSN1COPY or DFSMShsm.
• A cold start of Db2 occurs.
• A VSAM high-used RBA of a table space becomes corrupted.
Db2 associates a level ID with every page set or partition. Most operations detect a down-level ID, and
return an error condition, when the page set or partition is first opened for mainline or restart processing.
The exceptions are the following operations, which do not use the level ID data:
• LOAD REPLACE
• RECOVER
Environment
• If the error was reported during mainline processing, Db2 sends a "resource unavailable" SQLCODE and
a reason code to the application to explain the error.
• If the error was detected while a utility was processing, the utility generates a return code 8.
Attention: If you accept a down-level data set or disable down-level detection, your data
might be inconsistent.
Related system programmer actions:
Consider taking the following actions, which might help you minimize or deal with down-level page set
problems in the future:
• To control how often the level ID of a page set or partition is updated, specify a value between 0 and
32767 on the LEVELID UPDATE FREQ field of panel DSNTIPL.
• To disable down-level detection, specify 0 in the LEVELID UPDATE FREQ field of panel DSNTIPL.
• To control how often level ID updates are taken, specify a value between 1 and 32767.
Related reference
LEVELID UPDATE FREQ field (DLDFREQ subsystem parameter) (Db2 Installation and Migration)
DSN1COPY (Db2 Utilities)
DSN1PRNT (Db2 Utilities)
LOAD (Db2 Utilities)
RECOVER (Db2 Utilities)
REPAIR (Db2 Utilities)
Procedure
To recover LOB data from a LOB table space that is defined with LOG NO:
1. Run the RECOVER utility as you do for other table spaces:
If changes were made after the image copy, Db2 puts the table space in auxiliary warning status,
which indicates that some of your LOBs are invalid. Applications that try to retrieve the values of those
LOBs receive SQLCODE -904. Applications can still access other LOBs in the LOB table space.
2. Get a report of the invalid LOBs by running CHECK LOB on the LOB table space:
3. Fix the invalid LOBs by updating the LOBs or setting them to the null value.
For example, suppose that you determine from the CHECK LOB utility that the row of the
EMP_PHOTO_RESUME table with ROWID X'C1BDC4652940D40A81C201AA0A28' has an invalid value
for column RESUME. If host variable hvlob contains the correct value for RESUME, you can use this
statement to correct the value:
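A statement of the following form corrects the value. (The ROWID column name, EMP_ROWID here, is an
assumption; it depends on the table definition.)
UPDATE EMP_PHOTO_RESUME
SET RESUME = :hvlob
WHERE EMP_ROWID = X'C1BDC4652940D40A81C201AA0A28';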
Symptoms
The following message is issued, where dddddddd is a table space name:
Any table spaces that are identified in DSNU086I messages must be recovered. Follow the steps later in
this topic.
If you receive message DSNU086I again to indicate that the error range recovery cannot be
performed, continue with step “2” on page 349.
d. Issue the command START DATABASE to start the table space for RO or RW access, whichever
is appropriate. If the table space is recovered, you do not need to continue with the following
procedure.
2. If error range recovery fails because of a hardware problem:
a. Use the command STOP DATABASE to stop the table space or table space partition that contains
the error range. As a result of this command, all in-storage data buffers that are associated with the
data set are externalized to ensure data consistency during the subsequent steps.
b. Use the INSPECT function of the IBM Device Support Facility, ICKDSF, to check for track defects
and to assign alternate tracks as necessary. Determine the physical location of the defects by
analyzing the output of messages DSNB224I, DSNU086I, and IOS000I. These messages are
displayed on the system operator's console at the time that the error range was created. If
damaged storage media is suspected, request assistance from IBM Hardware Support before
proceeding.
c. Use the command START DATABASE to start the table space with ACCESS(UT) or ACCESS(RW).
d. Run the RECOVER utility with the ERROR RANGE option. Specify an error range that, from image
copies, locates, allocates, and applies the pages within the tracks that are affected by the error
ranges.
Related information
Device Support Facilities (ICKDSF) Device Support Facilities (ICKDSF) User's Guide and Reference
Symptoms
The following message is issued, where dddddddd is the name of the table space from the catalog or
directory that failed (for example, SYSIBM.SYSCOPY):
This message can indicate either read or write errors. You might also receive a DSNB224I or DSNB225I
message, which indicates an input or output error for the catalog or directory. The failing data set has a
name of one of the following forms, for the catalog or the directory respectively, where the high-level
qualifier (DSNC111 in this example) is the integrated catalog facility catalog name or alias:
DSNC111.DSNDBC.DSNDB06.dddddddd.I0001.A001
DSNC111.DSNDBC.DSNDB01.dddddddd.I0001.A001
Symptoms
The symptoms for integrated catalog facility problems vary according to the underlying problems.
Symptoms
Db2 sends the following message to the master console:
DSNP012I - DSNPSCT0 - ERROR IN VSAM CATALOG LOCATE FUNCTION
FOR data_set_name
CTLGRC=50
CTLGRSN=zzzzRRRR
CONNECTION-ID=xxxxxxxx,
CORRELATION-ID=yyyyyyyyyyyy
LUW-ID=logical-unit-of-work-id=token
In this VSAM message, yy is 28, 30, or 32 for an out-of-space condition. Any other values for yy indicate a
damaged VVDS.
Environment
Your program is terminated abnormally, and one or more messages are issued.
Symptoms
The symptoms vary based on the specific situation. The following messages and codes might be issued:
• DSNP007I
• DSNP001I
• -904 SQL return code (SQLSTATE '57011')
Environment
For a demand request failure during restart, the object that is supported by the data set (an index space
or a table space) is stopped with deferred restart pending. Otherwise, the state of the object remains
unchanged.
Procedure
To extend a user-defined data set:
1. If possible, delete unneeded data on the current volume.
2. If deleting data from the volume does not solve the problem, add volumes to the data set in one of the
following ways:
• If the data set is defined in a Db2 storage group, add more volumes to the storage group by using the
SQL ALTER STOGROUP statement.
• If the data set is not defined in a Db2 storage group, add volumes to the data set by using the access
method services ALTER ADDVOLUMES command.
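The following commands sketch each method; the storage group name, data set name, and volume serial
number are placeholders:
ALTER STOGROUP sgname ADD VOLUMES (VOL002);
ALTER 'catname.DSNDBD.dbname.tsname.I0001.A001' ADDVOLUMES(VOL002)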
Procedure
To enlarge a user-managed data set:
1. To allow for recovery in case of failure during this procedure, ensure that you have a recent full image
copy of your table spaces and indexes.
Use the DSNUM option to identify the data set for table spaces or partitioning indexes.
2. Issue the command STOP DATABASE SPACENAM for the last data set of the supported object.
3. Delete the last data set by using access method services.
4. Redefine the data set, and enlarge it as necessary.
The object must be a user-defined linear data set. The limit is 32 data sets if the underlying table
space is not defined as LARGE or with a DSSIZE parameter, and the limit is 4096 for objects with
greater than 254 parts. For a nonpartitioned index on a table space that is defined as LARGE or with a
DSSIZE parameter, the maximum is MIN(4096, 2³² / (index piece size / index page size)).
5. Issue the command START DATABASE ACCESS (UT) to start the object for utility-only access.
6. To recover the data set that was redefined, use the RECOVER utility on the table space or index, and
identify the data set by the DSNUM option (specify this DSNUM option for table spaces or partitioning
indexes only).
The RECOVER utility enables you to specify a single data set number for a table space. Therefore, you
need to redefine and recover only the last data set (the one that needs extension). This approach can
be better than using the REORG utility if the table space is very large and contains multiple data sets,
and if the extension must be done quickly.
If you do not copy your indexes, use the REBUILD INDEX utility.
7. Issue the command START DATABASE to start the object for either RO or RW access, whichever is
appropriate.
Procedure
To enlarge a Db2-managed data set:
1. Use the SQL statement ALTER TABLESPACE or ALTER INDEX with a USING clause.
(You do not need to stop the table space before you use ALTER TABLESPACE.) You can give new values
of PRIQTY and SECQTY in either the same or a new Db2 storage group.
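For example, the following statement assigns new space quantities in storage group sgname. (The names
and quantities are placeholders.)
ALTER TABLESPACE dbname.tsname USING STOGROUP sgname PRIQTY 4000 SECQTY 400;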
2. Use one of the following procedures.
No movement of data occurs until this step is completed.
• For indexes: If you have taken full image copies of the index, run the RECOVER INDEX utility.
Otherwise, run the REBUILD INDEX utility.
• For table spaces other than LOB table spaces: Run one of the following utilities on the table space:
REORG, RECOVER, or LOAD REPLACE.
• For LOB table spaces that are defined with LOG YES: Run the RECOVER utility on the table space.
• For LOB table spaces that are defined with LOG NO:
a. Start the table space in read-only (RO) mode to ensure that no updates are made during this
process.
b. Make an image copy of the table space.
c. Run the RECOVER utility on the table space.
d. Start the table space in read-write (RW) mode.
Procedure
To add another data set:
1. Use access method services to define another data set.
The name of the new data set must follow the naming sequence of the existing data sets that support
the object. The last four characters of each name are a relative data set number: If the last name ends
with A001, the next name must end with A002, and so on. Also, be sure to add either the character "I"
or the character "J" to the name of the data set.
If the object is defined in a Db2 storage group, Db2 automatically tries to create an additional data
set. If that fails, access method services messages are sent to an operator to indicate the cause of the
problem.
2. If necessary, correct the problem (identified in the access method services messages) to obtain
additional space.
Procedure
To redefine the partitions in an index-based partitioning environment:
1. Use the ALTER INDEX ALTER PARTITION statement to alter the key range values of the partitioning
index.
2. Use the REORG utility with inline statistics on the partitions that are affected by the change in key
range.
3. Use the RUNSTATS utility on the nonpartitioned indexes.
4. Rebind the dependent packages and plans.
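For example, step 1 might look like the following statement, where the index name and the new limit key
value are placeholders:
ALTER INDEX creator.ixname ALTER PARTITION 2 VALUES('H99');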
Procedure
To redefine the partitions in a table-based partitioning environment:
1. Use the SQL statement ALTER TABLE ALTER PARTITION to alter the partition boundaries.
2. Use the REORG utility with inline statistics on the partitions that are affected by the change in partition
boundaries.
3. Use the RUNSTATS utility on the indexes.
4. Rebind the dependent packages and plans.
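For example, step 1 might look like the following statement, where the table name and the new limit key
value are placeholders:
ALTER TABLE creator.tbname ALTER PARTITION 2 ENDING AT ('H99');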
Enlarging a fully extended data set for the work file database
If you have an out-of-disk-space or extent limit problem with the work file database (DSNDB07), you need
to add space to the data set.
Procedure
Make more space available for the work file database, choosing one of the following approaches:
• Use SQL to create more table spaces in database DSNDB07.
• Execute these steps:
a. Use the command STOP DATABASE(DSNDB07) to ensure that no users are accessing the
database.
b. Use SQL to alter the storage group, adding volumes as necessary.
c. Use the command START DATABASE(DSNDB07) to allow access to the database.
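For example, the second approach might look like the following sequence, where the storage group name
and volume serial number are placeholders:
-STOP DATABASE(DSNDB07)
ALTER STOGROUP sgname ADD VOLUMES (VOL003);
-START DATABASE(DSNDB07)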
Symptoms
One of the following messages is issued at the end of utility processing, depending on whether the table
space is partitioned:
Causes
Db2 detected one or more referential constraint violations.
Environment
The table space is still generally available. However, it is not available to the COPY, REORG, and QUIESCE
utilities, or to SQL select, insert, delete, or update operations that involve tables in the table space.
Symptoms
The symptoms for DDF failures vary based on the precise problems. The symptoms include messages,
SQL return codes, and apparent wait states.
Symptoms
VTAM or TCP/IP returns a resource-unavailable condition along with the appropriate diagnostic reason
code and message. A DSNL500 or DSNL511 (conversation failed) message is sent to the console for the
first failure to a location for a specific logical unit (LU) mode or TCP/IP address. All other threads that
detect a failure from that LU mode or IP address are suppressed until communications to the LU that uses
that mode are successful.
Db2 returns messages DSNL501I and DSNL502I. Message DSNL501I usually means that the other
subsystem is not operational. When the error is detected, it is reported by a console message, and the
application receives an SQL return code.
If you use DRDA as the database protocol, SQLCODE -30080 is returned to the application for SNA
communications errors, and the SQLCA contains the VTAM diagnostic information, which contains only
the RCPRI and RCSEC codes. For TCP/IP connections, SQLCODE -30081 is returned.
Environment
The application can choose to request rollback or commit, both of which deallocate all but the first
conversation between the allied thread and the remote database access thread. A commit or rollback
message is sent over this remaining conversation.
Errors during the deallocation process of the conversation are reported through messages, but they do
not stop the commit or rollback processing. If the conversation that is used for the commit or rollback
message fails, the error is reported. If the error occurred during a commit process and if the remote
database access was read-only, the commit process continues. Otherwise the commit process is rolled
back.
Symptoms
A DSNL700I message, which indicates that a resource-unavailable condition exists, is sent to the console.
Other messages that describe the cause of the failure are also sent to the console.
Symptoms
A DSNL701I, DSNL702I, DSNL703I, DSNL704I, or DSNL705I message is issued to identify the problem.
Other messages that describe the cause of the failure are also sent to the console.
Environment
DDF fails to start. Db2 continues to run.
Symptoms
In the event of a failure of a database access thread, the Db2 server terminates the database access
thread only if a unit of recovery exists. The server deallocates the database access thread and then
deallocates the conversation with an abnormal indication (a negative SQL code), which is subsequently
returned to the requesting application. The returned SQL code depends on the type of remote access:
• DRDA access
For a database access thread or non-Db2 server, a DDM error message is sent to the requesting site,
and the conversation is deallocated normally. The SQL error status code is a -30020 with a resource
type 1232 (agent permanent error received from the server).
Symptoms
VTAM messages and Db2 messages are issued to indicate that distributed data facility (DDF) is
terminating and to explain why.
Environment
DDF terminates. An abnormal VTAM failure or termination causes DDF to issue a STOP DDF
MODE(FORCE) command. The VTAM commands Z NET,QUICK and Z NET,CANCEL cause an abnormal
VTAM termination. A Z NET,HALT causes a STOP DDF MODE(QUIESCE) to be issued by DDF.
Symptoms
Db2 messages, such as DSNL013I and DSNL004I, are issued to indicate the problem.
DSNL013I
Contains the error field value that is returned from the VTAM ACB OPEN. For information about
possible values, see OPEN macroinstruction error fields (z/OS Communications Server: IP and SNA
Codes).
DSNL004I
Normally specifies a fully qualified LU name, network-name.luname. The absence of the network-name
indicates a problem.
Symptoms
TCP/IP messages and Db2 messages are issued to indicate that TCP/IP is unavailable.
Environment
Distributed data facility (DDF) periodically attempts to reconnect to TCP/IP. If the TCP/IP listener fails,
DDF automatically tries to re-establish the TCP/IP listener for the DRDA SQL port or the resync port every
three minutes. TCP/IP connections cannot be established until the TCP/IP listener is re-established.
Symptoms
Message DSNL501I is issued when a CNOS request to a remote LU fails. The CNOS request is the first
attempt to connect to the remote site and must be negotiated before any conversations can be allocated.
Consequently, if the remote LU is not active, message DSNL501I is displayed to indicate that the CNOS
request cannot be negotiated. Message DSNL500I is issued only once for all the SQL conversations that
fail as a result of a remote LU failure.
Message DSNL502I is issued for system conversations that are active to the remote LU at the time of the
failure. This message contains the VTAM diagnostic information about the cause of the failure.
Environment
Any application communications with a failed LU receives a message to indicate a resource-unavailable
condition. Any attempt to establish communication with such an LU fails.
Symptoms
An application is in an indefinitely long wait condition. This can cause other Db2 threads to fail due
to resources that are held by the waiting thread. Db2 sends an error message to the console, and the
application program receives an SQL return code.
Environment
Db2 does not respond.
Symptoms
Message DSNL500I is issued at the requester for VTAM conversations (if it is a Db2 subsystem) with
return codes RTNCD=0, FDBK2=B, RCPRI=4, and RCSEC=5. These return codes indicate that a security
violation has occurred. The server has deallocated the conversation because the user is not allowed to
access the server. For conversations that use DRDA access, LU 6.2 communications protocols present
specific reasons for why the user access failed, and these reasons are communicated to the application.
If the server is a Db2 database access thread, message DSNL030I is issued to describe what caused the
user to be denied access into Db2 through DDF. No message is issued for TCP/IP connections.
If the server is a Db2 subsystem, message DSNL030I is issued. Otherwise, the system programmer needs
to refer to the documentation of the server. If the application uses DRDA access, SQLCODE -30082 is
returned.
Causes
This problem is caused by a remote user who attempts to access Db2 through DDF without the necessary
security authority.
Symptoms
The specific symptoms of a disaster that affects your local system hardware vary, but when this happens,
the affected Db2 subsystem is not operational.
Procedure
For a remote site recovery procedure where tape volumes that contain system data are sent from
the production site, specify the dump class that is available at the remote site by using the following
installation options on installation panel DSNTIP6:
• Either RESTORE FROM DUMP or RECOVER FROM DUMP
• DUMP CLASS NAME
Procedure
To recover from a disaster in a non-data sharing environment by using image copies and archive logs:
1. If an integrated catalog facility catalog does not already exist, run job DSNTIJCA to create a user
catalog.
2. Use the access method services IMPORT command to import the integrated catalog facility catalog.
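For example, if the portable copy of the catalog is allocated under ddname CATIN and the catalog is
named DSNCAT (both placeholders), the IMPORT command might look like this:
IMPORT INFILE(CATIN) OUTDATASET(DSNCAT)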
3. Restore Db2 libraries.
Some examples of libraries that you might need to restore include:
• Db2 SMP/E libraries
• User program libraries
• User DBRM libraries
• Db2 CLIST libraries
• Db2 libraries that contain customized installation jobs
• JCL for creating user-defined table spaces
4. Use IDCAMS DELETE NOSCRATCH to delete all catalog and user objects.
• CRESTART CREATE,ENDRBA=nnnnnnnnn000
The nnnnnnnnn000 equals a value that is one more than the ENDRBA of the latest archive log.
• CRESTART CREATE,ENDTIME=nnnnnnnnnnnn
The nnnnnnnnnnnn is the end time of the log record. Log records with a timestamp later than
nnnnnnnnnnnn are truncated.
To allocate user-managed table spaces or index spaces, use the access method services DEFINE
CLUSTER command. To find the correct IPREFIX for the DEFINE CLUSTER command, perform the
following queries for table spaces and index spaces.
• Table spaces:
• Index spaces:
Now you can perform the DEFINE CLUSTER command with the correct IPREFIX (I or J) in the
data set name:
catname.DSNDBx.dbname.psname.y0001.znnn
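The queries mentioned above might look like the following sketches, which read the IPREFIX column of
SYSIBM.SYSTABLEPART and SYSIBM.SYSINDEXPART; the name values in the predicates are placeholders:
SELECT DBNAME, TSNAME, PARTITION, IPREFIX
FROM SYSIBM.SYSTABLEPART
WHERE DBNAME = 'dbname' AND TSNAME = 'tsname';
SELECT IXCREATOR, IXNAME, PARTITION, IPREFIX
FROM SYSIBM.SYSINDEXPART
WHERE IXCREATOR = 'creator' AND IXNAME = 'ixname';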
What to do next
Determine what to do about any utilities that were in progress at the time of failure.
Related concepts
Preparations for disaster recovery
If a Db2 computing center is totally lost, you can recover on another Db2 subsystem at a recovery site.
To do this, you must regularly back up the data sets and the log for the recovery subsystem. As with all
data recovery operations, the objectives of disaster recovery are to minimize the loss of data, workload
processing (updates), and time.
What to do about utilities that were in progress at time of failure
After you restore data from image copies and archives, you might need to take some additional steps. For
example, you need to determine what to do about any utilities that were in progress at the time of the
failure.
Related tasks
Defining your own user-managed data sets
You can use Db2 storage groups to let Db2 manage the VSAM data sets. However, you can also define
your own user-managed data sets. With user-managed data sets, Db2 checks whether you have defined
your data sets correctly.
Migration step 1: Actions to complete before migration (Db2 Installation and Migration)
Recovering catalog and directory objects (Db2 Utilities)
Related reference
DSN1LOGP (Db2 Utilities)
Procedure
To recover from a disaster by using image copies and archive logs:
1. If you have information in your coupling facility from practice startups, remove old information from
the coupling facility.
D XCF,STRUCTURE,STRNAME=grpname*
b) For group buffer pools, enter the following command to force off the connection of those
structures:
SETXCF FORCE,CONNECTION,STRNAME=strname,CONNAME=ALL
Connections for the SCA are not held at termination; therefore you do not need to force off any
SCA connections.
c) Delete all the Db2 coupling facility structures that have a STATUS of ALLOCATED by using the
following command for each structure:
SETXCF FORCE,STRUCTURE,STRNAME=strname
This step is necessary to remove old information that exists in the coupling facility from your
practice startup when you installed the group.
2. If an integrated catalog facility catalog does not already exist, run job DSNTIJCA to create a user
catalog.
3. Use the access method services IMPORT command to import the integrated catalog facility catalog.
4. Restore Db2 libraries.
Some examples of libraries that you might need to restore include:
• Db2 SMP/E libraries
• User program libraries
• User DBRM libraries
• Db2 CLIST libraries
• Db2 libraries that contain customized installation jobs
• JCL for creating user-defined table spaces
5. Use IDCAMS DELETE NOSCRATCH to delete all catalog and user objects.
(Because step “3” on page 369 imports a user ICF catalog, the catalog reflects data sets that do not
exist on disk.)
6. Obtain a copy of the installation job DSNTIJIN, which creates Db2 VSAM and non-VSAM data sets, for
the first data sharing member. Change the volume serial numbers in the job to volume serial numbers
that exist at the recovery site. Comment out the steps that create Db2 non-VSAM data sets, if these
data sets already exist. Run DSNTIJIN on the first data sharing member.
However, do not run DSNTIJID.
For subsequent members of the data sharing group, run the DSNTIJIN that defines the BSDS and
logs.
7. Recover the BSDS by following these steps for each member in the data sharing group:
a) Use the access method services REPRO command to restore the contents of one BSDS data set
(allocated in step “6” on page 369) on each member.
You can find the most recent BSDS image in the last file (archive log with the highest number) on
the latest archive log tape.
b) Determine the RBA and LRSN ranges for this archive log by using the print log map utility
(DSNJU004) to list the current BSDS contents. Find the most recent archive log in the BSDS listing,
and add 1 to its ENDRBA value. Use this as the STARTRBA. Find the active log in the BSDS listing
that starts with this RBA, and use its ENDRBA as the ENDRBA. Use the STARTLRSN and ENDLRSN
of this active log data set as the LRSN range (STARTLRSN and ENDLRSN) for this archive log.
c) Delete the oldest archive log from the BSDS.
The following sample DSNJU004 output shows the (partial) information for the archive log
member DB2G.
e) Adjust the active logs in the BSDS by using the change log inventory utility (DSNJU003), as
necessary:
i) To delete all active logs in the BSDS, use the DELETE option of DSNJU003. Use the BSDS listing
that is produced in step “7.d” on page 370 to determine the active log data set names.
ii) To add the active log data sets to the BSDS, use the NEWLOG statement of DSNJU003. Do not
specify a STARTRBA or ENDRBA in the NEWLOG statement. This specification indicates to Db2
that the new active logs are empty.
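For example, the DSNJU003 statements for steps i and ii might look like the following, where the active
log data set names are placeholders:
DELETE DSNAME=DSNC110.LOGCOPY1.DS01
NEWLOG DSNAME=DSNC110.LOGCOPY1.DS01,COPY1
NEWLOG DSNAME=DSNC110.LOGCOPY2.DS01,COPY2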
f) If you are using the Db2 distributed data facility, update the LOCATION and the LUNAME values in
the BSDS by running the change log inventory utility with the DDF statement.
g) List the new BSDS contents by using the print log map utility (DSNJU004). Ensure that the BSDS
correctly reflects the active and archive log data set inventories.
In particular, ensure that:
• All active logs show a status of NEW and REUSABLE.
• The archive log inventory is complete and correct (for example, the start and end RBAs are
correct).
h) If you are using dual BSDSs, make a copy of the newly restored BSDS data set to the second BSDS
data set.
8. Optional: Restore archive logs to disk for each member.
• CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn
The nnnnnnnnnnnn is the LRSN of the last log record that is to be used during restart.
• CRESTART CREATE,ENDTIME=nnnnnnnnnnnn
The nnnnnnnnnnnn is the end time of the log record. Log records with a timestamp later than
nnnnnnnnnnnn are truncated.
To allocate user-managed table spaces or index spaces, use the access method services DEFINE
CLUSTER command. To find the correct IPREFIX for the DEFINE CLUSTER command, perform the
following queries for table spaces and index spaces.
• Table spaces:
• Index spaces:
Now you can perform the DEFINE CLUSTER command with the correct IPREFIX (I or J) in the
data set name:
catname.DSNDBx.dbname.psname.y0001.znnn
The y can be either I or J, x is C (for VSAM clusters) or D (for VSAM data components), psname
is either the table space or index space name, and znnn is the relative data set number (A001, A002,
and so on).
b) If your user table spaces or index spaces are STOGROUP-defined, and if the volume serial
numbers at the recovery site are different from those at the local site, use the SQL statement
ALTER STOGROUP to change them in the Db2 catalog.
c) Recover all user table spaces and index spaces from the appropriate image copies.
If you do not copy your indexes, use the REBUILD INDEX utility to reconstruct the indexes.
d) Start all user table spaces and index spaces for read-write processing by issuing the command
START DATABASE with the ACCESS(RW) option.
e) Resolve any remaining CHECK-pending states that would prevent COPY execution.
f) Run queries for which the results are known.
25. Make full image copies of all table spaces and indexes with the COPY YES attribute.
What to do next
Determine what to do about any utilities that were in progress at the time of failure.
Related concepts
Preparations for disaster recovery
If a Db2 computing center is totally lost, you can recover on another Db2 subsystem at a recovery site.
To do this, you must regularly back up the data sets and the log for the recovery subsystem. As with all
data recovery operations, the objectives of disaster recovery are to minimize the loss of data, workload
processing (updates), and time.
What to do about utilities that were in progress at time of failure
After you restore data from image copies and archives, you might need to take some additional steps. For
example, you need to determine what to do about any utilities that were in progress at the time of the
failure.
Recovering data in data sharing (Db2 Data Sharing Planning and Administration)
Related tasks
Migration step 1: Actions to complete before migration (Db2 Installation and Migration)
Recovering catalog and directory objects (Db2 Utilities)
Related reference
DSN1LOGP (Db2 Utilities)
LOG NO and copy-spec
If the RELOAD phase completed, the table space is complete after you recover it to the current
time. Recover the indexes.
If the RELOAD phase did not complete, recover the table space to a prior point in time. Recover
the indexes.
LOG NO, copy-spec, and SORTKEYS integer
If the BUILD or SORTBLD phase completed, recover to the current time, and recover the indexes.
If the BUILD or SORTBLD phase did not complete, recover to a prior point in time. Recover the
indexes.
LOG NO
Recover the table space to a prior point in time. You can use the TOCOPY option of the RECOVER
utility to do this.
To avoid extra loss of data in a future disaster situation, run the QUIESCE utility on table spaces before
invoking the LOAD utility. This enables you to recover a table space by using the TOLOGPOINT option
instead of TOCOPY.
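For example, where dbname.tsname is a placeholder for the table space name:
QUIESCE TABLESPACE dbname.tsname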
REORG
For a user table space, find the options that you specified in the following table, and perform the
specified actions.
Recommendation: Make full image copies of the catalog and directory before you run REORG on
them.
SHRLEVEL CHANGE or SHRLEVEL REFERENCE
If the SWITCH phase completed, terminate the utility. Recover the table space to the current time.
Recover the indexes.
If the SWITCH phase did not complete, recover the table space to the current time. Recover the
indexes.
Procedure
To set up the tracker site:
1. Create a mirror image of your primary Db2 subsystem or data sharing group.
This process is described in steps 1 through 4 of the normal disaster recovery procedure, which
includes creating catalogs and restoring Db2 libraries.
2. Modify the subsystem parameters as follows:
• Set the TRKRSITE subsystem parameter to YES.
• Optionally, set the SITETYP parameter to RECOVERYSITE if the full image copies that this site is to
receive are created as remote site copies.
3. Use the access method services DEFINE CLUSTER command to allocate data sets for all user-
managed table spaces that you plan to send over from the primary site.
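For example, a DEFINE CLUSTER sketch for one such data set, in which all names, volume serial
numbers, and space quantities are placeholders:
DEFINE CLUSTER -
  ( NAME('catname.DSNDBC.dbname.tsname.I0001.A001') -
  LINEAR -
  REUSE -
  VOLUMES(VOL001) -
  CYLINDERS(10 10) -
  SHAREOPTIONS(3 3) ) -
  DATA -
  ( NAME('catname.DSNDBD.dbname.tsname.I0001.A001') )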
4. Optional: Allocate data sets for user-managed indexes that you want to rebuild during recovery cycles.
The main reason that you rebuild indexes during recovery cycles is for running efficient queries on the
tracker site. If you do not require indexes, you do not need to rebuild them for recovery cycles. For
nonpartitioning indexes on very large tables, you can include indexes for LOGONLY recovery during the
recovery cycle, which can reduce the amount of time that it takes to bring up the disaster site. Be sure
that you define data sets with the proper prefix (either I or J) for both indexes and table spaces.
5. Send full image copies of all Db2 data at the primary site to the tracker site. Optionally, you can use the
BACKUP SYSTEM utility with the DATA ONLY option and send copies of the database copy pool to the
tracker site.
If you send copies that the BACKUP SYSTEM utility creates, this step completes the tracker site setup
procedure.
6. If you did not use the BACKUP SYSTEM utility in the previous step, tailor installation job DSNTIJIN to
create Db2 catalog data sets.
What to do next
Important: Do not attempt to start the tracker site when you are setting it up. You must follow the
procedure described in “Establishing a recovery cycle by using RESTORE SYSTEM LOGONLY” on page
379.
Related reference
BACKUP SYSTEM (Db2 Utilities)
Procedure
To establish a recovery cycle at your tracker site by using the RESTORE SYSTEM utility:
1. While your primary site continues its usual workload, send a copy of the primary site active log,
archive logs, and BSDS to the tracker site.
Send full image copies for the following objects:
• Table spaces or partitions that are reorganized, loaded, or repaired with the LOG NO option after the
latest recovery cycle
• Objects that, after the latest recovery cycle, have been recovered to a point in time
Recommendation: If you are taking incremental image copies, run the MERGECOPY utility at the
primary site before sending the copy to the tracker site.
2. At the tracker site, restore the BSDS that was received from the primary site by following these steps:
a) Locate the BSDS in the latest archive log that is now at the tracker site.
b) Register this archive log in the archive log inventory of the new BSDS by using the change log
inventory utility (DSNJU003).
c) Register the primary site active log in the new BSDS by using the change log inventory utility
(DSNJU003).
3. Use the change log inventory utility (DSNJU003) with the following CRESTART control statement:
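A control statement of the following form is assumed here; the FORWARD and BACKOUT options parallel
the data sharing variant shown below:
CRESTART CREATE,ENDRBA=nnnnnnnnn,FORWARD=NO,BACKOUT=NO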
In this control statement, nnnnnnnnn equals the RBA at which the latest archive log record ends +1.
Do not specify the RBA at which the archive log begins because you cannot cold start or skip logs in
tracker mode.
Data sharing
If you are recovering a data sharing group, you must use the following CRESTART control
statement on all members of the data sharing group. The ENDLRSN value must be the same
for all members.
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=NO,BACKOUT=NO
In this control statement, nnnnnnnnnnnn is the lowest LRSN of all the members that are to be
read during restart. Specify one of the following values for the ENDLRSN:
• If you receive the ENDLRSN from the output of the print log map utility (DSNJU004) or from
the console logs using message DSNJ003I, you must use ENDLRSN -1 as the input to the
conditional restart.
• If you receive the ENDLRSN from the output of the DSN1LOGP utility (message DSN1213I), you
can use the displayed value.
Procedure
To establish a recovery cycle by using the RECOVER utility:
1. While your primary site continues its usual workload, send a copy of the primary site active log,
archive logs, and BSDS to the tracker site.
Send full image copies for the following objects:
• Table spaces or partitions that are reorganized, loaded, or repaired with the LOG NO option after the
latest recovery cycle.
• Objects that, after the latest recovery cycle, have been recovered to a point in time.
In this control statement, nnnnnnnnn000 equals the value of the ENDRBA of the latest archive log
plus 1. Do not specify STARTRBA because you cannot cold start or skip logs in a tracker system.
Data sharing
If you are recovering a data sharing group, you must use the following CRESTART control
statement on all members of the data sharing group. The ENDLRSN value must be the same
for all members.
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=NO,BACKOUT=NO
In this control statement, nnnnnnnnnnnn is the lowest ENDLRSN of all the members that are to
be read during restart. Specify one of the following values for the ENDLRSN:
• If you receive the ENDLRSN from the output of the print log map utility (DSNJU004) or from
message DSNJ003I in the console logs, use ENDLRSN -1 as the input to the conditional restart.
• If you receive the ENDLRSN from the output of the DSN1LOGP utility (DSN1213I message), use
the displayed value.
The ENDLRSN or ENDRBA value indicates the end log point for data recovery and for truncating the
archive log. With ENDLRSN, the missing log records between the lowest and highest ENDLRSN values
for all the members are applied during the next recovery cycle.
4. If the tracker site is a data sharing group, delete all Db2 coupling facility structures before restarting
the tracker members.
5. At the tracker site, restart Db2 to begin a tracker site recovery cycle.
Data sharing
For data sharing, restart every member of the data sharing group.
6. At the tracker site, submit RECOVER utility jobs to recover database objects. Run the RECOVER utility
with the LOGONLY option on all database objects that do not require recovery from an image copy.
You must recover database objects as the following procedure specifies:
a) Restore the full image copy or DSN1COPY of SYSUTILX.
If you are doing a LOGONLY recovery on SYSUTILX from a previous DSN1COPY backup, make
another DSN1COPY copy of that table space after the LOGONLY recovery is complete and before
recovering any other catalog or directory objects.
Procedure
To maintain a tracker site:
1. Keep the tracker site and primary site at the same maintenance level to avoid unexpected problems.
2. Between recovery cycles, apply maintenance as you normally do, by stopping and restarting the Db2
subsystem or a Db2 data sharing member.
3. If a tracker site fails, restart it as you normally do.
4. Save your complete tracker site prior to testing a takeover site.
This step is necessary because bringing up a tracker site as the takeover site destroys the tracker site
environment. After testing the takeover site, you can restore the tracker site and resume the recovery
cycles.
Results
When restarting a data sharing group, the first member that starts during a recovery cycle puts the
ENDLRSN value in the shared communications area (SCA) of the coupling facility. If an SCA failure occurs
during a recovery cycle, you must go through the recovery cycle again, using the same ENDLRSN value for
your conditional restart.
Procedure
To make the tracker site be the takeover site by using the RESTORE SYSTEM utility with the LOGONLY
option:
1. If log data for a recovery cycle is en route or is available but has not yet been used in a recovery
cycle, perform the procedure in “Establishing a recovery cycle by using RESTORE SYSTEM LOGONLY”
on page 379.
2. Ensure that the TRKSITE NO subsystem parameter is specified.
3. For scenarios other than data sharing, continue with step “4” on page 384.
Data sharing
If this is a data sharing system, delete the coupling facility structures.
4. Start Db2 at the same RBA or ENDLRSN that you used in the most recent tracker site recovery
cycle. Specify FORWARD=YES and BACKOUT=YES in the CRESTART statement; this takes care of
uncommitted work.
5. Restart the objects that are in GRECP or LPL status by issuing the START DATABASE(*)
SPACENAM(*) command.
6. If you used the DSN1COPY utility to create a copy of SYSUTILX in the last recovery cycle, use
DSN1COPY to restore that copy.
7. Terminate any in-progress utilities by using the following procedure:
a) Enter the DISPLAY UTILITY(*) command.
b) Run the DIAGNOSE utility with DISPLAY SYSUTIL to get the names of objects on which utilities are
being run.
c) Terminate in-progress utilities in the correct order by using the TERM UTILITY(*) command.
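Taken together, the termination sequence in this step might look like the following sketch; the DISPLAY and TERM lines are console commands, and the DIAGNOSE statement runs as a utility job:

```
-DISPLAY UTILITY(*)
DIAGNOSE DISPLAY SYSUTIL
END DIAGNOSE
-TERM UTILITY(*)
```

Review the DIAGNOSE output before terminating, so that you know which objects each utility was operating on.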
8. Rebuild indexes, including IBM and user-defined indexes on the Db2 catalog and user-defined indexes
on table spaces.
Related tasks
Restoring data from image copies and archive logs
Follow the appropriate procedure for restoring from image copies and archive logs, depending on whether
you are in a data sharing environment. Both procedures assume that all logs, copies, and reports are
available at the recovery site.
Recovering at a tracker site that uses the RECOVER utility
Procedure
To make the tracker site be the takeover site by using the RECOVER utility:
1. Restore the BSDS, and register the archive log from the last archive log that you received from the
primary site.
2. For environments that do not use data sharing, continue with step “3” on page 385.
Data sharing
If this is a data sharing system, delete the coupling facility structures.
3. Ensure that the DEFER ALL and TRKSITE NO subsystem parameters are specified.
4. Take the appropriate action, which depends on whether you received more logs from the primary site.
If this is a non-data-sharing Db2 subsystem, the log truncation point varies depending on whether you
have received more logs from the primary site since the last recovery cycle:
• If you did not receive more logs from the primary site:
Start Db2 using the same ENDRBA that you used on the last tracker cycle. Specify FORWARD=YES
and BACKOUT=YES; this takes care of uncommitted work. If you have fully recovered the objects
during the previous cycle, they are current except for any objects that had outstanding units of
recovery during restart. Because the previous cycle specified NO for both FORWARD and BACKOUT
and you have now specified YES, affected data sets are placed in the LPL. Restart the objects that
are in LPL status by using the following command:
-START DATABASE(*) SPACENAM(*)
After you issue the command, all table spaces and indexes that were previously recovered are now
current. Remember to rebuild any indexes that were not recovered during the previous tracker cycle,
including user-defined indexes on the Db2 catalog.
• If you received more logs from the primary site:
Start Db2 using the truncated RBA nnnnnnnnn000, which equals the value of the ENDRBA of the
latest archive log plus 1. Specify FORWARD=YES and BACKOUT=YES. Run your recoveries as you did
during recovery cycles.
Data sharing
You must restart every member of the data sharing group; use the following CRESTART statement:
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=YES,BACKOUT=YES
In this statement, nnnnnnnnnnnn is the LRSN of the last log record that is to be used during
restart. Specify one of the following values for the ENDLRSN:
• If you receive the ENDLRSN from the output of the print log map utility (DSNJU004) or from
message DSNJ003I in the console logs, use ENDLRSN -1 as the input to the conditional restart.
• If you receive the ENDLRSN from the output of the DSN1LOGP utility (DSN1213I message), use
the displayed value.
The ENDLRSN or ENDRBA value indicates the end log point for data recovery and for truncating
the archive log. With ENDLRSN, the missing log records between the lowest and highest ENDLRSN
values for all the members are applied during the next recovery cycle.
The takeover Db2 sites must specify conditional restart with a common ENDLRSN value to allow all
remote members to logically truncate the logs at a consistent point.
Figure: A rolling disaster in which a disk failure at 12:00 affects database devices across two logical storage subsystems.
Example
In a rolling disaster, the following events at the primary site cause data inconsistency at your recovery
site. This data inconsistency example follows the same scenario that the preceding figure depicts.
1. Some time prior to 12:00: A table space is updated in the buffer pool.
2. 12:00: The log record is written to disk on logical storage subsystem 1.
3. 12:01: Logical storage subsystem 2 fails.
4. 12:02: The update to the table space is externalized to logical storage subsystem 2 but is not written
because subsystem 2 failed.
5. 12:03: The log record that indicates that the table space update was made is written to disk on logical
storage subsystem 1.
6. 12:03: Logical storage subsystem 1 fails.
Because the logical storage subsystems do not fail at the same point in time, they contain inconsistent
data. In this scenario, the log indicates that the update is applied to the table space, but the update is not
applied to the data volume that holds this table space.
Important: Any disaster recovery solution that uses data mirroring must guarantee that all volumes at the
recovery site contain data for the same point in time.
Procedure
To recover at the secondary site after a disaster:
a. Display the Db2 coupling facility structures by issuing the following command:
D XCF,STRUCTURE,STRNAME=grpname*
b. For group buffer pools and the lock structure, enter the following command to force off the
connections in those structures:
SETXCF FORCE,CONNECTION,STRNAME=strname,CONNAME=ALL
c. Delete all the Db2 coupling facility structures by using the following command for each
structure:
SETXCF FORCE,STRUCTURE,STRNAME=strname
3. If you are using the distributed data facility, set LOCATION and LUNAME in the BSDS to values that
are specific to your new primary site.
To set LOCATION and LUNAME, run the stand-alone change log inventory utility (DSNJU003) with the
following control statement:
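The DDF statement of DSNJU003 can set these values; in the following sketch, the location name and LU name are placeholders for the values that apply to your new primary site:

```
DDF LOCATION=NEWPRIM,LUNAME=LUNEWPRM
```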
4. Start all Db2 members by using local DSNZPARM data sets and perform a normal restart.
Data sharing
For data sharing groups, Db2 performs group restart. Shared data sets are set to GRECP (group
buffer pool RECOVER-pending) status, and pages are added to the LPL (logical page list).
5. For scenarios other than data sharing, continue to step “6” on page 389.
Data sharing
For objects that are in GRECP status, Db2 automatically recovers the objects during restart.
Message DSNI049I is issued when the recovery for all objects that are in GRECP status is
complete. A message is issued for each member, even if the member did not perform GRECP
recovery.
After message DSNI049I is issued:
a. Display all data sets with GRECP or LPL status by issuing the following Db2 command:
-DISPLAY DATABASE(*) SPACENAM(*) RESTRICT(GRECP,LPL) LIMIT(*)
6. Check for pending utilities by issuing the following Db2 command:
-DISPLAY UTILITY(*)
If utilities are pending, record the output from this command, and continue to the next step. You
cannot restart utilities at a recovery site. You will terminate these utilities in step “8” on page 390. If
no utilities are pending, continue to step number “9” on page 390.
7. Use the DIAGNOSE utility to access the SYSUTIL directory table.
You cannot access this directory table by using normal SQL statements (as you can with most other
directory tables). You can access SYSUTIL only by using the DIAGNOSE utility, which is normally
intended to be used under the direction of IBM Software Support.
Use the following control statement to run the DIAGNOSE utility job:
DIAGNOSE DISPLAY SYSUTIL
END DIAGNOSE
Examine the output. Record the phase in which each pending utility was interrupted, and record the
object on which each utility was operating.
8. Terminate all pending utilities by issuing the following command:
-TERM UTILITY(*)
9. For environments that do not use data sharing, continue to step “10” on page 390.
Data sharing
For data sharing groups, use the following START DATABASE command on each database that
contains objects that are in LPL status:
When you use the START DATABASE command to recover objects, you do not need to provide
Db2 with image copies.
Tip: Use up to 10 START DATABASE commands for each Db2 subsystem to increase the speed at
which Db2 completes this operation. Multiple commands that run in parallel complete faster than
a single command that specifies the same databases.
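For example, issuing several commands such as the following lets recovery of the LPL objects proceed in parallel; the database names here are hypothetical:

```
-START DATABASE(DBLPL01) SPACENAM(*)
-START DATABASE(DBLPL02) SPACENAM(*)
-START DATABASE(DBLPL03) SPACENAM(*)
```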
10. Start all remaining database objects with the following START DATABASE command:
-START DATABASE(*) SPACENAM(*)
11. For each object that the LOAD utility places in a restrictive status, take one of the following actions:
• If the object was a target of a LOAD utility control statement that specified SHRLEVEL CHANGE,
restart the LOAD utility on this object at your convenience. This object contains valid data.
• If the object was a target of a LOAD utility control statement that specified SHRLEVEL NONE and
the LOAD job was interrupted before the RELOAD phase, rebuild the indexes on this object.
• If the object was a target of a LOAD utility control statement that specified SHRLEVEL NONE and
the LOAD job was interrupted during or after the RELOAD phase, recover this object to a point in
time that is before this utility ran.
• Otherwise, recover the object to a point in time that is before the LOAD job ran.
12. For each object that the REORG utility places in a restrictive status, take one of the following actions:
• When the object was a target of a REORG utility control statement that specified SHRLEVEL
NONE:
– If the REORG job was interrupted before the RELOAD phase, no further action is required. This
object contains valid data, and the indexes on this object are valid.
– If the REORG job was interrupted during the RELOAD phase, recover this object to a point in
time that is before this utility ran.
– If the REORG job was interrupted after the RELOAD phase, rebuild the indexes on the object.
• When the object was a target of a REORG utility control statement that does not specify SHRLEVEL
NONE:
– If the REORG job was interrupted before the SWITCH phase, no further action is required. This
object contains valid data, and the indexes on this object are valid.
– If the REORG job was interrupted during the SWITCH phase, no further action is required. This
object contains valid data, and the indexes on this object are valid.
– If the REORG job was interrupted after the SWITCH phase, you might need to rebuild
non-partitioned secondary indexes.
Procedure
Issue the DFSMShsm FRBACKUP PREPARE command.
• To set the DFSMShsm defaults for the BACKUP SYSTEM utility, the RESTORE SYSTEM utility, and the
RECOVER utility, issue the following command:
• To override the DFSMShsm defaults for the RESTORE SYSTEM utility or the RECOVER utility, specify the
FLASHCOPY_PPRCP utility option or issue the following command:
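For the PREPARE request mentioned above, the copy pool for the database data of a Db2 system-level backup follows the DSN$locn$DB naming convention, so the command might look like the following sketch; the location name DSNLOC1 is a placeholder:

```
FRBACKUP COPYPOOL(DSN$DSNLOC1$DB) PREPARE
```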
Related concepts
Considerations for using the BACKUP SYSTEM utility and DFSMShsm
If you plan to use the BACKUP SYSTEM utility to take volume-level copies of data and logs, all of the Db2
data sets must reside on volumes that are managed by DFSMSsms. You can take volume-level copies of
the data and logs of a data sharing group or a non-data-sharing Db2 subsystem.
Related reference
FRBACKUP command: Requesting a fast replication backup or dump version DFSMShsm Storage
Administration Reference
Syntax and options of the RECOVER control statement (Db2 Utilities)
Syntax and options of the RESTORE SYSTEM control statement (Db2 Utilities)
Procedure
To recover at an XRC secondary site after a disaster:
1. Issue the TSO command XEND XRC to end the XRC session.
2. Issue the TSO command XRECOVER XRC. This command changes your secondary site to your primary
site and applies the XRC journals to recover data that was in transit when your primary site failed.
3. Complete the procedure in “Recovering in a data mirroring environment” on page 388.
Figure: Distributed thread environment. Db2 at SEA (IBMSEADB20001) runs Allied Thread A (IMS, CONNID=SEAIMS01, CORRID=xyz, PLAN=IMSAPP01, NID=A5, LUWID=15) and Allied Thread B (TSO, CONNID=BATCH, CORRID=abc, PLAN=TSO APP01, LUWID=16). Db2 at SJ (IBMSJ0DB20001) and Db2 at LA (IBMLA0DB20001) each host database access threads (DBATs 1-4) for these two units of work.
The results of issuing the DISPLAY THREAD TYPE(ACTIVE) command to display the status of
threads at all Db2 locations are summarized in the boxes of the preceding figure. The logical unit of
work IDs (LUWIDs) have been shortened for readability, as follows:
• LUWID=15 is IBM.SEADB21.15A86A876789.0010.
• LUWID=16 is IBM.SEADB21.16B57B954427.0003.
For the purposes of procedures that are based on this configuration, assume that both applications
have updated data at all Db2 locations. In the following problem scenarios, the error occurs after the
coordinator has recorded the commit decision, but before the affected participants have recorded the
commit decision. These participants are therefore indoubt.
Read one or more of the scenarios to learn how best to handle problems with indoubt threads in your own
environment.
Symptoms
A communication failure occurred between Seattle (SEA) and Los Angeles (LA) after the database access
thread (DBAT) at LA completed phase 1 of commit processing. At SEA, the TSO thread B (LUWID=16,
TOKEN=2) cannot complete the commit with the DBAT at LA 4.
At SEA, NetView alert A006 is generated, and message DSNL406 is displayed, indicating that an indoubt
thread exists at LA because of a communication failure. At LA, alert A006 is generated, and message
DSNL405 is displayed, indicating that a thread is in an indoubt state because of a communication failure
with SEA.
Environment
The following figure illustrates the environment for this scenario.
Figure: Distributed thread environment. Db2 at SEA (IBMSEADB20001) runs Allied Thread A (IMS, CONNID=SEAIMS01, CORRID=xyz, PLAN=IMSAPP01, NID=A5, LUWID=15) and Allied Thread B (TSO, CONNID=BATCH, CORRID=abc, PLAN=TSO APP01, LUWID=16). Db2 at SJ (IBMSJ0DB20001) and Db2 at LA (IBMLA0DB20001) each host database access threads (DBATs 1-4) for these two units of work.
At SEA, an IFCID 209 trace record is written. After the alert is generated and the message is displayed,
the thread completes the commit, which includes the DBAT at SJ 3. Concurrently, the thread is added to
the list of threads for which the SEA Db2 subsystem has an indoubt resolution responsibility. The thread
shows up in a DISPLAY THREAD report for indoubt threads. The thread also shows up in a DISPLAY
THREAD report for active threads until the application terminates.
The TSO application is informed that the commit succeeded. If the application continues and processes
another SQL request, it is rejected with an SQL code to indicate that it must roll back before any more SQL
requests can be processed. This is to ensure that the application does not proceed with an assumption
based on data that is retrieved from LA, or with the expectation that cursor positioning at LA is still intact.
At LA, an IFCID 209 trace record is written. After the alert is generated and the message displayed, the
DBAT 4 is placed in the indoubt state. All locks remain held until resolution occurs. The thread shows up
in a DISPLAY THREAD report for indoubt threads.
The Db2 subsystems, at both SEA and LA, periodically attempt to reconnect and automatically resolve
the indoubt thread. If the communication failure affects only the session that is being used by the TSO
application, and other sessions are available, automatic resolution occurs in a relatively short time. At this
time, message DSNL407 is displayed by both Db2 subsystems.
Symptoms
In this scenario, an indoubt thread at Los Angeles (LA) holds database resources that are needed by other
applications. The organization makes a heuristic decision about whether to commit or abort an indoubt
thread.
Many symptoms are possible, including:
• Message DSNL405 to indicate a thread in the indoubt state
• A DISPLAY THREAD report of active threads showing a larger-than-normal number of threads
• A DISPLAY THREAD report of indoubt threads continuing to show the same thread
• A DISPLAY DATABASE LOCKS report that shows a large number of threads that are waiting for the
locks that are held by the indoubt thread
• Some threads terminating due to timeout
• IMS and CICS transactions not completing
Environment
The following figure illustrates the environment for this scenario.
Figure: Distributed thread environment. Db2 at SEA (IBMSEADB20001) runs Allied Thread A (IMS, CONNID=SEAIMS01, CORRID=xyz, PLAN=IMSAPP01, NID=A5, LUWID=15) and Allied Thread B (TSO, CONNID=BATCH, CORRID=abc, PLAN=TSO APP01, LUWID=16). Db2 at SJ (IBMSJ0DB20001) and Db2 at LA (IBMLA0DB20001) each host database access threads (DBATs 1-4) for these two units of work.
Symptoms
When IMS is cold started and later reconnects with the SEA Db2 subsystem, IMS is not able to resolve the
indoubt thread with Db2. Message DSNM004I is displayed at the IMS master terminal.
Environment
The following figure illustrates the environment for this scenario.
Figure: Distributed thread environment. Db2 at SEA (IBMSEADB20001) runs Allied Thread A (IMS, CONNID=SEAIMS01, CORRID=xyz, PLAN=IMSAPP01, NID=A5, LUWID=15) and Allied Thread B (TSO, CONNID=BATCH, CORRID=abc, PLAN=TSO APP01, LUWID=16). Db2 at SJ (IBMSJ0DB20001) and Db2 at LA (IBMLA0DB20001) each host database access threads (DBATs 1-4) for these two units of work.
The abnormal termination of IMS has left one allied thread A at the SEA Db2 subsystem indoubt. This is
the thread whose LUWID=15. Because the SEA Db2 subsystem still has effective communication with the
Db2 subsystem at SJ, the LUWID=15 DBAT 1 at this subsystem is waiting for the SEA Db2 to communicate
the final decision and is not aware that IMS has failed. Also, the LUWID=15 DBAT at LA 2, which is
connected to SJ, is also waiting for SJ to communicate the final decision. This cannot be done until SEA
communicates the decision to SJ.
• The connection remains active.
• IMS applications can still access Db2 databases.
• Some Db2 resources remain locked out.
If the indoubt thread is not resolved, the IMS message queues can start to back up. If the IMS queues fill
to capacity, IMS terminates. Therefore, users must be aware of this potential difficulty and must monitor
IMS until the indoubt units of work are fully resolved.
If the command is rejected because network IDs are associated, use the same command again,
substituting the recovery ID for the network ID.
Related concepts
Duplicate IMS correlation IDs
Under certain circumstances, two threads can have the same correlation ID.
Symptoms
The Db2 subsystem at SEA is started with a conditional restart record in the BSDS to indicate a cold start:
• When the IMS subsystem reconnects, it attempts to resolve the indoubt thread that is identified in IMS
as NID=A5. IMS has a resource recovery element (RRE) for this thread. The SEA Db2 subsystem informs
IMS that it has no knowledge of this thread. IMS does not delete the RRE, and the RRE can be displayed
by using the IMS DISPLAY OASN command. The SEA Db2 subsystem also:
– Generates message DSN3005 for each IMS RRE for which Db2 has no knowledge
– Generates an IFCID 234 trace event
• When the Db2 subsystems at SJ and LA reconnect with SEA, each detects that the SEA Db2 subsystem
has cold started. Both the SJ Db2 and the LA Db2 subsystem:
– Display message DSNL411
– Generate alert A001
– Generate an IFCID 204 trace event
• A DISPLAY THREAD report of indoubt threads at both the SJ and LA Db2 subsystems shows the
indoubt threads and indicates that the coordinator has cold started.
Environment
The following figure illustrates the environment for this scenario.
Figure: Distributed thread environment. Db2 at SEA (IBMSEADB20001) runs Allied Thread A (IMS, CONNID=SEAIMS01, CORRID=xyz, PLAN=IMSAPP01, NID=A5, LUWID=15) and Allied Thread B (TSO, CONNID=BATCH, CORRID=abc, PLAN=TSO APP01, LUWID=16). Db2 at SJ (IBMSJ0DB20001) and Db2 at LA (IBMLA0DB20001) each host database access threads (DBATs 1-4) for these two units of work.
The abnormal termination of the SEA Db2 subsystem has left the two DBATs at SJ 1, 3, and the
LUWID=16 DBAT at LA 4 indoubt. The LUWID=15 DBAT at LA 2, connected to SJ, is waiting for the
SJ Db2 subsystem to communicate the final decision.
The IMS subsystem at SEA is operational and has the responsibility of resolving indoubt units with the
SEA Db2 subsystem.
The Db2 subsystems at both SJ and LA accept the cold start connection from SEA. Processing continues,
waiting for the heuristic decision to resolve the indoubt threads.
connection-name.correlation-id
pst#.psbname
Related concepts
Scenario: What happens when the wrong Db2 subsystem is cold started
When one Db2 subsystem, instead of another Db2 subsystem, is cold started, threads are left indoubt. An
organization that faces this situation can recover.
Scenario: What happens when the wrong Db2 subsystem is cold started
When one Db2 subsystem, instead of another Db2 subsystem, is cold started, threads are left indoubt. An
organization that faces this situation can recover.
The following figure illustrates the environment for this scenario.
Figure: Distributed thread environment. Db2 at SEA (IBMSEADB20001) runs Allied Thread A (IMS, CONNID=SEAIMS01, CORRID=xyz, PLAN=IMSAPP01, NID=A5, LUWID=15) and Allied Thread B (TSO, CONNID=BATCH, CORRID=abc, PLAN=TSO APP01, LUWID=16). Db2 at SJ (IBMSJ0DB20001) and Db2 at LA (IBMLA0DB20001) each host database access threads (DBATs 1-4) for these two units of work.
If the Db2 subsystem at SJ is cold started instead of the Db2 at SEA, the LA Db2 subsystem has the
LUWID=15 2 thread indoubt. The administrator can see that this thread did not originate at SJ, but that
it did originate at SEA. To determine the commit or abort action, the LA administrator requests that
DISPLAY THREAD TYPE(INDOUBT) be issued at the SEA Db2 subsystem, specifying LUWID=15. IMS
does not have any indoubt status for this thread because it completes the two-phase commit process
with the SEA Db2 subsystem.
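That request might be issued as in the following sketch, which filters the indoubt thread display by the logical unit of work ID:

```
-DISPLAY THREAD(*) TYPE(INDOUBT) LUWID(15)
```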
The Db2 subsystem at SEA tells the application that the commit succeeded.
When a participant cold starts, a Db2 coordinator continues to include, in its display of indoubt thread
information, all committed threads for which the cold-starting participant was believed to be indoubt.
These entries must be explicitly purged by issuing the RESET INDOUBT command. If a participant has an
indoubt thread that cannot be resolved because of a coordinator cold start, the administrator can request
a display of indoubt threads at the Db2 coordinator to determine the correct action.
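A purge of such an entry might look like the following sketch, where the LUWID identifies the thread whose status is to be purged:

```
-RESET INDOUBT LUWID(15)
```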
Related information
Scenario: Recovering from communication failure
A communication failure can cause an indoubt thread.
Scenario: Recovering from a Db2 outage at a requester that results in a Db2 cold start
Symptoms
When the Db2 subsystem at SEA reconnects with the Db2 at LA, indoubt resolution occurs for LUWID=16.
Both systems detect heuristic damage, and both generate alert A004; each writes an IFCID 207 trace
record. Message DSNL400 is displayed at LA, and message DSNL403 is displayed at SEA.
Causes
This scenario is based on the conditions described in “Scenario: Recovering from communication failure ”
on page 393.
The LA administrator is called to make a heuristic decision and decides to abort the indoubt thread with
LUWID=16. The decision is made without communicating with SEA to determine the proper action. The
thread at LA is aborted, whereas the threads at SEA and SJ are committed. Processing continues at all
systems. The Db2 subsystem at SEA has indoubt resolution responsibility with LA for LUWID=16.
Environment
The following figure illustrates the environment for this scenario.
Figure: Distributed thread environment. Db2 at SEA (IBMSEADB20001) runs Allied Thread A (IMS, CONNID=SEAIMS01, CORRID=xyz, PLAN=IMSAPP01, NID=A5, LUWID=15) and Allied Thread B (TSO, CONNID=BATCH, CORRID=abc, PLAN=TSO APP01, LUWID=16). Db2 at SJ (IBMSJ0DB20001) and Db2 at LA (IBMLA0DB20001) each host database access threads (DBATs 1-4) for these two units of work.
Notes:
1. Does not apply to START DB2. Commands that are issued from IMS must have the prefix /SSR.
Commands that are issued from CICS must have the prefix DSNC.
2. This applies when using outstanding WTOR.
3. The "Attachment facility unsolicited output" does not include "Db2 unsolicited output."
4. Use the z/OS command MODIFY jobname CICS command. The z/OS console must already be defined
as a CICS terminal.
5. Specify the output destination for the unsolicited output of the CICS attachment facility in the RDO.
You can issue many commands from the background within batch programs, such as the following types
of programs:
• z/OS application programs
• Authorized CICS programs
• IMS programs
• APF-authorized programs, such as a terminal monitor program (TMP)
• IFI application programs
Related tasks
Submitting work to Db2
Application programs that run under TSO, IMS, or CICS can use Db2 resources by executing embedded
SQL statements or Db2 and related commands.
Related reference
Executing the terminal monitor program (TSO/E Customization)
Writing JCL for command execution (TSO/E Customization)
Procedure
Specify the command prefix for the Db2 subsystem before the command.
For example, the following command starts the Db2 subsystem that uses -DSN1 as the command
prefix:
-DSN1 START DB2
Related reference
COMMAND PREFIX field (Db2 Installation and Migration)
Related information
About Db2 and related commands (Db2 Commands)
Introductory concepts
Common ways to interact with Db2 for z/OS (Introduction to Db2 for z/OS)
TSO attachment facility (Introduction to Db2 for z/OS)
Procedure
To issue commands from a TSO terminal, take one of the following actions:
• Issue a DSN command to start an explicit DSN session.
The DSN command can be issued in the foreground or background, when running under the TSO
terminal monitor program (TMP).
Invoking a DSN session with five retries at 30-second intervals
For example, the following TSO command invokes a DSN session, requesting five retries at 30-
second intervals:
DSN SYSTEM (DB2) RETRY (5)
1. TSO displays:
READY
2. You enter:
DSN
3. DSN displays its prompt:
DSN
4. You enter:
-DISPLAY THREAD
Figure 22. The ISPF panel for the DB2I Primary Option Menu
When you complete operations by using the DB2I panels, DB2I invokes CLISTs, which start the DSN
session and invoke appropriate subcommands.
Related concepts
DSN command processor (Db2 Application programming and SQL)
Related tasks
Running TSO application programs
You use the DSN command and a variety of DSN subcommands to run TSO applications.
Controlling TSO connections
z/OS does not provide commands for controlling or monitoring a TSO connection to Db2.
Related reference
DSN (TSO) (Db2 Commands)
The DB2I primary option menu (Introduction to Db2 for z/OS)
Related information
About Db2 and related commands (Db2 Commands)
Procedure
Use the DSNC transaction.
CICS can attach to only one Db2 subsystem at a time, so it does not use the Db2 command prefix.
Instead, each command that is entered through the CICS attachment facility must be preceded by a
hyphen (-).
The CICS attachment facility routes the commands to the connected Db2 subsystem and obtains the
command responses.
Example
For example, you enter the following command:
DSNC -DISPLAY THREAD
Related information
Issuing commands to Db2 using the DSNC transaction (CICS Transaction Server for z/OS)
Example
You enter the following command:
/SSR -DISPLAY THREAD
Related reference
COMMAND PREFIX field (Db2 Installation and Migration)
Related information
IMS commands
APF-authorized programs
As with IMS, Db2 commands (including START DB2) can be passed from an APF-authorized program
to multiple Db2 subsystems by the MGCRE (SVC 34) z/OS service. Thus, the value of the command
prefix identifies the particular subsystem to which the command is directed. The subsystem command
prefix is specified, as in IMS, when Db2 is installed (in the SYS1.PARMLIB member IEFSSNxx). Db2
supports the z/OS WTO command and response token (CART) to route individual Db2 command
response messages to the invoking application program. Use of the CART is necessary if multiple Db2
commands are issued from a single application program.
For example, to issue DISPLAY THREAD to the default Db2 subsystem from an APF-authorized
program that runs as a batch job, use the following code:
MODESUPV DS    0H
         MODESET MODE=SUP,KEY=ZERO
SVC34    SR    0,0
         MGCRE CMDPARM
         EJECT
CMDPARM  DS    0F
CMDFLG1  DC    X'00'
CMDLENG  DC    AL1(CMDEND-CMDPARM)
CMDFLG2  DC    X'0000'
CMDDATA  DC    C'-DISPLAY THREAD'
CMDEND   DS    0C
Related concepts
Submitting commands from monitor programs (Db2 Performance)
If a Db2 command is entered from an IMS or CICS terminal, the response messages can be directed to
different terminals. If the response includes more than one message, the following cases are possible:
• If the messages are issued in a set, the entire set of messages is sent to the IMS or CICS terminal that
entered the command. For example, DISPLAY THREAD issues a set of messages.
• If the messages are issued one after another, and not in a set, only the first message is sent to the
terminal that entered the command. Subsequent messages are routed to one or more z/OS consoles
using the WTO function. For example, START DATABASE issues several messages one after another.
You can choose alternative consoles to receive the subsequent messages by assigning them the routing
codes that are placed in the DSNZPxxx module when Db2 is installed. If you want to have all of the
messages available to the person who sent the command, route the output to a console near the IMS or
CICS master terminal operator (MTO).
For APF-authorized programs that run in batch jobs, command responses are returned to the master
console and to the system log if hardcopy logging is available. Hardcopy logging is controlled by the z/OS
system command VARY.
Related reference
z/OS VARY command (MVS System Commands)
-DISPLAY THREAD (Db2) (Db2 Commands)
-START DATABASE (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
Starting Db2
You must start Db2 so that the subsystem is active and available to TSO applications and to other
subsystems such as IMS and CICS.
Procedure
Issue the START DB2 command by using one of the following methods:
• Issue the START DB2 command from a z/OS console that is authorized to issue system control
commands (z/OS command group SYS).
The command must be entered from the authorized console and cannot be submitted through JES or
TSO.
You cannot start Db2 from a JES batch job or with a z/OS START command. Such an attempt is likely to
start an address space for Db2 that abends (most likely with reason code X'00E8000F').
• Start Db2 from an APF-authorized program by passing a START DB2 command to the MGCRE (SVC 34)
z/OS service.
Related tasks
Installation step 15: Start the Db2 subsystem (Db2 Installation and Migration)
Migration step 18: Start Db2 12 (Db2 Installation and Migration)
Starting the Db2 subsystem (Db2 Installation and Migration)
Related reference
-START DB2 (Db2) (Db2 Commands)
If any of the nnnn values in message DSNR004I are not zero, message DSNR007I is issued to provide the
restart status table.
Procedure
Issue the START DB2 command with one of the following options:
ACCESS(MAINT)
To limit access to users who have installation SYSADM or installation SYSOPR authority.
Users with those authorities can do maintenance operations such as recovering a database or taking
image copies. To restore access to all users, stop Db2 and then restart it, either omitting the ACCESS
keyword or specifying ACCESS(*).
ACCESS(*)
To allow all authorized users to connect to Db2.
Procedure
Cancel the system services address space and the distributed data facility address space from the
console.
What to do next
After Db2 stops, check the start procedures of all three Db2 address spaces for correct JCL syntax.
To accomplish this check, compare the expanded JCL in the SYSOUT output with the correct JCL provided
in MVS JCL Reference. Then, take the member name of the erroneous JCL procedure, which is also
provided in the SYSOUT data set, to the system programmer who maintains your procedure libraries. After
finding out which PROCLIB contains the JCL in question, locate the procedure and correct it.
Stopping Db2
Before Db2 stops, all Db2-related write to operator with reply (WTOR) messages must receive replies.
Procedure
Issue one of the following STOP DB2 commands:
If the STOP DB2 command is not issued from a z/OS console, messages DSNY002I and DSN9022I are not
sent to the IMS or CICS master terminal operator (MTO). They are routed only to the z/OS console that
issued the START DB2 command.
What to do next
Before restarting Db2, the following message must also be returned to the z/OS console that is authorized
to enter the START DB2 command:
Related concepts
Normal termination
In a normal termination, Db2 stops all activity in an orderly way.
Related reference
-START DB2 (Db2) (Db2 Commands)
-STOP DB2 (Db2) (Db2 Commands)
Procedure
To submit work by using DB2I:
1. Log on to TSO by following your local procedures.
2. Enter ISPF.
3. Enter parameters to control operations.
Related concepts
DSN command processor (Db2 Application programming and SQL)
Related tasks
Issuing commands from TSO terminals
Procedure
To run TSO application programs:
1. Log on to TSO.
2. Enter the DSN command.
3. Respond to the prompt by entering the RUN subcommand.
Results
The terminal monitor program (TMP) attaches the Db2-supplied DSN command processor, which in turn
attaches the application program.
Example
The following example runs application program DSN8BC3. The program is in library prefix.RUNLIB.LOAD,
which is the name that is assigned to the load module library.
Example
The following example shows a TMP job:
In this example:
• IKJEFT01 identifies an entry point for TSO TMP invocation. Alternative entry points that are defined
by TSO are also available to provide additional return code and abend termination processing options.
These options permit the user to select the actions to be taken by the TMP on completion of command
or program execution.
Because invocation of the TSO TMP using the IKJEFT01 entry point might not be suitable for all
user environments, refer to the TSO publications to determine which TMP entry point provides the
termination processing options that are best suited to your batch execution environment.
• USER=SYSOPR identifies the user ID (SYSOPR in this case) for authorization checks.
• DYNAMNBR=20 indicates the maximum number of data sets (20 in this case) that can be dynamically
allocated concurrently.
• z/OS checkpoint and restart facilities do not support the execution of SQL statements in batch programs
that are invoked by the RUN subcommand. If batch programs stop because of errors, Db2 backs out any
changes that were made since the last commit point.
• (ssid) is the subsystem name or group attachment name.
Related tasks
Backing up and recovering your data
Db2 supports recovering data to its current state or to an earlier state. You can recover table spaces,
indexes, index spaces, partitions, data sets, and the entire system. Developing backup and recovery
procedures at your site is critical in order to avoid costly and time-consuming loss of data.
Related reference
Executing the terminal monitor program (TSO/E Customization)
Writing JCL for command execution (TSO/E Customization)
Procedure
Either link-edit or make available a load module known as the call attachment language interface, or
DSNALI. Alternatively, you can link-edit with the Universal Language Interface program (DSNULI).
When the language interface is available, your program can use CAF to connect to Db2 in the following
ways:
• DSNALI only: Implicitly, by including SQL statements or IFI calls in your program just as you would any
program.
• DSNALI or DSNULI: Explicitly, by writing CALL DSNALI or CALL DSNULI statements.
Related concepts
Call attachment facility (Db2 Application programming and SQL)
Procedure
Either link-edit or make available a load module known as the RRSAF language interface, or DSNRLI.
Alternatively, you can link-edit with the Universal Language Interface program (DSNULI).
When the language interface is available, your program can use RRSAF to connect to Db2 in the following
ways:
• DSNRLI only: Implicitly, by including SQL statements or IFI calls in your program just as you would any
program.
• DSNRLI or DSNULI: Explicitly, by using CALL DSNRLI or CALL DSNULI statements to invoke RRSAF
functions. Those functions establish a connection between Db2 and RRS and allocate Db2 resources.
Related concepts
Resource Recovery Services attachment facility (Db2 Application programming and SQL)
Related tasks
Controlling RRS connections
You can start or restart a Resource Recovery Services attachment facility (RRSAF) connection at any time
after Resource Recovery Services (RRS) is started.
Adding a task
Use the stored procedure ADMIN_TASK_ADD to define new scheduled tasks. The parameters that you
use when you call the stored procedure define the schedule and the work for each task.
Table 51. Relationship of null and non-null values for scheduling parameters

Parameter specified   Required null parameters
interval              point-in-time, trigger-task-name, trigger-task-cond, trigger-task-code
point-in-time         interval, trigger-task-name, trigger-task-cond, trigger-task-code
If interval, point-in-time, trigger-task-name, trigger-task-cond, and trigger-task-code are all null, max-
invocations must be set to 1.
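These parameter combinations can be sketched as a small validation routine. The following Python sketch is illustrative only and is not part of Db2 or the administrative task scheduler; the function and its parameter names (which mirror the ADMIN_TASK_ADD parameters) are hypothetical:

```python
def validate_schedule(interval=None, point_in_time=None,
                      trigger_task_name=None, trigger_task_cond=None,
                      trigger_task_code=None, max_invocations=None):
    """Illustrative check of the ADMIN_TASK_ADD scheduling-parameter rules."""
    scheduling = (interval, point_in_time, trigger_task_name,
                  trigger_task_cond, trigger_task_code)
    # If all scheduling parameters are null, max-invocations must be 1.
    if all(p is None for p in scheduling):
        return max_invocations == 1
    # interval and point-in-time are mutually exclusive (Table 51).
    if interval is not None and point_in_time is not None:
        return False
    # The trigger parameters must be null when interval or
    # point-in-time is specified (Table 51).
    if (interval is not None or point_in_time is not None) and \
            trigger_task_name is not None:
        return False
    # trigger-task-cond and trigger-task-code must both be null
    # or both be non-null.
    if (trigger_task_cond is None) != (trigger_task_code is None):
        return False
    return True
```

For example, a one-time task (all scheduling parameters null) is valid only with max-invocations set to 1, and a task cannot specify both interval and point-in-time.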
You can restrict scheduled executions either by defining a window of time during which execution is
permitted or by specifying how many times a task can execute. Three parameters control restrictions:
• begin-timestamp: earliest permitted execution time
• end-timestamp: latest permitted execution time
• max-invocations: maximum number of executions
The begin-timestamp and end-timestamp parameters are timestamps that define a window of time during
which tasks can start. Before and after this window, the task will not start even if the schedule parameters
are met. If begin-timestamp is null, the window begins at the time when the task is added, and executions
can start immediately. If end-timestamp is null, the window extends infinitely into the future, so that
repetitive or triggered executions are not limited by time. Timestamps must either be null values or future
times, and end-timestamp cannot be earlier than begin-timestamp.
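The window rules above can be sketched as follows. This is an illustrative sketch, not a Db2 API; may_start is a hypothetical helper name:

```python
from datetime import datetime

def may_start(now, added_at, begin_ts=None, end_ts=None):
    """Return True if a task may start at 'now', given its execution window.

    A null begin-timestamp opens the window at the time the task is added;
    a null end-timestamp leaves the window open indefinitely.
    """
    begin = begin_ts if begin_ts is not None else added_at
    if now < begin:
        return False
    if end_ts is not None and now > end_ts:
        return False
    return True
```

For example, with a null begin-timestamp, a task added at 6:00 AM may start immediately, but not before it was added; with an end-timestamp of 11:00 PM, no execution starts after that time.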
For repetitive or triggered tasks, the number of executions can be limited using the max-invocations
parameter. In this case, the task executes no more than the number of times indicated by the parameter,
even if the schedule and the window of time would require the task to be executed. Executions that
Procedure
Connect to the Db2 subsystem with sufficient authorization to call the ADMIN_TASK_ADD stored
procedure.
The following task definitions show some common scheduling options.
To define a task that executes only one time:
Set max-invocations to 1. Optionally, provide a value for the begin-timestamp parameter to control
when execution happens. Leave other parameters null.
For example, if max-invocations is set to 1 and begin-timestamp is set to 2008-05-27-06.30.0, the
task executes at 6:30 AM on May 27, 2008.
With this definition, the task executes one time. If begin-timestamp has been provided, execution
happens as soon as permitted.

To define a regular repetitive execution:
Set interval to the number of minutes that you want to pass between the start of one execution and
the start of the next execution. Optionally, provide values for the max-invocations, begin-timestamp,
and end-timestamp parameters to limit execution. Leave other parameters null.
For example, if interval is set to 5 and begin-timestamp is set to 2008-05-27-06.30.0, the task
executes at 6:30 AM on May 27, 2008, then again at 6:35, 6:40, and so forth.
With this definition, the task executes every interval minutes, so long as the previous execution has
finished. If the previous execution is still in progress, the new execution is postponed interval
minutes. Execution continues to be postponed until the running task completes.

To define an irregular repetitive execution:
Set point-in-time to a valid UNIX cron format string. The string specifies a set of times. Optionally,
provide values for the max-invocations, begin-timestamp, and end-timestamp parameters to limit
execution. Leave other parameters null.
For example, if point-in-time is set to 0 22 * * 1,5, the task executes at 10:00 PM each Monday and
Friday.
With this definition, the task executes at each time specified, so long as the previous execution has
finished. If the previous execution is still in progress, the new execution is skipped. Subsequent
executions continue to be skipped until the running task completes.

To define an execution that is triggered when another task completes:
Set trigger-task-name to the name of the triggering task. Optionally, set trigger-task-cond and
trigger-task-code to limit execution based on the result of the triggering task. The trigger-task-cond
and trigger-task-code parameters must either both be null or both be non-null. Optionally, provide
values for the max-invocations, begin-timestamp, and end-timestamp parameters to limit execution.
Leave other parameters null.
For example, assume that a scheduled INSERT job has a task name of test_task. If trigger-task-name
is test_task, trigger-task-cond is EQ, and trigger-task-code is 0, then this task executes when the
INSERT job completes with a return code of 0.
With this definition, the task executes each time the trigger occurs, so long as the previous
execution has finished. If the previous execution is still in progress, the new execution is skipped.
Subsequent executions continue to be skipped until the running task completes.
Related concepts
UNIX cron format
The UNIX cron format is a way of specifying time for the point-in-time parameter of the
ADMIN_TASK_ADD stored procedure.
Related tasks
Choosing an administrative task scheduler in a data sharing environment
In a data sharing group, tasks can be added, removed, or executed in any of the administrative task
schedulers with the same result. Tasks are not localized to one administrative task scheduler. A task
Procedure
Specify the associated Db2 subsystem ID in the db2-ssid parameter when you schedule the task.
Related concepts
UNIX cron format
The UNIX cron format is a way of specifying time for the point-in-time parameter of the
ADMIN_TASK_ADD stored procedure.
Related tasks
Defining task schedules
You can use different combinations of parameters to define schedules for task executions.
Related reference
Scheduling capabilities of the administrative task scheduler
The administrative task scheduler can execute a task once or many times, at fixed points in time, or in
response to events.
A field can contain a list of values, for example 1,2,5,9, or one or more ranges of values, for
example 0-4,8-12.
Unrestricted range
A field can contain an asterisk (*), which represents all possible values in the field.
The day of a command's execution can be specified by two fields: day of month and day of week. If both
fields are restricted by the use of a value other than the asterisk, the command will run when either field
matches the current time.
For example, the value 30 4 1,15 * 5 causes a command to run at 4:30 AM on the 1st and 15th of each
month, plus every Friday.
Step values
Step values can be used in conjunction with ranges. The syntax range/step defines the range and an
execution interval.
If you specify first-last/step, execution takes place at first, then at all successive values that are distant
from first by step, until last.
Example
To specify command execution every other hour, use 0-23/2. This expression is equivalent to the
value 0,2,4,6,8,10,12,14,16,18,20,22.
If you specify */step, execution takes place at every interval of step through the unrestricted range.
Example
As an alternative to 0-23/2 for execution every other hour, use */2.
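The field syntax above — lists, ranges, the unrestricted range, step values, and the day-of-month/day-of-week rule — can be sketched in Python. This is an illustrative sketch, not part of Db2 or the administrative task scheduler; the function names are hypothetical:

```python
def cron_field_matches(field, value, minimum, maximum):
    """Check whether 'value' matches one UNIX cron field expression.

    Supports lists (1,2,5,9), ranges (0-4,8-12), the unrestricted
    range (*), and step values such as 0-23/2 or */2.
    """
    for part in field.split(","):
        spec, _, step = part.partition("/")
        step = int(step) if step else 1
        if spec == "*":
            first, last = minimum, maximum
        elif "-" in spec:
            first, last = (int(n) for n in spec.split("-"))
        else:
            first = last = int(spec)
        if value in range(first, last + 1, step):
            return True
    return False

def cron_day_matches(day_of_month_field, day_of_week_field, dom, dow):
    """If both day fields are restricted (not *), either one may match."""
    dom_ok = cron_field_matches(day_of_month_field, dom, 1, 31)
    dow_ok = cron_field_matches(day_of_week_field, dow, 0, 7)
    if day_of_month_field != "*" and day_of_week_field != "*":
        return dom_ok or dow_ok
    return dom_ok and dow_ok
```

With the day fields of 30 4 1,15 * 5, the day matches on the 1st and 15th of the month and on every Friday (cron day-of-week 5), as in the example above.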
Related tasks
Defining task schedules
You can use different combinations of parameters to define schedules for task executions.
Choosing an administrative task scheduler in a data sharing environment
In a data sharing group, tasks can be added, removed, or executed in any of the administrative task
schedulers with the same result. Tasks are not localized to one administrative task scheduler. A task
can be added by one administrative task scheduler, and then executed by any of the administrative task
schedulers that are in the data sharing group.
Related reference
Scheduling capabilities of the administrative task scheduler
Procedure
Connect to the Db2 subsystem with sufficient authorization to call the function ADMIN_TASK_LIST.
The function contacts the administrative task scheduler to update the Db2 task list in the table
SYSIBM.ADMIN_TASKS, if necessary, and then reads the tasks from the Db2 task list. The parameters
that were used to create the task are column values of the returned table. The table also includes the
authorization ID of the task creator, in the CREATOR column, and the time that the task was created, in
the LAST_MODIFIED column.
Related reference
ADMIN_TASK_LIST (Db2 SQL)
Procedure
To determine the last execution status of a scheduled task:
1. Issue the ADMIN_TASK_STATUS() table function to generate the status table.
2. Select the rows in the table that correspond to the task name.
Tip: You can relate the task execution status to the task definition by joining the output tables from the
ADMIN_TASK_LIST and ADMIN_TASK_STATUS() table functions on the TASK_NAME column.
Results
The table that is created by the ADMIN_TASK_STATUS() table function indicates the last execution of
scheduled tasks. Each row is indexed by the task name and contains the last execution status of the
corresponding task.
Procedure
To list multiple execution statuses of scheduled tasks:
1. Issue the ADMIN_TASK_STATUS(MAX_HISTORY) table function to generate the status table.
The max-history parameter specifies the number of execution statuses that you want to view.
2. Select the rows in the table that correspond to the task name.
To order the execution statuses, you can use the NUM_INVOCATIONS and START_TIMESTAMP
columns in the table that is returned.
If a task ran fewer times than the max-history value, this function returns all of the execution statuses.
If the SYSIBM.ADMIN_TASKS_HIST table is not available, this function returns only the last five
execution statuses.
Procedure
Call the ADMIN_TASK_OUTPUT table function.
This user-defined table function returns up to one row for each output parameter of the stored procedure
and up to one row for each column value of each row of each result set of the stored procedure. If
the output values and the result set values are too large to be stored in the OUTPUT column of the
SYSIBM.ADMIN_TASKS_HIST table, only the last rows of the result sets are returned. The column and
parameter values are returned as strings.
Related reference
ADMIN_TASK_OUTPUT (Db2 SQL)
Procedure
Call the ADMIN_TASK_UPDATE stored procedure.
If the task that you want to update is running, the changes go into effect after the current execution
finishes.
Related reference
ADMIN_TASK_UPDATE stored procedure (Db2 SQL)
Procedure
Issue the ADMIN_TASK_CANCEL stored procedure on the Db2 subsystem that is specified in the
DB2_SSID column of the task status.
For a task that is running, the stored procedure cancels the Db2 thread or the JES job that the task runs
in, and issues a return code of 0 (zero). If the task is not running or if cancellation of the task cannot be
initiated, the stored procedure issues a return code of 12.
Related reference
ADMIN_TASK_CANCEL stored procedure (Db2 SQL)
Procedure
To remove a scheduled task:
1. Optional: Issue the following SQL statement to identify tasks that will never execute again:
SELECT T.TASK_NAME
FROM TABLE (DSNADM.ADMIN_TASK_LIST()) T,
TABLE (DSNADM.ADMIN_TASK_STATUS()) S
WHERE T.TASK_NAME = S.TASK_NAME AND
(S.NUM_INVOCATIONS = T.MAX_INVOCATIONS OR
T.END_TIMESTAMP < CURRENT TIMESTAMP) AND
STATUS <> 'RUNNING'
Procedure
Use one of the following commands to start the administrative task scheduler:
• To start an administrative task scheduler that is named admtproc from the operator's console using
the default tracing option, issue the MVS system command:
start admtproc
• To start an administrative task scheduler that is named admtproc from the operator's console with
tracing enabled, issue the MVS system command:
start admtproc,trace=on
• To start an administrative task scheduler that is named admtproc from the operator's console with
tracing disabled, issue the MVS system command:
start admtproc,trace=off
Results
When the administrative task scheduler starts, message DSNA671I displays on the console.
Procedure
To stop the administrative task scheduler:
• Recommended method: To stop an administrative task scheduler that is named admtproc from the
operator's console, issue the following MVS system command:
modify admtproc,appl=shutdown
• Alternative method: Issue the following MVS system command:
stop admtproc
Procedure
To enable or disable tracing for problem determination:
• To start a trace for an administrative task scheduler that is named admtproc, issue the following MVS
system command:
modify admtproc,appl=trace=on
• To stop the trace, issue the following MVS system command:
modify admtproc,appl=trace=off
• To configure the system so that tracing starts automatically when the administrative task scheduler
starts, modify the procedure parameter TRACE in the JCL job that starts the administrative task
scheduler.
This job has the name that was assigned when the administrative task scheduler was installed. The job
was copied into one of the PROCLIB libraries during the installation. Specify TRACE=ON.
To disable tracing, change the parameter to TRACE=OFF.
Procedure
To recover the task list if it is lost or damaged:
• To recover if the ADMIN_TASKS task list is corrupted:
a) Create a new and operable version of the table.
b) Grant SELECT, UPDATE, INSERT and DELETE privileges on the table to the administrative task
scheduler started task user.
As soon as the ADMIN_TASKS table is accessible again, the administrative task scheduler performs an
autonomic recovery of the table using the content of the VSAM task list.
• To recover if the VSAM file is corrupted, create an empty version of the VSAM task list.
As soon as the VSAM task list is accessible again, the administrative task scheduler performs an
autonomic recovery using the content of the ADMIN_TASKS task list.
• If both task lists (the VSAM data set and the ADMIN_TASKS table) are corrupted and inaccessible, the
administrative task scheduler is no longer operable. Messages DSNA681I and DSNA683I display on
the console and the administrative task scheduler terminates. To recover from this situation:
a) Create an empty version of the VSAM task list.
b) Recover the table space DSNADMDB.DSNADMTS, where the ADMIN_TASKS table is located.
c) Restart the administrative task scheduler.
As soon as both task lists are accessible again, the administrative task scheduler performs an
autonomic recovery of the VSAM task list using the content of the recovered ADMIN_TASKS table.
Related tasks
Installation step 24: Set up the administrative task scheduler (Db2 Installation and Migration)
Symptoms
A task was scheduled successfully, but the action did not complete or did not complete correctly.
Symptoms
An SQL code is returned. When SQLCODE is -443, the error message cannot be read directly, because only
a few characters are available.
Symptoms
An SQL code is returned.
Figure: Association between Db2 and the administrative task scheduler. The Db2 started task
(DB2AMSTR) starts the scheduler started task (DB2AADMT). The subsystem parameter
ADMTPROC = DB2AADMT associates Db2 with the scheduler, and the scheduler's DB2SSID = DB2A
parameter identifies the Db2 subsystem (SSID = DB2A) that it calls.
Related reference
ADMIN_TASK_ADD stored procedure (Db2 SQL)
ADMIN_TASK_REMOVE stored procedure (Db2 SQL)
Figure: Start and stop behavior of the administrative task scheduler. At the RRSAF start event, if a
scheduler with the name in ADMTPROC is not already running, it is started and connects to Db2. While
connected, the scheduler executes both JCL jobs and stored procedures; after the RRSAF stop event,
it executes JCL jobs only.
If you want the administrative task scheduler to terminate when Db2 is stopped, you can specify the
STOPONDB2STOP parameter in the started task before restarting the administrative task scheduler. This
parameter has no value. You specify this parameter by entering STOPONDB2STOP without an equal sign
(=) or a value. When you specify this parameter, the administrative task scheduler terminates after it
finishes executing the tasks that are running and after executing the tasks that are triggered by Db2
stopping. When Db2 starts again, the administrative task scheduler is restarted.
Important: When you use the STOPONDB2STOP parameter to stop the administrative task scheduler
when Db2 is stopped, JCL tasks do not run, even if they could have run successfully had an
administrative task scheduler remained active.
Related tasks
Installation step 24: Set up the administrative task scheduler (Db2 Installation and Migration)
Figure: Two administrative task schedulers (DB2AADMT and DB2BADMT) in a data sharing group. Each
has its own security settings (DFLTUID = ...) and external task list parameter, but both reference
the same shared VSAM task list (ADMTDD1 = prefix.TASKLIST).
Tasks are not localized to an administrative task scheduler. They can be added, removed, or executed in
any of the administrative task schedulers in the data sharing group with the same result. However, you
can force the task to execute on a given administrative task scheduler by specifying the associated Db2
subsystem ID in the DB2SSID parameter when you schedule the task. The tasks that have no affinity to a
given Db2 subsystem are executed among all administrative task schedulers. Their distribution cannot be
predicted.
Figure: The administrative task scheduler submits JCL tasks to the JES reader for execution.
Related tasks
Installation step 22: Set up Db2-supplied routines (Db2 Installation and Migration)
Migration step 24: Set up Db2-supplied routines (Db2 Installation and Migration)
Installation step 24: Set up the administrative task scheduler (Db2 Installation and Migration)
Related reference
z/OS UNIX System Services Planning
Figure: User interfaces to the administrative task scheduler. Applications call the ADMIN_TASK_ADD
stored procedure to add a task and the ADMIN_TASK_REMOVE stored procedure to remove a task by name.
Through SQL, the user-defined table functions ADMIN_TASK_LIST() and ADMIN_TASK_STATUS() cause the
scheduler to refresh the task lists and return task information.
The minimum permitted value for the MAXTHD parameter is 1, but this value should not be lower than
the maximum number of tasks that you expect to execute simultaneously. If there are more tasks to
be executed simultaneously than there are available sub-threads, some tasks will not start executing
immediately. The administrative task scheduler tries to find an available sub-thread within one minute of
when the task is scheduled for execution. As a result, multiple short tasks might be serialized in the same
sub-thread, provided that their total execution time does not exceed that one-minute window.
The parameters of the started task are not positional. Place parameters in a single string separated by
blank spaces.
If you receive this message, increase the MAXTHD parameter value and restart the administrative task
scheduler.
Related tasks
Installation step 24: Set up the administrative task scheduler (Db2 Installation and Migration)
Related information
DSNA677I (Db2 Messages)
DSNA678I (Db2 Messages)
Procedure
To schedule execution of a stored procedure:
1. Add a task for the administrative task scheduler by using the ADMIN_TASK_ADD stored procedure.
When you add your task, specify which stored procedure to run and when to run it.
Use one of the following parameters or groups of parameters of ADMIN_TASK_ADD to control when
the stored procedure is run:
Option Description
interval The stored procedure is to execute at the specified regular interval.
point-in-time The stored procedure is to execute at the specified times.
trigger-task-name The stored procedure is to execute when the specified task occurs.
trigger-task-name trigger-task- The stored procedure is to execute when the specified task and
cond trigger-task-code task result occur.
Optionally, you can also use one or more of the following parameters to control when the stored
procedure runs:
begin-timestamp
Earliest permitted execution time
end-timestamp
Latest permitted execution time
max-invocations
Maximum number of executions
When the specified time or event occurs for the stored procedure to run, the administrative task
scheduler calls the stored procedure in Db2.
2. Optional: After the task finishes execution, check the status by using the ADMIN_TASK_STATUS
function.
This function returns a table with one row that indicates the last execution status for each
scheduled task. If the scheduled task is a stored procedure, the JOB_ID, MAXRC, COMPLETION_TYPE,
SYSTEM_ABENDCD, and USER_ABENDCD fields contain null values. In the case of a Db2 error, the
SQLCODE, SQLSTATE, SQLERRMC, and SQLERRP fields contain the information that Db2 returned from
calling the stored procedure.
//CEEOPTS DD *
ENVAR("TZ=MEZ-1MESZ,M3.5.0,M10.5.0")
Related tasks
Using the CEEOPTS DD statement (z/OS Language Environment Customization)
Related reference
ENVAR (z/OS Language Environment Customization)
Procedure
Issue the START DATABASE, STOP DATABASE, or DISPLAY DATABASE commands.
START DATABASE
Makes a database or individual partitions available. Also removes pages from the logical page list
(LPL).
DISPLAY DATABASE
Displays status, user, and locking information for a database.
STOP DATABASE
Makes a database or individual partitions unavailable after existing users have quiesced. Db2 also
closes and deallocates the data sets.
Related tasks
Monitoring databases
You can use the DISPLAY DATABASE command to obtain information about the status of databases
and the table spaces and index spaces within each database. If applicable, the output also includes
information about physical I/O errors for those objects.
Starting databases
Issue the START DATABASE (*) command to start all databases for which you have the STARTDB privilege.
Making objects unavailable
You can make databases, table spaces, and index spaces unavailable by using the STOP DATABASE
command.
Related reference
-START DATABASE (Db2) (Db2 Commands)
-DISPLAY DATABASE (Db2) (Db2 Commands)
-STOP DATABASE (Db2) (Db2 Commands)
Example
For example, the following command starts two partitions of table space DSN8S12E in the database
DSN8D12A:
Related reference
-START DATABASE (Db2) (Db2 Commands)
-STOP DATABASE (Db2) (Db2 Commands)
Procedure
In cases when the object was explicitly stopped, you can make it available again by issuing the START
DATABASE command.
Example
For example, the following command starts all table spaces and index spaces in database DSN8D12A for
read-only access:
Related reference
-START DATABASE (Db2) (Db2 Commands)
-STOP DATABASE (Db2) (Db2 Commands)
Related information
DSN9022I (Db2 Messages)
Procedure
Issue the START DATABASE command with the ACCESS(FORCE) option.
This command releases most restrictions for the named objects. These objects must be explicitly named
in a list following the SPACENAM option.
Example
For example:
Db2 cannot process the START DATABASE ACCESS(FORCE) request if postponed-abort or indoubt units
of recovery exist. The RESTP (restart-pending) status and the AREST (advisory restart-pending) status
remain in effect until either automatic backout processing completes or you perform one of the following
actions:
Related tasks
Resolving postponed units of recovery
You can postpone some of the backout work that is associated with long-running units of work during
system restart by using the LBACKOUT subsystem parameter. By delaying such backout work, the Db2
subsystem can be restarted more quickly.
Related reference
-START DATABASE (Db2) (Db2 Commands)
Monitoring databases
You can use the DISPLAY DATABASE command to obtain information about the status of databases
and the table spaces and index spaces within each database. If applicable, the output also includes
information about physical I/O errors for those objects.
Procedure
To monitor databases:
1. Issue the DISPLAY DATABASE command as follows:
D1 TS RW,UTRO
D2 TS RW
D3 TS STOP
D4 IX RO
D5 IX STOP
D6 IX UT
LOB1 LS RW
******* DISPLAY OF DATABASE dbname ENDED **********************
11:45:15 DSN9022I - DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION
DSNT360I = ****************************************
DSNT361I = * DISPLAY DATABASE SUMMARY 483
* GLOBAL OVERVIEW
DSNT360I = ****************************************
DSNT362I = DATABASE = DB486A STATUS = RW 485
DBD LENGTH = 4028
TS486A TS 0004
IX486A IX L0004
IX486B IX 0004
TS486C TS
IX486C IX
******* DISPLAY OF DATABASE DB486A ENDED *********************
DSN9022I = DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION
The display indicates that five objects are in database DB486A: two table spaces and three
indexes. Table space TS486A has four parts, and table space TS486C is nonpartitioned. Index
IX486A is a nonpartitioning index for table space TS486A, and index IX486B is a partitioned index
with four parts for table space TS486A. Index IX486C is a nonpartitioned index for table space
TS486C.
Related reference
Advisory or restrictive states (Db2 Utilities)
-DISPLAY DATABASE (Db2) (Db2 Commands)
Procedure
Issue the DISPLAY DATABASE command with the SPACENAM option.
Example
For example, you can issue a command that names partitions 2, 3, and 4 in table space TPAUGF01 in
database DBAUGF01:
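Such a command can be expressed as:

-DISPLAY DATABASE(DBAUGF01) SPACENAM(TPAUGF01) PART(2,3,4)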
DSNT360I : ***********************************
DSNT361I : * DISPLAY DATABASE SUMMARY
* GLOBAL USE
DSNT360I : ***********************************
DSNT362I : DATABASE = DBAUGF01 STATUS = RW
DBD LENGTH = 8066
DSNT397I :
NAME TYPE PART STATUS CONNID CORRID USERID
-------- ---- ----- ----------------- -------- ------------ --------
Procedure
Issue the DISPLAY DATABASE command.
Example
For example, issue a command that names table space TSPART in database DB01:
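Assuming that lock information is wanted, as in the discussion that follows, the command might be:

-DISPLAY DATABASE(DB01) SPACENAM(TSPART) LOCKS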
Use the LOCKS ONLY keywords on the DISPLAY DATABASE command to display only spaces that have
locks. You can replace the LOCKS keyword with USE, CLAIMERS, LPL, or WEPR to display only
objects that fit those criteria. Use DISPLAY DATABASE as follows:
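For instance, to limit the display to spaces that hold locks:

-DISPLAY DATABASE(*) SPACENAM(*) LOCKS ONLY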
Procedure
Issue the DISPLAY DATABASE command with the LPL option.
The ONLY option restricts the output to objects that have LPL pages.
Example
For example:
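Using the database name that appears in the output below for illustration:

-DISPLAY DATABASE(DBFW8401) SPACENAM(*) LPL ONLY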
DSNT360I = ***********************************************************
DSNT361I = * DISPLAY DATABASE SUMMARY
* GLOBAL LPL
DSNT360I = ***********************************************************
DSNT362I = DATABASE = DBFW8401 STATUS = RW,LPL
DBD LENGTH = 8066
DSNT397I =
NAME TYPE PART STATUS LPL PAGES
-------- ---- ----- ----------------- ------------------
The display indicates that the pages that are listed in the LPL PAGES column are unavailable for access.
Related reference
-DISPLAY DATABASE (Db2) (Db2 Commands)
Procedure
Use one of the following methods.
• Start the object with ACCESS(RW) or ACCESS(RO). That command is valid even if the table space is
already started.
When you issue the START DATABASE command, message DSNI006I is displayed, indicating that LPL
recovery has begun. If second pass log apply for LPL recovery starts, message DSNI051I is displayed.
Message DSNI022I is displayed periodically to give you the progress of the recovery. When recovery is
complete, message DSNI021I is displayed.
When you issue the START DATABASE command for a LOB table space that is defined as LOG NO,
and Db2 detects that log records that are required for LPL recovery are missing due to the LOG NO
attribute, the LOB table space is placed in AUXW status, and the LOB is invalidated.
• Run the RECOVER or REBUILD INDEX utility on the object.
The only exception to this is when a logical partition of a nonpartitioned index is in the LPL and has
RECP status. If you want to recover the logical partition by using REBUILD INDEX with the PART
keyword, you must first use the START DATABASE command to clear the LPL pages.
• Run the LOAD utility with the REPLACE option on the object.
• Issue an SQL DROP statement for the object.
Related concepts
Pages in error and the logical page list (Db2 Data Sharing Planning and Administration)
Related reference
-START DATABASE (Db2) (Db2 Commands)
Procedure
Issue the DISPLAY DATABASE command.
Example
For example:
Procedure
Issue the STOP DATABASE command with the appropriate options.
To stop a physical partition of a table space:
Use the PART option.
To stop a physical partition of an index space:
Use the PART option.
To stop a logical partition within a nonpartitioning index that is associated with a partitioned
table space:
Use the PART option.
To stop any object as quickly as possible:
Use the AT(COMMIT) option.
To stop user-defined databases:
Start database DSNDB01 and table spaces DSNDB01.DBD01, DSNDB01.SYSDBDXA, and
DSNDB01.SYSLGRNX before you stop user-defined databases. If you do not start these objects, you
receive message DSNI003I. Resolve the problem and run the job again.
To stop the work file database:
Start database DSNDB01 and table spaces DSNDB01.DBD01, DSNDB01.SYSDBDXA, and
DSNDB01.SYSLGRNX before you stop the work file database. If you do not start these objects, you
receive message DSNI003I. Resolve the problem and run the job again.
Related reference
-STOP DATABASE (Db2) (Db2 Commands)
-START DATABASE (Db2) (Db2 Commands)
Related information
DSNI003I (Db2 Messages)
The data sets containing a table space are closed and deallocated by the preceding commands.
Related reference
-STOP DATABASE (Db2) (Db2 Commands)
Procedure
Issue the ALTER BUFFERPOOL command.
Related concepts
Buffer pool thresholds (Db2 Performance)
Related reference
-ALTER BUFFERPOOL (Db2) (Db2 Commands)
Procedure
Issue the DISPLAY BUFFERPOOL command.
Example
For example:
-DISPLAY BUFFERPOOL(BP0)
!DIS BUFFERPOOL(BP0)
DSNB401I ! BUFFERPOOL NAME BP0, BUFFERPOOL ID 0, USE COUNT 27
DSNB402I ! BUFFER POOL SIZE = 2000 BUFFERS
VPSIZE MINIMUM = 2250 VPSIZE MAXIMUM = 3125
ALLOCATED = 2000 TO BE DELETED = 0
IN-USE/UPDATED = 0
DSNB406I ! PGFIX ATTRIBUTE -
CURRENT = NO
PENDING = YES
PAGE STEALING METHOD -
CURRENT = LRU
PENDING = LRU
DSNB404I ! THRESHOLDS -
VP SEQUENTIAL = 80
DEFERRED WRITE = 85 VERTICAL DEFERRED WRT = 10,15
PARALLEL SEQUENTIAL = 50 ASSISTING PARALLEL SEQT= 0
DSN9022I ! DSNB1CMD '-DISPLAY BUFFERPOOL' NORMAL COMPLETION
Related concepts
Obtaining information about group buffer pools (Db2 Data Sharing Planning and Administration)
Related tasks
Tuning database buffer pools (Db2 Performance)
Monitoring and tuning buffer pools by using online commands (Db2 Performance)
Related reference
-DISPLAY BUFFERPOOL (Db2) (Db2 Commands)
Related information
DSNB401I (Db2 Messages)
Procedure
Issue the appropriate command for the action that you want to take.
START FUNCTION SPECIFIC
Activates an external function that is stopped.
DISPLAY FUNCTION SPECIFIC
Displays statistics about external user-defined functions accessed by Db2 applications.
STOP FUNCTION SPECIFIC
Prevents Db2 from accepting SQL statements with invocations of the specified functions.
Related concepts
Sample user-defined functions (Db2 SQL)
Function resolution (Db2 SQL)
Related tasks
Monitoring and controlling stored procedures
Stored procedures, such as native SQL procedures, external SQL procedures, and external stored
procedures, are user-written programs that run at a Db2 server.
Related reference
-START FUNCTION SPECIFIC (Db2) (Db2 Commands)
-DISPLAY FUNCTION SPECIFIC (Db2) (Db2 Commands)
-STOP FUNCTION SPECIFIC (Db2) (Db2 Commands)
SET PATH (Db2 SQL)
CURRENT PATH (Db2 SQL)
Procedure
Issue the START FUNCTION SPECIFIC command.
Example
For example, assume that you want to start functions USERFN1 and USERFN2 in the PAYROLL schema.
Issue the following command:
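One way to write this command is:

-START FUNCTION SPECIFIC(PAYROLL.USERFN1,PAYROLL.USERFN2)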
Related reference
-START FUNCTION SPECIFIC (Db2) (Db2 Commands)
Related information
DSNX973I (Db2 Messages)
Procedure
Issue the DISPLAY FUNCTION SPECIFIC command.
Example
For example, to display information about functions in the PAYROLL schema and the HRPROD schema,
issue this command:
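Using an asterisk to match all functions in each schema, the command can be written as:

-DISPLAY FUNCTION SPECIFIC(PAYROLL.*,HRPROD.*)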
Related reference
-DISPLAY FUNCTION SPECIFIC (Db2) (Db2 Commands)
-STOP FUNCTION SPECIFIC (Db2) (Db2 Commands)
Related information
DSNX975I (Db2 Messages)
Procedure
Issue the STOP FUNCTION SPECIFIC command.
Example
For example, issue a command like the following one, which stops functions USERFN1 and USERFN3 in
the PAYROLL schema:
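Such a command might look like this:

-STOP FUNCTION SPECIFIC(PAYROLL.USERFN1,PAYROLL.USERFN3)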
While the STOP FUNCTION SPECIFIC command is in effect, attempts to execute the stopped functions
are queued.
Related reference
-STOP FUNCTION SPECIFIC (Db2) (Db2 Commands)
Related information
DSNX974I (Db2 Messages)
Procedure
Issue the DISPLAY PROCEDURE command.
For example:
-DISPLAY PROCEDURE
Note: To display information about a native SQL procedure, you must run the procedure in DEBUG mode.
If you do not run the native SQL procedure in DEBUG mode (for example, in a production environment),
the DISPLAY PROCEDURE command will not return output for the procedure.
If you do run the procedure in DEBUG mode, the WLM environment column in the output contains the
WLM ENVIRONMENT FOR DEBUG value that you specified when you created the native SQL procedure. The
DISPLAY PROCEDURE output shows the statistics of native SQL procedures as '0' if the native SQL
procedures are under the effect of a STOP PROCEDURE command.
Example
The following example shows two schemas (PAYROLL and HRPROD) that have been accessed by
Db2 applications. You can also display information about specific stored procedures.
Related tasks
Displaying thread information about stored procedures
Issue the DISPLAY THREAD command to display thread information about stored procedures.
Related reference
-DISPLAY PROCEDURE (Db2) (Db2 Commands)
Procedure
Issue the DISPLAY THREAD command.
For example:
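For instance, to limit the display to threads that are running stored procedures:

-DISPLAY THREAD(*) TYPE(PROC)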
Example
Example 1: The following example of output from the DISPLAY THREAD command shows a thread that is
executing an external SQL procedure or an external stored procedure.
The SP status indicates that the thread is executing within the stored procedure. An SW status indicates
that the thread is waiting for the stored procedure to be scheduled.
Example 2: This example of output from the DISPLAY THREAD command shows a thread that is
executing a native SQL procedure. If you do not specify the DETAIL option, the output will not include
information that is specific to the stored procedure.
Issuing the command -display thread(*) type(proc) detail results in the following output:
Related tasks
Displaying statistics about stored procedures
Issue the DISPLAY PROCEDURE command to display statistics about stored procedures that are accessed
by Db2 applications.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
D WLM,APPLENV=WLMENV1
Results
You might get results like the following:
The output indicates that WLMENV1 is available, so WLM can schedule stored procedures for execution in
that environment.
Procedure
Perform one of the following actions:
• Call the WLM_REFRESH stored procedure.
• Issue the following z/OS command:
VARY WLM,APPLENV=environment-name,REFRESH
In this command, environment-name is the name of a WLM application environment that is associated
with one or more stored procedures. The application environment is refreshed to incorporate
the changed load modules for all stored procedures and user-defined functions in the particular
environment.
Alternatively, when you make certain changes to the JCL startup procedure, you must quiesce and then
resume the WLM application environment rather than refresh it. For these types of changes, use the
following z/OS commands:
a. To stop all stored procedures address spaces that are associated with the WLM application
environment name, use the following z/OS command:
VARY WLM,APPLENV=name,QUIESCE
The address spaces stop when the current requests that are executing in those address spaces
complete.
This command puts the WLM application environment in QUIESCED state. When the WLM application
environment is in QUIESCED state, the stored procedure requests are queued. If the WLM application
environment is restarted within a certain time, the stored procedures are executed. If a stored
procedure cannot be executed, the CALL statement returns SQL code -471 with reason code
00E79002.
b. To restart all stored procedures address spaces that are associated with WLM application environment
name, use the following z/OS command:
VARY WLM,APPLENV=name,RESUME
New address spaces start when all JCL changes are established. Until that time, work requests that
use the new address spaces are queued.
Also, you can use the VARY WLM command with the RESUME option when the WLM application
environment is in the STOPPED state due to a failure. This state might be the result of a failure when
starting the address space, or because WLM detected five abnormal terminations within 10 minutes.
When an application environment is in the STOPPED state, WLM does not schedule stored procedures
for execution in it. If you try to call a stored procedure when the WLM application environment is in
the STOPPED state, the CALL statement returns SQL code -471 with reason code 00E7900C. After
correcting the condition that caused the failure, you need to restart the application environment.
Procedure
Add the following DD statement to the startup procedure for the WLM-established stored procedure
address space:
//AUTOREFR DD DUMMY
Db2 saves the environment name in a list of environments to refresh automatically. If the z/OS Resource
Recovery Services (RRS) environment is recycled, but Db2 and its associated WLM-established stored
procedures address space are not restarted, Db2 issues the following z/OS command to refresh each
environment that is named in the list:
VARY WLM,APPLENV=environment-name,REFRESH
Procedure
Take the appropriate action, depending on the type of stored procedure that you use.
All types of stored procedures:
• Look at the diagnostic information in CEEDUMP. If the startup procedures for your stored
procedures address spaces contain a DD statement for CEEDUMP, Language Environment writes a
small diagnostic dump to CEEDUMP when a stored procedure terminates abnormally. The output is
printed after the stored procedures address space terminates. You can obtain the dump information
by stopping the stored procedures address space in which the stored procedure is running.
• Debug the stored procedure as a stand-alone program on a workstation.
• Record stored procedure debugging messages to a disk file or JES spool file by using the
Language Environment MSGFILE run-time option.
• Store debugging information in a table. This option works well for remote stored procedures.
Native SQL procedures:
Use the GET DIAGNOSTICS statement. The DB2_LINE_NUMBER parameter returns:
• The line number where an error is encountered in parsing, binding, or executing a CREATE or
ALTER statement for a native SQL procedure.
• The line number when a CALL statement invokes a native SQL procedure and the procedure returns
with an error.
This information is not returned for an external SQL procedure, and this value is meaningful only
if the statement source contains new line control characters.
Procedure
Reissue the same CREATE or ALTER statements that you used in the test environment on the production servers.
Related tasks
Migrating external SQL procedures from test to production
Use IBM Data Studio to migrate external SQL procedures from a test environment to production.
Migrating external stored procedures from test to production
Use the CREATE PROCEDURE statement to migrate external stored procedures from a test environment to
a production environment.
Creating native SQL procedures (Db2 Application programming and SQL)
Deploying a native SQL procedure to another Db2 for z/OS server (Db2 Application programming and SQL)
Procedure
Use IBM Data Studio to deploy the stored procedure.
The binary deploy capability in IBM Data Studio promotes external SQL procedures without recompiling.
The binary deploy capability copies all necessary components from one environment to another and
performs the bind in the target environment.
Related tasks
Migrating native SQL procedures from test to production
When migrating native SQL procedures from a test environment to production, you do not need to
determine whether you want to recompile to create new object code and a new package on the
production server.
Migrating external stored procedures from test to production
Use the CREATE PROCEDURE statement to migrate external stored procedures from a test environment to
a production environment.
Related reference
CREATE PROCEDURE (SQL - external) (deprecated) (Db2 SQL)
Procedure
To migrate an external stored procedure from a test environment to production:
1. Determine the change management policy of your site.
You can choose to recompile to create new object code and a new package on the production server,
or you can choose not to recompile.
2. Depending on your change management policy, complete the appropriate task.
• To migrate the stored procedure without recompiling:
a. Copy the CREATE PROCEDURE statement.
Related tasks
Migrating native SQL procedures from test to production
When migrating native SQL procedures from a test environment to production, you do not need to
determine whether you want to recompile to create new object code and a new package on the
production server.
Migrating external SQL procedures from test to production
Use IBM Data Studio to migrate external SQL procedures from a test environment to production.
Preparing an external user-defined function for execution (Db2 Application programming and SQL)
Binding application packages and plans (Db2 Application programming and SQL)
Related reference
CREATE PROCEDURE (external) (Db2 SQL)
DSNTEP2 and DSNTEP4 sample programs (Db2 Application programming and SQL)
Procedure
To control autonomous procedures:
1. Issue a DISPLAY THREAD command to find the status of the autonomous procedure and the token for
the invoking thread.
The token for the thread is shown in a DSNV520I message.
The following example output from a DISPLAY THREAD command shows that an autonomous
procedure was invoked by the thread with the token 13:
2. Issue a CANCEL THREAD command to cancel the thread that invoked the autonomous procedure.
For example, you might issue the following command to cancel the thread that invoked the
autonomous procedure shown in the preceding example:
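Using the token 13 from the preceding example:

-CANCEL THREAD(13)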
Results
COMMIT and ROLLBACK operations that are applied within an autonomous procedure apply to all
procedures and functions that are nested under the autonomous procedure.
Related concepts
Autonomous procedures (Db2 Application programming and SQL)
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
-CANCEL THREAD (Db2) (Db2 Commands)
Procedure
Prepare an appropriate set of JCL statements for a utility job.
The input stream for that job must include Db2 utility control statements.
Procedure
To monitor and change an online utility:
1. Issue the appropriate command for the action that you want to take.
ALTER UTILITY
Alters parameter values of an active REORG or REBUILD utility.
DISPLAY UTILITY
Displays the status of utility jobs.
TERM UTILITY
Terminates a utility job before its normal completion.
Related concepts
Db2 online utilities (Db2 Utilities)
Related reference
-START DATABASE (Db2) (Db2 Commands)
Procedure
To run a Db2 stand-alone utility:
1. Stop the table spaces and index spaces that are the object of the utility job. If you do not do this, you
might receive inconsistent output.
2. If the utility is one that requires that Db2 be stopped during utility execution, use this command:
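In its simplest form, shown here without a MODE option, the command is:

-STOP DB2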
3. If the utility is one that requires that Db2 be running during utility execution and if Db2 is not running,
issue this command:
-START DB2
4. Create a JCL job that includes the utility control statement, and code the specific data set
names and associated parameters for your utility.
Stand-alone utilities
Some stand-alone utilities can be run only by means of JCL.
These stand-alone utilities are:
• DSN1COPY
• DSN1COMP
• DSN1PRNT
• DSN1SDMP
• DSN1LOGP
• DSNJLOGF
• DSNJU003 (change log inventory)
• DSNJU004 (print log map)
Most of the stand-alone utilities can be used while Db2 is running. However, for consistency of output, the
table spaces and index spaces must be stopped first because these utilities do not have access to the Db2
buffer pools. In some cases, Db2 must be running or stopped before you invoke the utility.
Stand-alone utility job streams require that you code specific data set names in the JCL.
To determine the fifth qualifier in the data set name, you need to query the Db2 catalog tables.
Procedure
Issue the appropriate z/OS command for the action that you want to take.
In each command description, irlmproc is the IRLM procedure name and irlmnm is the IRLM subsystem
name.
MODIFY irlmproc,ABEND,DUMP
Abnormally terminates the IRLM and generates a dump.
MODIFY irlmproc,ABEND,NODUMP
Abnormally terminates the IRLM but does not generate a dump.
MODIFY irlmproc,DIAG
Initiates diagnostic dumps for IRLM subsystems in a data sharing group when a delay occurs.
MODIFY irlmproc,SET
Dynamically sets the maximum amount of private virtual (PVT) storage or the number of trace buffers
that are used for this IRLM.
MODIFY irlmproc,STATUS
Displays the status for the subsystems on this IRLM.
START irlmproc
Starts the IRLM.
STOP irlmproc
Stops the IRLM normally.
TRACE CT,OFF,COMP=irlmnm
Stops IRLM tracing.
TRACE CT,ON,COMP=irlmnm
Starts IRLM tracing for all subtypes (DBM, SLM, XIT, and XCF).
TRACE CT,ON,COMP=irlmnm,SUB=(subname)
Starts IRLM tracing for a single subtype.
Related concepts
IRLM names (Db2 Installation and Migration)
Related information
Command types and environments in Db2 (Db2 Commands)
MODIFY irlmproc,SET,PVT=nnn
Sets the maximum amount of private virtual (PVT) storage that this IRLM can use for lock control
structures.
MODIFY irlmproc,SET,DEADLOCK=nnnn
Sets the time for the local deadlock detection cycle.
MODIFY irlmproc,SET,LTE=nnnn
Sets the number of LOCK HASH entries that this IRLM can use on the next connect to the XCF LOCK
structure. Use this command only for data sharing.
MODIFY irlmproc,SET,TIMEOUT=nnnn,subsystem-name
Sets the timeout value for the specified Db2 subsystem. Display the subsystem-name by using
MODIFY irlmproc,STATUS.
MODIFY irlmproc,SET,TRACE=nnn
Sets the maximum number of trace buffers that are used for this IRLM.
MODIFY irlmproc,STATUS,irlmnm
Displays the status of a specific IRLM.
MODIFY irlmproc,STATUS,ALLD
Displays the status of all subsystems known to this IRLM in the data sharing group.
MODIFY irlmproc,STATUS,ALLI
Displays the status of all IRLMs known to this IRLM in the data sharing group.
MODIFY irlmproc,STATUS,MAINT
Displays the maintenance levels of IRLM load module CSECTs for the specified IRLM instance.
MODIFY irlmproc,STATUS,STOR
Displays the current and high-water allocation for private virtual (PVT) storage, as well as storage that
is above the 2-GB bar.
MODIFY irlmproc,STATUS,TRACE
Displays information about trace types of IRLM subcomponents.
Each IMS and Db2 subsystem must use a separate instance of IRLM.
Related information
Command types and environments in Db2 (Db2 Commands)
Procedure
Issue the z/OS START irlmproc command.
When started, the IRLM issues this message to the z/OS console:
Procedure
Issue the z/OS STOP irlmproc command.
If you try to stop the IRLM while Db2 or IMS is still using it, you get the following message:
If that happens, issue the STOP irlmproc command again, when the subsystems are finished with the
IRLM.
Alternatively, if you must stop the IRLM immediately, enter the following command to force the stop:
MODIFY irlmproc,ABEND,NODUMP
Results
Your Db2 subsystem will abend. An IMS subsystem that uses the IRLM does not abend and can be
reconnected.
IRLM uses the z/OS Automatic Restart Manager (ARM) services. However, it de-registers from ARM for
normal shutdowns. IRLM registers with ARM during initialization and provides ARM with an event exit
routine. The event exit routine must be in the link list. It is part of the IRLM DXRRL183 load module.
The event exit routine ensures that the IRLM name is defined to z/OS when ARM restarts IRLM on a
target z/OS system that is different from the failing z/OS system. The IRLM element name that is used
for the ARM registration depends on the IRLM mode. For local-mode IRLM, the element name is a
concatenation of the IRLM subsystem name and the IRLM ID. For global-mode IRLM, the element name is
a concatenation of the IRLM data sharing group name, the IRLM subsystem name, and the IRLM ID.
IRLM de-registers from ARM when one of the following events occurs:
Monitoring threads
Threads are an important resource within a Db2 subsystem. A thread is a structure that describes a
connection made by an application and traces its progress in the Db2 subsystem. You can monitor them
by using the Db2 DISPLAY THREAD command or by using profile tables.
Types of threads
The following types of threads are used in Db2 subsystems:
Allied threads
Db2 uses allied threads to process local requests and connections from allied subsystems, such as
TSO, batch, IMS, CICS, CAF, or RRSAF. For more information, see Managing allied Db2 threads (Db2
Performance).
Database access threads (DBATs)
Db2 uses distributed database access threads (DBATs) to process requests through a network for
distributed clients that access data in a Db2 for z/OS server. For more information, see Managing
distributed database access threads (DBATs) (Db2 Performance).
Related tasks
Managing Db2 threads (Db2 Performance)
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
The DISPLAY THREAD command allows you to select which type of information you want to include in the
display by using one or more of the following:
• Active, indoubt, postponed-abort, or pooled threads
• Allied threads that are associated with the address spaces whose connection-names are specified
• Allied threads
• Distributed threads
• Distributed threads that are associated with a specific remote location
• Detailed information about connections with remote locations
• A specific logical unit of work ID (LUWID)
The information that is returned by the DISPLAY THREAD command reflects a dynamic status. By the time
the information is displayed, the status might have changed. Moreover, the information is consistent only
within one address space and is not necessarily consistent across all address spaces.
Examples
You can issue the following command to reset the indoubt unit of work by specifying the IP address
(FFFF:10.97.217.50) and the resync port number of the coordinator (1332) from the message:
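One plausible form of the command, assuming the IPADDR keyword accepts the address and resync port separated by two periods, is:

-RESET INDOUBT IPADDR(::FFFF:10.97.217.50..1332)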
Related concepts
Monitoring threads with DISPLAY THREAD commands
The DISPLAY THREAD command output displays information about threads that are processing locally
and for distributed requests, stored procedures or user-defined functions executed by threads, and parallel
tasks. It can also indicate that a system quiesce is in effect as a result of the ARCHIVE LOG command.
Related tasks
Resetting the status of an indoubt thread
After manual recovery of an indoubt thread, allow the systems to resynchronize automatically. Automatic
resynchronization resets the status of the indoubt thread. However, you might need to take additional
steps if heuristic damage or a protocol error occurs.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
DSNV406I (Db2 Messages)
Procedure
Issue the DISPLAY THREAD command with the LOCATION option, followed by a list of location names.
Example
For example, you can specify an asterisk (*) after the THREAD and LOCATION options:
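That command takes this form:

-DISPLAY THREAD(*) LOCATION(*)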
When you issue this command, Db2 returns messages like the following:
Key
Description
1
The ST (status) column contains characters that indicate the connection status of the local site.
The TR indicates that an allied, distributed thread has been established. The RA indicates that a
distributed thread has been established and is in receive mode. The RD indicates that a distributed
thread is performing a remote access on behalf of another location (R) and is performing an
operation that involves DCE services (D). Currently, Db2 supports the optional use of DCE services
to authenticate remote users.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
Procedure
Issue the DISPLAY THREAD command with the LOCATION option.
Example
For example, if you want to display information about a non-Db2 database management system (DBMS)
with the LUNAME of LUSFOS2, enter the following command:
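One way to express this command is:

-DISPLAY THREAD(*) LOCATION(LUSFOS2)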
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
Procedure
Issue the DISPLAY THREAD command with the LOCATION and DETAIL options:
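For instance:

-DISPLAY THREAD(*) LOCATION(*) DETAIL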
Db2 returns the following messages, which indicate that the local site application is waiting for a
conversation to be allocated in Db2, and a Db2 server that is accessed by a DRDA client using TCP/IP.
Key
Description
1
The information on this line is part of message DSNV447I. The conversation A (active) column for the
server is useful in determining when a Db2 thread is hung and whether processing is waiting in the
network stack (VTAM or TCP/IP) or in Db2. A value of W indicates that the thread is suspended in Db2
and is waiting for notification from the network that the event has completed. A value of N indicates
that control of the conversation is in the network stack.
2
The information on this line is part of message DSNV448I. The A in the conversation ST (status)
column for a serving site indicates that a conversation is being allocated in Db2. A 2 would indicate
DRDA access. An R in the status column would indicate that the conversation is receiving, or waiting to
receive a request or reply. An S in this column for a server indicates that the application is sending, or
preparing to send a request or reply.
3
The information on this line is part of message DSNV448I. The SESSID column has changed. If the
connection uses VTAM, the SESSID column contains a VTAM session identifier. If the connection uses
TCP/IP, the SESSID column contains "local:remote", where local specifies the Db2 TCP/IP port number
and remote specifies the partner's TCP/IP port number.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
Procedure
Issue the DISPLAY THREAD command with the following options:
Key
Description
1
In the preceding display output, you can see that the LUWID has been assigned a token of 2. You
can use this token instead of the long version of the LUWID to cancel or display the given thread. For
example:
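For instance, assuming the LUWID keyword accepts the token in place of the full LUWID:

-DISPLAY THREAD(*) LUWID(2)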
2
In addition, the status column for the serving site contains a value of S2. The S means that this thread
can send a request or response, and the 2 means that this is a DRDA access conversation.
Procedure
Issue the DISPLAY THREAD command with the TYPE keyword.
For example:
The application that runs at USIBMSTODB21 is connected to a server at USIBMSTODB22 by using DRDA
access. If you enter the DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB21, you
receive the following output:
This output indicates that the application is waiting for data to be returned by the server at
USIBMSTODB22.
The server at USIBMSTODB22 is running a package on behalf of the application at USIBMSTODB21 to
access data at USIBMSTODB23 and USIBMSTODB24 by Db2 private protocol access. If you enter the
DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB22, you receive the following
output:
This output indicates that the server at USIBMSTODB22 is waiting for data to be returned by the
secondary server at USIBMSTODB24.
The secondary server at USIBMSTODB23 is accessing data for the primary server at USIBMSTODB22. If
you enter the DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB23, you receive
the following output:
This output indicates that the secondary server at USIBMSTODB23 is not currently active.
The secondary server at USIBMSTODB24 is also accessing data for the primary server at
USIBMSTODB22. If you enter the DISPLAY THREAD command with the DETAIL keyword from
USIBMSTODB24, you receive the following output:
This output indicates that the secondary server at USIBMSTODB24 is currently active.
The conversation status might not change for a long time. The conversation could be hung, or the
processing could be taking a long time. To determine whether the conversation is hung, issue the
DISPLAY THREAD command again and compare the new timestamp to the timestamps from previous
output messages. If the timestamp is changing, but the status is not changing, the job is still processing.
If you need to terminate a distributed job, perhaps because it is hung and has been holding database
locks for a long time, you can use the CANCEL DDF THREAD command if the thread is in Db2 (whether
active or suspended). If the thread is in VTAM, you can use the VARY NET TERM command.
Related tasks
Canceling threads
You can use the CANCEL THREAD command to terminate threads that are active or suspended in Db2.
Related reference
-CANCEL THREAD (Db2) (Db2 Commands)
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
Notes:
1. After the application connects to Db2 but before a plan is allocated, this field is blank.
The name of the connection can have one of the following values:
Name: TSO
Connection to: Program that runs in TSO foreground
-DISPLAY THREAD(BATCH,TSO,DB2CALL)
Key 1
This is a TSO batch application.
Key 2
This is a TSO batch application running at a remote location and accessing tables at this location.
Key 3
This is a TSO online application.
Key 4
This is a call attachment facility application.
Key 5
This is an originating thread for a TSO batch application.
Key 6
This is a parallel thread for the originating TSO batch application thread.
Key 7
This is a parallel thread for the originating TSO batch application thread.
Figure 29. DISPLAY THREAD output that shows TSO and CAF connections
Related concepts
Monitoring threads
Threads are an important resource within a Db2 subsystem. A thread is a structure that describes a
connection made by an application and traces its progress in the Db2 subsystem. You can monitor them
by using the Db2 DISPLAY THREAD command or by using profile tables.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
TSO displays:
READY
You enter the DSN command, and DSN displays:
DSN
You enter a DSN subcommand, and DSN displays:
DSN
You enter:
END
TSO displays:
READY
DSNC DISCONNECT
Terminates threads using a specific Db2 plan.
DSNC DISPLAY
Displays thread information or statistics.
DSNC MODIFY
Modifies the maximum number of threads for a transaction or group.
DSNC STOP
Disconnects CICS from Db2.
DSNC STRT
Starts the CICS attachment facility.
CICS command responses are sent to the terminal from which the corresponding command was entered,
unless the DSNC DISPLAY command specifies an alternative destination.
Related information
Overview of the CICS Db2 interface (CICS Db2 Guide)
Command types and environments in Db2 (Db2 Commands)
Procedure
To connect to Db2, use one of the following approaches:
• Issue the following command to start the attachment facility:
For ssid, specify a Db2 subsystem ID to override the value that is specified in the CICS INITPARM
macro.
• Start the attachment facility automatically at CICS initialization by using a program list table (PLT).
Restarting CICS
One function of the CICS attachment facility is to keep data synchronized between the two systems.
Procedure
You must auto-start CICS (START=AUTO in the DFHSIT table) to obtain all necessary information for
indoubt thread resolution that is available from its log. Do not perform a cold start.
You specify the START option in the DFHSIT table.
If CICS has requests active in Db2 when a Db2 connection terminates, the corresponding CICS tasks
might remain suspended even after CICS is reconnected to Db2. Purge those tasks from CICS by using a
CICS-supplied transaction such as:
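The purge itself is typically done with the CEMT master terminal transaction. The following is a hedged sketch; the task number is illustrative, and you would normally confirm which tasks to purge with CEMT INQUIRE TASK first:

```
CEMT SET TASK(123) FORCEPURGE
```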
If any unit of work is indoubt when the failure occurs, the CICS attachment facility automatically attempts
to resolve the unit of work when CICS is reconnected to Db2. Under some circumstances, however, CICS
cannot resolve indoubt units of recovery. You have to manually recover these indoubt units of recovery.
Related concepts
CICS Transaction Server for z/OS Supplied Transactions
Related tasks
Monitoring CICS threads and recovering CICS-Db2 indoubt units of recovery
No operator intervention is required for connecting applications because CICS handles the threads
dynamically. However, you can monitor threads by using CICS attachment facility commands or Db2
commands.
Related information
Resource definition (CICS Transaction Server for z/OS)
Procedure
• Authorized CICS users can monitor the threads and change the connection parameters as needed.
• Operators can use the following CICS attachment facility commands to monitor the threads:
• DSNC DISPLAY PLAN plan-name destination
DSNC DISPLAY TRANSACTION transaction-id destination
These commands display the threads that the resource or transaction is using. The following
information is provided for each created thread:
- Authorization ID for the plan that is associated with the transaction (8 characters).
- PLAN/TRAN name (8 characters).
- A or I (one character).
If A is displayed, the thread is within a unit of work. If I is displayed, the thread is waiting for a
unit of work, and the authorization ID is blank.
• The following CICS attachment facility command is used to monitor CICS:
• To display a list of indoubt units of recovery, you can issue a DISPLAY THREAD command.
Related concepts
Resolution of CICS indoubt units of recovery
The resolution of indoubt units of recovery has no effect on CICS resources.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
Procedure
To disconnect a CICS application from Db2, use one of the following methods:
• The Db2 command CANCEL THREAD can be used to cancel a particular thread. CANCEL THREAD
requires that you know the token for any thread that you want to cancel.
Enter the following command to cancel the thread that is identified by the token as indicated in the
display output.
-CANCEL THREAD(46)
When you issue the CANCEL THREAD command for a thread, that thread is scheduled to be terminated
in Db2.
• The command DSNC DISCONNECT terminates the threads allocated to a plan ID, but it does not
prevent new threads from being created. This command frees Db2 resources that are shared by the
CICS transactions and allows exclusive access to them for special-purpose processes such as utilities
or data definition statements.
The thread is not canceled until the application releases it for reuse, either at SYNCPOINT or
end-of-task.
Related concepts
CICS Transaction Server for z/OS Db2 Guide
Procedure
• To disconnect CICS with an orderly termination, use one of the following methods:
• Enter the DSNC STOP QUIESCE command. CICS and Db2 remain active.
For example, the DSNC STOP QUIESCE command stops the CICS attachment facility (QUIESCE), allows
the currently identified tasks to continue normal execution, and does not allow new tasks to identify
themselves to Db2.
The following message appears when the stop process starts and frees the entering terminal
(option QUIESCE):
When the stop process ends and the connection is terminated, the following message is added to
the output from the CICS job:
• Enter the CICS command CEMT PERFORM SHUTDOWN. During program list table (PLT) processing,
the CICS attachment facility is also named to shut down. Db2 remains active. For information about
this command, see CICS shutdown (CICS Transaction Server for z/OS).
• Enter the Db2 command CANCEL THREAD. The thread terminates abnormally.
• To disconnect CICS with a forced termination, use one of the following methods:
• Enter the DSNC STOP FORCE command. This command waits 15 seconds before detaching the
thread subtasks and in some cases can achieve an orderly termination. This message appears when
the stop process starts and frees the entering terminal (option FORCE):
The message is issued regardless of whether Db2 is active and does not imply that the connection is
established.
The order of starting IMS and Db2 is not vital. If IMS is started first, when Db2 comes up, Db2 posts the
control region MODIFY task, and IMS again tries to reconnect.
If Db2 is stopped by the STOP DB2 command, the /STOP SUBSYS command, or a Db2 abend, IMS cannot
reconnect automatically. You must make the connection by using the /START SUBSYS command.
The following messages can be produced when IMS attempts to connect a Db2 subsystem. In each
message, imsid is the IMS connection name.
• If Db2 is active, these messages are sent:
– To the z/OS console:
– To the IMS master terminal:
RC=00 means that a notify request has been queued. When Db2 starts, IMS is also notified.
No message goes to the z/OS console.
Notes:
1. After the application connects to Db2 but before sign-on processing completes, this field is blank.
2. After sign-on processing completes but before a plan is allocated, this field is blank.
The following command displays information about IMS threads, including those accessing data at
remote locations:
-DISPLAY THREAD(imsid)
Key 1
This is a message-driven BMP.
Key 2
This thread has completed sign-on processing, but a Db2 plan has not been allocated.
Figure 30. DISPLAY THREAD output showing IMS connections
Procedure
To terminate an IMS application, use one of these methods:
• Terminate the application.
The IMS commands /STOP REGION reg# ABDUMP or /STOP REGION reg# CANCEL can be used to
terminate an application that runs in an online environment. For an application that runs in the DL/I
batch environment, the z/OS command CANCEL can be used.
• Use the Db2 command CANCEL THREAD.
-CANCEL THREAD(46)
When you issue the CANCEL THREAD command, that thread is scheduled to be terminated in Db2.
Related information
IMS commands
Procedure
Issue the DISPLAY THREAD command.
For example:
Related tasks
Restarting Db2 after termination
When you need to restart Db2 after Db2 terminates normally or abnormally, keep in mind these
considerations, which are important for backup and recovery, and for maintaining consistency.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
Procedure
Issue one of the following commands.
In each command, imsid is the connection name, and pst#.psbname is the correlation ID that is
listed by the DISPLAY THREAD command. The ACTION parameter determines whether to commit or
roll back the associated unit of recovery.
• -RECOVER INDOUBT (imsid) ACTION (COMMIT) ID (pst#.psbname)
• -RECOVER INDOUBT (imsid) ACTION (ABORT) ID (pst#.psbname)
Results
One of the following messages might be issued after you issue the RECOVER command:
Related tasks
Resolving indoubt units of recovery
If Db2 loses its connection to another system, it attempts to recover all inconsistent objects after restart.
The information that is needed to resolve indoubt units of recovery must come from the coordinating
system.
Procedure
Issue the DISPLAY THREAD command.
For example:
Related tasks
Restarting Db2 after termination
When you need to restart Db2 after Db2 terminates normally or abnormally, keep in mind these
considerations, which are important for backup and recovery, and for maintaining consistency.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
Procedure
To resolve IMS RREs:
1. To display the residual recovery entry (RRE) information, issue the following command:
Where nnnn is the originating application sequence number that is listed in the display. The originating
application sequence number is the schedule number of the program instance, indicating its place in
the sequence of invocations of that program since the last cold start of IMS. IMS cannot have two
indoubt units of recovery with the same schedule number.
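As a hedged sketch of the IMS commands that this procedure refers to, the first command displays the RREs for a subsystem, and the second purges the entry with a given schedule number. The subsystem ID DSN and the sequence number 0001 are illustrative:

```
/DISPLAY OASN SUBSYS DSN
/CHANGE SUBSYS DSN RESET OASN 0001
```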
Results
These commands reset the status of IMS; they do not result in any communication with Db2.
Related information
Recovering from IMS indoubt units of recovery
Procedure
To monitor activity on connections, issue the following commands:
• From Db2:
• From IMS:
Results
Either command produces the following messages:
Related tasks
Displaying information by location
You can use the DISPLAY THREAD command to display thread information for particular locations.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
Procedure
Issue the /DISPLAY SUBSYS command with the following syntax:
The connection between IMS and Db2 is shown as one of the following states:
• CONNECTED
• NOT CONNECTED
• CONNECT IN PROGRESS
• STOPPED
• STOP IN PROGRESS
Example
The following four examples show the output that might be generated when you issue the IMS /DISPLAY
SUBSYS command.
The following figure shows the output that is returned for a DSN subsystem that is not connected. The
IMS attachment facility issues message DSNM003I in this example.
The following figure shows the output that is returned for a DSN subsystem that is connected. The IMS
attachment facility issues message DSNM001I in this example.
The following figure shows the output that is returned for a DSN subsystem that is in a stopped status.
The IMS attachment facility issues message DSNM002I in this example.
Figure 33. Example of output from the IMS /DISPLAY SUBSYS command
The following figure shows the output that is returned for a DSN subsystem that is connected and that
includes region 1. You can use the values from the REGID and the PROGRAM fields to correlate the
output of the command to the LTERM that is involved.
Figure 34. Example of output from IMS /DISPLAY SUBSYS processing for a DSN subsystem that is connected and
the region ID (1) that is included.
That command sends the following message to the terminal that entered it, usually the master terminal
operator (MTO):
Related tasks
Invoking the Resource Recovery Services attachment facility (Db2 Application programming and SQL)
Programming for concurrency (Db2 Performance)
Procedure
Issue the following DISPLAY THREAD command:
For RRSAF connections, a network ID is the z/OS RRS unit of recovery ID (URID), which uniquely identifies
a unit of work. A z/OS RRS URID is a 32-character number.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
Procedure
To recover an indoubt unit of recovery:
1. Determine the correlation ID of the thread to be recovered by issuing the DISPLAY THREAD command.
2. Issue one of the following commands to recover the indoubt unit:
The ACTION parameter of the RECOVER command indicates whether to commit or roll back the
associated unit of recovery.
Results
If you recover a thread that is part of a global transaction, all threads in the global transaction are
recovered.
The following messages might be issued when you issue the RECOVER INDOUBT command:
where nid is the 32-character field that is displayed in the DSNV449I message.
Related concepts
Multiple system consistency
Db2 can work with other DBMSs, including IMS, and other types of remote DBMSs through the distributed
data facility (DDF). Db2 can also work with other Db2 subsystems through the DDF.
Related tasks
Resolving indoubt units of recovery
If Db2 loses its connection to another system, it attempts to recover all inconsistent objects after restart.
The information that is needed to resolve indoubt units of recovery must come from the coordinating
system.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
-RECOVER INDOUBT (Db2) (Db2 Commands)
Procedure
Issue the following DISPLAY THREAD command:
For RRSAF connections, a network ID is the z/OS RRS unit of recovery ID (URID), which uniquely identifies
a unit of work. A z/OS RRS URID is a 32-character number.
Related reference
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV401I (Db2 Messages)
-DISPLAY THREAD(RRSAF)
The command produces output similar to the output in the following figure:
Key 1
This is an application that used CREATE THREAD to allocate the special plan that is used by RRSAF
(plan name = ?RRSAF).
Key 2
This is an application that connected to Db2 and allocated a plan with the name TESTDBD.
Key 3
This is an application that is currently not connected to a TCB (shown by status DI).
Key 4
This is an active connection that is running plan TESTP05. The thread is accessing data at a remote
site.
Figure 35. DISPLAY THREAD output showing RRSAF connections
Procedure
Issue the CANCEL THREAD command.
The CANCEL THREAD command requires that you know the token for any thread that you want to cancel.
Issue the DISPLAY THREAD command to obtain the token number, and then enter the following command
to cancel the thread:
-CANCEL THREAD(token)
When you issue the CANCEL THREAD command, Db2 schedules the thread for termination.
Procedure
To specify stored procedures that can share locks in RRS contexts, complete the following steps:
1. In the SYSIBM.DSN_PROFILE_TABLE table, insert a row to create the profile and specify its filtering
criteria:
a) In the PROFILEID column, specify a unique value or accept the generated default value.
This value identifies the profile and the relationship between DSN_PROFILE_TABLE and
DSN_PROFILE_ATTRIBUTES rows.
b) Specify the filtering criteria of the profile.
You can specify values in the columns from one of the following filtering categories:
• LOCATION only
• PRDID only
• AUTHID, ROLE, or both
• COLLID, PKGNAME, or both
• One of CLIENT_APPLNAME, CLIENT_USERID, or CLIENT_WRKSTNNAME
The filtering values are not case-sensitive, and profiles can match regardless of the case of the
input values.
Other filtering columns must contain the null value.
Tip: If you create multiple profiles with overlapping filtering criteria, Db2 applies only one
profile from each filtering category, based on a specific order of precedence. If multiple
DSN_PROFILE_TABLE rows specify the same filtering criteria, only the newest row is accepted
when you start the profiles, and the other duplicates are rejected. Also, exact values take
precedence over values that use an asterisk (*) wildcard. However, profiles from different filtering
categories can all apply. For more information about these rules, see “How Db2 applies multiple
matching profiles for threads and connections” on page 559.
Each procedure-name must identify an external procedure (not an external SQL procedure), be
qualified with the procedure schema, and not specify a three-part name. The length attribute of
the ATTRIBUTE1 value must not exceed 1024 bytes.
ATTRIBUTE2
NULL
ATTRIBUTE3
NULL
3. Load or reload the profile tables into memory by issuing a START PROFILE command. (For best
results, do not issue a STOP PROFILE command when you add or modify existing profiles. Use the
STOP PROFILE command only if you intend to disable all existing profiles.) For more information, see
“Starting and stopping profiles” on page 557.
4. Check the status of all newly added profiles in the STATUS columns of the DSN_PROFILE_HISTORY
and DSN_PROFILE_ATTRIBUTES_HISTORY tables.
Successful completion of the START PROFILE command does not imply that all profiles started
successfully. If the STATUS column of either history table contains a value that does not start with
'ACCEPTED', further action is required to enable the profile or the keyword action.
Example
Suppose that you insert the following row in SYSIBM.DSN_PROFILE_ATTRIBUTES:
This profile row specifies that the ACCTG.UPDATE_ADDRESS stored procedure can share locks with
distributed threads.
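The steps above can be sketched as the following SQL, assuming the profile filters on an authorization ID; the PROFILEID value 17 and the AUTHID value DDFUSER1 are illustrative:

```sql
-- Step 1: create the profile row with AUTHID-only filtering
-- (PROFILEID and AUTHID values are illustrative).
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, AUTHID, PROFILE_ENABLED)
VALUES (17, 'DDFUSER1', 'Y');

-- Step 2: attach the SHARE_LOCKS keyword; ATTRIBUTE1 names the
-- schema-qualified external procedure, ATTRIBUTE2 and ATTRIBUTE3 are NULL.
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2, ATTRIBUTE3)
VALUES (17, 'SHARE_LOCKS', 'ACCTG.UPDATE_ADDRESS', NULL, NULL);
```

After the inserts, a START PROFILE command loads the rows into memory, as described in step 3.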
Related tasks
Monitoring and controlling Db2 by using profile tables
Starting DDF
You can start the distributed data facility (DDF) if you have at least SYSOPR authority.
Procedure
Issue the START DDF command.
Results
When DDF is started and is responsible for indoubt thread resolution with remote partners, message
DSNL432I, message DSNL433I, or both, is generated. These messages summarize the responsibility DDF
has for indoubt thread resolution with remote partners.
The following messages are associated with the START DDF command:
Stopping DDF
You can stop the distributed data facility (DDF) if you have SYSOPR authority or higher.
Procedure
To stop the DDF, use one of the following approaches:
• To stop DDF after existing requests complete, issue the STOP DDF command with the MODE
(QUIESCE) option. This is the default option, and you should use it whenever possible.
With the QUIESCE option, the STOP DDF command does not complete until all VTAM or TCP/IP
requests have completed. In this case, no resynchronization work is necessary when you restart
DDF. If any indoubt units of work require resynchronization, the QUIESCE option produces message
DSNL035I. Use the FORCE option only when you must stop DDF quickly. Restart times are longer if you
use the FORCE option.
• To force the completion of outstanding VTAM or TCP/IP requests by canceling threads that are
associated with distributed requests, issue the STOP DDF with the MODE (FORCE) option. Use this
option only when you must stop DDF quickly.
When DDF is stopped with the FORCE option, and DDF has indoubt thread responsibilities with remote
partners, message DSNL432I, DSNL433I, or both are issued.
DSNL432I shows the number of threads that DDF has coordination responsibility over with remote
participants who could have indoubt threads. At these participants, database resources that are
unavailable because of the indoubt threads remain unavailable until DDF is started and resolution
occurs.
DSNL433I shows the number of threads that are indoubt locally and need resolution from remote
coordinators. At the DDF location, database resources that are unavailable because of the indoubt
threads remain unavailable until DDF is started and resolution occurs.
To force the completion of outstanding VTAM or TCP/IP requests, use the FORCE option, which cancels
the threads that are associated with distributed requests.
When the FORCE option is specified with STOP DDF, database access threads in the prepared state
that are waiting for the commit or abort decision from the coordinator are logically converted to the
indoubt state. The conversation with the coordinator is terminated. If the thread is also a coordinator
VARY NET,INACT,ID=db2lu,FORCE
This command makes VTAM unavailable and terminates DDF. VTAM forces the completion of any
outstanding VTAM requests immediately.
When DDF has stopped, you must issue the following command before you can issue the START DDF
command:
VARY NET,ACT,ID=db2lu
Results
The STOP DDF command causes the following messages to appear:
If the distributed data facility has already been stopped, the STOP DDF command fails and message
DSNL002I - DDF IS ALREADY STOPPED appears.
Related concepts
Starting and stopping DDF in data sharing (Db2 Data Sharing Planning and Administration)
Related reference
-STOP DDF (Db2) (Db2 Commands)
Procedure
1. To suspend distributed data facility (DDF) server threads temporarily, issue the following command:
Db2 waits for all active DDF database access threads to become pooled or to terminate. You can use
the optional WAIT and CANCEL keywords to control how long Db2 waits and what action Db2 takes
after a specified time period.
2. To resume the suspended DDF server threads, issue a START DDF command.
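The suspend-and-resume sequence can be sketched as follows; the 60-second limit is an illustrative use of the optional WAIT keyword:

```
-STOP DDF MODE(SUSPEND) WAIT(60)
   (perform the work that required the suspension)
-START DDF
```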
Procedure
Enter one of the following commands:
• To show only the basic information, enter the DISPLAY DDF command.
-DISPLAY DDF
• To show additional information, enter the DISPLAY DDF command with the DETAIL option.
With the DETAIL option, the following additional information is included in the output:
Related reference
-DISPLAY DDF (Db2) (Db2 Commands)
Related information
DSNL080I (Db2 Messages)
Procedure
To display information about connections with other locations, use the following approaches:
• Issue the DISPLAY LOCATION command.
For example:
-DISPLAY LOCATION(*)
You can use an asterisk (*) in place of the end characters of a location name. For example, you can
use DISPLAY LOCATION(SAN*) to display information about all active connections between your
Db2 subsystem and a remote location that begins with "SAN". The results include the number of
conversations and the conversation role (requester or server).
When Db2 connects with a remote location, information about that location persists in the report,
even if no active connections exist, including:
– The physical address of the remote location (LOCATION)
– The product identifier (PRDID)
• When you specify the DETAIL option in the DISPLAY LOCATION command, the report might include
additional message lines for each remote location, including:
– Information about the number of conversations in the CONNS column with each remote location
that have particular attributes, as identified by a value in the ATT column. For example, the
number of connections that use a trusted context is identified by TRS.
Example
The DISPLAY LOCATION command displays the following types of information for each DBMS that has
active connections, except for the local subsystem:
• The network address for the location:
– For remote locations accessed through SNA connections, the location name is the SNA LU name.
– For remote locations accessed through TCP/IP connections, the location name is the dotted decimal
IPv4 or colon hexadecimal IPv6 address. If the T column contains R, the IP address is concatenated
with the port value of the remote location.
• The PRDID, which identifies the database product at the location.
The product identifier (PRDID) value is an 8-byte character value in pppvvrrm format, where ppp is a
3-letter product code, vv is the version, rr is the release, and m is the modification level. In Db2 12 for
z/OS, the modification level indicates a range of function levels:
• DSN12015 for V12R1M500 or higher.
• DSN12010 for V12R1M100.
For more information, see “Product identifier (PRDID) values in Db2 for z/OS” on page 518.
• Whether the local system is requesting data from the remote system, or acting as a server to the remote
system. 'R' indicates a requester connection from the local subsystem accessing the remote system. 'S'
indicates a server connection from a remote system accessing the local subsystem.
• The number of connections that have a particular attribute from or to the location. The attribute value
is blank for the message line that contains the total number of connections for the location. Additional
lines for connections with particular attributes are shown only when a detailed report is requested.
• The total number of conversations that are in use between the local system and the remote system
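As a worked example of the PRDID format described above, the value DSN12015 decodes as:

```
DSN12015
  ppp = DSN   3-letter product code (Db2 for z/OS)
  vv  = 12    version 12
  rr  = 01    release 1
  m   = 5     modification level (V12R1M500 or higher)
```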
For example, suppose two threads are at location USIBMSTODB21. One thread is a database access
thread that is related to a non-Db2 requester system, and the other is an allied thread that goes
The following output shows the result of the DISPLAY LOCATION(*) command when Db2 is connected
to the following DRDA partners:
• TCP/IP for DRDA connections to ::FFFF:124.38.54.16
• SNA connections to LULA.
The DISPLAY LOCATION command displays information for each remote location that currently is, or once
was, in contact with Db2. If a location is displayed with zero conversations, one of the following conditions
exists:
• Sessions currently exist with the partner location, but no active conversations are allocated to any of the
sessions.
• Sessions no longer exist with the partner, because contact with the partner has been lost.
Related reference
-DISPLAY LOCATION (Db2) (Db2 Commands)
Related information
DSNL200I (Db2 Messages)
Procedure
To monitor remote connections to Db2 by using profile tables, complete the following steps:
1. In the SYSIBM.DSN_PROFILE_TABLE table, insert a row to create the profile and specify its filtering
criteria:
a) In the PROFILEID column, specify a unique value or accept the generated default value.
This value identifies the profile and the relationship between DSN_PROFILE_TABLE and
DSN_PROFILE_ATTRIBUTES rows.
b) In the LOCATION column, specify the filtering scope of the profile.
You can specify an IP address or domain name value. The LOCATION value is not case sensitive,
and profile matches can occur regardless of the case of the input values.
Other filtering columns must contain the null value.
Tip: If you create multiple profiles with overlapping filtering criteria, Db2 applies only one
profile from each filtering category, based on a specific order of precedence. If multiple
DSN_PROFILE_TABLE rows specify the same filtering criteria, only the newest row is accepted
when you start the profiles, and the other duplicates are rejected. Also, exact values take
precedence over values that use an asterisk (*) wildcard. However, profiles from different filtering
categories can all apply. For more information about these rules, see “How Db2 applies multiple
matching profiles for threads and connections” on page 559.
c) In the PROFILE_ENABLED column, specify 'Y' so that the profile is enabled when profiles are
started.
If the PROFILE_AUTOSTART subsystem parameter setting is YES, the profile starts when you issue
a START PROFILE command or when Db2 starts.
2. Insert one or more SYSIBM.DSN_PROFILE_ATTRIBUTES table rows to specify the monitoring
functions of the profile and its thresholds:
a) Specify the PROFILEID value from the DSN_PROFILE_TABLE row that specifies the filtering criteria
for this profile.
Tip: Use the same PROFILEID value for any DSN_PROFILE_ATTRIBUTES rows that require the
same filtering criteria. If multiple DSN_PROFILE_TABLE rows contain exactly matching filtering
criteria, only the newest duplicate row is accepted when you start the profiles, and the others are
rejected and disabled.
b) In the KEYWORDS column, specify 'MONITOR CONNECTIONS'.
ATTRIBUTE2
For MONITOR CONNECTIONS, an integer that specifies the threshold of the total number of
remote connections that are allowed from each application server.
The maximum allowed value is equal to the value of the CONDBAT subsystem parameter.
See MAX REMOTE CONNECTED field (CONDBAT subsystem parameter) (Db2 Installation and
Migration).
A negative number deactivates this monitor function, and a message is recorded in the profile
attributes history table to indicate that the row is rejected.
ATTRIBUTE3
NULL
3. You can also create a DSN_PROFILE_ATTRIBUTES table row to monitor or limit the number of
cumulative connections from all dynamic or unknown applications.
You can create this row independently or in conjunction with a row for a MONITOR CONNECTIONS
profile. That is, completing step 2 is not strictly required for completing this step.
a) Specify the PROFILEID value from the DSN_PROFILE_TABLE row that specifies the filtering criteria
for this profile.
Tip: Use the same PROFILEID value for any DSN_PROFILE_ATTRIBUTES rows that require the
same filtering criteria. If multiple DSN_PROFILE_TABLE rows contain exactly matching filtering
ATTRIBUTE2
For MONITOR ALL CONNECTIONS, an integer that specifies the threshold for total cumulative
number of remote connections allowed from all application servers. This threshold value is
compared to an approximate count, which is maintained periodically by a background process,
of server threads that are under the control of the default location profile.
The maximum allowed value is equal to the value of the CONDBAT subsystem parameter.
See MAX REMOTE CONNECTED field (CONDBAT subsystem parameter) (Db2 Installation and
Migration).
When the specified value is a negative number, this monitor function is not used and a message
is recorded in the profile attributes history table to indicate that this row is rejected.
ATTRIBUTE3
NULL
Example
Assume that you know of one specific remote IP address that accesses Db2, and you want to create a
default location profile to monitor all other remote connections.
In this case, you might complete the following steps:
1. Insert the following DSN_PROFILE_TABLE rows.
Procedure
To monitor threads by using profile tables, complete the following steps:
1. In the SYSIBM.DSN_PROFILE_TABLE table, insert a row to create the profile and specify its filtering
criteria:
a) In the PROFILEID column, specify a unique value or accept the generated default value.
This value identifies the profile and the relationship between DSN_PROFILE_TABLE and
DSN_PROFILE_ATTRIBUTES rows.
b) Specify the filtering criteria of the profile.
The values that you insert must be from one of the following filtering categories:
• LOCATION only
• PRDID only
The following table summarizes the filtering actions that are taken for different filtering
categories.
Product identifier, role, authorization identifier, or server location name
When the total number of queued and suspended threads exceeds the threshold, the Db2 server
fails subsequent connection requests and returns SQLCODE -30041 to the client.
Collection identifier, package name, client user name, client application name, or client workstation name
When the total number of queued and suspended threads exceeds the threshold, Db2 fails
subsequent SQL statements and returns SQLCODE -30041 to the client.
For example, suppose that a profile for a package is started. That profile uses ATTRIBUTE2=2. If
five threads request to run the package, two threads run concurrently, two threads are queued and
suspended, and Db2 fails the SQL statements for the fifth thread.
ATTRIBUTE2
For MONITOR THREADS, an integer that specifies the threshold of the total number of active
server threads that are allowed from each remote application.
The maximum allowed value is equal to the value of the MAXDBAT subsystem parameter. See
MAX REMOTE ACTIVE field (MAXDBAT subsystem parameter) (Db2 Installation and Migration).
A negative number deactivates this monitor function, and a message is recorded in the profile
attributes history table to indicate that the row is rejected.
ATTRIBUTE3
For MONITOR THREADS, specifies the threshold for the maximum number of server threads
that are allowed to be queued and suspended by the profile criteria. The value must be a whole
number, less than or equal to the value of the ATTRIBUTE2 column.
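The ATTRIBUTE2 and ATTRIBUTE3 interaction above can be sketched as a pair of inserts (the product identifier, PROFILEID, and thresholds are illustrative values):

```sql
-- Hypothetical filter: requests from clients with this product identifier
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, PRDID, PROFILE_ENABLED)
VALUES (103, 'JCC04250', 'Y');

-- Allow 20 active threads; queue at most 10 more before failing requests
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2, ATTRIBUTE3)
VALUES (103, 'MONITOR THREADS', 'EXCEPTION', 20, 10);
```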
3. You can also create a DSN_PROFILE_ATTRIBUTES table row to monitor or limit the number of
cumulative threads from all dynamic or unknown applications.
You can create this row independently or in conjunction with a row for a MONITOR THREADS profile.
That is, completing step 2 is not strictly required for completing this step.
a) Specify the PROFILEID value from the DSN_PROFILE_TABLE row that specifies the filtering criteria
for this profile.
Tip: Use the same PROFILEID value for any DSN_PROFILE_ATTRIBUTES rows that require the
same filtering criteria. If multiple DSN_PROFILE_TABLE rows contain exactly matching filtering
criteria, only the newest duplicate row is accepted when you start the profiles, and the others are
rejected and disabled.
b) In the KEYWORDS column, specify 'MONITOR ALL THREADS'.
c) In the ATTRIBUTEn columns, specify the attributes of the profile:
ATTRIBUTE1
For MONITOR ALL THREADS, specifies the action and messages that are issued when the
number of active server threads that match the filtering criteria of the profile reach the specified
thresholds.
ATTRIBUTE2
For MONITOR ALL THREADS, an integer that specifies the threshold for the total cumulative
number of active server threads that are allowed from all application servers. This threshold
value is compared to an approximate count, which is maintained periodically by a background
process, of server threads that are under the control of the default location profile.
The maximum allowed value is equal to the value of the MAXDBAT subsystem parameter.
For more information, see MAX REMOTE ACTIVE field (MAXDBAT subsystem parameter) (Db2
Installation and Migration).
When the specified value is a negative number, this monitor function is not used and a message
is recorded in the profile attributes history table to indicate that this row is rejected.
ATTRIBUTE3
NULL
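The default location profile for MONITOR ALL THREADS can be sketched as follows (PROFILEID and threshold are illustrative values):

```sql
-- Default location profile row for all remote locations
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, LOCATION, PROFILE_ENABLED)
VALUES (104, '::0', 'Y');

-- Monitor the cumulative active-thread count from all application servers
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2, ATTRIBUTE3)
VALUES (104, 'MONITOR ALL THREADS', 'WARNING', 800, NULL);
```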
4. Load or reload the profile tables into memory by issuing a START PROFILE command. (For best
results, do not issue a STOP PROFILE command when you add or modify existing profiles. Use the
STOP PROFILE command only if you intend to disable all existing profiles.) For more information, see
“Starting and stopping profiles” on page 557.
5. Check the status of all newly added profiles in the STATUS columns of the DSN_PROFILE_HISTORY
and DSN_PROFILE_ATTRIBUTES_HISTORY tables.
Successful completion of the START PROFILE command does not imply that all profiles started
successfully. If the STATUS column of either history table contains a value that does not start with
'ACCEPTED', further action is required to enable the profile or the keyword action.
Example
Assume that you know of one specific remote IP address that accesses Db2, and you want to create a
default location profile to monitor threads for all other remote connections.
In this case, you might complete the following steps:
1. Insert the following DSN_PROFILE_TABLE rows.
Procedure
To monitor idle threads by using profile tables, complete the following steps:
1. In the SYSIBM.DSN_PROFILE_TABLE table, insert a row to create the profile and specify its filtering
criteria:
a) In the PROFILEID column, specify a unique value or accept the generated default value.
This value identifies the profile and the relationship between DSN_PROFILE_TABLE and
DSN_PROFILE_ATTRIBUTES rows.
b) Specify the filtering criteria of the profile.
You can specify values from one of the following filtering categories:
• LOCATION only
• PRDID only
• AUTHID, ROLE, or both
• COLLID, PKGNAME, or both
• One of CLIENT_APPLNAME, CLIENT_USERID, or CLIENT_WRKSTNNAME
The filtering values are not case-sensitive, and profiles can match regardless of the case of the
input values.
Other filtering columns must contain the null value.
Tip: If you create multiple profiles with overlapping filtering criteria, Db2 applies only one
profile from each filtering category, based on a specific order of precedence. If multiple
DSN_PROFILE_TABLE rows specify the same filtering criteria, only the newest row is accepted
when you start the profiles, and the other duplicates are rejected.
What to do next
Examine the accounting trace data that you obtained to determine which connection or thread exceeded
a warning or exception level that was set by a monitor profile. The following trace fields provide that
information:
Related concepts
Examples for profiles that monitor and control threads and connections
Examples are useful for helping you to understand the interactions between profiles that monitor system
resources such as threads and connections.
Related tasks
Monitoring and controlling Db2 by using profile tables
You can create profiles to monitor and control various aspects of a Db2 subsystem in specific application
contexts, especially for remote applications.
Related reference
-START PROFILE (Db2) (Db2 Commands)
DSN_PROFILE_TABLE profile table (Db2 Performance)
DSN_PROFILE_ATTRIBUTES profile table (Db2 Performance)
Related information
00E30501 (Db2 Codes)
00E30502 (Db2 Codes)
Procedure
Call the SQLCancel() function for C-based driver applications, or the Statement.cancel method for
Java-based driver applications.
Note: A client driver can implicitly cancel an SQL statement when the query timeout interval is reached.
Canceling threads
You can use the CANCEL THREAD command to terminate threads that are active or suspended in Db2.
Procedure
To terminate a thread, enter one of the following commands:
• To cancel a thread with a token, enter:
• Alternatively, you can use the following version of the command with either the token or LUW ID:
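The command text is not reproduced in this excerpt; as a sketch, the two forms look like this, where token and luwid are placeholders for values taken from DISPLAY THREAD output:

```
-CANCEL THREAD(token)
-CANCEL DDF THREAD(luwid|token)
```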
Results
The token is a 1-character to 5-character number that identifies the thread. When Db2 schedules
the thread for termination, the following message is issued for a distributed thread:
CANCEL THREAD allows you to specify that a diagnostic dump be taken.
Related reference
-CANCEL THREAD (Db2) (Db2 Commands)
Key
Description
1
The system that is reporting the error. The system that is reporting the error is always on the left side
of the panel. That system name appears first in the messages. Depending on who is reporting the
error, either the LUNAME or the location name is used.
2
The system that is affected by the error. The system that is affected by the error is always displayed
to the right of the system that is reporting the error. The affected system name appears second in the
messages. Depending on what type of system is reporting the error, either the LUNAME or the location
name is used.
If no other system is affected by the error, this system does not appear on the panel.
3
Db2 reason code.
Related reference
IBM Tivoli NetView for z/OS User's Guide
DDF alerts
Several major events generate alerts.
• Conversation failures
• Distributed security failures
• DDF abends
• DDM protocol errors
• Database access thread abends
• Distributed allied thread abends
Alerts for DDF are displayed on NetView Hardware Monitor panels and are logged in the hardware monitor
database. The following figure is an example of the Alerts-Static panel in NetView.
Controlling traces
Several traces are available for problem determination.
Procedure
To control Db2 traces, use the following approaches:
• Issue the following trace commands for the action that you want to take:
– -START TRACE (Db2) (Db2 Commands) invokes one or more different trace types.
– -DISPLAY TRACE (Db2) (Db2 Commands) displays the trace options that are in effect.
– -STOP TRACE (Db2) (Db2 Commands) stops any trace that was started by either the START TRACE
command or by a subsystem parameter setting when Db2 started.
– -MODIFY TRACE (Db2) (Db2 Commands) changes the trace events (IFCIDs) that are being traced for
a specified active trace.
You can specify other parameters to further qualify the scope of a trace. You can trace specific events
within a trace type, and events within specific Db2 plans, authorization IDs, resource manager IDs, and
locations. You can also control the destination for the trace data.
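As a sketch (the trace type, classes, and destination are illustrative choices), a trace session might be controlled like this:

```
-START TRACE(ACCTG) CLASS(1,2) DEST(SMF)
-DISPLAY TRACE(*)
-STOP TRACE(ACCTG) CLASS(1,2)
```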
• You can specify certain trace types and classes to start automatically when Db2 starts by setting the
following subsystem parameters:
– AUDIT TRACE field (AUDITST subsystem parameter) (Db2 Installation and Migration) specifies
whether to start the audit trace automatically when Db2 starts, and the audit trace classes to start.
– TRACE AUTO START field (TRACSTR subsystem parameter) (Db2 Installation and Migration)
specifies whether to start the global trace automatically when Db2 starts, and the global trace classes
to start.
– SMF ACCOUNTING field (SMFACCT subsystem parameter) (Db2 Installation and Migration) specifies
whether to send accounting data to SMF automatically when Db2 starts, and the accounting trace
classes to be sent.
START irlmproc,TRACE=YES
Related information
Command types and environments in Db2 (Db2 Commands)
Procedure
To modify special register values for the behavior of specific dynamic SQL statements, complete the
following steps:
1. In the SYSIBM.DSN_PROFILE_TABLE table, insert a row to create the profile and specify its filtering
criteria:
a) In the PROFILEID column, specify a unique value or accept the generated default value.
This value identifies the profile and the relationship between DSN_PROFILE_TABLE and
DSN_PROFILE_ATTRIBUTES rows.
b) Specify the filtering criteria of the profile.
You can specify values in the columns from one of the following filtering categories:
• LOCATION only
• PRDID only
• AUTHID, ROLE, or both
• COLLID, PKGNAME, or both
• One of CLIENT_APPLNAME, CLIENT_USERID, or CLIENT_WRKSTNNAME
The filtering values are not case-sensitive, and profiles can match regardless of the case of the
input values.
Important: Although PKGNAME can be used as a filtering category for profile table rows that use
the 'SPECIAL_REGISTER' value for KEYWORDS, when client drivers are used, you should not use
PKGNAME alone or in combination with COLLID. For more information, see Using profile tables to
control which Db2 for z/OS application compatibility levels to use for specific data server client
applications (Db2 Application programming and SQL).
SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION (Db2 SQL)
The following syntax variations are not supported in profiles:
• CURRENT MAINTAINED TYPES
• CURRENT MAINTAINED TYPES FOR OPTIMIZATION
SET CURRENT OPTIMIZATION HINT (Db2 SQL)
SET CURRENT PACKAGE PATH (Db2 SQL)
SET CURRENT PRECISION (Db2 SQL)
SET CURRENT QUERY ACCELERATION (Db2 SQL)
SET CURRENT QUERY ACCELERATION WAITFORDATA (Db2 SQL)
SET CURRENT REFRESH AGE (Db2 SQL)
The value 99999999999999 is not supported. Use the value ANY instead.
SET SESSION TIME ZONE (Db2 SQL)
The following syntax variations are not supported in profiles:
• TIMEZONE
• TIME ZONE
• SESSION TIMEZONE
ATTRIBUTE2
NULL
The profile applies to remote threads only. The profile is evaluated and SET statements are
processed only when the first package is loaded, and when the first SQL statement (other
than a SET statement) in the package executes.
ATTRIBUTE3
NULL
3. Load or reload the profile tables into memory by issuing a START PROFILE command. (For best
results, do not issue a STOP PROFILE command when you add or modify existing profiles. Use the
STOP PROFILE command only if you intend to disable all existing profiles.) For more information, see
“Starting and stopping profiles” on page 557.
4. Check the status of all newly added profiles in the STATUS columns of the DSN_PROFILE_HISTORY
and DSN_PROFILE_ATTRIBUTES_HISTORY tables.
Successful completion of the START PROFILE command does not imply that all profiles started
successfully. If the STATUS column of either history table contains a value that does not start with
'ACCEPTED', further action is required to enable the profile or the keyword action.
Example
Suppose that you insert the following row in SYSIBM.DSN_PROFILE_ATTRIBUTES:
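The example row itself is not reproduced in this excerpt. As an illustration only (the PROFILEID and register value are hypothetical, not the original example), a SPECIAL_REGISTER row might look like this:

```sql
-- Hypothetical example: apply a special register setting for profile 51
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2, ATTRIBUTE3)
VALUES (51, 'SPECIAL_REGISTER',
        'SET CURRENT APPLICATION COMPATIBILITY = ''V12R1M500''',
        NULL, NULL);
```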
Procedure
To set global variables for specific remote applications, complete the following steps:
1. In the SYSIBM.DSN_PROFILE_TABLE table, insert a row to create the profile and specify its filtering
criteria:
ATTRIBUTE2
NULL
The profile applies to remote threads only. The profile is evaluated and SET statements are
processed only when the first package is loaded, and when the first SQL statement (other than
a SET statement) in the package executes.
ATTRIBUTE3
NULL
3. Load or reload the profile tables into memory by issuing a START PROFILE command. (For best
results, do not issue a STOP PROFILE command when you add or modify existing profiles. Use the
STOP PROFILE command only if you intend to disable all existing profiles.) For more information, see
“Starting and stopping profiles” on page 557.
4. Check the status of all newly added profiles in the STATUS columns of the DSN_PROFILE_HISTORY
and DSN_PROFILE_ATTRIBUTES_HISTORY tables.
Successful completion of the START PROFILE command does not imply that all profiles started
successfully. If the STATUS column of either history table contains a value that does not start with
'ACCEPTED', further action is required to enable the profile or the keyword action.
Example
Suppose that you insert the following row into SYSIBM.DSN_PROFILE_ATTRIBUTES:
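The example row itself is not reproduced in this excerpt. As an illustration only (the PROFILEID and variable value are hypothetical, not the original example), a GLOBAL_VARIABLE row might look like this:

```sql
-- Hypothetical example: set a built-in global variable for profile 52
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2, ATTRIBUTE3)
VALUES (52, 'GLOBAL_VARIABLE',
        'SET SYSIBMADM.GET_ARCHIVE = ''Y''',
        NULL, NULL);
```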
If a set of profile tables and related objects do not already exist on the Db2 subsystem, you must create
them.
The profile tables and related indexes are created when you run job DSNTIJSG during Db2 installation, as
described in Job DSNTIJSG (Db2 Installation and Migration). A complete set of profile tables and related
indexes includes the following tables and indexes:
• SYSIBM.DSN_PROFILE_TABLE
• SYSIBM.DSN_PROFILE_HISTORY
• SYSIBM.DSN_PROFILE_ATTRIBUTES
• SYSIBM.DSN_PROFILE_ATTRIBUTES_HISTORY
• SYSIBM.DSN_PROFILE_TABLE_IX_ALL
• SYSIBM.DSN_PROFILE_TABLE_IX2_ALL
• SYSIBM.DSN_PROFILE_ATTRIBUTES_IX_ALL
If you plan to use TCP/IP domain names for profile filtering, you must enable the database services
address space (ssnmDBM1) to access TCP/IP services. See Enabling Db2 to access TCP/IP services in
z/OS UNIX System Services (Db2 Installation and Migration).
MONITOR ALL CONNECTIONS
Applicable filtering category: LOCATION only (specify '*', '::0', or '0.0.0.0').
See “Monitoring remote connections by using profile tables” on page 521 (see step 3).
Remote threads: MONITOR THREADS
Applicable filtering categories:
• LOCATION only
• PRDID only
• AUTHID, ROLE, or both
• COLLID, PKGNAME, or both
• One of CLIENT_APPLNAME, CLIENT_USERID, or CLIENT_WRKSTNNAME
The filtering values are not case-sensitive, and profiles can match regardless of the case of the
input values.
See “Monitoring threads by using profile tables” on page 526.
MONITOR ALL THREADS
Applicable filtering category: LOCATION only (specify '*', '::0', or '0.0.0.0').
See “Monitoring threads by using profile tables” on page 526 (see step 3).
Lock sharing for RRS connections: SHARE_LOCKS
Applicable filtering categories:
• LOCATION only
• PRDID only
• AUTHID, ROLE, or both
• COLLID, PKGNAME, or both
• One of CLIENT_APPLNAME, CLIENT_USERID, or CLIENT_WRKSTNNAME
The filtering values are not case-sensitive, and profiles can match regardless of the case of the
input values.
See “Sharing locks for stored procedures that invoke transactions in RRS contexts by using profile
tables” on page 512.
Query acceleration thresholds: ACCEL_TABLE_THRESHOLD, ACCEL_RESULTSIZE_THRESHOLD,
ACCEL_TOTALCOST_THRESHOLD
Applicable filtering categories: contact IBM Support for the specific accelerator product.
See the accelerator product documentation.
Subsystem modeling: BPname, MAX_RIDBLOCKS, SORT_POOL_SIZE
Applicable filtering categories: none. Profiles for this purpose have a global scope on the test
subsystem.
See Modeling a production environment on a test subsystem (Db2 Performance).
Procedure
To create a profile, complete the following general steps:
1. In the SYSIBM.DSN_PROFILE_TABLE, insert rows to define the filtering criteria for the profiles.
a) Specify a unique PROFILEID value to identify the profile.
b) In the other columns, specify the filtering criteria for the profile.
The filtering criteria columns that you specify must be from a single filtering category for the type of
profile that you are creating. See the "Applicable DSN_PROFILE_TABLE filtering categories" column
in the preceding table. Because the filtering criteria must be from a single valid filtering category,
every valid DSN_PROFILE_TABLE row contains null values in some columns.
Also, the same filtering column and value must not already be used by an existing profile.
Tip: If you create multiple profiles with overlapping filtering criteria, Db2 applies only one
profile from each filtering category, based on a specific order of precedence. If multiple
DSN_PROFILE_TABLE rows specify the same filtering criteria, only the newest row is accepted
when you start the profiles, and the other duplicates are rejected. Also, exact values take
precedence over values that use an asterisk (*) wildcard. However, profiles from different filtering
categories can all apply. For more information about these rules, see “How Db2 applies multiple
matching profiles for threads and connections” on page 559.
For column descriptions, see DSN_PROFILE_TABLE profile table (Db2 Performance).
2. In SYSIBM.DSN_PROFILE_ATTRIBUTES table, insert rows to specify the actions that Db2 takes when
processes or applications match the filtering criteria specified in DSN_PROFILE_TABLE.
a) Specify the same PROFILEID value from the DSN_PROFILE_TABLE row for this profile.
Tip: Use the same PROFILEID value for any DSN_PROFILE_ATTRIBUTES rows that require the
same filtering criteria. If multiple DSN_PROFILE_TABLE rows contain exactly matching filtering
criteria, only the newest duplicate row is accepted when you start the profiles, and the others are
rejected and disabled.
Procedure
To start or stop profiles, complete the following steps:
• To start a profile, issue a START PROFILE command.
Important: The START PROFILE command has member scope in Db2 data sharing, so it is best to run
the command on each member to ensure consistent results from all members.
Db2 activates the functions specified in the profile tables for every valid row of the
SYSIBM.DSN_PROFILE_TABLE table that contains PROFILE_ENABLED='Y'. Profiles in rows that
contain PROFILE_ENABLED='N' are not started.
START PROFILE
What to do next
After you modify an existing, active profile row, or insert a new profile row, you must issue the -START
PROFILE command again to apply the changes. For profiles that affect access path selection, you must
invalidate the statements in the dynamic statement cache before changes to profile attribute values are
applied for those statements.
Important: Always check the status of any newly added profile table rows in the STATUS columns of
DSN_PROFILE_HISTORY and DSN_PROFILE_ATTRIBUTES_HISTORY tables after you issue the START
PROFILE command. Successful completion of the START PROFILE command does not imply that all
profiles started successfully. If the STATUS column of either history table contains a value that does not
start with 'ACCEPTED', further action is required to enable the profile or the keyword action.
Related tasks
Invalidating statements in the dynamic statement cache (Db2 Performance)
Related reference
-START PROFILE (Db2) (Db2 Commands)
-STOP PROFILE (Db2) (Db2 Commands)
PROFILE AUTOSTART field (PROFILE_AUTOSTART subsystem parameter) (Db2 Installation and Migration)
Procedure
To modify existing profiles and apply the changes:
1. Insert, update, or delete from the profile tables to define the changes.
Important: To avoid disabling existing profiles, always use existing PROFILEID values for
any new DSN_PROFILE_ATTRIBUTES rows that require the same filtering criteria as existing
DSN_PROFILE_TABLE rows. If multiple DSN_PROFILE_TABLE rows contain exactly matching filtering
criteria, only the newest duplicate row is accepted when you start the profiles, and the others are
rejected and disabled.
Table 57. Categories and columns used to specify valid profiles for threads and connections
Filtering category Columns to specify
Client IP address or domain Specify only the LOCATION column. The value can be an IP address or
name domain name.
This category is the only accepted filtering criteria for profiles that specify the MONITOR
CONNECTIONS keyword.
If one of the following values is specified, this category can also be used
for profiles that specify the MONITOR ALL THREADS or MONITOR ALL
CONNECTIONS keywords: '*', '::0', or '0.0.0.0'.
Location name, or location Specify only the location name or location alias in LOCATION column.
alias
This category applies only to profiles that specify MONITOR THREADS and
MONITOR IDLE THREADS.
PROFILEID AUTHID
21 AUTHID='USER1'
22 AUTHID='USER*'
23 AUTHID='*'
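Sketched as inserts (the PROFILE_ENABLED value is an assumption; the PROFILEID and AUTHID values come from the table above):

```sql
-- The three example filtering rows from the preceding table
INSERT INTO SYSIBM.DSN_PROFILE_TABLE (PROFILEID, AUTHID, PROFILE_ENABLED)
VALUES (21, 'USER1', 'Y'),
       (22, 'USER*', 'Y'),
       (23, '*', 'Y');
```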
Also assume that the DSN_PROFILE_ATTRIBUTES table contains rows with the following values:
In this case, the profiles with PROFILEID=22 or PROFILEID=23 do not apply to threads from the
USER1 authorization ID. USER1 can have as many as 100 threads before a request is rejected,
whereas other authorization IDs that begin with USER can have only 50 threads, and authorization IDs
that do not begin with USER can have only 20 threads.
However, threads from USER1 do count against the thresholds for evaluating all three profiles, so if
USER1 has 50 or more active threads, threads from all other authorization IDs are rejected.
Rule: Apply only one profile for the same filtering category
For profiles that specify overlapping filtering criteria in the same filtering category, Db2 applies only
one profile.
For example, assume that DSN_PROFILE_TABLE contains rows with the following values:
PROFILEID AUTHID
17 *
18 JIM
Also assume that the DSN_PROFILE_ATTRIBUTES table contains rows with the following values:
Only the profile with PROFILEID=18 applies to any thread from the JIM authorization ID. In this
example, the special register setting specified by the profile with PROFILEID=17 does not apply to
any thread from the JIM authorization ID.
Rule: Different filtering categories take precedence
Db2 evaluates the filtering criteria for profiles in the following order of precedence. This list also
illustrates the next two rules.
1. IP address or domain name, in the LOCATION column.
2. Product identifier, in the PRDID column.
3. Role and authorization identifier, in both ROLE and AUTHID columns. Within this category, Db2
uses the following order of precedence:
a. ROLE and AUTHID
b. ROLE only
c. AUTHID only
4. Server location name, location alias, or database name, in the LOCATION column.
5. The location name of a requester, for monitored threads from a Db2 for z/OS requester, in the
LOCATION column.
6. Collection identifier and package name, in both COLLID and PKGNAME columns. Within this
category, Db2 uses the following order of precedence:
a. COLLID and PKGNAME
b. COLLID only
c. PKGNAME only
7. Client application name, user identifier, or workstation name, in the following columns:
a. CLIENT_APPLNAME
b. CLIENT_USERID
c. CLIENT_WRKSTNNAME
Rule: Apply the profile that specifies more criteria in the same filtering category
Db2 applies the profile that specifies more criteria in a category. That is, Db2 applies profiles in the
following order of precedence.
For example, assume that DSN_PROFILE_TABLE contains rows with the following values:
Db2 applies the profile with PROFILEID=17 because it specifies more criteria.
Rule: Certain criteria take priority within filtering categories
Within certain categories, Db2 gives priority to certain criteria.
• ROLE takes precedence over AUTHID.
• COLLID takes precedence over PKGNAME.
For example, assume that DSN_PROFILE_TABLE contains rows with the following values:
Db2 applies the profile with PROFILEID=20 because COLLID always takes precedence over
PKGNAME.
Related concepts
Examples for profiles that monitor and control threads and connections
Examples are useful for helping you to understand the interactions between profiles that monitor system
resources such as threads and connections.
Related reference
Profile tables (Db2 Performance)
Notes:
1. The profile that is identified by PROFILEID=17 specifies values for columns in different filtering categories.
Consequently, Db2 rejects this row when you issue the START PROFILE command.
2. The value DSN* is a string in which the first three characters match DSN followed by the version identifier.
For example, PRDID='DSN12015' or any PRDID where the first three characters match 'DSN'.
3. The value of USER* is a string in which the first four characters match USER followed by the unique user ID.
For example, AUTHID='USER8' or any AUTHID where the first four characters match 'USER'.
4. Setting the LOCATION column to '0.0.0.0' has the same effect as setting it to '::0'.
The following examples assume that DSN_PROFILE_ATTRIBUTES also contains rows with the matching
PROFILEID value and a KEYWORDS value that accepts the filtering criteria in each example.
Consider example threads that have the following attributes:
ROLE='ROLE_APP' and AUTHID='USER1':
The criteria of profile 12 and profile 14 match the thread, but Db2 uses only profile 14 to evaluate
whether to apply a threshold to the thread because ROLE takes precedence over AUTHID.
ROLE='ROLE_DBA' and AUTHID='USER2':
Db2 applies only the profile that is identified by PROFILEID=11.
ROLE='ROLE_DBA' and AUTHID='USER1':
The criteria of the following profiles match the thread: PROFILEID=11, PROFILEID=12, and
PROFILEID=13. However, Db2 applies only PROFILEID=13 to evaluate whether to apply a threshold
against the thread. The profile that defines both ROLE and AUTHID takes precedence over a profile
that defines only one of those values.
In practice, this result means that a profile that sets a lower threshold might be overruled by a profile
that specifies a greater threshold. For example, assume that the DSN_PROFILE_ATTRIBUTES table
contains the rows shown in the following table.
When you consider these values and the values in Table 58 on page 563, you see that the following
thresholds are created:
• "Profile 11" indicates that as many as 100 threads are accepted from the ROLE_DBA role.
• "Profile 12" indicates that as many as 20 threads are accepted from the USER1 authorization ID.
• "Profile 13" indicates that as many as 50 threads are accepted for threads from the USER1
authorization ID under the ROLE_DBA role.
All of the example profiles specify filtering criteria from the same category. So, only one of the profiles
applies to any particular thread. In this example, profile 13 applies to any thread that matches the
AUTHID='USER1' and ROLE='ROLE_DBA' values. Therefore, because profile 13 takes precedence,
profile 12 is never applied to any thread that meets both of these criteria. So, as many as 50 threads
might be accepted from the 'USER1' authorization ID, before any action is taken.
• "Profile 21" indicates that Db2 monitors active threads that meet the criteria defined by the
DSN_PROFILE_TABLE row that contains 21 in the PROFILEID column. When the number of active
threads exceeds 100, Db2 issues a message and suspends any new thread requests. When the number
of the suspended threads exceeds 100, Db2 starts to reject any new thread request and issues
SQLCODE -30041.
• "Profile 22" indicates that Db2 monitors idle threads that meet the criteria defined by the
DSN_PROFILE_TABLE that contains 22 in the PROFILEID column. When a thread remains idle for more
than 30 seconds, Db2 issues a message and terminates the idle thread.
[Figure: A unit of recovery shown on an application time line, containing SQL transaction 1 and SQL
transaction 2.]
In this example, the application process makes changes to databases at SQL transactions 1 and 2. The
application process can include a number of units of recovery or just one, but any complete unit of
recovery ends with a commit point.
For example, a bank transaction might transfer funds from account A to account B. First, the program
subtracts the amount from account A. Next, it adds the amount to account B. After subtracting the
amount from account A, the two accounts are inconsistent. These accounts are inconsistent until the
[Figure: A time line showing database updates followed by a rollback that backs out those updates.]
The possible events that trigger "Begin rollback" in this figure include:
• SQL ROLLBACK statement
• Deadlock (reported as SQLCODE -911)
• Timeout (reported as SQLSTATE 40001)
The effects of inserts, updates, and deletes to large object (LOB) values are backed out along with all the
other changes that were made during the unit of work that is being rolled back, even if the LOB values that
were changed reside in a LOB table space that has the LOG NO attribute.
An operator or an application can issue the CANCEL THREAD command with the NOBACKOUT option to
cancel long-running threads without backing out data changes. Db2 backs out changes to catalog and
directory tables regardless of the NOBACKOUT option. As a result, Db2 does not read the log records and
does not write or apply the compensation log records. After CANCEL THREAD NOBACKOUT processing,
Db2 marks all objects that are associated with the thread as refresh-pending (REFP) and puts the objects
in a logical page list (LPL).
The NOBACKOUT request might fail for either of the following two reasons:
• Db2 does not completely back out updates of the catalog or directory (message DSNI032I with reason
00C900CC).
• The thread is part of a global transaction (message DSNV439I).
Chapter 10. Managing the log and the bootstrap data set 569
When Db2 is initialized, the active log data sets that are named in the BSDS are dynamically allocated for
exclusive use by Db2 and remain allocated exclusively to Db2 (the data sets were allocated as DISP=OLD)
until Db2 terminates. Those active log data sets cannot be replaced, nor can new ones be added, without
terminating and restarting Db2. The size and number of log data sets is indicated by what was specified
by installation panel DSNTIPL. The use of dual active logs increases availability as well as the reliability of
recovery by eliminating a single point of failure.
[Figure: The offload process writes active log data to the archive log and records the archive log data
set in the BSDS.]
During the process, Db2 determines which data set to offload. Using the last log relative byte address
(RBA) that was offloaded, as registered in the BSDS, Db2 calculates the log RBA at which to start. Db2
also determines the log RBA at which to end, from the RBA of the last log record in the data set, and
registers that RBA in the BSDS.
When all active logs become full, the Db2 subsystem runs an offload and halts processing until the offload
is completed. If the offload processing fails when the active logs are full, Db2 cannot continue doing any
work that requires writing to the log.
Related information
Recovering from active log failures
A variety of active log failures might occur, but you can recover from them.
Db2 continues processing. The operator can cancel and then restart the offload.
• One of the following messages appears when offload reaches end-of-volume or end-of-data-set in an
archive log data set:
The non-data sharing version of this message is:
• The following message appears when one data set of the next pair of active logs is not available
because of a delay in offloading, and logging continues on one copy only:
• The following message appears when dual active logging resumes after logging has been performed on
one copy only:
• The following message indicates that the offload task has ended:
Related reference
SINGLE VOLUME field (SVOLARC subsystem parameter) (Db2 Installation and Migration)
The preceding command allows for a quiesce period of up to 60 seconds before archive log processing
occurs.
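The preceding command is presumably of the following form, a sketch based on the ARCHIVE LOG
options MODE(QUIESCE) and TIME; the TIME value is assumed from the 60-second quiesce period that is
described:

```
-ARCHIVE LOG MODE(QUIESCE) TIME(60)
```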
Important: Use of this option during prime time, or when time is critical, can cause a significant
disruption in Db2 availability for all jobs and users that use Db2 resources.
By default, the command is processed asynchronously from the time you submit the command. (To
process the command synchronously with other Db2 commands, use the WAIT(YES) option with
QUIESCE; the z/OS console is then locked from Db2 command input for the entire QUIESCE period.)
During the quiesce period:
• Jobs and users on Db2 are allowed to go through commit processing, but they are suspended if they try
to update any Db2 resource after the commit.
• Jobs and users that only read data can be affected, because they can be waiting for locks that are held
by jobs or users that were suspended.
• New tasks can start, but they are not allowed to update data.
As shown in the following example, the DISPLAY THREAD output issues message DSNV400I to indicate
that a quiesce is in effect:
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT '-DISPLAY THREAD' NORMAL COMPLETION
When all updates are quiesced, the quiesce history record in the BSDS is updated with the date and time
that the active log data sets were truncated, and with the last-written RBA in the current active log data
sets. Db2 truncates the current active log data sets, switches to the next available active log data sets,
and issues message DSNJ311E, stating that offload started.
If updates cannot be quiesced before the quiesce period expires, Db2 issues message DSNJ317I, and
archive log processing terminates. The current active log data sets are not truncated and not switched to
the next available log data sets, and offload is not started.
Regardless of whether the quiesce is successful, all suspended users and jobs are then resumed, and Db2
issues message DSNJ312I, stating that the quiesce is ended and update activity is resumed.
If ARCHIVE LOG is issued when the current active log is the last available active log data set, the
command is not processed, and Db2 issues this message:
If ARCHIVE LOG is issued when another ARCHIVE LOG command is already in progress, the new
command is not processed, and Db2 issues this message:
GUPI
Related reference
DSNTIPA: Archive log data set parameters panel (Db2 Installation and Migration)
-ARCHIVE LOG (Db2) (Db2 Commands)
Procedure
Enter the following command:
-ARCHIVE LOG
When you issue the preceding command, Db2 truncates the current active log data sets, runs an
asynchronous offload, and updates the BSDS with a record of the offload. The RBA that is recorded in
the BSDS is the beginning of the last complete log record that is written in the active log data set that is
being truncated.
Example
You can use the ARCHIVE LOG command as follows to capture a point of consistency for the MSTR01 and
XUSR17 databases:
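A sketch of such a command sequence; the surrounding STOP DATABASE and START DATABASE
commands are illustrative, and MODE(QUIESCE) is assumed so that update activity is quiesced before the
archive point is taken:

```
-STOP DATABASE (MSTR01,XUSR17)
-ARCHIVE LOG MODE(QUIESCE)
-START DATABASE (MSTR01,XUSR17)
```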
When you enter the command, Db2 restarts the offload, beginning with the oldest active log data set and
proceeding through all active log data sets that need offloading. If the offload fails again, you must fix the
problem that is causing the failure before the command can work. GUPI
Adding an active log data set to the active log inventory with the SET LOG
command
You can use the SET LOG command to add a new active log data set to the active log inventory without
stopping and starting Db2.
Procedure
Issue the SET LOG command with the NEWLOG and COPY keywords.
If the Db2 database manager can open the newly defined log data set, the log data set is added to the
active log inventory in the bootstrap data set (BSDS). The new active log data set is immediately available
for use without stopping and starting the database manager.
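For example, a command of the following form adds a new copy-1 active log data set; the data set name
is illustrative:

```
-SET LOG NEWLOG(DSNC120.LOGCOPY1.DS04) COPY(1)
```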
Currently, if you do not stop and start Db2 between adding new active log data sets to the inventory and
Db2 using those data sets, Db2 uses the active log data sets in the reverse of the order in which you
added them to the active log inventory with the SET LOG command.
For example, suppose that the active log inventory contains data sets DS01, DS02, and DS03, and you
add data set DS04 and then data set DS05. If data set DS03 is active, and you issue the ARCHIVE LOG
command, the new active log becomes DS05. However, if you stop and start Db2 during the period after
you add new active log data sets and before Db2 uses them, the order of use might be different.
This behavior might change in the future, so schemes for adding and switching active logs should not
depend on this order.
Related reference
-SET LOG (Db2) (Db2 Commands)
DSNJLOGF (preformat active log) (Db2 Utilities)
Dynamically changing the checkpoint frequency
You can use the LOGLOAD option, the CHKTIME option, or a combination of both of these options of the
SET LOG command to dynamically change the checkpoint frequency without recycling Db2.
Either value affects the restart time for Db2. For example, during prime shift, your Db2 shop might have
a low logging rate but require that Db2 restart quickly if it terminates abnormally. To meet this restart
requirement, you can decrease the LOGLOAD value to force a higher checkpoint frequency. Conversely,
during off-shift hours, the logging rate might increase as batch updates are processed, but the restart
time for Db2 might not be as critical. In that case, you can increase the LOGLOAD value, which lowers the
checkpoint frequency.
You also can use either the LOGLOAD option or the CHKTIME option to initiate an immediate system
checkpoint. For example:
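A sketch of the elided example; with the SET LOG command, a value of 0 for either option presumably
requests an immediate system checkpoint without changing the saved checkpoint frequency:

```
-SET LOG LOGLOAD(0)
-SET LOG CHKTIME(0)
```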
The CHKFREQ value that is altered by the SET LOG command persists only while Db2 is active. On restart,
Db2 uses the CHKFREQ value in the Db2 subsystem parameter load module. GUPI
Related reference
-SET LOG (Db2) (Db2 Commands)
Db2 continues processing. This situation can result in a very long restart if logging continues without a
system checkpoint. If Db2 continues logging beyond the defined checkpoint frequency, quiesce activity
and terminate Db2 to minimize the restart time.
Procedure
To display the most recent checkpoint, use one of the following approaches:
• Issue the DISPLAY LOG command.
• Run the print log map utility (DSNJU004).
Related tasks
Displaying log information
You can use the DISPLAY LOG command to display the current checkpoint frequency. You can obtain
additional information about log data sets and checkpoints from the print log map utility (DSNJU004).
Related reference
DSNJU004 (print log map) (Db2 Utilities)
-DISPLAY LOG (Db2) (Db2 Commands)
Procedure
Issue the DISPLAY LOG command, or use the print log map utility (DSNJU004).
Related reference
-DISPLAY LOG (Db2) (Db2 Commands)
-SET LOG (Db2) (Db2 Commands)
DSNJU004 (print log map) (Db2 Utilities)
What to do before RBA or LRSN limits are reached
Before a Db2 subsystem or data sharing group reaches the end of the log RBA range, you must reset
the log RBA value. The process that you use to reset the log RBA value depends on whether the Db2
subsystem is a member of a data sharing group or in a non-data sharing environment.
Converting page sets to the 10-byte RBA or LRSN format
To prevent RBAs and LRSNs from reaching the hard limits, convert user, catalog, and directory page sets
to the 10-byte RBA or LRSN format.
Procedure
To convert the RBA and LRSN to extended 10-byte format, complete the following steps:
1. Identify the objects to convert by querying the RBA_FORMAT columns of the SYSIBM.SYSTABLEPART
and SYSIBM.SYSINDEXPART catalog tables. An RBA_FORMAT value of 'B', or a blank value, indicates an
object in the basic 6-byte format.
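A sketch of such a query against one of the two catalog tables; the selected columns are illustrative:

```
SELECT DBNAME, TSNAME, PARTITION, RBA_FORMAT
  FROM SYSIBM.SYSTABLEPART
  WHERE RBA_FORMAT = 'B' OR RBA_FORMAT = ' '
  ORDER BY DBNAME, TSNAME, PARTITION;
```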
2. Enable Db2 to create new table spaces and indexes with the RBA or LRSN in extended 10-byte format,
and convert the RBA for existing table spaces and indexes to extended 10-byte format:
a) If it is not already applied, apply the PTF for APAR PH26317.
b) Run the updated DSNTIJUZ job to rebuild the subsystem parameter (DSNZPxxx) module.
c) Issue the -SET SYSPARM command, or restart Db2.
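For example, substep c might be issued as follows; the subsystem parameter module name is
illustrative:

```
-SET SYSPARM LOAD(DSNZPARM)
```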
3. Identify objects to convert by checking the RBA_FORMAT column of the SYSIBM.SYSTABLEPART and
SYSIBM.SYSINDEXPART catalog tables.
If the RBA_FORMAT value is 'B' or blank, the object is in the basic 6-byte format.
4. Run the LOAD REPLACE, REBUILD, or REORG utilities.
During utility processing, any objects processed by the utility that are in basic 6-byte format are
converted to extended 10-byte format. To verify the conversion, check the RBA_FORMAT column
value of the SYSIBM.SYSTABLEPART and SYSIBM.SYSINDEXPART catalog tables. The value is 'E' for
converted objects.
Results
New table spaces and indexes are created in extended format with 10-byte RBA or LRSN values, and
existing table spaces and indexes are converted to extended format with 10-byte RBA or LRSN values.
Related concepts
“How RBA and LRSN values are displayed” on page 731
Db2 12 always displays RBA and LRSN values in the 10-byte format. This 10-byte display is unrelated
to migration of the catalog or directory, conversion of individual objects to EXTENDED format, or BSDS
conversion. For recovery purposes, this 10-byte format is the preferred input format for Db2. When
10-byte RBA or LRSN values are specified as input to Db2, conversion to 6-byte format is performed
internally as needed.
Related reference
DSNTIP7: SQL OBJECT DEFAULTS PANEL 1 (Db2 Installation and Migration)
DSNJCNVT (Db2 Utilities)
Resetting the log RBA value in a data sharing environment (6-byte format)
Before the member of a data sharing group reaches the end of the log RBA range, you must reset the log
RBA value for that member.
Procedure
To reset the log RBA value in a data sharing environment:
1. Issue the STOP DB2 command to quiesce the member that is approaching the end of the log RBA
range.
2. Restart this member in ACCESS(MAINT) mode.
3. Issue the -DISPLAY THREAD command. Ensure that there are no INDOUBT or POSTPONED ABORT
units of recovery.
4. Issue the -DISPLAY DATABASE(*) SPACENAM(*) RESTRICT command. Ensure that all restricted
states are removed.
5. Quiesce the member again by issuing the -STOP DB2 command.
6. Optional: Start a new member to take over the work of the member that is quiesced. If using another
member is an acceptable solution, you can leave the original member stopped indefinitely.
7. Optional: Before you cold start the member, complete this step to avoid a potential performance issue
that is caused by log formatting.
When the residual RBA range is greater than the log truncation point, Db2 resets the high used RBA
(HURBA) for the active logs after the cold start of a member. This action avoids log read errors that can
result from reading residual log data with higher RBA values from peer members of the data sharing
group. The first time the logs become current, Db2 must format the active logs ahead of the log writes
until the log is full.
To preformat the active logs before a cold start:
a) Delete and redefine the active logs with IDCAMS.
b) Format the empty logs by using the DSNJLOGF utility.
c) Use the DSNJU003 utility to delete the active logs from the BSDS and add the logs back in with no
RBA range. Adding the logs back in with no RBA range shows that the logs are empty.
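Substep c can be sketched with DSNJU003 control statements of the following form; the data set names
are illustrative. Because no RBA range is specified on the NEWLOG statements, the logs are registered as
empty:

```
DELETE DSNAME=DSNC120.LOGCOPY1.DS01
DELETE DSNAME=DSNC120.LOGCOPY2.DS01
NEWLOG DSNAME=DSNC120.LOGCOPY1.DS01,COPY1
NEWLOG DSNAME=DSNC120.LOGCOPY2.DS01,COPY2
```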
During the subsequent cold start, Db2 detects these changes to the active logs and does not reset the
HURBA. Therefore, log formatting is not required ahead of the log writes.
8. To bring the original member back into the data sharing group, you must cold start the member with a
STARTRBA of 0 (zero). To cold start the member:
a) Make a full image copy of all data by using the COPY utility.
For example, in a data sharing group with two members, where you are resetting the log RBA of the
first member, make a full image copy by using the second member. The first member must remain
quiesced for a while. The length of time that the member is quiesced depends on how long it takes
to establish a new recovery base and the time required for the logs from the quiesced member to
no longer be required for any form of recovery. After you are satisfied that the logs are not required
for recovery purposes, you can reset the log RBA for the quiesced member and restart the member.
b) Stop all IFI applications that might issue READS calls for IFCID 0306 to read log records from the
member that you are cold starting.
c) Cold start the first member back to the RBA value of 0 (zero).
This step removes all log data from the BSDS, and you can use the member again. This step
requires utility DSNJU003 with the following options:
CRESTART CREATE,STARTRBA=0,ENDRBA=0
d) Restart all IFI applications that you stopped in step “8.b” on page 584.
Related tasks
What to do before RBA or LRSN limits are reached
Before a Db2 subsystem or data sharing group reaches the end of the log RBA range, you must reset
the log RBA value. The process that you use to reset the log RBA value depends on whether the Db2
subsystem is a member of a data sharing group or in a non-data sharing environment.
Related reference
COPY (Db2 Utilities)
DSNJU003 (change log inventory) (Db2 Utilities)
DSNJLOGF (preformat active log) (Db2 Utilities)
Procedure
To reset the log RBA value in a non-data sharing environment by using the COPY utility:
1. Drop any user-created indexes on the SYSIBM.SYSCOPY catalog table.
2. Alter all of the indexes on the catalog tables so that they have the COPY YES attribute by issuing
the ALTER INDEX COPY YES statement, and commit the changes after running every 30 ALTER
statements.
Tip: You do not need to alter the following objects:
• Db2 directory indexes. By default, these indexes already have the COPY YES attribute.
CAUTION: Edit these members so that only the pertinent page sets are processed.
10. Complete the following steps to enable the COPY utility to reset the log RBA values in data pages and
index pages as they are copied:
a) Edit member DSN6SPRC of the prefix.SDSNMACS library and locate the entry SPRMRRBA.
b) Change the SPRMRRBA setting to '1' and save the change.
c) Run the first two steps of your customized copy of job DSNTIJUZ to rebuild your Db2 subsystem
parameter module (DSNZPxxx).
11. Stop all IFI applications that might issue READS calls for IFCID 0306 to read log records from the
subsystem.
12. Cold start the subsystem back to the RBA value of 0 (zero).
This step removes all log data from the BSDS. This step requires utility DSNJU003 with the following
options:
CRESTART CREATE,STARTRBA=0,ENDRBA=0
13. Restart all IFI applications that you stopped in step “11” on page 585.
14. Start the Db2 subsystem in ACCESS(MAINT) mode.
15. Take new, full image copies of all table spaces and indexes by using the COPY utility with the
SHRLEVEL REFERENCE option to automatically reset the log RBA values.
a) COPY the DSNDB06.SYSTSTSS and DSNDB06.SYSTSISS catalog table spaces.
b) Copy the rest of the catalog and directory table spaces and indexes, including any user-created
Db2 catalog indexes.
c) Alter indexes on user table spaces so that they have the COPY YES attribute by issuing the ALTER
INDEX COPY YES statement, and commit the changes after running every 30 ALTER statements.
d) Copy the table spaces and indexes in user-created databases.
You can do this step and the next step in parallel.
e) Copy the table spaces and indexes in default database DSNDB04.
Do not copy the table spaces in workfile database DSNDB07. Db2 automatically resets the log RBA
values in these table spaces when they are used.
Restriction: Because FlashCopy technology does not reset the log RBA, you cannot use the COPY
FLASHCOPY option in this situation.
16. Re-create any user-created indexes on the SYSIBM.SYSCOPY table that were dropped in step 1 of
this procedure.
17. Verify that the log RBA values were reset:
a) Run a query against the tables SYSIBM.SYSCOPY, SYSIBM.SYSTABLEPART, and
SYSIBM.SYSINDEXPART to verify that all objects were copied.
b) Use the DSN1PRNT utility with the FORMAT option to print several pages from some of the objects
so that you can verify that the PGLOGRBA field in those pages is reset to zero.
However, the COPY utility updates the PGLOGRBA field and other RBA fields in header pages (page
zero), so those fields contain non-zero values.
18. Stop Db2, and disable the reset RBA function in the COPY utility by following the instructions in step
10 and setting SPRMRRBA to '0'.
19. Restart Db2 for normal access.
20. Alter the Db2 catalog indexes and user-created indexes to have the COPY NO attribute by issuing the
ALTER INDEX COPY NO statement.
Commit the changes after every 30 ALTER statements. However, issue these ALTER statements over
several days, because SYSCOPY and SYSLGRNX records are deleted during this process and contention
might occur.
Note: If the RBA fields for an object are not reset, abend04E RC00C200C1 is returned during SQL
update, delete, and insert operations. The object also is placed in STOPE status. You can use the
DSN1COPY utility with the RESET option to reset the log RBA values. This two-step process requires
copying the data out and then back into the specified data sets. Before using DSN1COPY with the
RESET option, make sure that the object is stopped by issuing the command -STOP DB(...)
SPACENAM(...).
Related tasks
What to do before RBA or LRSN limits are reached
Before a Db2 subsystem or data sharing group reaches the end of the log RBA range, you must reset
the log RBA value. The process that you use to reset the log RBA value depends on whether the Db2
subsystem is a member of a data sharing group or in a non-data sharing environment.
Related reference
COPY (Db2 Utilities)
Procedure
Issue the ARCHIVE LOG CANCEL OFFLOAD command.
GUPI
Procedure
Issue the DISPLAY LOG command.
GUPI
Related reference
-DISPLAY LOG (Db2) (Db2 Commands)
Procedure
GUPI To locate archive log data sets:
1. Resolve indoubt units of recovery. If Db2 is running with TSO, continue with step “2” on page 588. If
Db2 is running with IMS, CICS, or distributed data, the following substeps apply:
a) Ensure that the period between one startup and the next startup is free of any indoubt units of
recovery. Ensure that no Db2 activity is going on when you are performing this set of substeps. (To
minimize impact on users, consider planning this work for a non-prime shift.) To determine whether
indoubt units of recovery exist, issue the following Db2 command:
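Because substep c below reissues the DISPLAY THREAD TYPE(INDOUBT) command, the command being
referred to here is evidently:

```
-DISPLAY THREAD (*) TYPE(INDOUBT)
```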
If you find no indoubt units of recovery, skip to step “2” on page 588.
b) If one or more indoubt units of recovery exist, take one of the following actions:
• If IMS or CICS is involved with the indoubt units of work, start IMS or CICS. Starting IMS or
CICS causes that subsystem to resolve the indoubt units of recovery. If the thread is a distributed
indoubt unit of recovery, restart the distributed data facility (DDF) to resolve the unit of work. If
DDF does not start or cannot resolve the unit of work, issue the following command to resolve the
unit of work:
-RECOVER INDOUBT
c) Reissue the DISPLAY THREAD TYPE(INDOUBT) command to ensure that the indoubt units have
been recovered. When no indoubt units of recovery remain, continue with step “2” on page 588.
2. Find the startup log RBA. Keep at least all log records with log RBAs greater than the one that is given
in this message, which is issued at restart:
If you suspended Db2 activity while performing step 1, restart Db2 now.
3. Find the minimum log RBA that is needed. Suppose that you have determined to keep some number of
complete image copy cycles of your least-frequently copied table space. You now need to find the log
RBA of the earliest full image copy that you want to keep.
a) If you have any table spaces that were created so recently that no full image copies of them have
ever been taken, take full image copies of them. If you do not take image copies of them, and you
discard the archive logs that log their creation, Db2 can never recover them.
GUPI The following SQL statement generates a list of the table spaces for which no full image copy
is available:
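A sketch of such a statement, correlating SYSIBM.SYSTABLESPACE with SYSIBM.SYSCOPY; the exact
column list is an assumption:

```
SELECT X.DBNAME, X.NAME
  FROM SYSIBM.SYSTABLESPACE X
  WHERE NOT EXISTS (SELECT 1 FROM SYSIBM.SYSCOPY Y
                    WHERE Y.DBNAME = X.DBNAME
                      AND Y.TSNAME = X.NAME
                      AND Y.ICTYPE = 'F')
  ORDER BY X.DBNAME, X.NAME;
```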
GUPI
b) Query the SYSIBM.SYSCOPY catalog table to list the full image copies that exist. The statement
generates a list of all databases and the table spaces within them, in ascending order by date.
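A sketch of the kind of listing query described above, using columns of the SYSIBM.SYSCOPY catalog
table; the exact selection is an assumption:

```
SELECT DBNAME, TSNAME, DSNUM, ICTYPE, ICDATE, HEX(START_RBA) AS START_RBA
  FROM SYSIBM.SYSCOPY
  WHERE ICTYPE = 'F'
  ORDER BY DBNAME, TSNAME, ICDATE;
```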
c) Find the START_RBA value for the earliest full image copy (ICTYPE=F) that you intend to keep. If
your least-frequently copied table space is partitioned, and you take full image copies by partition,
use the earliest date for all the partitions.
If you plan to discard records from SYSIBM.SYSCOPY and SYSIBM.SYSLGRNX, note the date of the
earliest image copy that you want to keep.
4. Use job DSNTIJIC to copy all catalog and directory table spaces. Doing so ensures that copies of these
table spaces are included in the range of log records that you plan to keep.
5. Locate and discard archive log volumes. Now that you know the minimum log RBA, from step 3,
suppose that you want to find archive log volumes that contain only log records earlier than that.
Proceed as follows:
a) Execute the print log map utility (DSNJU004) to print the contents of the BSDS.
Related tasks
Resolving indoubt units of recovery
If Db2 loses its connection to another system, it attempts to recover all inconsistent objects after restart.
The information that is needed to resolve indoubt units of recovery must come from the coordinating
system.
Related reference
DSNTIPA: Archive log data set parameters panel (Db2 Installation and Migration)
DSNJU003 (change log inventory) (Db2 Utilities)
DSNJU004 (print log map) (Db2 Utilities)
Archive log data sets
Archive log data sets are dynamically allocated. When one is allocated, the data set name is registered
in the BSDS in separate entries for each volume on which the archive log resides. The list of archive log
data sets expands as archives are added, and the list wraps around when a user-determined number of
entries is reached. The maximum number of archive log data sets that Db2 keeps in the BSDS depends
on the value of the MAXARCH subsystem parameter. The allowable values for the MAXARCH subsystem
parameter are between 10 and 10,000 (inclusive). If two copies of the archive log are being created, the
BSDS will contain records for both copies, resulting in as many as 20,000 entries.
You can manage the inventory of archive log data sets with the change log inventory utility (DSNJU003).
A wide variety of tape management systems exist, and retention periods can be overridden manually.
For these reasons, Db2 has no automated method for deleting archive log data set entries from the
BSDS inventory. Thus, the information about an archive log data set can remain in the BSDS long after
the data set is scratched by a tape management system when its retention period expires.
Conversely, the maximum number of archive log data sets might be exceeded, and the data from the
BSDS might be dropped long before the data set reaches its expiration date.
Important: The BSDS must be converted to use extended 10-byte RBA and LRSN format records before
Db2 is started in Db2 12.
Related concepts
Automatic archive log deletion
You can use a disk or tape management system to delete archive log data sets or tapes automatically.
Related tasks
Convert the BSDS, Db2 catalog, and directory to 10-byte RBA and LRSN format (Db2 Installation and
Migration)
Related reference
DSNJCNVT (Db2 Utilities)
DSNJU003 (change log inventory) (Db2 Utilities)
-SET LOG (Db2) (Db2 Commands)
Procedure
To restore dual-BSDS mode:
1. Use access method services to rename or delete the failing BSDS.
2. Define a new BSDS with the same name as the deleted BSDS.
3. Issue the Db2 RECOVER BSDS command to make a copy of the good BSDS in the newly allocated data
set.
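The sequence can be sketched as follows; the data set names are illustrative, and the MODEL parameter
of DEFINE CLUSTER copies the attributes of the surviving BSDS copy:

```
/* Access method services (IDCAMS) statements */
  DELETE (DSNC120.BSDS02) CLUSTER
  DEFINE CLUSTER (NAME(DSNC120.BSDS02) MODEL(DSNC120.BSDS01))

/* Db2 command to copy the good BSDS into the new data set */
-RECOVER BSDS
```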
592 Db2 12 for z/OS: Administration Guide (Last updated: 2023-07-20)
Chapter 11. Restarting Db2 after termination
When you need to restart Db2 after Db2 terminates normally or abnormally, keep in mind these
considerations, which are important for backup and recovery, and for maintaining consistency.
Methods of restarting
Db2 can restart in several different ways. The appropriate option depends on how Db2 terminated and on
your environment.
Types of termination
Db2 terminates normally in response to the STOP DB2 command. If Db2 stops for any other reason, the
termination is considered abnormal.
Normal termination
In a normal termination, Db2 stops all activity in an orderly way.
GUPI You can use either STOP DB2 MODE (QUIESCE) or STOP DB2 MODE (FORCE). The effects of each
command are compared in the following table.
You can use either command to prevent new applications from connecting to Db2.
When you issue the STOP DB2 MODE(QUIESCE) command, current threads can run to completion, and
new threads can be allocated to an application that is running.
With IMS and CICS, STOP DB2 MODE(QUIESCE) allows a current thread to run only to the end of the unit
of recovery, unless either of the following conditions is true:
• Open, held cursors exist.
• Special registers are not in their original state.
EXEC SQL
. ← -STOP DB2 MODE(QUIESCE) issued here
⋮
SYNCPOINT
⋮
EXEC SQL ← This receives an AETA abend
Without the check, the next Db2 session could conceivably update an entirely different catalog and
set of table spaces. If the check fails, you probably have the wrong parameter module. Start Db2
with the command START DB2 PARM(module-name), and name the correct module.
2. Db2 checks the consistency of the timestamps in the BSDS.
• If both copies of the BSDS are current, Db2 tests whether the two timestamps are equal.
– If they are equal, processing continues with step “3” on page 595.
– If they are not equal, Db2 issues message DSNJ120I and terminates. That can happen when the
two copies of the BSDS are maintained on separate disk volumes (as recommended) and one of
the volumes is restored while Db2 is stopped. Db2 detects the situation at restart.
To recover, copy the BSDS with the latest timestamp to the BSDS on the restored volume. Also
recover any active log data sets on the restored volume, by copying the dual copy of the active log
data sets onto the restored volume.
• If one copy of the BSDS was deallocated, and logging continued with a single BSDS, a problem could
arise. If both copies of the BSDS are maintained on a single volume, and the volume was restored, or
if both BSDS copies were restored separately, Db2 might not detect the restoration. In that case, log
records that are not noted in the BSDS would be unknown to the system.
3. Db2 finds in the BSDS the log RBA of the last log record that was written before termination.
The highest RBA field (as shown in the output of the print log map utility) is updated only when the
following events occur:
Procedure
To terminate Db2, issue either of the following commands:
• -STOP DB2 MODE (QUIESCE)
• -STOP DB2 MODE (FORCE)
During shutdown, use the command DISPLAY THREAD to check the shutdown progress. If shutdown is
taking too long, you can issue STOP DB2 MODE (FORCE), but rolling back work can take as long as or
longer than the completion of QUIESCE.
Results
When stopping in either mode, the following steps occur:
1. Connections end.
2. Db2 ceases to accept commands.
3. Db2 disconnects from the IRLM.
4. The shutdown checkpoint is taken and the BSDS is updated. GUPI
Procedure
Identify the element names of the Db2 and IRLM subsystems.
• For a non-data-sharing Db2, the element name is 'Db2$' concatenated with the subsystem name
(Db2$DB2A, for example). To specify that a Db2 subsystem is not to be restarted after a failure, include
RESTART_ATTEMPTS(0) in the policy for that Db2 element.
• For local mode IRLM, the element name is a concatenation of the IRLM subsystem name and the IRLM
ID. For global mode IRLM, the element name is a concatenation of the IRLM data sharing group name,
the IRLM subsystem name, and the IRLM ID.
Related reference
Adding MVS systems to a sysplex
Procedure
To defer restart processing, use one of the following approaches:
• To vary the device (or volume) on which the objects reside offline:
If the data sets that contain an object are not available, and the object requires recovery during restart,
Db2 flags it as stopped and requiring deferred restart. Db2 then restarts without it.
• To delay the backout of a long-running UR, specify the following subsystem parameters:
Procedure
To perform a conditional restart:
1. Optional: When considering a conditional restart, it is often useful to run the DSN1LOGP utility and
review a summary report of the information contained in the log.
2. While Db2 is stopped, run the change log inventory utility by using the CRESTART control statement to
create a new conditional restart control record.
3. Restart Db2. The recovery operations that take place are governed by the current conditional restart
control record.
Procedure
Use the RECOVER POSTPONED command.
You cannot specify a single unit of work for resolution. This command might take several hours to
complete, depending on how much work the long-running job performed.
In some circumstances, you can elect to use the CANCEL option of the RECOVER POSTPONED command.
This option leaves the objects in an inconsistent state (REFP) that you must resolve before using the
objects. However, you might choose the CANCEL option for the following reasons:
• You determine that complete recovery of the postponed units of recovery will take more time than you
have available, and that it is faster to recover the objects to a prior point in time or to run the LOAD
utility with the REPLACE option.
• You want to replace the existing data in the object with new data.
• You decide to drop the object. To drop the object successfully, complete the following steps:
a. Issue the RECOVER POSTPONED command with the CANCEL option.
b. Issue the DROP TABLESPACE statement.
• You do not have the Db2 logs to successfully recover the postponed units of recovery.
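The two forms of the command discussed above:

```
-RECOVER POSTPONED
-RECOVER POSTPONED CANCEL
```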
Example
Output from the RECOVER POSTPONED command consists of informational messages. In the following
example, backout processing was performed against two table space partitions and two index partitions:
Procedure
If Db2 terminates abnormally:
1. Fix the error.
2. Restart Db2.
3. Re-issue the RECOVER POSTPONED command if automatic backout processing has not been
specified.
If the RECOVER POSTPONED processing lasts for an extended period, the output includes DSNR047I
messages to help you monitor backout processing. These messages show the current RBA that is
being processed and the target RBA.
Related tasks
Deferring restart processing
When a specific object is causing problems, you can defer its restart processing by starting Db2 and
preventing the problem object from going through restart processing.
Related reference
-RECOVER POSTPONED (Db2) (Db2 Commands)
Related information
DSNR047I (Db2 Messages)
Figure 42. Time line illustrating a commit that is coordinated with another subsystem
The numbers below are keyed to the time line in the figure. The resultant state of the update operations at
the participant is shown between the two lines.
1. The data in the coordinator is at a point of consistency.
2. An application program in the coordinator calls the participant to update some data, by executing an
SQL statement.
3. This starts a unit of recovery in the participant.
4. Processing continues in the coordinator until an application synchronization point is reached.
5. The coordinator then starts commit processing. IMS can do that by using a DL/I CHKP call, a fast
path SYNC call, a GET UNIQUE call to the I/O PCB, or a normal application termination. CICS
uses a SYNCPOINT command or a normal application termination. A Db2 application starts commit
processing by an SQL COMMIT statement or by normal termination. Phase 1 of commit processing
begins.
6. The coordinator informs the participant that it is to prepare for commit. The participant begins phase
1 processing.
7. The participant successfully completes phase 1, writes this fact in its log, and notifies the
coordinator.
8. The coordinator receives the notification.
9. The coordinator successfully completes its phase 1 processing. Now both subsystems agree to
commit the data changes because both have completed phase 1 and could recover from any
failure. The coordinator records on its log the instant of commit—the irrevocable decision of the
two subsystems to make the changes.
The coordinator now begins phase 2 of the processing—the actual commitment.
10. The coordinator notifies the participant that it can begin phase 2.
11. The participant logs the start of phase 2.
Figure: a distributed unit of work that involves IMS, CICS, Db2 subsystems DB2A, DB2B, DB2D, and DB2E, and application servers (including AS2).
If the connection between DB2A and the coordinating IMS system fails, the connection becomes an
indoubt thread. However, DB2A connections to the other systems are still waiting and are not considered
indoubt. Automatic recovery occurs to resolve the indoubt thread. When the thread is recovered, the unit
of work commits or rolls back, and this action is propagated to the other systems that are involved in the
unit of work.
Figure: time line of a two-phase commit from the coordinator's perspective, showing phase 1 (prepare) and phase 2 (commit).
The following process describes each action that the figure illustrates.
Phase 1
1. When an application commits a logical unit of work, it signals the Db2 coordinator. The coordinator
starts the commit process by sending messages to the participants to determine whether they can
commit.
2. A participant (Participant 1) that is willing to let the logical unit of work be committed, and which
has updated recoverable resources, writes a log record. It then sends a request-commit message
to the coordinator and waits for the final decision (commit or roll back) from the coordinator. The
logical unit of work at the participant is now in the prepared state.
If a participant (Participant 2) has not updated recoverable resources, it sends a forget message
to the coordinator, releases its locks, and forgets about the logical unit of work. A read-only
participant writes no log records. The disposition (commit or rollback) of the logical unit of work is
irrelevant to the participant.
If a participant wants to have the logical unit of work rolled back, it writes a log record and sends
a message to the coordinator. Because a message to roll back acts like a veto, the participant in
this case knows that the logical unit of work is to be rolled back by the coordinator. The participant
does not need any more information from the coordinator and therefore rolls back the logical unit
of work, releases its locks, and forgets about the logical unit of work. (This case is not illustrated in
the figure.)
Phase 2
1. After the coordinator receives request-commit or forget messages from all its participants, it starts
the second phase of the commit process. If at least one of the responses is request-commit,
the coordinator writes a log record and sends committed messages to all the participants who
responded to the prepare message with request-commit. If neither the participants nor the
coordinator has updated any recoverable resources, no second phase occurs, and no log records
are written by the coordinator.
2. Each participant, after receiving a committed message, writes a log record, sends a response to the
coordinator, and then commits the logical unit of work.
If any participant responds with a roll back message, the coordinator writes a log record and sends
a roll back message to all participants. Each participant, after receiving a roll back message, writes
a log record, sends an acknowledgment to the coordinator, and then rolls back the logical unit of
work. (This case is not illustrated in the figure.)
Figure 45. Time line illustrating a commit that is coordinated with another subsystem
Status
Description and Processing
Inflight
The participant or coordinator failed before finishing phase 1 (period a or b); during restart, both
systems back out the updates.
Indoubt
The participant failed after finishing phase 1 and before starting phase 2 (period c); only the
coordinator knows whether the failure happened before or after the commit (point 9). If it happened
before, the participant must back out its changes; if it happened afterward, it must make its changes
and commit them. After restart, the participant waits for information from the coordinator before
processing this unit of recovery.
In-commit
The participant failed after it began its own phase 2 processing (period d); it makes committed
changes.
In-abort
The participant or coordinator failed after a unit of recovery began to be rolled back but before the
process was complete (not shown in the figure). The operational system rolls back the changes; the
failed system continues to back out the changes after restart.
Postponed-abort
If the LIMIT BACKOUT installation option is set to YES or AUTO, any backout not completed during
restart is postponed. The status of the incomplete URs is changed from inflight or in-abort to
postponed-abort.
DSNL405I = THREAD
G91E1E35.GFA7.00F962CC4611.0001=217
PLACED IN INDOUBT STATE BECAUSE OF
COMMUNICATION FAILURE WITH COORDINATOR ::FFFF:9.30.30.53.
INFORMATION RECORDED IN TRACE RECORD WITH IFCID=209
AND IFCID SEQUENCE NUMBER=00000001
After a failure, WebSphere Application Server is responsible for resolving indoubt transactions and for
handling any failure recovery. To perform these functions, the server must be restarted and the recovery
process initiated by an operator. You can also manually resolve indoubt transactions with the RECOVER
INDOUBT command.
Recommendation: Let WebSphere Application Server resolve the indoubt transactions. Manually recover
indoubt transactions only as a last resort to start Db2 and to release locks.
Procedure
GUPI To manually resolve indoubt transactions:
1. Issue the command -DISPLAY THREAD(*) T(I) DETAIL to display indoubt threads from the
resource manager console.
This command produces output like this example:
Key
Description
4. Display indoubt threads again from the resource manager console by issuing the -DISPLAY
THREAD(*) T(I) DETAIL command.
This command produces output like this example:
Key
Description
1
Notice that the transaction now appears as a heuristically committed transaction
(COMMITTED=H).
5. If the transaction manager does not recover the indoubt transactions in a timely manner, reset the
transactions from the resource manager console to purge the indoubt thread information. Specify the
IP address and port from the DISPLAY THREAD command in step 1 by issuing the -RESET INDOUBT
IPADDR(::FFFF:9.30.30.53..4007) FORCE command.
This command produces output like this example:
GUPI
Procedure
To resolve units of recovery manually, take the following actions:
• Commit changes that were made by logical units of work that were committed by the other system.
• Roll back changes that were made by logical units of work that were rolled back by the other system.
Procedure
To ascertain the status of indoubt units of work, use one of the following approaches:
• Use a NetView program. Write a program that analyzes NetView alerts for each involved system, and
returns the results through the NetView system.
• Use an automated z/OS console to ascertain the status of the indoubt threads at the other involved
systems.
• GUPI Use the command DISPLAY THREAD TYPE(INDOUBT) LUWID(luwid).
If the coordinator Db2 system is started and no Db2 cold start was performed, you can issue a
DISPLAY THREAD TYPE(INDOUBT) command. If the decision was to commit, the display thread
indoubt report includes the LUWID of the indoubt thread. If the decision was to abort, the thread
is not displayed. GUPI
• Read the recovery log by using DSN1LOGP.
If the coordinator Db2 cannot be started, DSN1LOGP can determine the commit decision. If the
coordinator Db2 performed a cold start (or any type of conditional restart), the system log should
contain messages DSNL438I or DSNL439I, which describe the status of the unit of recovery (LUWID).
Procedure
Issue the RECOVER INDOUBT command.
Use the ACTION(ABORT|COMMIT) option of the RECOVER INDOUBT command to commit or roll back
a logical unit of work (LUW). If your system is the coordinator of one or more other systems that are
involved with the logical unit of work, your action is propagated to the other systems that are associated
with the LUW.
Example
GUPI Assume that you need to recover two indoubt threads. The first has
LUWID=DB2NET.LUNSITE0.A11A7D7B2057.0002, and the second has a token of 442. To commit the
LUWs, enter the following command:
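Based on the values above, the command might take the following form (a sketch; verify the exact LUWID and token syntax against the -RECOVER INDOUBT command reference):

```
-RECOVER INDOUBT ACTION(COMMIT) LUWID(DB2NET.LUNSITE0.A11A7D7B2057.0002,442)
```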
GUPI
Related concepts
Scenarios for resolving problems with indoubt threads
Indoubt threads can cause a variety of problems, but you can recover from these problems.
Procedure
Issue the RESET INDOUBT command.
Db2 maintains this information until normal automatic recovery occurs. You can purge information about threads
where Db2 is either the coordinator or participant. If the thread is an allied thread that is connected
to IMS or CICS, the command applies only to coordinator information about downstream participants.
Information that is purged does not appear in the next display thread report and is erased from the Db2
logs.
Examples
GUPI
You can issue the following command to reset the indoubt unit of work by specifying the IP address
(FFFF:10.97.217.50) and the resync port number of the coordinator (1332) from the message:
GUPI
Related concepts
Monitoring threads with DISPLAY THREAD commands
The DISPLAY THREAD command output displays information about threads that are processing locally
and for distributed requests, stored procedures or user-defined functions that are executed by threads,
and parallel tasks. It can also indicate that a system quiesce is in effect as a result of the ARCHIVE LOG
command.
Related reference
-RESET INDOUBT (Db2) (Db2 Commands)
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information
DSNV406I (Db2 Messages)
Procedure
Run the DSNJU003 utility and specify the CRESTART control statement with the following options:
• STARTRBA, where the value is the first log RBA that is available after the indoubt UR
• FORWARD=YES to allow forward-log recovery
• BACKOUT=YES to allow backward-log recovery
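Combined in a CRESTART control statement, these options might look like the following sketch (the STARTRBA value is illustrative):

```
CRESTART CREATE,STARTRBA=00000551BE7D,FORWARD=YES,BACKOUT=YES
```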
Related reference
DSNJU003 (change log inventory) (Db2 Utilities)
Plans for recovering the Db2 tables and indexes used to support
Db2 query acceleration
The process of recovering the Db2 tables and indexes that are created specifically for use with IBM Db2
Analytics Accelerator for z/OS differs from the process of recovering Db2 catalog tables and indexes.
To support Db2 query acceleration with IBM Db2 Analytics Accelerator, certain Db2 tables and indexes
are created and then used by both Db2 and IBM Db2 Analytics Accelerator. These Db2 tables and indexes
have the qualifier SYSACCEL and are not created in the Db2 catalog table spaces. Instead, they are
created independently in separate table spaces by running DDL that is provided by IBM. Because these
Db2 SYSACCEL objects are not part of the Db2 catalog space, they must be backed up and recovered
separately, as you do with your user data. For these SYSACCEL objects, follow the recommended backup
and recovery steps and strategies that are provided for user data.
For more information about the SYSACCEL objects that are used to support Db2 query acceleration and
how they are created, see Tables that support query acceleration (Db2 SQL) and Creating database
objects that support query acceleration (Db2 Installation and Migration).
Figure: time line marking the LOGRBA of the more recent incremental image copy.
If you are using the BACKUP SYSTEM utility, you should schedule the frequency of system-level backups
based on your most critical data.
Procedure
To discard SYSCOPY and SYSLGRNX records:
1. Complete the first three steps of the procedure that is presented in “Locating archive log data sets” on
page 587. In the third step, note the date of the earliest image copy that you intend to keep.
Important: The earliest image copies and log data sets that you need for recovery to the present date
are not necessarily the earliest ones that you want to keep. If you foresee resetting the Db2 subsystem
The RETAIN LAST(n) option keeps the n most recent records and removes the older ones.
You can delete SYSCOPY records for a single partition by naming it with the DSNUM keyword. That
option does not delete SYSLGRNX records and does not delete SYSCOPY records that are later than
the earliest point to which you can recover the entire table space. Thus, you can still recover by
partition after that point.
The MODIFY utility discards SYSLGRNX records that meet the deletion criteria when the AGE or DATE
options are specified, even if no SYSCOPY records were deleted.
You cannot run the MODIFY utility on a table space that is in RECOVER-pending status.
Even if you take system-level backups, use the MODIFY utility to delete obsolete records from
SYSIBM.SYSCOPY and SYSIBM.SYSLGRNX. You do not need to delete the system-level backup
information in the bootstrap data set (BSDS) because only the last 85 system-level backups are kept.
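For example, MODIFY RECOVERY statements such as the following sketches delete obsolete records by age or keep only the most recent ones (the database and table space names are illustrative):

```
MODIFY RECOVERY TABLESPACE DSN8D12A.DSN8S12E DELETE AGE(90)
MODIFY RECOVERY TABLESPACE DSN8D12A.DSN8S12E RETAIN LAST(4)
```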
In this example, use RECOVER with RESTOREBEFORE X'A0000' to use the inline image copy that was taken
at X'90000' as the recovery base.
Related concepts
How to report recovery information
You can use the REPORT utility when you plan for recovery.
Dump Tasks (z/OS DFSMShsm Storage Administration)
Figure 50. Using the RECOVER TOLOGPOINT option in a data sharing system
Figure 51. Using the RECOVER TOLOGPOINT option in a non-data sharing system
Procedure
To recover a table space to a point in time that is before materialization of pending definition changes:
1. Run the RECOVER utility to recover the data to the point in time that you want.
If you specify the TOCOPY, TOLASTCOPY, or TOLASTFULLCOPY option, you need to use an image copy
that was taken with the SHRLEVEL REFERENCE option. If no appropriate image copies are available,
you can run RECOVER with the TOLOGPOINT or TORBA option.
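For example, a RECOVER statement of the following form recovers the table space to a log point (the object name and log point value are illustrative):

```
RECOVER TABLESPACE DB1.TS1 TOLOGPOINT X'00000551BE7D'
```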
For most types of pending definition changes, the table space is placed in REORG-pending (REORP)
status after the RECOVER utility runs. Changes to partition limit keys and column definitions are
exceptions that require no subsequent REORG.
Restrictions: After you complete this step, and before you complete the next step, you cannot perform
any of the following actions:
• Execute any of the following statements on the table space, on any objects in the table space, on
indexes that are related to tables in the table space, or on auxiliary objects that are associated with
the table space:
– CREATE TABLE
– CREATE AUXILIARY TABLE
– CREATE INDEX
– ALTER TABLE
– ALTER INDEX
– RENAME
– DROP TABLE
• Execute SQL statements that result in pending definition changes on any of the following objects:
– The table space
– Tables in the table space
– Auxiliary table spaces that are related to the table space
– Indexes on tables in the table space
• Run any utilities that are not in this list:
– RECOVER to the same point in time
– REORG
– REPAIR DBD
– REPORT RECOVERY
2. If the table space is in REORG-pending (REORP) status, run the REORG TABLESPACE utility with
SHRLEVEL REFERENCE on the entire table space to complete the point-in-time recovery process.
Example
The following example provides a scenario that shows how you can recover a table space to a point in
time before pending definition changes were materialized, and then use the REORG TABLESPACE utility
with SHRLEVEL REFERENCE to complete recovery.
GUPI
1. You execute the following ALTER TABLESPACE statement to change the buffer pool page size. This
change is a pending definition change.
When this statement runs, the table space is placed in REORG-pending (REORP) state, and an entry is
inserted into the SYSPENDINGDDL table with OBJTYPE = 'S', for table space.
4. You run the following SELECT statement to query the SYSIBM.SYSPENDINGDDL catalog table:
Table 63. Output from the SELECT statement for the SYSPENDINGDDL catalog table after RECOVER to a
point in time before materialization of pending definition changes
DBNAME TSNAME OBJSCHEMA OBJNAME OBJTYPE
DB1 TS1 DB1 TS1 S
Table 64. Continuation of output from the SELECT statement for the SYSPENDINGDDL catalog table
after RECOVER to a point in time before materialization of pending definition changes
OPTION_SEQNO OPTION_KEYWORD OPTION_VALUE CREATEDTS
GUPI
5. Now, you run the REORG TABLESPACE utility with SHRLEVEL REFERENCE on the entire table space.
For example:
The REORG utility completes point-in-time recovery. After the REORG utility runs, the REORG-pending
(REORP) state is cleared, and all entries in the SYSPENDINGDDL table for the table space are removed.
Recovery of indexes
When you recover indexes to a prior point of consistency, some rules apply.
In general, the following rules apply:
• If image copies exist for the indexes, use the RECOVER utility.
• If you take system-level backups, use the RECOVER utility.
• If indexes do not have image copies or system-level backups, use REBUILD INDEX to re-create the
indexes after the data has been recovered.
More specifically, you must consider how indexes on altered tables and indexes on tables in partitioned
table spaces can restrict recovery.
Before attempting recovery, analyze the recovery information.
Related concepts
How to report recovery information
You can use the REPORT utility when you plan for recovery.
Related reference
Implications of moving data sets after a system-level backup
When you recover data sets from a system-level backup to a prior point in time, the data sets do not need
to reside on the same volume as when the backup was made. The RECOVER utility can use system-level
backups even if a data set has moved since the backup was created.
Procedure
Run the REPORT utility with the TABLESPACESET option.
Related concepts
Recovery of table space sets
If you restore a page set to a prior state, restore all related tables and indexes to the same point to avoid
inconsistencies.
Creation of relationships with referential constraints (Introduction to Db2 for z/OS)
LOB table spaces
LOB table spaces (also known as large object or auxiliary table spaces) hold LOB data, such as graphics,
video, or large text strings. If your data does not fit entirely within a data page, you can define one or more
columns as LOB columns.
XML table spaces
An XML table space is an implicitly created universal (UTS) table space that stores an XML table.
Archive-enabled tables and archive tables (Introduction to Db2 for z/OS)
Related reference
Syntax and options of the REPORT control statement (Db2 Utilities)
Procedure
Issue the DISPLAY DATABASE RESTRICT command.
Procedure
Run the QUIESCE utility.
Typically, you name all of the table spaces in a table space set that you want recovered to the same point
in time to avoid referential integrity violations. Alternatively, you can use the QUIESCE utility with the
TABLESPACESET keyword for referential integrity-related tables.
The QUIESCE utility writes changed pages from the page set to disk. The SYSIBM.SYSCOPY catalog
table records the current RBA and the timestamp of the quiesce point. At that point, neither page
set contains any uncommitted data. A row with ICTYPE Q is inserted into SYSCOPY for each table
space that is quiesced. Page sets DSNDB06.SYSTSCPY, DSNDB01.DBD01, DSNDB01.SYSUTILX, and
DSNDB01.SYSDBDXA are an exception. Their information is written to the log. Indexes are quiesced
automatically when you specify WRITE(YES) on the QUIESCE statement. A SYSIBM.SYSCOPY row with
ICTYPE Q is inserted for indexes that have the COPY YES attribute.
The QUIESCE utility allows concurrency with many other utilities. However, it does not allow concurrent
updates until it has quiesced all specified page sets, and depending on the amount of activity, that can
take considerable time. Try to run the QUIESCE utility when system activity is low.
Example
The following statement quiesces two table spaces in database DSN8D12A:
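A QUIESCE statement of the following form could be used (a sketch; the table space names are illustrative):

```
QUIESCE TABLESPACE DSN8D12A.DSN8S12D
        TABLESPACE DSN8D12A.DSN8S12E
```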
Related tasks
Archiving the log
If you are a properly authorized operator, you can archive the current Db2 active log data sets when
necessary by issuing the ARCHIVE LOG command. Using the ARCHIVE LOG command can help with
diagnosis by enabling you to quickly offload the active log to the archive log, where you can use
DSN1LOGP to further analyze the problem.
Related reference
QUIESCE (Db2 Utilities)
Procedure
To prepare a point of consistency:
1. Display and resolve any indoubt units of recovery.
2. Use the COPY utility to make image copies of all the following types of data:
• User data
• Db2 catalog and directory table spaces, and optionally indexes
• If you are using Db2 query acceleration with IBM Db2 Analytics Accelerator for z/OS, the table
spaces and index spaces for the Db2 SYSACCEL tables and indexes
Copy SYSLGRNX and SYSCOPY last. Installation job DSNTIJIC creates image copies of the Db2 catalog
and directory table spaces. If you decide to copy your directory and catalog indexes, modify job
DSNTIJIC to include those indexes.
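An image copy for one such table space might be taken with a COPY statement like this sketch (the object name and DD name are illustrative):

```
COPY TABLESPACE DSN8D12A.DSN8S12E
     COPYDDN(SYSCOPY)
     SHRLEVEL REFERENCE
```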
Alternate method: Alternatively, you can use an offline method to copy the data. In that case, stop
Db2 first; that is, do the next step before doing this step. If you do not stop Db2 before copying, you
might have trouble restarting after restoring the system. If you do a volume restore, verify that the
restored data is cataloged in the integrated catalog facility catalog. Use the access method services
LISTCAT command to get a listing of the integrated catalog.
3. Stop Db2 with the command STOP DB2 MODE (QUIESCE).
Important: Be sure to use MODE (QUIESCE); otherwise, I/O errors can occur when you fall back
before a Db2 restart.
Db2 does not actually stop until all currently executing programs have completed processing.
4. When Db2 has stopped, use access method services EXPORT to copy all BSDS and active log data sets.
If you have dual BSDSs or dual active log data sets, export both copies of the BSDS and the logs.
5. Save all the data that has been copied or dumped, and protect it and the archive log data sets from
damage.
Related tasks
Installation step 23: Back up the Db2 directory and catalog: DSNTIJIC (Db2 Installation and Migration)
Procedure
To create essential disaster recovery elements:
1. Make image copies:
a) Make copies of your data sets and Db2 catalogs and directories.
Use the COPY utility to make copies for the local subsystem and additional copies for disaster
recovery. You can also use the COPYTOCOPY utility to make additional image copies from the
primary image copy made by the COPY utility. Install your local subsystem with the LOCALSITE
option of the SITE TYPE field on installation panel DSNTIPO. Use the RECOVERYDDN option when
you run COPY to make additional copies for disaster recovery. You can use those copies on any Db2
subsystem that you have installed using the RECOVERYSITE option.
Tip: You can also use these copies on a subsystem that is installed with the LOCALSITE option
if you run RECOVER with the RECOVERYSITE option. Alternatively, you can use copies that are
prepared for the local site on a recovery site if you run RECOVER with the option LOCALSITE.
Important: Do not produce copies by invoking COPY twice.
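A single COPY invocation can produce both local and recovery-site copies by using the RECOVERYDDN option, as in this sketch (the object name and DD names are illustrative):

```
COPY TABLESPACE DSN8D12A.DSN8S12E
     COPYDDN(LOCALP,LOCALB)
     RECOVERYDDN(REMOTEP,REMOTEB)
     SHRLEVEL REFERENCE
```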
b) Optional: Catalog the image copies if you want to track them.
c) Create a QMF report or use SPUFI to issue a SELECT statement to list the contents of SYSCOPY.
d) Send the image copies, and report to the recovery site.
e) Record this activity at the recovery site when the image copies and the report are received.
All table spaces should have valid image copies. Indexes can have valid image copies or they can be
rebuilt from the table spaces.
2. Make copies of the archive logs for the recovery site:
a) Use the ARCHIVE LOG command to archive all current Db2 active log data sets.
Recommendation: When using dual logging, keep both copies of the archive log at the local site in
case the first copy becomes unreadable. If the first copy is unreadable, Db2 requests the second
copy. If the second copy is not available, the read fails.
However, if you take precautions when using dual logging, such as making another copy of the
first archive log, you can send the second copy to the recovery site. If recovery is necessary at the
recovery site, specify YES for the READ COPY2 ARCHIVE field on installation panel DSNTIPO. Using
this option causes Db2 to request the second archive log first.
b) Optional: Catalog the archive logs if you want to track them.
You will probably need some way to track the volume serial numbers and data set names. One way
of doing this is to catalog the archive logs to create a record of the necessary information. You can
also create your own tracking method and do it manually.
c) Use the print log map utility to create a BSDS report.
d) Send the archive copy, the BSDS report, and any additional information about the archive log to the
recovery site.
e) Record this activity at the recovery site when the archive copy and the report are received.
3. Choose consistent system time:
Important: After you establish a consistent system time, do not alter the system clock. Any manual
change in the system time (forward or backward) can affect how Db2 writes and processes image
copies and log records.
a) Choose a consistent system time for all Db2 subsystems.
What to do next
For disaster recovery to be successful, all copies and reports must be updated and sent to the recovery
site regularly. Data is up to date through the last archive that is sent.
Related concepts
Multiple image copies (Db2 Utilities)
Related tasks
Archiving the log
If you are a properly authorized operator, you can archive the current Db2 active log data sets when
necessary by issuing the ARCHIVE LOG command. Using the ARCHIVE LOG command can help with
diagnosis by enabling you to quickly offload the active log to the archive log, where you can use
DSN1LOGP to further analyze the problem.
Related information
Performing remote-site disaster recovery
When your local system experiences damage or disruption that prevents recovery from that site, you can
recover by using a remote site that you have set up for this purpose.
Procedure
To resolve problems:
1. Issue the following Db2 command:
Note: If you are adding or deleting work files, you do not need to stop database DSNDB07.
2. Use the DELETE and DEFINE functions of access method services to redefine a user work file on a
different volume, and reconnect it to Db2.
Procedure
GUPI To resolve problems with the work file database:
1. Issue the following SQL statement to remove the problem volume from the Db2 storage group:
Note: If you are adding or deleting work files, you do not need to stop database DSNDB07.
3. Issue the following SQL statement to drop the table space that has the problem:
4. Re-create the table space. You can use the same storage group, because the problem volume has
been removed, or you can use an alternate volume.
GUPI
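The statements in this procedure might take forms such as the following sketches (the storage group, volume, and table space names are all illustrative):

```
ALTER STOGROUP DSN8G120 REMOVE VOLUMES (VOL001);

DROP TABLESPACE DSNDB07.WRKTS01;

CREATE TABLESPACE WRKTS01 IN DSNDB07
  USING STOGROUP DSN8G120;
```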
Procedure
To recover error ranges for a work file table space:
1. Stop the work file table space.
2. Correct the disk error, using the ICKDSF service utility or access method services to delete and
redefine the data set.
3. Start the work file table space.
When the work file table space is started, Db2 automatically resets the error range.
Procedure
Instead of using the RECOVER utility, use the following procedure to recover those table spaces and their
indexes:
1. Run DSN1COPY to restore the table spaces from an image copy.
2. Run the RECOVER utility with the LOGONLY option to apply updates from the log records to the
recovered table spaces.
3. Rebuild the indexes.
4. Make a full image copy of the table spaces, and optionally the indexes, to make the table spaces and
indexes recoverable.
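Step 2 might use a RECOVER statement of the following form (the object name is illustrative):

```
RECOVER TABLESPACE DB1.TS1 LOGONLY
```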
Procedure
To regenerate missing identity column values:
1. GUPI Choose a starting value for the identity column with the following ALTER TABLE statement:
GUPI
Tip: To determine the last value in an identity column, issue the MAX column function for ascending
sequences of identity column values, or the MIN column function for descending sequences of identity
column values. This method works only if the identity column does not use CYCLE.
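Step 1 might use an ALTER TABLE statement of the following form (the table, column, and restart value are hypothetical):

```
ALTER TABLE MYSCHEMA.ORDERS
  ALTER COLUMN ORDER_ID
    RESTART WITH 100001
```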
Procedure
To recover to a log point:
1. Use the RECOVER utility to recover table spaces to the log point.
2. Use concurrent REBUILD INDEX jobs to rebuild the indexes for each table space.
Procedure
To clear the ICOPY status:
1. First, use the DISPLAY DATABASE ADVISORY command to display the ICOPY status for table spaces.
For example:
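The command might take the following form (a sketch; the database and space name filters are illustrative):

```
-DISPLAY DATABASE(*) SPACENAM(*) ADVISORY(ICOPY)
```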
2. To clear the ICOPY status, you must take a full image copy of the table space.
The following table shows the possible table space statuses for non-LOB table spaces that are not
logged:
Table 67. Status of non-LOB table spaces that are not logged, after LOAD or REORG with LOG NO keyword
Inline copy Records discarded Table space status
Yes No No pending status
Yes Yes ICOPY-pending
No not applicable ICOPY-pending
Removing various pending states from LOB and XML table spaces
You can remove various pending states from a LOB table space or an XML table space by using a
collection of utilities in a specific order.
Procedure
To remove pending states from a LOB table space or an XML table space:
1. Use the REORG TABLESPACE utility to remove the REORP status.
2. If the table space status is auxiliary CHECK-pending status:
a) Use CHECK LOB for all associated LOB table spaces.
b) Use CHECK INDEX for all LOB indexes, as well as the document ID, node ID, and XML indexes.
3. Use the CHECK DATA utility to remove the CHECK-pending status.
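In utility-statement form, the sequence above might look like the following sketch, where DB1.LOBTS1 is an illustrative LOB table space and DB1.BASETS1 an illustrative base table space:

```
REORG TABLESPACE DB1.LOBTS1
CHECK LOB TABLESPACE DB1.LOBTS1
CHECK INDEX (ALL) TABLESPACE DB1.LOBTS1
CHECK DATA TABLESPACE DB1.BASETS1
```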
Tip: GUPI If you specify WITH RESTRICT ON DROP when you create a table, the table, and the table
space and database that contain it, cannot be dropped unless the restriction on the table is removed first.
The ALTER TABLE statement includes a clause for imposing this restriction or removing it. GUPI
Procedure
To prepare for recovering accidentally dropped objects, use the following approaches:
• Run regular catalog reports to collect lists of all OBIDs in the subsystem.
• Create catalog reports that list dependencies on the table (such as referential constraints, indexes,
and so on).
After a table is dropped, this information disappears from the catalog.
• If an OBID has been reused by Db2, run DSN1COPY to translate the OBIDs of the objects in the data
set.
However, this event is unlikely; Db2 reuses OBIDs only when no image copies exist that contain data
from that table.
Related reference
ALTER TABLE (Db2 SQL)
Procedure
To recover a dropped table:
1. If you know the DBID, the PSID, the original OBID of the dropped table, and the OBIDs of all other
tables in the table space, go to step “2” on page 679.
If you do not know all of the preceding items, use the following steps to find them. For later use with
DSN1COPY, record the DBID, the PSID, and the OBIDs of all the tables in the table space, not just the
dropped table.
Stopping the table space is necessary to ensure that all changes are written out and that no data
updates occur during this procedure.
4. Find the OBID for the table that you created in step “2” on page 679 by querying the
SYSIBM.SYSTABLES catalog table.
The following statement returns the object ID (OBID) for the table:
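A query of the following general form returns the OBID; the table and creator names are placeholders:

```sql
-- MYTABLE and MYCREATOR are placeholder names.
SELECT NAME, OBID
  FROM SYSIBM.SYSTABLES
  WHERE NAME = 'MYTABLE'
    AND CREATOR = 'MYCREATOR';
```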
This value is returned in decimal format, which is the format that you need for DSN1COPY.
5. Run DSN1COPY with the OBIDXLAT and RESET options to perform the OBID translation and to
copy data from the dropped table into the original data set. You must specify a previous full image
copy data set, inline copy data set, or DSN1COPY file as the input data set SYSUT1 in the control
statement. Specify each of the input records in the following order in the SYSXLAT file to perform
OBID translations:
a) The DBID that you recorded in step “1” on page 678 as both the translation source and the
translation target
b) The PSID that you recorded in step “1” on page 678 as both the translation source and the
translation target
c) The original OBID that you recorded in step “1” on page 678 for the dropped table as the
translation source and the OBID that you recorded in step “4” on page 679 as the translation
target
d) OBIDs of all other tables in the table space that you recorded in step “2” on page 679 as both the
translation sources and translation targets
Be sure that you have named the VSAM data sets correctly by checking messages DSN1998I and
DSN1997I after DSN1COPY completes.
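As an illustration only (the data set names, library prefix, and identifier values are all placeholders), a DSN1COPY job step for this translation might look like the following sketch, with the SYSXLAT file listing source,target pairs in the order described above:

```
//RECOVER  EXEC PGM=DSN1COPY,PARM='OBIDXLAT,RESET,FULLCOPY'
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=full.image.copy.dataset,DISP=SHR
//SYSUT2   DD DSN=catname.DSNDBC.dbname.tsname.I0001.A001,DISP=OLD
//SYSXLAT  DD *
260,260
2,2
3,9
4,4
/*
```

In this sketch, the first pair maps the DBID to itself, the second maps the PSID to itself, the third maps the original OBID of the dropped table to the OBID of the re-created table, and any remaining pairs map the OBIDs of the other tables to themselves. Verify the exact SYSXLAT record format against the DSN1COPY documentation for your release.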
6. Use DSN1COPY with the OBIDXLAT and RESET options to apply any incremental image copies. You
must apply these incremental copies in sequence, and specify the same SYSXLAT records that step
“5” on page 679 specifies.
Procedure
To recover a dropped table space:
1. Find the original DBID for the database, the PSID for the table space, and the OBIDs of all tables that
are contained in the dropped table space.
a) Run the DSN1PRNT utility for the data set that contains the dropped table space, with the
FORMAT and NODATA options to locate the following values for the table space:
DBID
The first two bytes in the 4-byte field HPGOBID of the header page contain the DBID for the
database.
PSID
The last two bytes in field HPGOBID of the header page contain the PSID for the table space.
OBID
For universal (UTS), partitioned (non-UTS), or LOB table spaces, the HPGROID field of the
header page contains the OBID for the single table in the table space.
For segmented (non-UTS) table spaces, field SEGOBID in the space map page contains the
OBIDs. If the table space contains more than one table, you must specify all OBIDs from the
data set as input to the DSN1COPY utility.
b) Convert the hex values in the identifier fields to decimal so that they can be used as input for the
DSN1COPY utility.
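For example, assuming DSN1PRNT shows the hypothetical values HPGOBID = X'010C0025' and HPGROID = X'0002', the conversion works as follows:

```
HPGOBID X'010C0025'
  DBID = X'010C' = (1 x 256) + 12 = 268 (decimal)
  PSID = X'0025' = (2 x 16) + 5   = 37  (decimal)
HPGROID X'0002'
  OBID = 2 (decimal)
```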
2. Re-create the table space and all tables. This re-creation can be difficult when any of the following
conditions is true:
• A table definition is not available.
• A table is no longer required.
If you cannot re-create a table, you must use a dummy table to take its place. A dummy table is
a table with an arbitrary structure of columns that you delete after you recover the dropped table
space.
Attention: When you use a dummy table, you lose all data from the dropped table that you do
not re-create.
3. Re-create auxiliary tables and indexes if a LOB table space has been dropped.
4. To allow DSN1COPY to access the Db2 data set, stop the table space with the following command:
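The command takes the following general form, where dbname and tsname are placeholders for the database and table space names:

```
-STOP DATABASE(dbname) SPACENAM(tsname)
```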
5. Find the new PSID and OBIDs by querying the SYSIBM.SYSTABLESPACE and
SYSIBM.SYSTABLES catalog tables.
The following statement returns the object ID for a table space; this is the PSID.
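A query of the following general form returns the identifiers; the database and table space names are placeholders:

```sql
-- dbname and tsname are placeholder names.
SELECT DBID, PSID
  FROM SYSIBM.SYSTABLESPACE
  WHERE DBNAME = 'dbname'
    AND NAME = 'tsname';
```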
9. Drop all dummy tables. The row structure does not match the table definition. This mismatch makes
the data in these tables unusable.
10. Reorganize the table space to remove all rows from dropped tables.
11. Rebuild all indexes on the table space.
12. Execute SELECT statements on each table in the recovered table space to verify the recovery. Include
all LOB columns in these queries.
13. Make a full image copy of the table space.
See “Page set and data set copies” on page 644 for more information about the COPY utility.
14. Re-create the objects that are dependent on the table.
See “Recovering an accidentally dropped table” on page 678 for more information.
Related reference
DSN1COPY (Db2 Utilities)
This step is necessary to prevent updates to the table space during this procedure in the event that
the table space has been left open.
6. Find the target identifiers of the objects that you created in step “4” on page 683 (which
consist of a PSID for the table space and the OBIDs for the tables within that table space) by querying
the SYSIBM.SYSTABLESPACE and SYSIBM.SYSTABLES catalog tables.
The following statement returns the object ID for a table space; this is the PSID.
These values are returned in decimal format, which is the format that you need for the DSN1COPY
utility.
7. Run DSN1COPY with the OBIDXLAT and RESET options to perform the OBID translation and to copy
the data from the renamed VSAM data set that contains the dropped table space to the newly defined
VSAM data set. Specify the VSAM data set that contains data from the dropped table space as the
Procedure
To recover data to a given point in time:
1. Issue the STOP DB2 command to stop your Db2 system.
If your system is a data sharing group, stop all members of the group.
2. If the backup is a full system backup, you might need to restore the log copy pool outside of Db2 by
using DFSMShsm FRRECOV COPYPOOL (cpname) GENERATION (gen).
For data-only system backups, skip this step.
3. Create a conditional restart record, where the SYSPITR option specifies the given point in time that you
want to recover to.
Run DSNJU003 (the change log inventory utility) with the CRESTART SYSPITR and SYSPITRT options,
and specify the log truncation point that corresponds to the point in time to which you want to recover
the system. For data sharing systems, run DSNJU003 on all active members of the data-sharing group,
and specify the same LRSN truncation point for each member. If the point in time that you specify
for recovery is prior to the oldest system backup, you must manually restore the volume backup from
tape.
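For example, a DSNJU003 control statement of the following general form truncates the log at a point in time; the LRSN value shown is hypothetical:

```
CRESTART CREATE,SYSPITR=00D3A32F4567
```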
4. For data sharing systems, delete all CF structures that the data sharing group owns.
5. Restore any logs on tape to disk.
6. Issue the START DB2 command to restart your Db2 system.
For data sharing systems, start all active members.
7. Run the RESTORE SYSTEM utility.
If you manually restored the backup, use the LOGONLY option of RESTORE SYSTEM to apply the
current logs.
8. Stop and restart Db2 again to remove ACCESS(MAINT) status.
Results
After the RESTORE SYSTEM utility completes successfully, your Db2 system has been recovered to the
given point in time with consistency.
Related concepts
Backup and recovery involving clone tables
When you recover a clone table that has been exchanged, you can use an image copy that was made prior
to an exchange. However, no point-in-time recovery is possible prior to the most recent exchange.
Related tasks
Recovering a Db2 subsystem to a prior point in time
You can recover a Db2 subsystem and data sharing group to a prior point in time by using the BACKUP
SYSTEM and RESTORE SYSTEM utilities.
Related reference
RESTORE SYSTEM (Db2 Utilities)
Related information
Tape Authorization for DB2 RESTORE SYSTEM Utility
Procedure
To recover your Db2 system to the point in time of a backup:
1. Back up your system by issuing the BACKUP SYSTEM FULL command.
DFSMShsm maintains up to 85 versions of system backups on disk at any given time.
2. Recover the system:
a) Stop the Db2 subsystem. For data sharing systems, stop all members of the group.
b) Use the DFSMShsm command FRRECOV * COPYPOOL(cpname) GENERATION(gen) to restore the
database and log copy pools that the BACKUP SYSTEM utility creates. In this command, cpname
specifies the name of the copy pool, and gen specifies which version of the copy pool is to be
restored.
c) For data sharing systems, delete all CF structures that are owned by this group.
d) Restore any logs on tape to disk.
e) Start Db2. For data sharing systems, start all active members.
f) For data sharing systems, execute the GRECP and LPL recovery, which recovers the changed data
that was stored in the coupling facility at the time of the backup.
Related concepts
Point-in-time recovery with system-level backups
System-level backups are fast replication backups that are created by using the BACKUP SYSTEM utility.
Procedure
To recover by using FlashCopy volume backups:
1. Back up your system:
a) Issue the Db2 command SET LOG SUSPEND to suspend logging and update activity, and to quiesce
32 KB page writes and data set extensions. For data sharing systems, issue the command to each
member of the group.
b) Use the FlashCopy function to copy all Db2 volumes. Include any ICF catalogs that are used by
Db2, as well as active logs and BSDSs.
c) Issue the Db2 command SET LOG RESUME to resume normal Db2 update activity. To save disk
space, you can use DFSMSdss to dump the disk copies that you just created to a lower-cost
medium, such as tape.
2. Recover your system:
a) Stop the Db2 subsystem. For data sharing systems, stop all members of the group.
b) Use DFSMSdss RESTORE to restore the FlashCopy data sets to disk.
c) For data sharing systems, delete all CF structures that are owned by this group.
d) Start Db2. For data sharing systems, start all active members.
e) For data sharing systems, execute the GRECP and LPL recovery, which recovers the changed data
that was stored in the coupling facility at the time of the backup.
Related information
FlashCopy (DFSMS Advanced Copy Services)
Procedure
To make catalog definitions consistent with your data after a point-in-time recovery:
1. Run the DSN1PRNT utility with the PARM=(FORMAT, NODATA) option on all data sets that might
contain user table spaces. The NODATA option suppresses all row data, which reduces the output
volume that you receive. Data sets that contain user tables are of the following form, where y can be
either I or J:
catname.DSNDBC.dbname.tsname.y0001.A00n
2. Execute the following SELECT statements to find a list of table space and table definitions in the
Db2 catalog:
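Queries of the following general form (the predicates and column lists are illustrative) return the table space and table definitions with their identifiers:

```sql
-- List table spaces with their identifiers.
SELECT DBNAME, NAME, DBID, PSID
  FROM SYSIBM.SYSTABLESPACE;

-- List tables with their identifiers.
SELECT DBNAME, TSNAME, NAME, OBID
  FROM SYSIBM.SYSTABLES
  WHERE TYPE = 'T';
```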
3. For each table space name in the catalog, look for a data set with a corresponding name. If a data set
exists, take the following additional actions:
a) Locate the DBID, PSID, and OBID values for the table space in the DSN1PRNT output.
DBID
The first two bytes in the 4-byte field HPGOBID of the header page contain the DBID for the
database.
PSID
The last two bytes in field HPGOBID of the header page contain the PSID for the table space.
OBID
For universal (UTS), partitioned (non-UTS), or LOB table spaces, the HPGROID field of the
header page contains the OBID for the single table in the table space.
For segmented (non-UTS) table spaces, field SEGOBID in the space map page contains the
OBIDs. If the table space contains more than one table, you must specify all OBIDs from the
data set as input to the DSN1COPY utility.
b) Check if the corresponding table space name in the Db2 catalog has the same DBID and PSID.
c) If the DBID and PSID do not match, execute DROP TABLESPACE and CREATE TABLESPACE
statements to replace the incorrect table space entry in the Db2 catalog with a new entry. Be sure
to make the new table space definition exactly like the old one. If the table space is segmented,
SEGSIZE must be identical for the old and new definitions.
You can drop a LOB table space only if it is empty (that is, it does not contain auxiliary tables). If a
LOB table space is not empty, you must first drop the auxiliary table before you drop the LOB table
space. To drop auxiliary tables, you can perform one of the following actions:
• Drop the base table.
• Delete all rows that reference LOBs from the base table, and then drop the auxiliary table.
catname.DSNDBC.dbname.spname.I0001.A001
catname.DSNDBC.dbname.spname.J0001.A001
7. Delete the VSAM data sets that are associated with table spaces that were created with the DEFINE
NO option and that reverted to an unallocated state. After you delete the VSAM data sets, you can
insert or load rows into these unallocated table spaces to allocate new VSAM data sets.
Related concepts
Recovery of tables that contain identity columns
Procedure
To recover your Db2 subsystem:
1. Prepare for recovery:
a) Use BACKUP SYSTEM FULL to take the system backup.
b) Transport the system backups to the remote site.
CRESTART CREATE,SYSPITRT=log-truncation-timestamp
In this control statement, substitute log-truncation-timestamp with the timestamp of the point to
which you want to recover.
b) Start Db2.
c) Run the RESTORE SYSTEM utility by issuing the RESTORE SYSTEM control statement.
This utility control statement performs a recovery to the current time (or to the time of the last log
transmission from the local site).
Procedure
To recover your Db2 subsystem without using the BACKUP SYSTEM utility:
1. Prepare for recovery.
a) Issue the Db2 command SET LOG SUSPEND to suspend logging and update activity, and to quiesce
32 KB page writes and data set extensions.
For data sharing systems, issue the command to each member of the data sharing group.
b) Use the FlashCopy function to copy all Db2 volumes. Include any ICF catalogs that are used by
Db2, as well as active logs and BSDSs.
c) Issue the Db2 command SET LOG RESUME to resume normal Db2 activity.
d) Use DFSMSdss to dump the disk copies that you just created to tape, and then transport this tape
to the remote site. You can also use other methods to transmit the copies that you make to the
remote site.
2. Recover your Db2 subsystem.
a) Use DFSMSdss to restore the FlashCopy data sets to disk.
b) Run the DSNJU003 utility by using the CRESTART CREATE, SYSPITR=log-truncation-point control
statement.
The log-truncation-point is the RBA or LRSN of the point to which you want to recover.
c) Restore any logs on tape to disk.
d) Start Db2.
e) Run the RESTORE SYSTEM utility using the RESTORE SYSTEM LOGONLY control statement to
recover to the current time (or to the time of the last log transmission from the local site).
Procedure
The following procedure first describes how to create image copies of the source table space and then
describes how to access historical data from the moved tables from those image copies.
• To create image copies of the source table space:
a) Check for and insert missing system pages into the table space.
a. Run REPAIR CATALOG TEST to check for missing system pages for tables. If the table space
has any missing system pages, message DSNU667I is issued, with this additional information:
MISSING SYSTEM PAGE IN THE PAGE SET.
b. If the table space has any missing system pages for tables that are in version 0 format, run
REPAIR INSERTVERSIONPAGES SETCURRENTVERSION to insert system pages into the table
space. Tables that are in version 0 format have had no version-changing alter operations.
b) Collect the following information:
– The DDL for the source base table space, tables, and indexes
– The DBID, PSID, and OBID values for the source base table space, which can be queried from the
SYSIBM.SYSTABLESPACE catalog table
– The OBID value of each table, which can be queried from the SYSIBM.SYSTABLES catalog table
– The INCREMENT and MAXASSIGNEDVAL column values for each table in the source base table
space that has an identity column, which can be queried from the SYSIBM.SYSSEQUENCES
catalog table. Add the INCREMENT value to the MAXASSIGNEDVAL value to determine the next
value (nv) for the identity column.
If any related auxiliary LOB or XML table spaces exist, collect the following additional information:
– The DDL for the auxiliary table spaces
– The DBID, PSID, and OBID values for the auxiliary table spaces, which can be queried from the
SYSIBM.SYSTABLESPACE catalog table
c) Using the collected information, make full image copies or incremental copies of the source base
table space and any related auxiliary LOB and XML table spaces.
Attention: Using fuzzy image copies for this procedure might result in fuzzy data, such as
duplicate rows, missing rows, or uncommitted data.
After the image copies are created, you can materialize the MOVE TABLE operations by running the
REORG utility.
• To access historical data from the moved tables from the image copies:
a) Using application compatibility level V12R1M503 or lower and the information that was collected
in step “2” on page 692 from the first part of this procedure, create the following objects. If
incremental copies were made, use the information that was collected from the most recent
incremental copy.
– Create a new base table space with the same attributes as the source base table space at the
time the image copy was created.
Note: If the source base table space of the MOVE TABLE operation is a simple table space, you
cannot create a new simple table space. If simple table spaces that have the same attributes
as the source base table space already exist for testing purposes, use one of those table spaces
instead of creating a new table space.
– If image copies of related auxiliary table spaces were created, create new auxiliary table spaces
with the same attributes as the source auxiliary table spaces at the time the image copies were
created.
– Create new tables in the new base table space with the same attributes as the source tables at
the time the image copy was created. For tables with identity columns, specify nv for the START
WITH value in the CREATE TABLE statement.
Note: For tables with XML columns, you cannot set or alter the starting value for the DOCID
columns. Do not issue SQL INSERT or UPDATE statements or run the LOAD utility after this
procedure because duplicate DOCID column values might be generated.
– Create indexes that are needed for the new tables.
b) Collect the following information:
– The DBID, PSID, and OBID values for the new base table space, which can be queried from the
SYSIBM.SYSTABLESPACE catalog table
– If new auxiliary table spaces were created, the DBID, PSID, and OBID values for the new
auxiliary table spaces, which can be queried from the SYSIBM.SYSTABLESPACE catalog table
– The OBID value of each new table, which can be queried from the SYSIBM.SYSTABLES catalog
table
c) For each new table space, including any new auxiliary table spaces, complete the following steps:
a. Issue the STOP DATABASE command.
b. Run the DSN1COPY stand-alone utility with the following specifications to restore the full image
copy to the new table space:
– Specify the full image copy as SYSUT1, the input data set.
– Specify the table space linear data set (LDS) as SYSUT2, the output data set.
– Specify the RESET option so that the log RBA or LRSN values in the pages are set to zero.
– Specify the OBIDXLAT option. In the SYSXLAT data set, specify the proper mapping of DBIDs,
PSIDs, and table OBIDs from the production to the new table space.
– For sequential full image copies, specify the FULLCOPY option unless the new table space is a
segmented table space. For a segmented table space, specify the SEGMENT option instead.
The three main types of log records are unit of recovery, checkpoint, and database page set control
records.
Each log record has a header that indicates its type, the Db2 subcomponent that made the record, and,
for unit-of-recovery records, the unit-of-recovery identifier. The log records can be extracted and printed
by the DSN1LOGP utility.
The log relative byte address and log record sequence number
For basic 6-byte RBA format, the Db2 log can contain up to 2^48 (2 to the 48th power) bytes. For
extended 10-byte RBA format, the Db2 log can contain up to 2^80 (2 to the 80th power) bytes. Each
byte is addressable by its offset from the beginning of the log. That offset is known as its relative byte
address (RBA).
A log record is identifiable by the RBA of the first byte of its header; that RBA is called the relative byte
address of the record. The record RBA is like a timestamp because it uniquely identifies a record that
starts at a particular point in the continuing log.
In the data sharing environment, each member has its own log. The log record sequence number (LRSN)
identifies the log records of a data sharing member. The LRSN might not be unique on a data sharing
member. The LRSN is a hexadecimal value derived from a store clock timestamp. Db2 uses the LRSN for
recovery in the data sharing environment.
The redo information is required if the work is committed and later must be recovered. The undo
information is used to back out work that is not committed.
If the work is rolled back, the undo/redo record is used to remove the change. At the same time that the
change is removed, a new redo/undo record is created that contains information, called compensation
information, that is used if necessary to reverse the change. For example, if a value of 3 is changed to 5,
redo compensation information changes it back to 3.
If the work must be recovered, Db2 scans the log forward and applies the redo portions of log records
and the redo portions of compensation records, without keeping track of whether the unit of recovery
was committed or rolled back. If the unit of recovery had been rolled back, Db2 would have written
compensation redo log records to record the original undo action as a redo action. Using this technique,
the data can be completely restored by applying only redo log records on a single forward pass of the log.
Db2 also logs the creation and deletion of data sets. If the work is rolled back, the operations are
reversed. For example, if a table space is created using Db2-managed data sets, Db2 creates a data set;
if rollback is necessary, the data set is deleted. If a table space using Db2-managed data sets is dropped,
Db2 deletes the data set when the work is committed, not immediately. If the work is rolled back, Db2
does nothing.
DBET log records also register exception information that is not related to units of recovery.
Exception states
DBET log records register whether any database, table space, index space, or partition is in an exception
state. To list all objects in a database that are in an exception state, use the command DISPLAY
DATABASE (database name) RESTRICT.
Table 71. Example of a log record sequence for an INSERT of one row using TSO
Type of record Information recorded
1. Begin_UR Beginning of the unit of recovery. Includes the connection name,
correlation name, authorization ID, plan name, and LUWID.
2. Undo/Redo for data Insertion of data. Includes the database ID (DBID), page set ID, page
number, internal record identifier (RID), and the data inserted.
3. Undo/Redo for Index Insertion of index entry. Includes the DBID, index space object ID,
page number, and index entry to be added.
4. Begin Commit 1 The beginning of the commit process. The application has requested
a commit either explicitly (EXEC SQL COMMIT) or implicitly (for
example, by ending the program).
5. Phase 1-2 Transition The agreement to commit in TSO. In CICS and IMS, an End Phase
1 record notes that Db2 agrees to commit. If both parties agree, a
Begin Phase 2 record is written; otherwise, a Begin Abort record is
written, noting that the unit of recovery is to be rolled back.
6. End Phase 2 Completion of all work required for commit.
Table 72 on page 697 shows the log records for processing and rolling back an insertion.
Update data¹: the old and new values of the changed data. On redo, the new data is replaced; on
undo, the old data is replaced.
Insert index entry: the new key value and the data RID.
Delete index entry: the deleted key value and the data RID.
Add column: the information about the column being added, if the table was defined with DATA
CAPTURE(CHANGES).
EXCHANGE DATA on a clone table space: the database ID (DBID) and the page set ID (PSID) of the
table space on which the operation was run.
REPAIR SET DELETE: the database ID (DBID) and the page set ID (PSID) of the table space on which
the operation was run.
Note:
1. If an update occurs to a table defined with DATA CAPTURE(CHANGES), the entire before-image of the
data row is logged.
Related reference
DSNTIPL: Active log data set parameters (Db2 Installation and Migration)
Database page set control records
Page set control records primarily register the allocation, opening, and closing of every page set (table
space or index space).
The same information is in the Db2 directory (SYSIBM.SYSLGRNX). It is also registered in the log so
that it is available at restart.
The physical output unit written to the active log data set is a control interval (CI) of 4096 bytes (4
KB). Each CI contains one VSAM record.
One physical record can contain several logical records, one or more logical records and part of
another logical record, or only part of one logical record. The physical record must also contain 37 bytes
of Db2 control information if the log record is in 10-byte format, or 21 bytes of Db2 control information
if the log record is in six-byte format. The control information is called the log control interval definition
(LCID).
Figure 67 on page 701 shows a VSAM CI containing four log records or segments, namely:
• The last segment of a log record of 768 bytes (X'0300'). The length of the segment is 100 bytes
(X'0064').
• A complete log record of 40 bytes (X'0028').
• A complete log record of 1024 bytes (X'0400').
• The first segment of a log record of 4108 bytes (X'100C'). The length of the segment is 2911 bytes
(X'0B5F').
[Figure 67 callouts: the VSAM record ends after record 4. The LCID fields record, for data sharing, the
LRSN of the last log record in this CI; the offset of the last segment in this CI (the beginning of log
record 4); the total length of the spanned record that ends in this CI (log record 1); and the total length
of the spanned record that begins in this CI (log record 4).]
The term log record refers to a logical record, unless the term physical log record is used. A part of a
logical record that falls within one physical record is called a segment.
Related reference
The log control interval definition (LCID)
The first segment of a log record must contain the header and some bytes of data. If the current
physical record has too little room for the minimum segment of a new record, the remainder of the
physical record is unused, and a new log record is written in a new physical record.
The log record can span many VSAM CIs. For example, a minimum of nine CIs are required to hold the
maximum size physical log record of 36,000 bytes. Only the first segment of the record contains the entire
LRH; later segments include only the first two fields. When a specific log record is needed for recovery, all
segments are retrieved and presented together as if the record were stored continuously.
Table 75. Contents of the log record header for 10-byte format
Hex offset Length Information
00 4 Length of this record or segment
04 2 Length of any previous record or segment in this CI; 0 if this is the
first entry in the CI.
06 1 Flags
07 1 Release identifier
08 1 Resource manager ID (RMID) of the Db2 component that created the
log record
09 1 Flags
0A 16 Unit of recovery ID, if this record relates to a unit of recovery;
otherwise, 0
1A 16 Log RBA of the previous log record, if this record relates to a unit of
recovery; otherwise, 0
2A 1 Length of header
2B 1 Available
2C 2 Type of log record
2E 2 Subtype of the log record
30 12 Undo next LSN
3C 14 LRHTIME
4A 6 Available
Table 76. Contents of the log record header for 6-byte format
Hex offset Length Information
00 2 Length of this record or segment
Related concepts
Unit of recovery log records
Most of the log records describe changes to the Db2 database. All such changes are made within units of
recovery.
Related reference
Log record type codes
The type code of a log record tells what kind of Db2 event the record describes.
Log record subtype codes
The log record subtype code provides a more granular definition of the event that occurred and that
generated the log record. Log record subtype codes are unique only within the scope of the corresponding
log record type code.
The following tables describe the contents of the LCID. You can determine the LCID format by
testing the first bit of the next to last byte. If the bit is 1, then the LCID is in the 10-byte format. If the bit is
0, the LCID is in the 6-byte format.
Table 78. Contents of the log control interval definition for 6-byte RBA and LRSN
Hex offset Length Information
00 1 An indication of whether the CI contains free space: X'00' = Yes,
X'FF' = No
01 2 Total length of a segmented record that begins in this CI; 0 if no
segmented record begins in this CI
03 2 Total length of a segmented record that ends in this CI; 0 if no
segmented record ends in this CI
05 2 Offset of the last record or segment in the CI
07 6 Log RBA of the start of the CI
0D 6 LRSN (data sharing) or timestamp of the last log record in this CI
(non-data sharing)
13 2 Member ID (data sharing) or 0 (non-data sharing)
Each recovery log record consists of two parts: a header, which describes the record, and data. The
following illustration shows the format schematically; the following list describes each field.
[Illustration of the log record fields, with lengths in bytes: length of this record or segment (4);
length of previous record or segment (2); flags (1); resource manager ID (1); flags (1); unit of
recovery ID (6); LINK (6); flags (1).]
Related reference
Log record type codes
The type code of a log record tells what kind of Db2 event the record describes.
Log record subtype codes
The log record subtype code provides a more granular definition of the event that occurred and that
generated the log record. Log record subtype codes are unique only within the scope of the corresponding
log record type code.
Code           Type of event
0002           Page set control
0004           SYSCOPY utility
0010           System event
0020           Unit of recovery control
0100           Checkpoint
0200           Unit of recovery undo
0400           Unit of recovery redo
0800           Archive log command
1000 to 8000   Assigned by Db2
2200           Savepoint
4200           End of rollback to savepoint
4400           Alter or modify recovery log record
A single record can contain multiple type codes that are combined. For example, 0600 is a combined
UNDO/REDO record; F400 is a combination of four Db2-assigned types plus a REDO. A diagnostic
log record for the TRUNCATE IMMEDIATE statement is type code 4200, which is a combination of a
diagnostic log record (4000) and an UNDO record (0200).
Log record type 0004 (SYSCOPY utility) has log subtype codes that correspond to the page
set ID values of the table spaces that have their SYSCOPY records in the log (SYSIBM.SYSUTILX,
SYSIBM.SYSCOPY, DSNDB01.DBD01, and DSNDB01.SYSDBDXA).
Log record type 0800 (quiesce) does not have subtype codes.
Some log record types (1000 - 8000 assigned by Db2) can have proprietary log record subtype codes
assigned.
Subtypes for type 4200 (diagnostic log record for TRUNCATE IMMEDIATE)
Code    Type of event
0085    Special begin for TRUNCATE IMMEDIATE
0086    Special commit for TRUNCATE IMMEDIATE
Related reference
DSN1LOGP (Db2 Utilities)
The macros are contained in the data set library prefix.SDSNMACS and are documented by
comments in the macros themselves.
Log record formats for the record types and subtypes are detailed in the mapping macro DSNDQJ00.
DSNDQJ00 provides the mapping of specific data change log records, UR control log records, and page
set control log records that you need to interpret data changes by the UR. DSNDQJ00 also explains the
content and usage of the log records.
Related reference
Log record subtype codes
The log record subtype code provides a more granular definition of the event that occurred and that
generated the log record. Log record subtype codes are unique only within the scope of the corresponding
log record type code.
Procedure
Issue the following START TRACE command in an instrumentation facility interface (IFI) program:

-START TRACE(P) CLASS(30) IFCID(126) DEST(OPX)

where:
• P signifies to start a Db2 performance trace. Any of the Db2 trace types can be used.
• CLASS(30) is a user-defined trace class (31 and 32 are also user-defined classes).
• IFCID(126) activates Db2 log buffer recording.
• DEST(OPX) starts the trace to the next available Db2 online performance (OP) buffer. The size of this OP
buffer can be explicitly controlled by the BUFSIZE keyword of the START TRACE command. Valid sizes
range from 256 KB to 16 MB. The number must be evenly divisible by 4.
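The BUFSIZE constraint can be checked mechanically. A small sketch (Python for illustration; the assumption here is that the BUFSIZE value is expressed in KB units):

```python
def valid_bufsize_kb(size_kb: int) -> bool:
    """True if the OP buffer size satisfies the documented constraints:
    between 256 KB and 16 MB, and evenly divisible by 4."""
    return 256 <= size_kb <= 16 * 1024 and size_kb % 4 == 0

print(valid_bufsize_kb(256))    # True  (minimum)
print(valid_bufsize_kb(16384))  # True  (maximum, 16 MB)
print(valid_bufsize_kb(258))    # False (not divisible by 4)
```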
When the START TRACE command takes effect, Db2 begins writing 4-KB log buffer VSAM control
intervals (CIs) to the OP buffer as well as to the active log, and continues to do so until Db2
terminates. As part of the IFI COMMAND invocation, the application specifies an ECB to be posted
and a threshold to which the OP buffer can fill; when the buffer reaches that threshold, the ECB is
posted, and the application issues an IFI READA request to obtain the contents of the buffer.
CALL DSNWLI(READS,ifca,return_area,ifcid_area,qual_area)
IFCID 0306 must appear in the IFCID area. IFCID 0306 returns complete log records. Multi-segmented
control interval log records are combined for a complete log record.
Generally, catalog and directory objects cannot be in group buffer pool RECOVER-pending (GRECP) status
when an IFCID 0306 request accesses the compression dictionary. Only log entries for tables that are
defined with DATA CAPTURE CHANGES enabled are decompressed.
Related tasks
Reading specific log records (IFCID 0129)
You can use IFCID 129 with an IFI READS (read synchronously) request to return a specific range of log
records from the active log into the return area that is initialized by your program.
WQALLMOD WQALLRBA
-------- --------------------
READS input: C6 00000000CAC5B606C843
In the next F call for a data sharing environment, you specify either QW0306ES or QW0306ES+1 as
the input for WQALLRBA:
WQALLMOD WQALLRBA
-------- --------------------
READS input: C6 00000000CAC5B606CB6C
WQALLCRI
In this 1-byte field, indicate what types of log records are to be returned:
Modifying Db2 for the GDPS Continuous Availability with zero data loss
solution
If you are using the GDPS Continuous Availability with zero data loss solution with Db2 for the first time,
you need to modify your Db2 data sharing groups.
Procedure
To prepare Db2 data sharing groups to use the GDPS Continuous Availability with zero data loss solution,
follow these steps:
1. Convert all members of your source and proxy data sharing groups to Db2 11 new-function mode.
2. Convert the BSDS data sets to extended 10-byte format by running the DSNTIJCB job on all members
of your source and proxy data sharing groups.
3. Choose a member of the source data sharing group that is not running a capture program to be the
first member to be upgraded. Create the CDDS on that member.
To minimize the possibility of an out-of-space condition, you should define an SMS data class for the
CDDS with the following attributes enabled:
• Extended addressability
• Extended format
• Extent constraint relief
• CA reclaim
Define the CDDS with a DEFINE CLUSTER command like the one below. In your DEFINE CLUSTER
command, you need to specify the same values that are shown in the example for these parameters:
• KEYS
• RECORDSIZE
• SPANNED
• SHAREOPTIONS
• CONTROLINTERVALSIZE
DEFINE CLUSTER -
( NAME(prefix.CDDS) -
KEYS(8 0) -
RECORDSIZE(66560 66560) -
SPANNED -
SHAREOPTIONS(3 3)) -
DATA -
( CYLINDERS(1000 1000) -
CONTROLINTERVALSIZE(16384)) -
INDEX -
( CYLINDERS(20 20) -
CONTROLINTERVALSIZE(2048))
Procedure
1. Migrate all members of the proxy data sharing group to Db2 12 function level V12R1M100 by following
these steps. Suppose that the proxy data sharing group has n members. If n>1, follow steps “1.a” on
page 719 through “1.g” on page 719. If n=1, follow steps “1.a” on page 719, “1.b” on page 719,
“1.d” on page 719, “1.e” on page 719, and “1.g” on page 719.
For i=1 to n:
a) Stop replication on member i of the proxy data sharing group.
b) Upgrade the replication product on member i as necessary for use with Db2 12.
c) To decrease the amount of time that replication is unavailable, on another active member of the
proxy data sharing group, start the replication that was previously running on member i. Do this
only if your environment can support additional replication without significantly degrading the
performance of the replication that is already running on the other active member.
d) Migrate member i of the proxy data sharing group to Db2 12 function level V12R1M100. See
Migrating to Db2 12 (Db2 Installation and Migration) for detailed instructions.
e) On proxy data sharing group member i, which you migrated in step “1.d” on page 719, apply all Db2
12 PTFs that meet these criteria:
• The associated APARs indicate that the PTFs are for proxy data sharing group members in a GDPS
Continuous Availability with zero data loss environment.
• You have not already applied the PTFs as part of the migration to function level V12R1M100.
Contact IBM Support if you need assistance in identifying those PTFs.
Tip: Although you can apply all PTFs that are for GDPS Continuous Availability with zero data
loss on the proxy data sharing group members without affecting functionality, applying only the
PTFs that are intended for proxy members decreases the number of unnecessary PTFs on those
members.
f) On the member on which you started replication in step “1.c” on page 719, stop the replication for
member i.
g) Restart replication on member i of the proxy data sharing group.
2. Migrate all members of the source and target data sharing groups, one member at a time, to Db2 12
function level V12R1M100.
Modifying IFI READS calls for the GDPS Continuous Availability with zero
data loss environment
When you implement the GDPS Continuous Availability with zero data loss solution, you need to
modify your programs that issue IFI READS calls for IFCID 0306 to capture log records.
Procedure
Specify one of the following values in the WQALLCRI field in the IFI qualification area to indicate that log
records are being returned by the proxy data sharing group.
X'01' (WQALLCR1)
Only log records for changed data capture and unit of recovery control from the proxy data sharing
group in a GDPS Continuous Availability with zero data loss environment. Records are returned until
the end-of-scope log point is reached.
X'02' (WQALLCR2)
All types of log records from the proxy data sharing group in a GDPS Continuous Availability with zero
data loss environment. Records are returned until the end-of-scope log point is reached.
X'03' (WQALLCR3)
Only log records for changed data capture and unit of recovery control from the proxy data sharing
group in a GDPS Continuous Availability with zero data loss environment. Records are returned until
the end-of-log point is reached for all members of the data sharing group.
X'04' (WQALLCR4)
All types of log records from the proxy data sharing group in a GDPS Continuous Availability with zero
data loss environment. Records are returned until the end-of-log point is reached for all members of
the data sharing group.
Procedure
To recover or rebuild a CDDS, follow these steps in the source data sharing group:
1. Issue the -STOP CDDS command to direct all members of the data sharing group to close and
deallocate the CDDS.
2. Issue the DFSMSdss RESTORE command to restore the CDDS from the latest backup copy.
If you do not have a backup copy, delete and redefine the CDDS. See “Modifying Db2 for the GDPS
Continuous Availability with zero data loss solution” on page 716 for an example of the CDDS
definition.
3. Issue the -START CDDS command to direct all members of the data sharing group to allocate and open
the CDDS.
4. Run REORG TABLESPACE with the INITCDDS option to repopulate the CDDS.
You can specify the SEARCHTIME option with the INITCDDS option to allow REORG to populate the
CDDS with an earlier dictionary than the dictionary that currently resides in the target table space.
Related concepts
RESTORE command for DFSMSdss (z/OS DFSMSdss Storage Administration)
Related tasks
Modifying Db2 for the GDPS Continuous Availability with zero data loss solution
If you are using the GDPS Continuous Availability with zero data loss solution with Db2 for the first time,
you need to modify your Db2 data sharing groups.
Related reference
Syntax and options of the REORG TABLESPACE control statement (Db2 Utilities)
-STOP CDDS (Db2) (Db2 Commands)
-START CDDS (Db2) (Db2 Commands)
The following tables list and describe the JCL DD statements that are used by stand-alone services.
Table 80. JCL DD statements for Db2 stand-alone log services in a data-sharing environment
JCL DD statement Explanation
GROUP If you are reading logs from every member of a data sharing group in LRSN
sequence, you can use this statement to locate the BSDSs and log data sets
needed. You must include the data set name of one BSDS in the statement. Db2
can find the rest of the information from that one BSDS.
All members' logs and BSDS data sets must be available. If you use this DD
statement, you must also use the LRSN and RANGE parameters on the OPEN
request. The GROUP DD statement overrides any MxxBSDS statements that are
used.
(Db2 searches for the BSDS DD statement first, then the GROUP statement, and
then the MxxBSDS statements. If you want to use a particular member's BSDS for
your own processing, you must call that DD statement something other than BSDS.)
MxxBSDS Names the BSDS data set of a member whose log must participate in the read
operation and whose BSDS is to be used to locate its log data sets. Use a separate
MxxBSDS DD statement for each Db2 member. xx can be any two valid characters.
Use these statements if logs from selected members of the data sharing group
are required and the BSDSs of those members are available. These statements are
ignored if you use the GROUP DD statement.
For one MxxBSDS statement, you can use either RBA or LRSN values to specify a
range. If you use more than one MxxBSDS statement, you must use the LRSN to
specify the range.
MyyARCHV Names the archive log data sets of a member to be used as input. yy can be
any two valid characters that do not duplicate any xx used in an MxxBSDS DD
statement.
Concatenate all required archived log data sets of a given member in time
sequence under one DD statement. Use a separate MyyARCHV DD statement for
each member. You must use this statement if the BSDS data set is unavailable or if
you want only some of the log data sets from selected members of the group.
If you name the BSDS of a member by a MxxBSDS DD statement, do not name
the log of the same member by an MyyARCHV statement. If both MyyARCHV and
MxxBSDS identify the same log data sets, the service request fails. MyyARCHV
statements are ignored if you use the GROUP DD statement.
The DD statements must specify the log data sets in ascending order of log RBA (or LRSN) range. If both
ARCHIVE and ACTIVEn DD statements are included, the first archive data set must contain the lowest
log RBA or LRSN value. If the JCL specifies the data sets in a different order, the job terminates with
an error return code with a GET request that tries to access the first record breaking the sequence. If
the log ranges of the two data sets overlap, this is not considered an error; instead, the GET function
skips over the duplicate data in the second data set and returns the next record. The distinction between
out-of-order and overlap is as follows:
• An out-of-order condition occurs when the log RBA or LRSN of the first record in a data set is greater
than that of the first record in the following data set.
• An overlap condition occurs when the out-of-order condition is not met but the log RBA or LRSN of the
last record in a data set is greater than that of the first record in the following data set.
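The two conditions can be distinguished by comparing the RBA (or LRSN) endpoints of consecutive data sets. A sketch of that classification (Python for illustration only; the stand-alone services themselves are invoked through assembler macros):

```python
def classify_sequence(first_a: int, last_a: int, first_b: int) -> str:
    """Classify the relationship between a data set A and the data set B that
    follows it, using log RBA or LRSN values.
    first_a / last_a: first and last record of A; first_b: first record of B."""
    if first_a > first_b:
        return "out of order"   # the job terminates with an error return code
    if last_a > first_b:
        return "overlap"        # GET skips the duplicate data in B
    return "in sequence"

print(classify_sequence(100, 200, 50))   # out of order
print(classify_sequence(100, 200, 150))  # overlap
print(classify_sequence(100, 200, 300))  # in sequence
```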
Gaps within the log range are permitted. A gap is created when one or more log data sets containing part
of the range to be processed are not available. This can happen if the data set was not specified in the JCL
or is not reflected in the BSDS. When the gap is encountered, an exception return code value is set, and
the next complete record after the gap is returned.
Normally, the BSDS DD name is supplied in the JCL, rather than a series of ACTIVE DD names or a
concatenated set of data sets for the ARCHIVE ddname. This is commonly referred to as "running in BSDS
mode".
Related reference
Stand-alone log CLOSE request
A stand-alone log CLOSE request deallocates any log data sets that were dynamically allocated by
previous processing. In addition, all storage that was obtained by previous functions, including the
request block that is specified on the request, is freed.
Stand-alone log OPEN request
A stand-alone log OPEN request initializes the stand-alone log services.
Stand-alone log GET request
A stand-alone log GET request returns a pointer to a buffer that contains the next log record, based on
position information in the request block.
If you use the GROUP DD statement, the determinant is the number of members in the group.
Otherwise, it is the number of different xx and yy values that are used in the MxxBSDS and
MyyARCHV DD statements.
For example, assume you need to read log records from members S1, S2, S3, S4, S5, and S6.
The request macro invoking these services can be used by reentrant programs. The macro requires
that register 13 point to an 18-word save area at invocation. In addition, registers 0, 1, 14, and 15 are
used as work and linkage registers. A return code is passed back in register 15 at the completion of each
request. When the return code is nonzero, a reason code is placed in register 0. Return codes identify a
class of errors, while the reason code identifies a specific error condition of that class. The stand-alone
log return codes are shown in the following table.
The stand-alone log services invoke executable macros that can execute only in 24-bit addressing mode
and reference data below the 16-MB line. User-written applications should be link-edited as AMODE(24),
RMODE(24).
A log record is available in the area pointed to by the request block until the next GET request is
issued. At that time, the record is no longer available to the requesting program. If the program requires
reference to a log record's content after requesting a GET of the next record, the program must move the
record into a storage area that is allocated by the program.
The first GET request, after a FUNC=OPEN request that specified a RANGE parameter, returns a pointer in
the request feedback area. This points to the first record with a log RBA value greater than or equal to the
low log RBA value specified by the RANGE parameter. If the RANGE parameter was not specified on the
FUNC=OPEN request, then the data to be read is determined by the JCL specification of the data sets. In
this case, a pointer to the first complete log record in the data set that is specified by the ARCHIVE, or by
ACTIVE1 if ARCHIVE is omitted, is returned. The next GET request returns a pointer to the next record in
ascending log RBA order. Subsequent GET requests continue to move forward in log RBA sequence until
the function encounters the end of RANGE RBA value, the end of the last data set specified by the JCL, or
the end of the log as determined by the bootstrap data set.
The syntax for the stand-alone log GET request is:
Keyword
Explanation
FUNC=GET
Requests the stand-alone log GET function.
RBR
Specifies a register that contains the address of the request block that this request is to use.
Although you can specify any register between 1 and 12, using register 1 (RBR=(1)) avoids the
generation of an unnecessary load register and is therefore more efficient. The pointer to the
request block (that is passed in register n of the RBR=(n) keyword) must be used by subsequent
GET and CLOSE function requests.
Output
Explanation
GPR 15
General-purpose register 15 contains a return code upon completion of a request. For nonzero return
codes, a corresponding reason code is contained in register 0.
Related concepts
X'D1......' codes (Db2 Codes)
Related tasks
Recovering from different Db2 for z/OS problems
You can troubleshoot and recover from many Db2 problems on your own by using the provided recovery
procedures.
Related reference
JCL DD statements for Db2 stand-alone log services
Stand-alone services, such as OPEN, GET, and CLOSE, use a variety of JCL DD statements as they operate.
Registers and return codes
Keyword
Explanation
FUNC=CLOSE
Requests the CLOSE function.
RBR
Specifies a register that contains the address of the request block that this function uses. Although
you can specify any register between 1 and 12, using register 1 (RBR=(1)) avoids the generation of an
unnecessary load register and is therefore more efficient.
Output
Explanation
GPR 15
Register 15 contains a return code upon completion of a request. For nonzero return codes, a
corresponding reason code is contained in register 0.
GPR 0
Register 0 contains a reason code that is associated with a nonzero return code that is contained in
register 15. The only reason code used by the CLOSE function is 00D10030.
Related reference
JCL DD statements for Db2 stand-alone log services
Stand-alone services, such as OPEN, GET, and CLOSE, use a variety of JCL DD statements as they operate.
Registers and return codes
Db2 uses registers to store important information and return codes to help you determine the status of
stand-alone log activity.
Related information
00D10030 (Db2 Codes)
For example:
*****************************************************************
* HANDLE ERROR FROM OPEN FUNCTION AT THIS POINT *
*****************************************************************
⋮
GETCALL EQU *
DSNJSLR FUNC=GET,RBR=(R1)
C R0,=X'00D10020' END OF RBA RANGE ?
BE CLOSE YES, DO CLEANUP
C R0,=X'00D10021' RBA GAP DETECTED ?
BE GAPRTN HANDLE RBA GAP
LTR R15,R15 TEST RETURN CODE FROM GET
BNZ ERROR
⋮
⋮
******************************************************************
* PROCESS RETURNED LOG RECORD AT THIS POINT. IF LOG RECORD *
* DATA MUST BE KEPT ACROSS CALLS, IT MUST BE MOVED TO A *
* USER-PROVIDED AREA. *
******************************************************************
USING SLRF,1 BASE SLRF DSECT
L R8,SLRFFRAD GET LOG RECORD START ADDR
LR R9,R8
AH R9,SLRFRCLL GET LOG RECORD END ADDRESS
BCTR R9,R0
⋮
CLOSE EQU *
DSNJSLR FUNC=CLOSE,RBR=(1)
⋮
NAME DC C'DDBSDS'
RANGER DC X'00000000000000000005FFFF'
⋮
DSNDSLRB
DSNDSLRF
EJECT
R0 EQU 0
R1 EQU 1
R2 EQU 2
⋮
R15 EQU 15
END
Figure 70. Excerpts from a sample program using stand-alone log services
Procedure
You must write an exit routine (or use the one that is provided by the preceding program offering)
that can be loaded and called under the processing conditions and restrictions that apply to this
exit routine.
Related concepts
Contents of the log
The log contains the information that is needed to recover the results of program execution, the contents
of the database, and the Db2 subsystem. The log does not contain information for accounting, statistics,
traces, or performance evaluation.
Log capture routines
A log capture exit routine makes Db2 log data available for recovery purposes in real time.
Related tasks
Reading log records with IFI
You can use the READA (read asynchronously) request of the instrumentation facility interface (IFI) to
read log records into a buffer. Use the READS (read synchronously) request to read specific log control
intervals from a buffer. You can use these requests online while Db2 is running.
Related reference
The physical structure of the log
The active log consists of VSAM data sets with certain required characteristics.
Edit procedures
An edit procedure is assigned to a table by the EDITPROC clause of the CREATE TABLE statement. An edit
procedure receives the entire row of a base table in internal Db2 format. It can transform the row when it
is stored by an INSERT or UPDATE SQL statement or by the LOAD utility.
An edit procedure can be defined as WITH ROW ATTRIBUTES or WITHOUT ROW ATTRIBUTES in a
CREATE TABLE statement. An edit procedure that is defined as WITH ROW ATTRIBUTES uses information
about the description of the rows in the associated table. You cannot define an edit routine as WITH ROW
ATTRIBUTES on a table that has the following characteristics:
• The table contains a LOB, ROWID, or XML column.
• The table contains an identity column.
• The table contains a security label column.
• The table contains a column name that is longer than 18 EBCDIC bytes.
You cannot define an edit procedure as WITHOUT ROW ATTRIBUTES on a table that has LOB columns.
The transformation your edit procedure performs on a row (possibly encryption or compression) is
called edit-encoding. The same routine is used to undo the transformation when rows are retrieved; that
operation is called edit-decoding.
The edit-decoding function must be the exact inverse of the edit-encoding function. For example, if a
routine encodes 'ALABAMA' to '01', it must decode '01' to 'ALABAMA'. A violation of this rule can lead to an
abend of the Db2 connecting thread, or other undesirable effects.
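For instance, an edit procedure that compresses rows satisfies the exact-inverse rule as long as decompression reproduces the original bytes exactly. A sketch of the invariant (Python, with zlib standing in for the edit-encoding; the real routine is an assembler exit such as DSN8EAE1):

```python
import zlib

def edit_encode(row: bytes) -> bytes:
    """Edit-encoding: here, compression stands in for any row transformation."""
    return zlib.compress(row)

def edit_decode(coded: bytes) -> bytes:
    """Edit-decoding: must be the exact inverse of edit_encode."""
    return zlib.decompress(coded)

# The invariant the text describes: decode(encode(row)) == row, byte for byte.
row = b"ALABAMA\x00\x01rest-of-row"
assert edit_decode(edit_encode(row)) == row
```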
Your edit procedure can encode the entire row of the table, including any index keys. However, index
keys are extracted from the row before the encoding is done; therefore, index keys are stored in the
index in edit-decoded form. Hence, for a table with an edit procedure, index keys in the table are
edit-coded, but index keys in the index are not.
The sample application contains a sample edit procedure, DSN8EAE1. To print it, use ISPF facilities,
IEBPTPCH, or a program of your own. Or, assemble it and use the assembly listing.
There is also a sample routine that does Huffman data compression, DSN8HUFF in library
prefix.SDSNSAMP. That routine not only exemplifies the use of the exit parameters, but also has
some potential use for data compression. If you intend to use the routine in any production
application, pay particular attention to the warnings and restrictions given as comments in the
code. You might prefer to let Db2 compress your data.
Procedure
Specify the EDITPROC clause of the CREATE TABLE statement, followed by the name of the procedure.
The procedure is loaded on demand during operation.
You can specify the EDITPROC clause on a table that is activated with row and column access control.
The rows of the table are passed to these procedures if your security administrator determines that these
procedures are allowed to access sensitive data.
An edit routine is invoked after any date routine, time routine, or field procedure. If there is also a
validation routine, the edit routine is invoked after the validation routine. Any changes made to the row by
the edit routine do not change entries made in an index.
The same edit routine is invoked to edit-decode a row whenever Db2 retrieves one. On retrieval, it is
invoked before any date routine, time routine, or field procedure. If retrieved rows are sorted, the edit
routine is invoked before the sort. An edit routine is not invoked for a DELETE operation without a WHERE
clause that deletes an entire table in a segmented table space.
At invocation, registers are set, and the edit procedure uses the standard exit parameter list (EXPL). The
following table shows the exit-specific parameter list, as described by macro DSNDEDIT.
Columns for which no input field is provided and that are not in reordered row format are always at the
end of the row and are never defined as NOT NULL. In this case, the columns allow nulls, they are defined
as NOT NULL WITH DEFAULT, or the columns are ROWID or DOCID columns.
Use macro DSNDEDIT to get the starting address and row length for edit exits. Add the row length to the
starting address to get the first invalid address beyond the end of the input buffer; your routine must not
process any address as large as that.
The following diagram shows how the parameter list points to other row information. The address of the
nth column description is given by: RFMTAFLD + (n-1)*(FFMTE-FFMT).
[Figure 71 schematic: register 1 points to the parameter list, which contains the return code,
the reason code, the EDITCODE function to be performed, the length and address of the input row,
the length of the output row, and the address of the row description. The row description gives
the number of columns in the row (n), the row type, and the address of the column list; each
column description carries the column length, data type, data attribute, and column name.]
Figure 71. How the edit exit parameter list points to row information
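Both address computations described above are simple arithmetic. A sketch (Python; the names mirror the DSNDEDIT mapping fields, and the concrete addresses are made up for illustration):

```python
def nth_column_desc_addr(rfmtafld: int, n: int, ffmt: int, ffmte: int) -> int:
    """Address of the nth column description: RFMTAFLD + (n-1)*(FFMTE-FFMT)."""
    return rfmtafld + (n - 1) * (ffmte - ffmt)

def first_invalid_addr(row_start: int, row_length: int) -> int:
    """Add the row length to the starting address to get the first address
    beyond the end of the input buffer; the routine must not touch it."""
    return row_start + row_length

# Hypothetical values: column list at 0x5000, 32-byte entries, 80-byte row at 0x6000.
print(hex(nth_column_desc_addr(0x5000, 3, 0, 32)))  # 0x5040
print(hex(first_invalid_addr(0x6000, 80)))          # 0x6050
```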
If EDITCODE contains 0, the input row is in decoded form. Your routine must encode it.
In that case, the maximum length of the output area, in EDITOLTH, is 10 bytes more than the
maximum length of the record. In counting the maximum length for a row in basic row format,
"record" includes fields for the lengths of varying-length columns and for null indicators. In counting
the maximum length for a row in reordered row format, "record" includes fields for the offsets to the
varying length columns and for null indicators. The maximum length of the record does not include the
6-byte record header.
If EDITCODE contains 4, the input row is in coded form. Your routine must decode it.
In that case, EDITOLTH contains the maximum length of the record. In counting the maximum length
for a row in basic row format, "record" includes fields for the lengths of varying length columns and for
null indicators. In counting the maximum length for a row in reordered row format, "record" includes
fields for the offsets to the varying-length columns and for null indicators. The maximum length of the
record does not include the 6-byte record header.
In either case, put the result in the output area, pointed to by EDITOPTR, and put the length of your result
in EDITOLTH. The length of your result must not be greater than the length of the output area, as given in
EDITOLTH on invocation, and your routine must not modify storage beyond the end of the output area.
Required return code: Your routine must also leave a return code in EXPLRC1 with the following
meanings:
If the function fails, the routine might also leave a reason code in EXPLRC2. Db2 returns SQLCODE -652
(SQLSTATE '23506') to the application program and puts the reason code in field SQLERRD(6) of the SQL
communication area (SQLCA).
Validation routines
Validation routines are assigned to a table by the VALIDPROC clause of the CREATE TABLE and ALTER
TABLE statement. A validation routine receives an entire row of a base table as input. The routine can
return an indication of whether to allow a subsequent INSERT, UPDATE, DELETE, FETCH, or SELECT
operation.
Typically, a validation routine is used to impose limits on the information that can be entered in a
table; for example, allowable salary ranges, perhaps dependent on job category, for the employee sample
table.
Although VALIDPROCs can be specified for a table that contains a LOB or XML column, the LOB or XML
values are not passed to the validation routine. The LOB indicator column takes the place of the LOB
column, and the XML indicator column takes the place of the XML column. You cannot use VALIDPROC on
a table if the table contains a column name that is longer than 18 EBCDIC bytes.
The return code from a validation routine is checked for a 0 value before any insert, update, or delete is
allowed.
Related concepts
General guidelines for writing exit routines
When you use the exit routines that Db2 supplies, consider some of the general rules, requirements, and
guidelines for using exit routines.
Procedure
Issue the CREATE TABLE or ALTER TABLE statement with the VALIDPROC clause.
You can specify the VALIDPROC clause on a table that is activated with row and column access control.
The rows of the table are passed to these routines if your security administrator determines that these
routines are allowed to access sensitive data.
You can cancel a validation routine for a table by specifying the VALIDPROC NULL clause in an ALTER
TABLE statement.
The routine is invoked for most delete operations, including a mass delete of all the rows of a table.
If there are other exit routines, the validation routine is invoked before any edit routine, and after any date
routine, time routine, or field procedure.
The following diagram shows how the parameter list points to other information.
[Schematic: register 1 points to the EXPL, which contains the address and length of a 256-byte
work area and the address of the validation parameter list. The parameter list contains the
return code, the reason code, the address of the row description, and the length and address of
the input row to be validated. The row description gives the number of columns in the row (n),
the row type, and the address of the column list; each column description carries the column
length, data type, data attribute, and column name.]
The following table shows the exit-specific parameter list, described by macro DSNDRVAL.
Columns for which no input field is provided and that are not in reordered row format are always
at the end of the row and are never defined as NOT NULL. In this case, the columns allow nulls, they are
defined as NOT NULL WITH DEFAULT, or the columns are ROWID or DOCID columns.
Use macro DSNDRVAL to get the starting address and row length for validation exits. Add the row length
to the starting address to get the first invalid address beyond the end of the input buffer; your routine
must not process any address as large as that.
If the operation is not allowed, the routine might also leave a reason code in EXPLRC2. Db2 returns
SQLCODE -652 (SQLSTATE '23506') to the application program and puts the reason code in field
SQLERRD(6) of the SQL communication area (SQLCA).
Example: Suppose that you want to insert and retrieve dates in a format like "September 21, 2006". You
can use a date routine that transforms the date to a format that is recognized by Db2 on insertion, such as
ISO: "2006-09-21". On retrieval, the routine can transform "2006-09-21" to "September 21, 2006".
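The transformation in this example can be sketched outside Db2 as follows (Python for illustration; the actual routine is an assembler exit supplied through CSECTs such as DSNXVDTX):

```python
from datetime import datetime

def local_to_iso(local: str) -> str:
    """Insertion path: transform 'September 21, 2006' into ISO '2006-09-21'."""
    return datetime.strptime(local, "%B %d, %Y").strftime("%Y-%m-%d")

def iso_to_local(iso: str) -> str:
    """Retrieval path: transform ISO '2006-09-21' back into the local format."""
    return datetime.strptime(iso, "%Y-%m-%d").strftime("%B %d, %Y")

print(local_to_iso("September 21, 2006"))  # 2006-09-21
print(iso_to_local("2006-09-21"))          # September 21, 2006
```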
You can have either a date routine, a time routine, or both. These routines do not apply to timestamps.
Special rules apply if you execute queries at a remote DBMS, through the distributed data facility.
Related concepts
General guidelines for writing exit routines
When you use the exit routines that Db2 supplies, consider some of the general rules, requirements, and
guidelines for using exit routines.
Procedure
To specify date and time routines:
1. Set LOCAL DATE LENGTH or LOCAL TIME LENGTH to the length of the longest field that is required to
hold a date or time in your local format.
Allowable values range from 10 to 254. For example, if you intend to insert and retrieve dates in the
form "September 21, 2006", you need an 18-byte field. You would set LOCAL DATE LENGTH to 18.
2. Replace all of the IBM-supplied exit routines.
Use CSECTs DSNXVDTX, DSNXVDTA, and DSNXVDTU for a date routine, and DSNXVTMX, DSNXVTMA,
and DSNXVTMU for a time routine. The routines are loaded when Db2 starts.
3. To make the local date or time format the default for retrieval, set DATE FORMAT or TIME FORMAT to
LOCAL when installing Db2.
This specification has the effect that Db2 always takes the exit routine when you retrieve from a
DATE or TIME column. For example, suppose that you want to retrieve dates in your local format only
occasionally; most of the time you use the USA format. You would set DATE FORMAT to USA.
What to do next
The installation parameters for LOCAL DATE LENGTH, LOCAL TIME LENGTH, DATE FORMAT, and TIME
FORMAT can also be updated after Db2 is installed. If you change a length parameter, you might need to
rebind the applications.
A date or time exit routine is invoked in the following circumstances:
• When a date or time value is entered by an INSERT or UPDATE statement, or by the LOAD utility
• When a constant or host variable is compared to a column with a data type of DATE, TIME, or
TIMESTAMP
• When the DATE or TIME scalar function is used with a string representation of a date or time in LOCAL
format
• When a date or time value is supplied for a limit of a partitioned index in a CREATE INDEX statement
The exit is taken before any edit or validation routine.
• If the default is LOCAL, Db2 takes the exit immediately. If the exit routine does not recognize the data
(EXPLRC1=8), Db2 then tries to interpret it as a date or time in one of the recognized formats (EUR, ISO,
JIS, or USA). Db2 rejects the data only if that interpretation also fails.
• If the default is not LOCAL, Db2 first tries to interpret the data as a date or time in one of the recognized
formats. If that interpretation fails, Db2 then takes the exit routine, if it exists.
Db2 checks that the value supplied by the exit routine represents a valid date or time in some recognized
format, and then converts it into an internal format for storage or comparison. If the value is entered into a
column that is a key column in an index, the index entry is also made in the internal format.
On retrieval, a date or time routine can be invoked to change a value from ISO to the locally-defined
format when a date or time value is retrieved by a SELECT or FETCH statement. If LOCAL is the default,
the routine is always invoked unless overridden by a precompiler option or by the CHAR function, as by
specifying CHAR(HIREDATE, ISO); that specification always retrieves a date in ISO format. If LOCAL is
not the default, the routine is invoked only when specifically called for by CHAR, as in CHAR(HIREDATE,
LOCAL); that always retrieves a date in the format supplied by your date exit routine.
On retrieval, the exit is invoked after any edit routine or Db2 sort. A date or time routine is not invoked
for a DELETE operation without a WHERE clause that deletes an entire table in a segmented table space.
The following diagram shows how the parameter list points to other information.
(Figure: Register 1 points to two addresses: the EXPL area and the parameter list. The EXPL area
contains the address of the 512-byte work area, the length of the work area, and the return code. The
parameter list contains the address of the function code (the function to be performed), the address of
the format length, and the address of the LOCAL value.)
Figure 73. How a date or time parameter list points to other information
If the function code is 4, the input value is in local format, in the area pointed to by DTXPLOC. Your
routine must change it to ISO, and put the result in the area pointed to by DTXPISO.
If the function code is 8, the input value is in ISO, in the area pointed to by DTXPISO. Your routine must
change it to your local format, and put the result in the area pointed to by DTXPLOC.
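Putting the two function codes together, the calling protocol can be modeled with the following Python sketch. This is an illustration only: a real date exit routine is written in assembler, and DTXPLOC and DTXPISO are buffer areas, not strings. A return code of 8 is used here for an unrecognized value, as described earlier for EXPLRC1.

```python
from datetime import datetime

LOCAL_FMT = "%B %d, %Y"      # local format, e.g. "September 21, 2006"
ISO_FMT = "%Y-%m-%d"         # ISO format, e.g. "2006-09-21"

def date_exit(function_code, dtxploc="", dtxpiso=""):
    """Model of the exit: returns (explrc1, dtxploc, dtxpiso)."""
    try:
        if function_code == 4:   # local -> ISO, on insertion
            iso = datetime.strptime(dtxploc, LOCAL_FMT).strftime(ISO_FMT)
            return 0, dtxploc, iso
        if function_code == 8:   # ISO -> local, on retrieval
            loc = datetime.strptime(dtxpiso, ISO_FMT).strftime(LOCAL_FMT)
            return 0, loc, dtxpiso
    except ValueError:
        pass
    return 8, dtxploc, dtxpiso   # EXPLRC1=8: value not recognized
```

Note that the 18-byte local format in this sketch matches the LOCAL DATE LENGTH example earlier in this section.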
Your routine must also leave a return code in EXPLRC1, a 4-byte integer and the third word of the EXPL
area. The return code can have the following meanings:
In most cases, any conversion that is needed can be done by routines provided by IBM. The exit for
a user-written routine is available to handle exceptions.
Related concepts
General guidelines for writing exit routines
When you use the exit routines that Db2 supplies, consider some of the general rules, requirements, and
guidelines for using exit routines.
Procedure
Insert a row into the SYSIBM.SYSSTRINGS catalog table.
A conversion procedure does not use an exit-specific parameter list. Instead, the area pointed to by
register 1 at invocation includes three words, which contain the addresses of the following items:
1. The EXPL parameter list
2. A string value descriptor that contains the character string to be converted
3. A copy of a row from SYSIBM.SYSSTRINGS that names the conversion procedure identified in
TRANSPROC.
The length of the work area pointed to by the exit parameter list is generally 512 bytes. However, if the
string to be converted is ASCII MIXED data (the value of TRANSTYPE in the row from SYSSTRINGS is PM
or PS), then the length of the work area is 256 bytes, plus the length attribute of the string.
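The work-area sizing rule can be sketched with a hypothetical helper; the TRANSTYPE values PM and PS are the SYSSTRINGS values described above:

```python
# Sketch of the conversion-procedure work-area sizing rule.
# TRANSTYPE values 'PM' and 'PS' (ASCII MIXED) get 256 bytes plus
# the string's length attribute; other types get 512 bytes.
def work_area_length(transtype: str, length_attribute: int) -> int:
    if transtype in ("PM", "PS"):
        return 256 + length_attribute
    return 512
```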
The string value descriptor: The descriptor has the following formats:
When converting MIXED data, your procedure must ensure that the result is well-formed. In any
conversion, if you change the length of the string, you must set the length control field in FPVDVALE
to the proper value. Overwriting storage beyond the maximum length of the FPVDVALE causes an abend.
Your procedure must also set a return code in field EXPLRC1 of the exit parameter list.
The following is a list of the codes for the converted string in FPVDVALE:
For the following remaining codes, Db2 does not use the converted string:
Exception conditions: Return a length exception (code 8) when the converted string is longer than the
maximum length allowed.
For an invalid code point (code 12), place the 1- or 2-byte code point in field EXPLRC2 of the exit
parameter list.
Return a form exception (code 16) for EBCDIC MIXED data when the source string does not conform to
the rules for MIXED data.
Any other uses of codes 8 and 16, or of EXPLRC2, are optional.
Error conditions: On return, Db2 considers any of the following conditions as a "conversion error":
• EXPLRC1 is greater than 16.
• EXPLRC1 is 8, 12, or 16 and the operation that required the conversion is not an assignment of a value
to a host variable with an indicator variable.
• FPVDTYPE or FPVDVLEN has been changed.
• The length control field of FPVDVALE is greater than the original value of FPVDVLEN or is negative.
In the case of a conversion error, Db2 sets the SQLERRMC field of the SQLCA to HEX(EXPLRC1) CONCAT
X'FF' CONCAT HEX(EXPLRC2).
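The token that Db2 builds can be sketched as follows. This is a hypothetical helper that assumes EXPLRC1 and EXPLRC2 are 4-byte binary integers, so HEX() of each yields eight hexadecimal digits:

```python
# Sketch of the SQLERRMC token built after a conversion error:
# HEX(EXPLRC1) CONCAT X'FF' CONCAT HEX(EXPLRC2). The 4-byte field
# widths are assumptions for illustration.
def conversion_error_token(explrc1: int, explrc2: int) -> str:
    hex1 = explrc1.to_bytes(4, "big").hex().upper()
    hex2 = explrc2.to_bytes(4, "big").hex().upper()
    return hex1 + "\xFF" + hex2
```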
Field procedures
A field procedure is a user-written exit routine that is used to transform values in a single, short string
column. You can assign field procedures to a table by specifying the FIELDPROC clause of the CREATE
TABLE or ALTER TABLE statement.
When values in the column are changed, or new values inserted, the field procedure is invoked for
each value, and can transform that value (encode it) in any way. The encoded value is then stored. When
values are retrieved from the column, the field procedure is invoked for each value, which is encoded, and
must decode it back to the original string value.
Any indexes, including partitioned indexes, defined on a column that uses a field procedure are built with
encoded values. For a partitioned index, the encoded value of the limit key is put into the LIMITKEY
column of the SYSINDEXPART table. Hence, a field procedure might be used to alter the sorting sequence
of values entered in a column. For example, telephone directories sometimes require that names like
"McCabe" and "MacCabe" appear next to each other, an effect that the standard EBCDIC sorting sequence
does not provide. Languages that do not use the Roman alphabet have similar requirements.
However, if a column is provided with a suitable field procedure, it can be correctly ordered by ORDER BY.
The transformation your field procedure performs on a value is called field-encoding. The same routine is
used to undo the transformation when values are retrieved; that operation is called field-decoding. Values
in columns with a field procedure are described to Db2 in two ways:
1. The description of the column as defined in CREATE TABLE or ALTER TABLE appears in the catalog
table SYSIBM.SYSCOLUMNS. That is the description of the field-decoded value, and is called the
column description.
2. The description of the encoded value, as it is stored in the database, appears in the catalog
table SYSIBM.SYSFIELDS. That is the description of the field-encoded value, and is called the field
description.
Important: The field-decoding function must be the exact inverse of the field-encoding function. For
example, if a routine encodes 'ALABAMA' to '01', it must decode '01' to 'ALABAMA'. A violation of this rule
can lead to an abend of the Db2 connecting thread, or other undesirable effects.
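Using the document's own 'ALABAMA'/'01' example, the inverse requirement can be illustrated with a table-driven sketch (the 'ALASKA' entry is hypothetical, added only to make the table non-trivial):

```python
# Field-encoding transforms the column value; field-decoding must be
# its exact inverse, so the decode table here is built by inverting
# the encode table.
ENCODE = {"ALABAMA": "01", "ALASKA": "02"}   # "ALASKA" is hypothetical
DECODE = {field: column for column, field in ENCODE.items()}

def field_encode(column_value: str) -> str:
    return ENCODE[column_value]

def field_decode(field_value: str) -> str:
    return DECODE[field_value]
```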
Related concepts
General guidelines for writing exit routines
The data type of the encoded value can be any valid SQL data type except DATE, TIME,
TIMESTAMP, LONG VARCHAR, or LONG VARGRAPHIC. The length, precision, or scale of the encoded
value must be compatible with its data type.
A user-defined data type can be a valid field if the source type of the data type is a short string column
that has a null default value. Db2 casts the value of the column to the source type before it passes it to the
field procedure.
Related reference
Value descriptor for field procedures
A value descriptor describes the data type and other attributes of a value.
Procedure
Issue the CREATE TABLE or ALTER TABLE statement with the FIELDPROC clause.
The optional parameter list that follows the procedure name is a list of constants, enclosed in
parentheses, called the literal list. The literal list is converted by Db2 into a data structure called the
field procedure parameter value list (FPPVL). That structure is passed to the field procedure during the
field-definition operation. At that time, the procedure can modify it or return it unchanged. The output
form of the FPPVL is called the modified FPPVL. The modified FPPVL is stored in the Db2 catalog as part of
the field description. The modified FPPVL is passed again to the field procedure whenever that procedure
is invoked for field-encoding or field-decoding.
The following diagram shows those areas. The FPPL and the areas are described by the mapping macro
DSNDFPPB.
(Figure: the field procedure parameter list (FPPL) contains four addresses: the FPIB address, pointing to
the field procedure information block (FPIB); the CVD address, pointing to the column value descriptor
(CVD); the FVD address, pointing to the field value descriptor (FVD); and the FPPVL address, pointing to
the field procedure parameter value list (FPPVL) or literal list.)
The work area can be used by a field procedure as working storage. A new area is provided each
time the procedure is invoked. The size of the area that you need depends on the way you program your
field-encoding and field-decoding operations.
At field-definition time, Db2 allocates a 512-byte work area and passes the value of 512 bytes as
the work area size to your routine for the field-definition operation. If subsequent field-encoding and
field-decoding operations need a work area of 512 bytes or less, your field definition does not need to
change the value that Db2 provides. If those operations need a work area larger than 512 bytes (for
example, 1024 bytes), your field definition must change the work area size to the larger size and pass it
back to Db2 for allocation.
Whenever your field procedure is invoked for encoding or decoding operations, Db2 allocates a work
area based on the size (for example, 1024 bytes) that was passed back to it. Your field procedure must
not use a work area larger than the one that Db2 allocates, even if subsequent operations need a larger
work area.
The information block tells what operation is to be done, allows the field procedure to signal errors, and
gives the size of the work area. It has the following formats:
FPBWKLN (hex offset 2): signed 2-byte integer. Length of the work area; the maximum is 32767 bytes.
FPBSORC (hex offset 4): signed 2-byte integer. Reserved.
FPBRTNC (hex offset 6): character, 2 bytes. Return code set by the field procedure.
FPBRSNCD (hex offset 8): character, 4 bytes. Reason code set by the field procedure.
FPBTOKPT (hex offset C): address. Address of a 40-byte area, within the work area or within the field
procedure's static area, containing an error message.
At that time, the field procedure can reformat the FPPVL; it is the reformatted FPPVL that is stored in
SYSIBM.SYSFIELDS and communicated to the field procedure during field-encoding and field-decoding as
the modified FPPVL.
The FPPVL has the following formats:
Related reference
Field-definition for field procedures
A field procedure is invoked when a table is created or altered to define the data type and attributes of an
encoded value to Db2. This operation is called field-definition.
On entry
The input that is provided to the field-definition operation and the output that is required are as follows:
The contents of all other registers, and of fields not listed in the following tables, are unpredictable.
The work area consists of 512 contiguous uninitialized bytes.
The FPIB has the following information:
The FPVDVALE field is omitted. The FVD provided is 4 bytes long. The FPPVL field has the information:
On exit
The registers must have the following information:
The following fields must be set as shown; all other fields must remain as on entry.
The FPIB must have the following information:
Errors signalled by a field procedure result in SQLCODE -681 (SQLSTATE '23507'), which is set in the SQL
communication area (SQLCA). The contents of FPBRTNC and FPBRSNC, and the error message pointed
to by FPBTOKP, are also placed into the tokens, in SQLCA, as field SQLERRMT. The meaning of the error
message is determined by the field procedure.
The FVD must have the following information:
Field FPVDVALE must not be set; the length of the FVD is 4 bytes only.
The FPPVL can be redefined to suit the field procedure, and returned as the modified FPPVL, subject to
the following restrictions:
• The field procedure must not increase the length of the FPPVL.
• FPPVLEN must contain the actual length of the modified FPPVL, or 0 if no parameter list is returned.
The modified FPPVL is recorded in the catalog table SYSIBM.SYSFIELDS, and is passed again to the field
procedure during field-encoding and field-decoding. The modified FPPVL need not have the format of a
field procedure parameter list, and it need not describe constants by value descriptors.
On entry
The input that is provided to the field-encoding operation, and the output that is required, are as follows:
The contents of all other registers, and of fields not listed, are unpredictable.
The modified FPPVL, produced by the field procedure during field-definition, is provided.
On exit
The registers have the following information:
The FVD must contain the encoded (field) value in field FPVDVALE. If the value is a varying-length string,
the first halfword must contain its length.
The FPIB can have the following information:
Errors signalled by a field procedure result in SQLCODE -681 (SQLSTATE '23507'), which is set in the SQL
communication area (SQLCA). The contents of FPBRTNC and FPBRSNC, and the error message pointed
to by FPBTOKP, are also placed into the tokens, in SQLCA, as field SQLERRMT. The meaning of the error
message is determined by the field procedure.
On entry
The registers have the following information:
The contents of all other registers, and of fields not listed, are unpredictable.
The work area is contiguous, uninitialized, and of the length specified by the field procedure during
field-definition.
The FPIB has the following information:
The modified FPPVL, produced by the field procedure during field-definition, is provided.
On exit
The registers have the following information:
The CVD must contain the decoded (column) value in field FPVDVALE. If the value is a varying-length
string, the first halfword must contain its length.
The FPIB can have the following information:
Errors signalled by a field procedure result in SQLCODE -681 (SQLSTATE '23507'), which is set in the SQL
communication area (SQLCA). The contents of FPBRTNC and FPBRSNC, and the error message pointed
to by FPBTOKP, are also placed into the tokens, in SQLCA, as field SQLERRMT. The meaning of the error
message is determined by the field procedure.
The routine receives data when Db2 writes data to the active log. Your local specifications
determine what the routine does with that data. The routine does not enter or return data to Db2.
Performance factor: Your log capture routine receives control often. Design it with care: a poorly designed
routine can seriously degrade system performance. Whenever possible, use the instrumentation facility
interface (IFI), rather than a log capture exit routine, to read data from the log.
“General guidelines for writing exit routines” on page 761 applies, but with the following exceptions to
the description of execution environments:
A log capture routine can execute in either TCB mode or SRB mode, depending on the function it is
performing. When in SRB mode, it must not perform any I/O operations or invoke any SVC services or
ESTAE routines.
Procedure
Link module DSNJL004 into either the prefix.SDSNEXIT or the Db2 prefix.SDSNLOAD library.
Specify the REPLACE parameter of the link-edit job to replace a module that is part of the standard Db2
library for this release. The module should have attributes AMODE(64) and RMODE(ANY).
In two of those situations, processing operates in TCB mode; in one situation, processing operates
in SRB mode. The two modes have different processing capabilities, which your routine must be aware of.
The character identifications, situations, and modes are:
• I=Initialization, Mode=TCB
The TCB mode allows all z/OS DFSMSdfp functions to be used, including ENQ, ALLOCATION, and OPEN.
No buffer addresses are passed in this situation. The routine runs in supervisor state, key 7, and
enabled.
This is the only situation in which Db2 checks a return code from the user's log capture exit routine.
The Db2 subsystem is sensitive to a return code of X'20' here. Never return X'20' in register 15 in this
situation.
• W=Write, Mode=SRB (service request block)
The SRB mode restricts the exit routine's processing capabilities. No supervisor call (SVC) instructions
can be used, including ALLOCATION, OPEN, WTO, any I/O instruction, and so on. At the exit point, Db2 is
running in supervisor state, key 7, and is enabled.
The following is a list of the exit-specific parameters; it is mapped by macro DSNDLOGX. The parameter
list contains two 64-bit pointers that point to the standard EXPL parameter list and to the log capture exit
parameter list (LOGX).
You can enable dynamic plan allocation by using one of the following techniques:
• Use Db2 packages and versioning to manage the relationship between CICS transactions and Db2
plans. This technique can help minimize plan outage time, processor time, and catalog contention.
• Use a dynamic plan exit routine to determine the plan to use for each CICS transaction.
Recommendation: Use Db2 packages and versioning, instead of a CICS dynamic plan exit routine, for
dynamic plan allocation.
Using an exit routine requires coordination with your system programmers. An exit routine runs
as an extension of Db2 and has all the privileges of Db2. It can impact the security and integrity
of the database. Conceivably, an exit routine could also expose the integrity of the operating system.
Instructions for avoiding that exposure can be found in the appropriate z/OS publication.
Related concepts
Connection routines and sign-on routines (Managing Security)
Access control authorization exit routine (Managing Security)
With some exceptions, which are noted under "General Considerations" in the description of
particular types of routine, the execution environment is:
• Supervisor state
• Enabled for interrupts
• PSW key 7
• No MVS locks held
• For local requests, under the TCB of the application program that requested the Db2 connection
• For remote requests, under a TCB within the Db2 distributed data facility address space
• 31-bit addressing mode
• Cross-memory mode
In cross-memory mode, the current primary address space is not equal to the home address space.
Therefore, you cannot use some z/OS macro services at all, and you can use others only with restrictions.
For more information about cross-memory restrictions for macro instructions, which macros can be
used fully, and the complete description of each macro, refer to the appropriate z/OS publication.
The following are registers that are set at invocation for exit routines:
Table 117. Contents of registers when Db2 passes control to an exit routine
Register 1: Address of a pointer to the exit parameter list. For a field procedure, the address is that of
the field procedure parameter list.
Register 13: Address of the register save area.
Register 14: Return address.
Register 15: Address of the entry point of the exit routine.
The parameter list for the log capture exit routine consists of two 64-bit pointers. The parameter list for all
other exit routines consists of two 31-bit pointers. Register 1 points to the address of parameter list EXPL,
described by macro DSNDEXPL. The field that follows points to a second parameter list, which differs for
each type of exit routine.
The following is a list of the EXPL parameters. Its description is given by macro DSNDEXPL:
Notes: When translating a string of type PC MIXED, a translation procedure has a work area of 256 bytes
plus the length attribute of the string.
You cannot specify edit procedures for any table that contains a LOB column. You cannot define
edit procedures as WITH ROW ATTRIBUTES for any table that contains a ROWID column. In addition, LOB
values are not available to validation procedures; indicator columns and ROWID columns represent LOB
columns as input to a validation procedure.
Similarly, you cannot specify edit procedures as WITH ROW ATTRIBUTES for any table that contains an
XML column. XML values are not available to validation procedures. DOCID and XML indicator columns
represent XML columns as input to a validation procedure.
Null values for edit procedures, field procedures, and validation routines
If null values are allowed for a column, an extra byte is stored before the actual column value.
This byte is X'00' if the column value is not null; it is X'FF' if the value is null. This extra byte is
included in the column length attribute (parameter FFMTFLEN).
Example: The sample project activity table has five fixed-length columns. The first two columns do not
allow nulls; the last three do.
The following table shows a row of the sample department table in basic row format. The first value in the
DEPTNAME column indicates the column length as a hexadecimal value.
Table 120. A varying-length row in basic row format in the sample department table
DEPTNO: C01
DEPTNAME: 0012 Information center
MGRNO: 00 000030
ADMRDEPT: A00
LOCATION: 00 New York
Varying-length columns have no gaps after them. Hence, columns that appear after varying-length
columns are at variable offsets in the row. To get to such a column, you must scan the columns
sequentially after the first varying-length column. An empty string has a length of zero with no data
following.
ROWID and indicator columns are treated like varying-length columns. Row IDs are VARCHAR(17). A LOB
indicator column is VARCHAR(4), and an XML indicator column is VARCHAR(6). An indicator column is
stored in a base table in place of a LOB or XML column, and indicates whether the LOB or XML value for
the column is null or zero length.
In reordered row format, if a table has any varying-length columns, all fixed length columns are placed at
the beginning of the row, followed by the offsets to the varying length columns, followed by the values of
the varying length columns.
The following table shows the same row of the sample department table, but in reordered row format.
The value in the offset column indicates the offset value as a hexadecimal value.
Table 121. A varying-length row in reordered row format in the sample department table
DEPTNO: C01
MGRNO: 00 000030
ADMRDEPT: A00
LOCATION: 00 New York
Offset column: 20
DEPTNAME: Information center
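The offset value X'20' in Table 121 can be verified arithmetically. The following sketch assumes 1-byte null indicators for the nullable columns, 2-byte offset entries, and the sample DEPT column lengths (DEPTNO CHAR(3), MGRNO CHAR(6), ADMRDEPT CHAR(3), LOCATION CHAR(16)):

```python
# Sketch: locate the first varying-length value in a reordered-row-
# format row. Fixed-length columns (each nullable one preceded by a
# 1-byte null indicator) come first, then one offset entry per
# varying-length column (assumed 2 bytes each), then the values.
def rrf_first_varying_offset(fixed_lengths, varying_count):
    return sum(fixed_lengths) + 2 * varying_count

# Sample department row: DEPTNO 3, MGRNO 1+6, ADMRDEPT 3,
# LOCATION 1+16; one varying column (DEPTNAME).
offset = rrf_first_varying_offset([3, 1 + 6, 3, 1 + 16], 1)
print(hex(offset))  # 0x20, matching the offset column in Table 121
```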
The following table shows how the row would look in storage if nulls were allowed in DEPTNAME.
The first value in the DEPTNAME column indicates the column length as a hexadecimal value.
An empty string has a length of one, an X'00' null indicator, and no data following.
In reordered row format, if a table has any varying-length columns, with or without nulls, all fixed length
columns are placed at the beginning of the row, followed by the offsets to the varying length columns,
followed by the values of the varying length columns.
EDITPROCs and VALIDPROCs for handling basic and reordered row formats
You can check the row format type (RFMTTYPE) to ensure that edit procedures (EDITPROC) and validation
procedures (VALIDPROC) produce predictable results.
If you write new edit and validation routines on tables with rows in basic row format (BRF) or
reordered row format (RRF), make sure that EDITPROCs and VALIDPROCs are coded to check RFMTTYPE
and handle both BRF and RRF formats.
If an EDITPROC or VALIDPROC handles only RRF, make sure that it checks RFMTTYPE and returns an
error or warning if it detects BRF. If an EDITPROC or VALIDPROC that handles only BRF is to be used on
tables in RRF, make sure that it checks RFMTTYPE and returns an error or warning if it detects RRF.
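The guard described above might look like the following sketch. The row-format codes 'B' and 'R' and the return values are illustrative placeholders, not the actual parameter-list values:

```python
# Hypothetical guard for a VALIDPROC written for reordered row
# format (RRF) only: it checks the row format before validating.
BRF, RRF = "B", "R"   # placeholder codes, for illustration only

def validate_row(rfmttype: str, row: bytes) -> int:
    if rfmttype == BRF:
        return 4       # e.g. reject: this routine handles only RRF
    # ... RRF validation logic would go here ...
    return 0           # row accepted
```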
Converting basic row format table spaces with edit and validation routines
to reordered row format
You cannot directly convert table spaces that have edit and validation routines from basic row format to
reordered row format. You must perform additional tasks to convert these table spaces.
Converting basic row format table spaces with edit routines to reordered row
format
You can convert basic row format table spaces to reordered row format. If some tables in a table space
have edit routines, you cannot convert the table space to reordered row format directly. You must take
other actions for the conversion to succeed.
Procedure
To convert a table space to reordered row format, complete the following steps for each table that
has an edit routine:
1. Use the UNLOAD utility to unload data from the table or tables that have edit routines.
2. Use the DROP statement to drop the table or tables that have edit routines.
3. Make any necessary modifications to the edit routines so that they can be used with rows in reordered
row format.
4. Use the REORG utility to reorganize the table space. Using the REORG utility converts the table space
to reordered row format.
5. Re-create tables with your modified edit routines. Also, re-create any additional related objects, such
as indexes and check constraints.
6. Use the LOAD RESUME utility to load the data into the tables that have the modified edit
routines.
Procedure
To convert a table space to reordered row format, complete the following steps for each table that
has a validation routine:
1. Use the ALTER TABLE statement to alter the validation routine to NULL.
2. Run the REORG utility or the LOAD REPLACE utility to convert the table space to reordered row format.
3. Make any necessary modifications to the validation routine so that it can be used with rows in
reordered row format.
4. Use the ALTER TABLE statement to add the modified validation routine to the converted table.
Example
To convert an existing table space from reordered row format to basic row format, run REORG
TABLESPACE ROWFORMAT BRF against the table space. To keep the table space in basic row format
on subsequent executions of the LOAD REPLACE utility or the REORG TABLESPACE utility, continue to
specify ROWFORMAT BRF in the utility statement.
Related reference
REORG TABLESPACE (Db2 Utilities)
LOAD (Db2 Utilities)
The following table shows the TIMESTAMP format, which consists of 7 to 13 total bytes.
DSNDROW defines the columns in the order as they are defined in the CREATE TABLE statement or
possibly the ALTER TABLE statement. For rows in the reordered row format, the new column order in
DSNDROW does not necessarily correspond to the order in which the columns are stored in the row. The
following is the general row description:
ROWID: type code X'2C', length 17
INDICATOR COLUMN: type code X'30', length 4 for a LOB indicator column or 6 for an XML indicator
column
To retrieve numeric data in its original form, you must Db2-decode it according to its data type.
To determine the highest supported document version for a stored procedure, specify NULL for
the major_version parameter, the minor_version parameter, and all other required parameters. The
stored procedure returns the highest supported document version as values in the major_version and
minor_version output parameters, and sets the xml_output and xml_message output parameters to NULL.
If you specify non-null values for the major_version and minor_version parameters, you must specify
a document version that is supported. If the version is invalid, the stored procedure returns an error
(-20457).
If the XML input document in the xml_input parameter specifies the Document Type Major Version
and Document Type Minor Version keys, the value for those keys must be equal to the values that
you specified in the major_version and minor_version parameters, or an error (+20458) is raised.
Related concepts
XML input documents
The XML input document is passed as input to common SQL API stored procedures and adheres to a
single, common document type definition (DTD).
XML output documents
The XML output documents that are returned as output from common SQL API stored procedures share a
common set of entries.
XML message documents
An XML message document provides detailed information about an SQL warning condition.
The Document Type Name key varies depending on the stored procedure. This example shows an XML
input document for the GET_MESSAGE stored procedure. In addition, the values of the Document Type
Major Version and Document Type Minor Version keys depend on the values that you specified
in the major_version and minor_version parameters for the stored procedure.
If the stored procedure is not running in Complete mode, you must specify the Document Type Name
key, the required parameters, and any optional parameters that you want to specify. Specifying the
Document Type Major Version and Document Type Minor Version keys is optional. If you
specify the Document Type Major Version and Document Type Minor Version keys, the values
must be the same as the values that you specified in the major_version and minor_version parameters.
You must either specify both or omit both of the Document Type Major Version and Document
Type Minor Version keys. Specifying the Document Locale key is optional. If you specify the
Document Locale key, the value is ignored.
Important: XML input documents must be encoded in UTF-8 and contain only English characters.
Related concepts
Versioning of XML documents
Common SQL API stored procedures support multiple versions of the three XML parameter documents:
XML input documents, XML output documents, and XML message documents.
XML output documents
The XML output documents that are returned as output from common SQL API stored procedures share a
common set of entries.
XML message documents
An XML message document provides detailed information about an SQL warning condition.
If the stored procedure runs in Complete mode, a complete input document is returned by the xml_output
parameter of the stored procedure. The returned XML document is a full XML input document that
includes a Document Type and sections for all possible required and optional parameters. The returned
XML input document also includes entries for Display Name, Hint, and the Document Locale.
Although these entries are not required (and will be ignored) in the XML input document, they are usually
needed when rendering the document in a client application.
All entries in the returned XML input document can be rendered and changed in ways that are
independent of the operating system or data server. Subsequently, the modified XML input document can
be passed back to the stored procedure in a subsequent call.
The Document Type Name key varies depending on the stored procedure. This example shows an XML
output document for the GET_CONFIG stored procedure. In addition, the values of the Document Type
Major Version and Document Type Minor Version keys depend on the values that you specified
in the major_version and minor_version parameters for the stored procedure.
Entries in the XML output document are grouped by using nested dictionaries. Each entry in the XML
output document describes a single piece of information. In general, each entry consists of
Display Name, Value, and Hint, as shown in the following example:
<key>SQL Domain</key>
<dict>
   <key>Display Name</key>
   <string>SQL Domain</string>
   <key>Value</key>
   <string>v33ec059.svl.ibm.com</string>
   <key>Hint</key>
   <string />
</dict>
XML output documents are generated in UTF-8 and contain only English characters.
Related concepts
Versioning of XML documents
Common SQL API stored procedures support multiple versions of the three XML parameter documents:
XML input documents, XML output documents, and XML message documents.
XML input documents
The XML input document is passed as input to common SQL API stored procedures and adheres to a
single, common document type definition (DTD).
XML message documents
An XML message document provides detailed information about an SQL warning condition.
For example, if the xml_filter parameter selects the Data Server Product Version entry and the
value of that entry is 9.1.5, the stored procedure returns the string 9.1.5 in the xml_output
parameter. Therefore, the stored procedure call returns a single value rather than an XML document.
The details about an SQL warning are encapsulated in a dictionary entry, which consists of
Display Name, Value, and Hint, as shown in the following example:
XML message documents are generated in UTF-8 and contain only English characters.
Related concepts
Versioning of XML documents
Common SQL API stored procedures support multiple versions of the three XML parameter documents:
XML input documents, XML output documents, and XML message documents.
XML input documents
The XML input document is passed as input to common SQL API stored procedures and adheres to a
single, common document type definition (DTD).
XML output documents
The XML output documents that are returned as output from common SQL API stored procedures share a
common set of entries.
Procedure
To troubleshoot Db2 stored procedures, perform one or more of the following actions:
• For general information about the available debugging tools and techniques, see Debugging stored
procedures (Db2 Application programming and SQL).
• See Db2 for z/OS Stored Procedures: Through the CALL and Beyond (IBM Redbooks) if you are
troubleshooting one of the following problems:
– For problems with implementing RRS, see "RRS error samples."
– For problems with calling a particular stored procedure, you might not have the required
authorizations. See "Privileges to execute a stored procedure called statically."
– For troubleshooting Java stored procedures, see "Common problems."
– If an invoking program receives SQLCODE -430, see "Classical debugging of stored procedures."
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan
Such information may be available, subject to appropriate terms and conditions, including in some cases,
payment of a fee.
The licensed program described in this document and all licensed material available for it are provided by
IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any
equivalent agreement between us.
If you are viewing this information softcopy, the photographs and color illustrations may not appear.
your own legal advice about any laws applicable to such data collection, including any requirements for
notice and consent.
For more information about the use of various technologies, including cookies, for these purposes,
see IBM’s Privacy Policy at http://www.ibm.com/privacy and IBM’s Online Privacy Statement at
http://www.ibm.com/privacy/details, in the section entitled “Cookies, Web Beacons and Other
Technologies,” and the “IBM Software Products and Software-as-a-Service Privacy Statement” at
http://www.ibm.com/software/info/product-privacy.
Index 787
administrative task schedulers (continued) ALTER TABLE statement (continued)
data sharing environment (continued) VALIDPROC clause 234
specifying 427 ALTER TABLESPACE statement 177
synchronization 434 application changes
task execution 448 backing out
interface with quiesce point 287
security 443 application defaults module 415
JCL jobs 447 application environment
lifecycle 438 status 469
overview 423 application errors
resources backing out
security 443 without a quiesce point 287
security 442 application period
starting 433 adding 231
stopping 433 application plans
stored procedures dependent objects 239
accounting information 441 application programs
calling 446, 447 call attachment facility (CAF)
displaying results 431 running 420
SQL codes 436 coding SQL statements
SYSIBM.ADMIN_TASKS table 439 for IMS 418
SYSIBM.ADMIN_TASKS_HIST table 439 errors 286
task execution information
multi-thread 445 obtaining 456
security 444 issuing commands 410
task lists recovery procedures
recovering 435 CICS 292
task status IMS 291
listing 429, 430 RRSAF (Resource Recovery Services attachment
tasks facility)
adding 423 running 421
listing 429 running
removing 432 batch 419
sample schedule 425 CICS transactions 419
scheduling 424 error recovery 286
stopping 432 IMS 418
updating 431 TSO
time zones 448 running 418
tracing application-period temporal tables
disabling 434 creating 88, 231
enabling 434 querying 94
troubleshooting 434, 436 applications
Unicode restrictions 447 CICS
user roles 442 connections 496
user-defined table functions disconnecting 496
troubleshooting 436 archive log
administrative tasks retaining 625
scheduling 423 ARCHIVE LOG command 576
alias archive log data sets
retrieving catalog information about 163 archiving
ALTER BUFFERPOOL command 462 DFSMS (Data Facility Storage Management
ALTER COLUMN Subsystem) 573
immediate or pending 264 BSDS (bootstrap data set) 589
ALTER command deleting 574
access method services 352 dual logging 572
ALTER DATABASE statement 173 dynamic allocation 572
ALTER FUNCTION statement 267 high-level qualified
ALTER INDEX statement 49 changing 273
ALTER PROCEDURE statement 266 high-level qualifier
ALTER STOGROUP statement changing 269–273
ADD VOLUMES clause 175 locating 587
ALTER TABLE statement multivolumes 573
DATA CAPTURE clause 235 offloading 569
default column values 197 overview 572
catalog name (continued) CDDS (compression dictionary data set)
VCAT clause (continued) recovering 721
CREATE TABLESPACE statement 64 CHANGE command
catalog tables IMS
image copies purging residual recovery entries 498
frequency 629, 630 change log inventory utility
retrieving information about bootstrap data set (BSDS) 478
primary keys 167 BSDS (bootstrap data set)
status 168 changing 591
SYSAUXRELS 169 change number of sessions (CNOS) 361
SYSCOLUMNS CHANGE SUBSYS command
updated by COMMENT ON statement 171 IMS 503
updated by CREATE VIEW statement 165 check constraints
SYSCOPY adding 213
discarding records 636 dropping 213
image copies 696 CHECK DATA utility 136
image copy information 635 check pending status
RECOVER utility 628 retrieving catalog information 168
SYSFOREIGNKEYS 167 checkpoint
SYSIBM.SYSTABLES 70 queue 603
SYSINDEXES checkpoint frequency
dropping tables 239 changing 578
SYSINDEXPART checkpoints
space allocation information 40 log records 695, 699
SYSPLANDEP 239 CICS
SYSRELS applications
describes referential constraints 167 disconnecting 496
SYSROUTINES 170 commands
SYSSEQUENCES 171 accessing databases 493
SYSSTOGROUP connecting 493
sample query 162 connecting to Db2
SYSSYNONYMS 238 authorization IDs 419
SYSTABAUTH connections
dropping tables 239 controlling 493
table authorizations 166 disconnecting from Db2 496
updated by CREATE VIEW statement 165 DSNC command 411
view authorizations 166 DSNC DISCONNECT command 496
SYSTABLEAPART dynamic plan selection
partition order 163 exit routine 760
SYSTABLEPART 175 environment
SYSTABLES planning 419
rows maintained 162 facilities
updated by COMMENT ON statement 171 diagnostic traces 541
updated by CREATE VIEW statement 165 indoubt units of recovery 493
SYSTRIGGERS 170 operating
SYSVIEWDEP outstanding indoubt units 615
view dependencies 239 terminates AEY9 298
SYSVOLUMES 22 programming
catalog tables, DB2 applications 419
image copy 625 recovery procedures
catalog, DB2 application failures 292
constraint information 168 attachment facility failures 297
database design 162, 172 CICS not operational 293
retrieving information from 162 DB2 connection failures 294
catalogs indoubt units of recovery 294
Db2 restarting 493
DSNDB06 database 635 threads
recovery procedures 349 connecting 494
point-in-time recovery 671 two-phase commit 607
recovering 671 CICS commands
CDB (communications database) DSNC DISCONNECT 493
backing up 627 DSNC DISPLAY 493
high-level qualifier DSNC DISPLAY PLAN 495
changing 273 DSNC DISPLAY TRANSACTION 495
copying data (continued)
Db2 subsystems 279 distributed
relational databases 279 controlling connections 514
correlation IDs exchanging 98
CICS 294 inconsistencies
duplicates 294, 499 resolving 339
IMS 499 loading
outstanding unit of recovery 596 a single row 128
RECOVER INDOUBT command 502 multiple rows 129
TSO connections 490 modeling 3
CREATE AUXILIARY TABLE statement 110 moving 279
CREATE DATABASE statement 19 recovering 587
CREATE FUNCTION statement 140 restoring
CREATE GLOBAL TEMPORARY TABLE statement point-in-time recovery 648
distinctions from base tables 79 data availability
CREATE INDEX statement maximizing 631
CLUSTER clause 114 data classes
DEFINE NO clause 110 assigning 41
NOT PADDED clause 115 SMS construct 41
PADDED clause 115 data compression
USING clause 40 log records 695
CREATE PROCEDURE statement 134 logging 569
CREATE STOGROUP statement data consistency
VOLUMES('*') attribute 23, 29 maintaining 607
CREATE TABLE statement point-in-time recovery 652
examples 73 Data Facility Product (DFSMSdfp) 277
PARTITION BY clause 118 Data Facility Storage Management Subsystem (DFSMS)
XML table spaces concurrent copy 646
creating implicitly 54 copying data 646
CREATE TABLESPACE statement recovery 646
creating explicitly 55 data management
DEFINE NO clause 23, 55 automatic 631
DSSIZE clause 49, 68 data mirroring
DSSIZE option 41 recovery 386, 388
EA-enabled index spaces 68 data pages
EA-enabled table spaces 68 changes
LOCKSIZE TABLE clause 50 control information 698
NUMPARTS clause 47 data 698
partitioned table spaces 49 pointers 698
segmented table spaces 50 data set
SEGSIZE clause 50 damaged 625
USING STOGROUP clause 23, 55 data sets
created temporary table adding 352, 354
distinctions from base tables 79 backing up
created temporary tables using DFSMS 646
creating 78 copying 644
creating PBR table spaces 59 Db2-managed
CRESTART control statement extending 24, 25
indoubt units of recovery extension failures 24
resolving 621 nonpartitioned spaces 24
cron format partitioned spaces 24
UNIX 427 primary space allocation 25
current status rebuild recovering 681
failure recovery 313 secondary space allocation 25, 26, 28
phase of restart 596 deferring allocation 23
extending 352
high-level qualifier
D changing 269
damaged data managing
renaming data sets 625 DFSMShsm 29
data using DFSMShsm 28
access control with DB2 storage groups 21
START Db2 command 415 migrating 176
backing up 689 moving
DB2 subsystem (continued) deprecated table space types (continued)
recovering (continued) creating 63
BACKUP SYSTEM utility 689 deprecated table spaces
RESTORE SYSTEM utility 689 converting 185
restarting DFSLI000 (IMS language interface module) 418
log truncation 316 DFSMS (Data Facility Storage Management Subsystem)
resolving inconsistencies 323 archive log data sets 573
restoring 684 DFSMSdfp (Data Facility Product) 277
starting 325 DFSMSdss (Data Set Services) 277
startup DFSMSdss RESTORE command
application defaults module 415 RECOVER utility 31
termination scenario 298 DFSMShsm
Db2-defined extents 184 data classes
Db2-managed data sets assigning indexes 29
enlarging 354 assigning table spaces 29
recovering 681 data sets
Db2-managed objects migrating 29
changing 276 DFSMShsm (Data Facility Hierarchical Storage Manager)
advantages 28
data sets
Db2-managed recalling 30
primary space allocation 28 BACKUP SYSTEM utility 32
DB2I (Db2 Interactive) backups 631
TSO connections 407 data sets
DB2I (DB2 Interactive) 417 moving 277
DBATS FRBACKUP PREPARE command 391
controlling 514 recovery 631
DBD01 directory table space DFSMShsm (Hierarchical Storage Manager)
quiescing 666 HMIGRATE command 277
recovery information 635 HRECALL command 277
DDF DFSMSsms (DFSMS storage management subsystem)
stopping 515 BACKUP SYSTEM utility 33
DDF (distributed data facility) diagnostic information
alerts 540 obtaining 472
failures directory
recovering 356 high-level qualifier
DDL registration tables changing 273
recovery preparation 627 image copies
DDL_MATERIALIZATION subsystem parameter frequency 629, 630
how to set 264 order of recovery
DECLARE GLOBAL TEMPORARY TABLE statement I/O errors 349
distinctions from base tables 79 point-in-time recovery 671
declared temporary table recovering 671
distinctions from base tables 79 SYSLGRNX table
declared temporary tables discarding records 636
creating 78 records log RBA ranges 635
default database directory, DB2
(DSNDB04) 273 image copy 625
high-level qualifier disability xvii
changing 273 disaster recovery
DEFINE command archive logs 363, 368
access method services data mirroring 386
re-creating table spaces 333 essential elements 668
DELETE CLUSTER command 37 image copies 363, 368
DELETE command preparation 637
access method services 333 remote site recovery 689
DELETE statement rolling disaster 386
delete rules 138 scenarios 362
deleting system-level backups 363
archive log data sets 574 tracker sites 377
denormalizing tables 15 disk dump and restore
dependent regions considerations 642
disconnecting 504 disk storage
deprecated table space types estimating 146
DSNTIPA panel exit routine (continued)
WRITE TO OPER field 571 general considerations 761
DSNTIPL panel EXPORT command
BACKOUT DURATION field 601 access method services 277, 667
LIMIT BACKOUT field 601 expression-based 116
DSNTIPN panel expression-based index 116
LEVEL UPDATE FREQ field 346 expressions
DSNTIPS panel indexes 116
DEFER ALL field 601
RESTART ALL field 601
DSNZPxxx
F
subsystem parameters module failure symptoms
specifying 414 abend
DSNZPxxx module log problems 329
ARCHWTOR option 571 restart failure 324
dual logging BSDS (bootstrap data set) 311
active log data sets 569 CICS
archive log data sets 572 attachment abends 294
synchronization 570 task abends 297
dual recovery logs 625 waits 293
dual-BSDS mode logs
restoring 590 lost information 335
dynamic plan selection in CICS messages
exit routine. 760 DFH2206 292
DFS555 291
E DSNB207I 344
DSNJ 333
EA-enabled index spaces 68 DSNJ001I 307
EA-enabled page sets 41 DSNJ004I 301
EA-enabled table spaces 68 DSNJ100 333
edit procedures DSNJ103I 304
changing 236 DSNJ105I 301
column boundaries 764 DSNJ106I 302
overview 733 DSNJ107 333
parameter list 734 DSNJ114I 304
specifying 734 DSNM002I 288
edit routines DSNM005I 290
data type values 768 DSNM3201I 293
expected output 736 DSNP007I 352
invoking 734 DSNP012I 351
row formats 764 DSNU086I 348, 349
Enterprise Storage Server processing failure 284
backups 646 subsystem termination 298
entities fast copy function
attribute names 7 Enterprise Storage Server FlashCopy 646
attributes RVA SnapShot 646
values 9 fast log apply
entity normalization RECOVER utility 641
first normal form 10 field procedures
fourth normal form 12 changing 236
second normal form 10 control blocks 748
third normal form 11 field-decoding 756
entity relationships field-definition 747, 752
business rules 5 field-encoding 754
many-to-many relationships 5 information block (FPIB) 749
many-to-one relationships 5 invoking 747
one-to-many relationships 5 overview 746
one-to-one relationships 5 parameter list 748, 750
ERASE clause 66 specifying 747
error pages value descriptor 751
displaying 460 work area 749
exception status fixed-length rows 764
resetting 665 FlashCopy backups
exit routine incremental 33
IMS (continued) indexes (continued)
recovery procedures (continued) backward index scan 108
indoubt units of recovery 614 clustering 114
running programs columns
batch work 418 adding 245
threads 499, 500 compressing 116
waits 288 copying 644
IMS commands creating 107
CHANGE SUBSYS 498, 503 creating implicitly 123
DISPLAY defining
SUBSYS option 505 with composite keys 109
DISPLAY OASN 289, 503 dropping 249
DISPLAY SUBSYS forward index scan 108
LTERM authorization IDs 505 implementing 104
responses 411 large objects (LOBs) 110
START REGION 504 naming 110
START SUBSYS 498 nonpartitioned 118
STOP REGION 504 nonunique 113
STOP SUBSYS 498, 507 not padded
TRACE SUBSYS 498 advantages 115
IMS terminals disadvantages 115
issuing commands 409 index-only access 115
IMS threads varying-length columns 115
displaying 499 NULL keys
IMS.PROCLIB library excluding 115
connecting overview 17
from dependent regions 504 padded 115
inconsistent data partitioned 118
identifying 319 partitioned table spaces
recovering 641 rebalancing 182
indefinite wait condition recovering 663
recovering 361 redefining 249
index reorganizing 250
catalog information about 165, 167 secondary
naming convention 110 data-partitioned secondary index (DPSI) 118
types nonpartitioned secondary index (NPSI) 118
primary 167 sorts
index attributes 111 avoiding 108
index entries stopping 461
sequence 110 storage
index space data sets allocating 152
deferred allocation 110 estimating 152, 153
index spaces structure
starting 452 index trees 152
storage leaf pages 152
allocating 40 root pages 152
with restrictions subpages 152
starting 453 unique
index types adding columns 246
unique indexes 117 version numbers
index-based partitions recycling 208
redefining 352 versions 206
index-controlled partitioning indoubt threads
converting to table-controlled 191 information
tables displaying 483
creating 74 recovering 620
indexes resolving 392
altered tables status
recovering 663 resetting 620
altering indoubt units of recovery
clustering option 248 CICS 294
varying-length columns 247 displaying 501, 509
attributes IMS
partitioned tables 118 recovering 502
log CLOSE requests logical page list (LPL) (continued)
stand-alone 729 pages (continued)
log data recovering 459
decompressing 712 removing 459
GDPS Continuous Availability with zero data loss logs
solution 720 archiving 576
reading 712, 720 backward recovery 598
log GET requests BSDS (bootstrap data set) inventory
stand-alone 727 altering 591
log initialization phase buffer
failure recovery 313, 314 retrieving log records 574
log OPEN requests displaying
stand-alone 725 DISPLAY LOG command 579
log RBA (relative byte address) print log map utility 579
converting 582 dual
data sharing environment 583 archive logs 589
display 731 minimizing restart efforts 333
non-data sharing environment 584 synchronization 570
resetting 583, 584 establishing hierarchy 569
log record header (LRH) 702 excessive loss 335
log record sequence number (LRSN) 695 failure
log records symptoms 311
active total loss 335
gathering 711 forward recovery 597
checkpoints 699 implementing 573
contents 695 initialization phase 595
control interval definition (LCID) 703 keeping
creating 569 duration 587
data compression 695 managing 567, 575, 630
extracting 695 record retrieval 574
format 703 recovery procedures
header 702 active logs 299
interpreting 710 archive logs 304
logical 701 recovery scenario 333
physical 701 restarting 594, 598
printing 695 truncation 319
qualifying 714 lost work
RBA (relative byte address) 695 identifying 319
reading 695, 710, 711, 722, 730 LRH (log record header) 702
redo 696
relative byte address (RBA) 695
segments 701
M
structure mapping macro
page set control records 700 DSNDEXPL 762
subtype codes 708 DSNDROW 768
type codes 707 mapping macros
types 695 DSNDSLRB 722
undo 696 DSNDSLRF 727
units of recovery 697 materialized query table
WQAxxx qualification fields 714 using
log services retrieving catalog information 166
stand-alone 722, 729 materialized query tables
logging altering 232
implementing 573 attributes
logging attributes altering 233
changing 178 changing
logical data modeling base tables 233
examples 5 creating 96
recommendations 4 definitions
logical database design altering 234
Unified Modeling Language (UML) 13 implementing 70
logical page list (LPL) registering 232
displaying 458 media failures
pages recovering 383
message by identifier (continued) null value
DXR117I 480 effect on storage space 764
DXR1211 481 numeric data 770
DXR122E 283 NUMPARTS
DXR1651 481 clause of CREATE TABLESPACE statement 67
EDC3009I 351
IEC161I 344
message processing program (MPP)
O
connections 504 obfuscation
messages routine and trigger source code 144
CICS 411 objects
route codes 411 dropped
unsolicited 411 recovering 678
MIGRATE command information
DFSMShsm (Hierarchical Storage Manager) 277 displaying 456
modeling not logged
data 3 recovering 673
MODIFY utility XML
image copies altering implicitly 268
retaining 639 offloading
monitor profiles active logs 570
starting and stopping 557 canceling 586
monitoring interruptions 572
connections 521 messages 571
idle threads 532 quiescing 575
threads 526 restarting 586
moving status
data DISPLAY LOG command 586
tools 277 trigger events 570
data sets offloads
with utilities 281 canceling 576
without utilities 280 operating environments
MPP (message processing program) CICS 419
connections 504 IMS 418
multi-site updates ORDER BY clause
examples 610 sorts
multiple systems avoiding 108
conditional restart 613 originating sequence number (OASN)
indoubt units of recovery 289
N
NetView
P
monitoring errors 539 packages
network ID (NID) invalidated
CICS 294 dropping a table 238
IMS 289, 499 dropping a view 242
NID (network ID) dropping an index 249
CICS 294 page errors
IMS 289 logical 458
thread identification 499 physical 458
non-data sharing environment page sets
RECOVER TOLOGPOINT option 656 altering 184
non-UTS control records 700
converting 185 copying 644
non-UTS table spaces page sizes
partitioned 49 calculating 148
segmented 50 pages
nonpartitioned secondary index (NPSI) 118 errors 458
nonunique indexes 113 index size 152
NOT LOGGED attribute information
table spaces 178 obtaining 458
NPSI (nonpartitioned secondary index) 118 number of records 148
NULL keys root 152
excluding 115
R recovery (continued)
data
range-partitioned table spaces 59 moving 660
range-partitioned universal table spaces 45 data availability
RBA (relative byte address) maximizing 631
range in messages 571 data sets
RDO (resource definition online) Db2-managed 681
MSGQUEUE attribute 411 DFSMS 646
STATSQUEUE attribute 411 DFSMShsm 631
REBUILD-pending status non-DB2 dump and restore 677
for indexes 625 databases
record identifiers (RIDs) 112 active logs 695
RECORDING MAX field backup copies 627
panel DSNTIPA RECOVER TOCOPY 652
preventing frequent BSDS wrapping 332 RECOVER TOLOGPOINT 652
records RECOVER TORBA 652
performance 148 Db2 outages
size cold start 398
calculating 148 Db2 subsystem 337, 345
RECOVER BSDS command DB2 subsystem 695
copying BSDS 590 DDF (distributed data facility) failures 356
RECOVER INDOUBT command directory 671
free locked resources 294 disk failures 284
RECOVER TABLESPACE utility distributed data
modified data planning 624
recovering 333 down-level page sets 346
RECOVER TOLOGPOINT option FlashCopy image copies 664
data sharing environment 654 FlashCopy volume backups 686
non-data sharing environment 656 heuristic decisions
RECOVER utility correcting 403
catalog tables 671 implications 673
data inconsistency 639 IMS outage with cold start
deferred objects 600 scenario 397
DFSMS concurrent copies 646 IMS-related failures
DFSMSdss RESTORE command 31 during indoubt resolution 290
directory tables 671 indoubt units of recovery 289
DSNDB07 database 670 inconsistent data
fast log apply 641 resolving 329
functions 642 indexes 625
messages 642 indexes on tables
object-level recoveries 647 partitioned table spaces 664
objects 642 indoubt threads 392
options indoubt units of recovery
TOCOPY 652 CICS 294
TOLOGPOINT 286, 652 IMS 502
TORBA 652 information
recovery cycle 380 reporting 636
restrictions 643 integrated catalog facility catalog
running in parallel 641 VVDS failure 350
segmented table spaces 50 invalid LOBs 348
RECOVER-pending status LOB table spaces 661
clearing 675 logs
recovery truncating 316
acceleration tables and indexes lost status information 322
planning 624 media 642
application changes multiple systems environment 613
backing out 287 objects
backward log recovery failures 330 dropped 678
BSDS (bootstrap data set) 309 identifying 665
catalog 671 operations 628
catalog definitions outages
consistency 687 minimizing 631
communications failure 393 planning 648
compressed data 657 point in time 345, 652
residual recovery entry (RRE) 503 resuming
resource definition online (RDO) distributed data facility (DDF) threads 516
STATUSQUEUE attribute 493 return areas
resource limit facility specifying 713
recovery preparation 627 return codes 725
resource managers RFMTTYPE
indoubt units of recovery BRF 766
resolving 617 RRF 766
Resource Recovery Services (RRS) rolling disaster 386
abend 508 root pages
connections indexes 152
controlling 508 routines
indoubt units of recovery 509 conversion procedures 743–745
postponed units of recovery 510 date routines 739–742
Resource Recovery Services attachment facility (RRSAF) edit routines 734, 736
connections field procedures 746–752, 754, 756
displaying 510 log capture routines 758
monitoring 510 obfuscation 144
disconnecting 511 time routines 739–742
resource translation table (RTT) 504 validation routines 737–739
restart writing 733
automatic 599 row format conversion
backward log recovery table spaces 767
failure during 329 row formats 765
phase 598 ROWID column
BSDS (bootstrap data set) problems 333 data
cold start situations 335 loading 126
conditional inserting 129
control record governs 600 rows
excessive loss of active log data 336 formats for exit routines 764
total loss of log 335 incomplete 739
current status rebuild phase RRDF (Remote Recovery Data Facility)
failure recovery 313 tables
forward log recovery phase altering 235
failure during 324 RRE (residual recovery entry)
implications detecting 503
table spaces 599 logged at IMS checkpoint 614
inconsistencies not resolved 614
resolving 339 purging 503
log data set problems 333 RRS connections
log initialization phase sharing locks with profiles 512
failure recovery 313 RRSAF (Resource Recovery Services attachment facility)
lost connections 614 application programs
multiple-system environment 613 running 421
normal 594 RTT (resource translation table)
recovery 603 transaction types 504
recovery preparation 667 RVA (RAMAC Virtual Array)
restart processing backups 646
deferring 601
limiting 325
restarting
S
Db2 subsystem 593, 601 sample library 125
RESTORE phase scheduled tasks
RECOVER utility 642 adding 423
RESTORE SYSTEM defining 425
recovery cycle listing 429
establishing 379 removing 432
RESTORE SYSTEM utility status
Db2 subsystem listing 429
recovering 689 multiple executions 430
restoring stopping 432
data 648 updating 431
databases 684 schema definition
Db2 subsystem 684 authorization 125
storage groups (continued) SYSIBM.SYSSTOGROUP catalog table 38
managing SYSIBM.SYSTABLES catalog table 70
SMS 23 SYSIBM.SYSVOLUMES catalog table 38
with SMS 174 SYSIBMADM.MOVE_TO_ARCHIVE global variable
volumes effect 99
adding 174, 175 SYSLGRNX directory table
removing 175 REPORT utility information 636
Storage Management Subsystem (SMS) table space records
archive log data sets 573 retaining 636
stored procedures SYSSYNONYMS catalog table 238
administration 773 SYSTABLES catalog table 238
altering 266 system checkpoints
common SQL API monitoring 579
Complete mode 775 system period
XML input documents 774 adding 229
XML output documents 776 system-defined routine
XML parameter documents 774 implementing 143
CREATE_WRAPPED 144 system-level backups
creating 134 conditional restarts 603
debugging 472 data
diagnostic information 472 moving 660
displaying disaster recovery 363
statistics 467 object-level recoveries 647
thread information 468 system-period data versioning
dropping 135 bitemporal tables 91
external defining 229
migrating 474 restrictions 87
external SQL temporal tables 84
migrating 474 system-period temporal tables
GET_CONFIG altering 232
filtering output 776 creating 84, 229
GET_MESSAGE daylight saving time 87
filtering output 776 querying 94
GET_SYSTEM_INFO recovering 691
filtering output 776 system-wide points of consistency 639
implementing 134
information
displaying 466
T
migrating 473 table
monitoring 466 creating
native SQL description 79
migrating 473 naming convention 72
obfuscation 144 retrieving
prioritizing 475 catalog information 162
scheduling 446 comments 171
SQLCODE -430 778 types 79
troubleshooting 778 table check constraint
structures catalog information 168
hierarchy 38 table space set
subsystem member (SSM) 504 recovering 662
subsystem parameters table space types
SVOLARC 573 deprecated 185
suspending table spaces
distributed data facility (DDF) threads 516 altering 177, 186, 254, 658
SVOLARC subsystem parameter 573 assigning to physical storage 38
syntax diagram copying 644
how to read xviii creating
SYS1.LOGREC data set 298 explicitly 55
SYS1.PARMLIB library creating PBG table spaces 61
IRLM data
specifying 479 loading 125
SYSCOPY rebalancing 182
catalog table records defining
retaining 636 implicitly 52
temporary tables (continued) tracker site (continued)
types 77 recovering (continued)
TERM UTILITY command RECOVER utility 385
restrictions 641 RESTORE SYSTEM utility 384
terminal monitor program (TMP) 420 recovery cycle
terminating RESTORE SYSTEM utility 379
Db2 setting up 378
scenarios 298 transaction managers
Db2 subsystem distributed transactions
normal restart 594 recovering 617
DB2 subsystem transactions
abend 594 CICS
multiple systems 611 accessing 494
termination entering 419
types 593 IMS
threads connecting to Db2 498
allied 514 thread attachment 499
attachment in IMS 499 thread termination 500
canceling 538 types 504
CICS trigger
displaying 495 catalog information 170
conversation-level information triggers
displaying 486 creating 139
database access (DBATs 514 obfuscation 144
displaying troubleshooting
IMS 505 QMF-related failures 297
indoubt stored procedures 778
displaying 483 truncation
monitoring 495 active logs 319, 570
monitoring by using profiles 526 TSO
termination application programs
CICS 493 conditions 418
IMS 500, 507 running 418
time routines background execution 420
expected output 742 connections
invoking 741 controlling 490
overview 739 disconnecting 492
parameter list 741 monitoring 490
specifying 740 DSNELI language interface module
TMP (terminal monitor program) link editing 418
DSN command processor 407 TSO commands
sample job 420 DSN
TSO batch work 420 END subcommand 492
TOCOPY option TSO connections
RECOVER utility 652 monitoring 490
TOLOGPOINT option TSO consoles
RECOVER utility 652 issuing commands 407
TORBA option two-phase commit
RECOVER utility 652 CICS 607
TRACE SUBSYS command coordinator 607
IMS 498 IMS 607
traces participants 607
controlling process 607
IMS 541
diagnostic
CICS 541
U
IRLM 543 UDF
tracker site catalog information 170
characteristics 377 Unified Modeling Language (UML) 13
converting unique indexes
takeover site 383–385 implementing 112
disaster recovery 377 unit of recovery ID (URID) 703
maintaining 383 units of recovery
recovering in-abort
volume serial numbers 589 XML output documents
VSAM (virtual storage access method) common SQL API 776
control interval (CI) versioning 774
block size 572 XML parameter documents
log records 569 versioning 774
processing 677 XML table spaces
VSAM volume data set (VVDS) creating implicitly 54
recovering 350 pending states
VTAM removing 677
failures recovering 661
recovering 359 XRC (Extended Remote Copy) 391
VTAM ACB OPEN) XRF (extended recovery facility)
failure 359 CICS toleration 624
VTAM) IMS toleration 624
recovery procedures 359
VVDS (VSAM volume data set)
recovering 350, 351
Z
z/OS
W commands
DISPLAY WLM 469
wait status power failure
ending 415 recovering 284
WebSphere Application Server restart function 599
indoubt units of recovery 617 z/OS abend
WLM application environment IEC030I 305
quiescing 469 IEC031I 305
refreshing 469 IEC032I 305
restarting 469 z/OS commands
startup procedures 469 MODIFY irlmproc 479–481
stopping 469 MODIFY irlmproc,ABEND 481
WLM_REFRESH stored procedure 469 START irlmproc 479, 480
work STOP irlmproc 479, 481
submitting 417 TRACE 479
work file database z/OS console
starting 452 issuing commands to Db2 407
work file databases zLOAD 131
changing high-level qualifier
migrated installation 274
new installation 274
data sets
enlarging 355
enlarging 352
extending 355
troubleshooting 643
work file table spaces
error ranges
recovering 670
write error page range (WEPR) 458
X
XML columns
adding 227
data
loading 126
XML input documents
common SQL API 774, 775
versioning 774
XML message documents
versioning 774
XML objects
altering
implicitly 268
SC27-8844-02