DB2 for z/OS Course 2


V3.1
Student Notebook

Unit 6. Network Computing

What This Unit Is About


You will learn about the new functions that enhance the network
computing usage of DB2 UDB for z/OS.

What You Should Be Able to Do


After completing this unit, you should be able to describe new
functions including member routing, DRDA data stream encryption,
and rollup of accounting data for threads.

© Copyright IBM Corp. 2004 Unit 6. Network Computing 6-1


Course materials may not be reproduced in whole or in part without the prior
written permission of IBM.

List of Topics
• CDB enhancements
  - Requester database ALIAS
  - Server location ALIAS
• Member routing in TCP/IP
  - Using SYSIBM.IPLIST
  - Data sharing member subsetting via DRDA TCP/IP
• DDF and RRS accounting rollup
• DRDA data stream encryption
• Other enhancements


Figure 6-1. List of Topics CG381.0

Notes:
In this unit, we examine the following enhancements:
• Location alias names:
A location alias name is an alternative location name you can use to access a DB2
subsystem or database through the network. DB2 V8 brings in a number of changes
related to the use of aliases.
Requester location alias names allow you to access a given database on many DB2
for LUW systems, even if thousands of them exist with the same database name.
Server location alias names allow you to connect to a DB2 for z/OS subsystem by using
more than one location name.
• Member routing in TCP/IP:
Member routing in TCP/IP consists of two enhancements. The first one allows
subsetting of the members that you can access in a DB2 data sharing group as a DB2
for z/OS application requester. The second one builds on the server location alias name


support and allows you to connect to a subset of data sharing members from any DRDA
application requester.
• Rollup accounting data for DDF and RRSAF threads:
In Version 8, DB2 accounting data collection is enhanced to optionally accumulate
accounting data for DDF and RRSAF threads. This is controlled by a new DSNZPARM
parameter, ACCUMACC, and can be dynamically turned on and off.
• DRDA data stream encryption:
To achieve more effective security in a distributed computing environment, DB2 Version
8 provides the ability to authenticate via an encrypted userid, or an encrypted userid and
password, and provides support for encrypting security-sensitive data.
• Other network computing enhancements:
The following items are some of the other network enhancements in DB2 V8:
- There is a change in terminology for “Type 1 Inactive Threads” and “Type 2 Inactive Threads”.

- VTAM conversation allocation requests can now time out with reason code 00D31033.


- Query blocks can now be larger than 32K for DB2 as a server.
- Limited SQL functionality in private protocol is provided.
- Changes have been made to the -DISPLAY LOCATION() command.



6.1 CDB Enhancements


DB2 CDB Today

SELECT * FROM SAMPLE.creator.tab1

Workstation1: TCP/IP = 9.165.70.1, DB name = Sample
Workstation2: TCP/IP = 9.165.70.2, DB name = Sample
Workstation3: TCP/IP = 9.165.70.3, DB name = Sample

SYSIBM.LOCATIONS                  SYSIBM.IPNAMES
LOCATION  LINKNAME                LINKNAME      IPADDR
SAMPLE    WORKSTATION1            WORKSTATION1  9.165.70.1
(only one entry possible here!)   WORKSTATION2  9.165.70.2
                                  WORKSTATION3  9.165.70.3

• Can only be used to retrieve rows from the SAMPLE database on Workstation1
• The name in the LOCATION field (SAMPLE) is used to connect to the database
• No way to access the Sample database on Workstation2 without changing the CDB
  or having a different DB name on every workstation

Figure 6-2. DB2 CDB Today CG381.0

Notes:
Today the CDB, or Communications Database, does not really exist anymore. Its tables
have been integrated into the DB2 catalog’s DSNDB06.SYSDDF table space, but as this
term is still commonly used to address those catalog tables that contain information about
communicating with other DB2 systems, we continue to use the term CDB throughout this
publication.
When connecting to a DB2 UDB for z/OS and OS/390 system through DRDA, you address
the entire DB2 subsystem by using its location name. A DB2 UDB for
LINUX/UNIX/Windows database is known in the network by its database name at a
particular instance of the database server. If the requester is a DB2 UDB for z/OS system,
you must specify the database name of the DB2 UDB for LUW system you want to connect
to in the LOCATION column of the SYSIBM.LOCATIONS catalog table.
Up to DB2 Version 7, there is always a one-to-one mapping between location name and
database name, since the value in the LOCATION column is used as the database name
when connecting to a DB2 for LUW system. Prior to DB2 Version 8, there is no way to
access multiple DB2 UDB for LUW databases that have the same database name (even


when they reside on different machines), unless you catalog an alternate dbalias on the
server for the database name to be accessed. Then you specify this alternate dbalias value
in the LOCATION column. Cataloging an extra dbalias on each server for each accessed
database name can become an administrative challenge.
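As a sketch of that V7-era workaround (the alias name SAMPLE2 is illustrative, not from the course material), you would catalog an extra, unique alias on each DB2 for LUW server:

```
-- Run on the DB2 for LUW server; SAMPLE2 is a made-up alias name
db2 CATALOG DATABASE SAMPLE AS SAMPLE2
```

At the DB2 for z/OS requester, the alias (SAMPLE2), not the real database name, would then be placed in the LOCATION column.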


Requester Database ALIAS

Workstation1: TCP/IP = 9.165.70.1, DB name = Sample
Workstation2: TCP/IP = 9.165.70.2, DB name = Sample
Workstation3: TCP/IP = 9.165.70.3, DB name = Sample

SYSIBM.LOCATIONS                        SYSIBM.IPNAMES
LOCATION  LINKNAME      DBALIAS         LINKNAME      IPADDR
DB1       WORKSTATION1  SAMPLE          WORKSTATION1  9.165.70.1
DB2       WORKSTATION2  SAMPLE          WORKSTATION2  9.165.70.2
DB3       WORKSTATION3  SAMPLE          WORKSTATION3  9.165.70.3

SELECT * FROM DB1.creator.tab1

• DB1 is mapped to linkname WORKSTATION1 and therefore to IP address 9.165.70.1
• On 9.165.70.1, we use the name in the DBALIAS column (SAMPLE) as the database
  name to connect to, not the value in the LOCATION column (DB1). That is only used by
  the application to address the machine 9.165.70.1.

Figure 6-3. Requester Database ALIAS CG381.0

Notes:
This restriction regarding names is removed in DB2 Version 8.
A new column DBALIAS is added to the SYSIBM.LOCATIONS table. Here is how it works:
1. You continue to specify the value of the LOCATION name field as the first qualifier of
your three-part table name in your SELECT statement.
2. The mapped LINKNAME links you to the corresponding entry in SYSIBM.IPNAMES,
which provides the correct TCP/IP address for the workstation you want to access.
3. The entry in column DBALIAS of SYSIBM.LOCATIONS points your SELECT statement
to the real database name on the DB2 UDB for Linux, UNIX, and Windows that you
want to connect to.
You can now access the SAMPLE database on every LINUX/UNIX/Windows system, even
if thousands of them exist with the same database name.
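The requester-side CDB rows from the visual could be created with inserts along these lines (a sketch: only the relevant columns are shown; columns not listed are left to take their defaults, which may vary by release):

```sql
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, DBALIAS)
       VALUES ('DB1', 'WORKSTATION1', 'SAMPLE');
INSERT INTO SYSIBM.IPNAMES (LINKNAME, IPADDR)
       VALUES ('WORKSTATION1', '9.165.70.1');
```

With these rows in place, SELECT * FROM DB1.creator.tab1 resolves DB1 to 9.165.70.1 and connects to the SAMPLE database there.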


Server Location ALIAS

Before the migration, two separate subsystems:
  DB2P: location name LOCDB2P
  DB2A: location name LOCDB2A

Migrate the two separate subsystems to one DB2 data sharing group, DBP0. Both members
(DB2P and DB2A) then use location name LOCDBP0, and each BSDS contains:
  Location  = LOCDBP0
  Loc Alias = LOCDB2P
  Loc Alias = LOCDB2A

Valid SELECT statements after the migration:
  SELECT * FROM LOCDBP0.creator.tab1
  SELECT * FROM LOCDB2P.creator.tab1
  SELECT * FROM LOCDB2A.creator.tab1

Figure 6-4. Server Location ALIAS CG381.0

Notes:
As mentioned before, a DB2 UDB for z/OS server is known in the network by its location
name. This name is used by applications to identify a DB2 subsystem or a DB2 data
sharing group. When two or more DB2 subsystems are consolidated into a single DB2 data
sharing group, multiple locations must be consolidated into a single location (because the
entire data sharing group uses the same location name). This means that all applications
that use the old location name (used when the DB2 was still a stand-alone subsystem)
need to be changed to access the location name of the data sharing group.
Another situation is where you need to move an application from one DB2 system to
another. If this application is accessed remotely, you must change the location name in all
CONNECT statements, and the first qualifier of the three-part object names of all remote
applications using that application, to reflect the new location name of the DB2 subsystem.
To ease this type of migration, DB2 Version 8 allows you to define up to eight alias names
in addition to the location name for a DB2 subsystem or data sharing group. A location alias
is an alternative name that a requester can use to access a DB2 subsystem. DB2 accepts


requests from applications that identify the DB2 server with its location name or any of its
location alias names.
You do not have to change the location names in your applications programs to be able to
access your data after migrating to a new data sharing group. The only thing you must do is
to add the additional location alias names to your BSDS data sets on each member of the
data sharing group, or DB2 subsystem.
The DB2 Change Log Inventory utility allows you to define up to eight alias names in the
BSDS, in addition to the location name. The Print Log Map utility prints any location alias
names defined for the DB2 subsystem.


DDF Communication Record

**** DISTRIBUTED DATA FACILITY ****


COMMUNICATION RECORD
14:26:17 SEPTEMBER 30, 2003
LOCATION=DB8O
ALIAS=DB8OGRP
LUNAME=LU1 PASSWORD=(NULL) GENERICLU=(NULL)
PORT=33744 RPORT=33745

Output from the Print Log Map utility (DSNJU004)

Input statement for the Change Log Inventory utility (DSNJU003):

DDF ALIAS=DB8OGRP


Figure 6-5. DDF Communication Record CG381.0

Notes:
The extra location alias names are defined in the BSDS with the Change Log Inventory
utility (DSNJU003). The syntax for the Change Log Inventory utility is:
DDF ALIAS = aliasname,aliasname2
The ALIAS keyword identifies an alias name for the local location specified by the
LOCATION keyword. Up to eight alias names can be specified with the ALIAS keyword,
separated by commas. The alias names can be added or replaced by running DSNJU003
again with the ALIAS keyword and a new (longer or shorter) list of names, which replaces
the old list. You always need to specify the entire list of aliases that you want to be known
to the system. If you want to add an alias DB8ZGRP to the existing alias DB8OGRP in the
visual above, you must specify:
DDF ALIAS = DB8OGRP,DB8ZGRP
In order to conform to the Change Log Inventory standards, a new keyword NOALIAS is
introduced to remove all the location alias definitions from the BSDS, specified by the
ALIAS keyword.


DDF NOALIAS

Note: Alias names are never displayed in messages.


The distributed data facility communication record in the BSDS data sets has been
changed to show the location alias names you have specified for your subsystem. This
visual shows the output of the Print Log Map utility (DSNJU004). You can see the location
alias name DB8OGRP we added for our subsystem.


6.2 Member Routing in TCP/IP


TCP/IP Member Routing


TCP/IP member routing for DB2 for z/OS application requesters

Application requester catalog tables:

SYSIBM.LOCATIONS                SYSIBM.IPLIST
LOCATION  LINKNAME              LINKNAME  IPADDR
LOCDBP0   NODEDBP0              NODEDBP1  9.165.70.1
LOCDBP1   NODEDBP1              NODEDBP1  9.165.70.2

SYSIBM.IPNAMES
LINKNAME  IPADDR
NODEDBP0  LOCDBP0.SYSPLEX.COM
NODEDBP1  (blank)

SELECT * FROM LOCDBP1.creator.tb2   (routed to either DBP1 or DBP2)
SELECT * FROM LOCDBP0.creator.tb1   (routed to any available member)

Application server: data sharing group DBP0
  DBP1 (IP 9.165.70.1): BSDS Location = LOCDBP0, Loc Alias = LOCDBP1
  DBP2 (IP 9.165.70.2): BSDS Location = LOCDBP0, Loc Alias = LOCDBP1
  DBP3 (IP 9.165.70.3): BSDS Location = LOCDBP0

Figure 6-6. TCP/IP Member Routing CG381.0

Notes:
Two enhancements are provided in the area of TCP/IP member routing.
• The first one applies only to DB2 for z/OS application requesters.
• The second one applies to any DRDA TCP/IP AR, including DB2 for z/OS V8
application requesters, that want to connect to a DB2 data sharing group.

Member Routing in TCP/IP for DB2 for z/OS AR using SYSIBM.IPLIST


Currently in a data sharing environment, remote TCP/IP connections are normally set up to
automatically balance connections across all members of a data sharing group. This is not
a good solution in all cases. Sometimes you want to be able to route requests from DB2
UDB for z/OS DRDA application requesters to specific members of your data sharing
group, similar to the support for SNA connections that use the SYSIBM.LULIST table.
To achieve this, we combine the server location alias feature (at the DRDA application
server), described earlier, with the use of a new table in the catalog, namely
SYSIBM.IPLIST (at the DB2 for z/OS application requester).


The visual describes a sample implementation.


The location name LOCDBP0 represents all three DB2 subsystems in the data sharing
group: DBP1, DBP2, and DBP3. Previously, location LOCDBP0 was set up to route requests
using the group domain name to all available members.
Now an additional location, LOCDBP1, is to be used to route requests only to members
DBP1 and DBP2:
• At the DRDA AS, a location alias LOCDBP1, has been defined in the BSDS for data
sharing members DBP1 and DBP2. Note that this alias does not exist in the BSDS for
DBP3.
• At the DB2 for z/OS AR, we need to update the catalog tables in the following way:
- In SYSIBM.LOCATIONS, we have two entries. One for the “normal” location name
of the data sharing group (LOCDBP0) to address all members of the data sharing
group, and one for the location alias (LOCDBP1).
- In SYSIBM.IPLIST we have two entries for the link name that corresponds with the
link name in SYSIBM.LOCATIONS (NODEDBP1); one pointing to IP address
9.165.70.1, the other one to 9.165.70.2.
- In SYSIBM.IPNAMES, we also have an entry with the same link name. This is
mandatory. When using SYSIBM.IPLIST, you must also have an entry in
SYSIBM.IPNAMES with the same link name. Note, however, that you should not
specify an IP address in the IPADDR column in SYSIBM.IPNAMES for the
NODEDBP1 entry. Otherwise you receive an error (00D31203).
When resolving the remote system you want to access, DB2 first checks the IPLIST table,
and then the IPNAMES table. So when executing the SELECT * FROM
LOCDBP1.creator.tb2 statement:
1. DB2 first looks in SYSIBM.LOCATIONS for a matching location name (LOCDBP1), and
picks up the linkname (NODEDBP1).
2. DB2 then searches the new catalog table SYSIBM.IPLIST for matching linkname
entries (NODEDBP1) and picks up their IP addresses (9.165.70.1 and 9.165.70.2).
3. DB2 then checks SYSIBM.IPNAMES for a matching linkname (NODEDBP1) and
makes sure it is there, and no IPADDR is supplied.
4. At the DRDA AS side, LOCDBP1 is checked in the BSDS, and is found to be an alias
for DBP1 and DBP2, so the request can only go to those members of the data sharing
group.
If a request comes in for location name LOCDBP0, for example SELECT * FROM
LOCDBP0.creator.tb1, the entry in SYSIBM.IPNAMES is used (as there is no
matching linkname in SYSIBM.IPLIST) and the request is routed to all available members
in group DBP0.


Defining multiple location names is not confined to a TCP/IP network, but can also be used
to control which members are used to process requests for an application in a SNA data
sharing group.
When using dynamic VIPA to perform workload balancing, the IPLIST must contain the
member specific dynamic VIPA for each DB2 subsystem that is to be routed to.
The following topic shows the catalog definitions at the DB2 for z/OS AR (Table 6-1 to Table
6-3).
To route to all members, rows are inserted for location LOCDBP0 in SYSIBM.LOCATIONS
and SYSIBM.IPNAMES. The LOCATION column in SYSIBM.LOCATIONS contains the
group location name, LOCDBP0.
To route requests to only DBP1 and DBP2, rows are inserted for location LOCDBP1 in
SYSIBM.LOCATIONS, SYSIBM.IPNAMES, and SYSIBM.IPLIST. LOCDBP1 is the server
location alias name defined in the BSDS of the members of the data sharing group that you
want to address (DBP1 and DBP2).

Table 6-1 Example of TCP/IP Member Routing using SYSIBM.LOCATIONS

LOCATION  LINKNAME  PORT  TPN  DBALIAS  IBMREQD
LOCDBP0   NODEDBP0
LOCDBP1   NODEDBP1

A row is inserted into SYSIBM.IPNAMES where the LINKNAME contains the same
LINKNAME used for LOCDBP0 and the IPADDR column contains the group domain name.
Another row is inserted into SYSIBM.IPNAMES where the LINKNAME contains the same
LINKNAME used for LOCDBP1 and the IPADDR column contains blanks.

Table 6-2 Example of TCP/IP Member Routing using SYSIBM.IPNAMES

LINKNAME  SECURITY_OUT  USERNAMES  IPADDR               IBMREQD
NODEDBP0                           LOCDBP0.SYSPLEX.COM
NODEDBP1

Lastly, rows are inserted into SYSIBM.IPLIST for DBP1 and DBP2 where the LINKNAME
contains the same LINKNAME for LOCDBP1 and the IPADDR column contains the
member-specific names (member specific DVIPA in our case).

Table 6-3 Example of TCP/IP Member Routing using SYSIBM.IPLIST


LINKNAME IPADDR IBMREQD

NODEDBP1 9.165.70.1
NODEDBP1 9.165.70.2
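Expressed as catalog inserts at the DB2 for z/OS AR, the setup in Tables 6-1 to 6-3 could look like this (a sketch: only the relevant columns are shown, and columns not listed are left to their defaults; remember that the IPADDR for NODEDBP1 in SYSIBM.IPNAMES must remain blank, or you receive reason code 00D31203):

```sql
-- Route to all members of group DBP0 via the group domain name
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME) VALUES ('LOCDBP0', 'NODEDBP0');
INSERT INTO SYSIBM.IPNAMES   (LINKNAME, IPADDR)   VALUES ('NODEDBP0', 'LOCDBP0.SYSPLEX.COM');

-- Route LOCDBP1 only to members DBP1 and DBP2
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME) VALUES ('LOCDBP1', 'NODEDBP1');
INSERT INTO SYSIBM.IPNAMES   (LINKNAME)           VALUES ('NODEDBP1');  -- IPADDR left blank
INSERT INTO SYSIBM.IPLIST    (LINKNAME, IPADDR)   VALUES ('NODEDBP1', '9.165.70.1');
INSERT INTO SYSIBM.IPLIST    (LINKNAME, IPADDR)   VALUES ('NODEDBP1', '9.165.70.2');
```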


When an application wants to route requests to the least loaded member of the data
sharing group, it connects using location LOCDBP0. If an application wants to route
requests to member DBP1 or DBP2, it connects to location LOCDBP1.
Each member of the DBP0 DB2 data sharing group has LOCDBP0 defined as the location
name. LOCDBP1 is defined as a location alias name, using the Change Log Inventory
(DSNJU003) utility, in the BSDS of DBP1 and DBP2 only.
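Run against the BSDS of DBP1 and DBP2 only, the Change Log Inventory step might look like the following sketch (the dataset name is a placeholder; a real job would also need a STEPLIB and, with dual BSDS data sets, a SYSUT2 DD):

```
//CHGLOG   EXEC PGM=DSNJU003
//SYSUT1   DD  DISP=OLD,DSN=hlq.BSDS01
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DDF ALIAS=LOCDBP1
/*
```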
In a sysplex environment, by using this new SYSIBM.IPLIST table on the DB2 for z/OS AR,
you can define a specific member or a subset of members in a data sharing group to route
requests to. At the DRDA AS (the data sharing group that you want to connect to),
server location alias names need to be defined, allowing applications to access the DB2
server with the alternate location names.
Note: Because the setup of the IPLIST determines the members of the data sharing
group you can connect to, this enhancement only applies to configurations where DB2
for z/OS is also the application requester. Similar support for non-DB2 for z/OS AR is
provided by the enhancement described next.


Data Sharing Member Subsetting


Data sharing member subsetting for DRDA TCP/IP clients

DRDA Application Requester using TCP/IP

CONNECT TO MQMBR;
SELECT * FROM Mine.MYTAB
Either routed to
DBP1 or DBP2

Application Server (DB2 for z/OS) Hostname: myhostname

Group DBP0
  DBP1 (IP 9.165.70.1): BSDS Location = LOCDBP0, Port = 8000, Alias = MQMBR:8100
  DBP2 (IP 9.165.70.2): BSDS Location = LOCDBP0, Port = 8000, Alias = MQMBR:8100
  DBP3 (IP 9.165.70.3): BSDS Location = LOCDBP0, Port = 8000

Figure 6-7. Data Sharing Member Subsetting CG381.0

Notes:
In our description of the previous enhancement, we discussed the use of the new table in
the catalog (SYSIBM.IPLIST) to enable DB2 for z/OS application requesters to subset the
members of the data sharing group they can connect to. That enhancement does not apply
to non-DB2 for z/OS application requesters like DB2 Connect.
Using V7 functionality, you can restrict requests to a subset of the members of the data
sharing group by:
• Enabling some, but not all, of the members with the dynamic VIPA/sysplex distributor
support
• Disabling DDF server support on the members that you want to exclude.
Neither of these options is an optimal solution.
In V8, you can also route DRDA requests to a subset of members of a data sharing group,
from any DRDA TCP/IP application requestor, like DB2 Connect, based on DB2 location
alias names. Subsetting a data sharing group extends the ALIAS support at a DB2 for z/OS


V8 DRDA application server. When you have ALIAS names defined with valid port
numbers, they become subset location names.
To use location subsetting, at least one ALIAS name must be defined with an ALIAS
TCP/IP port number. To do this, the syntax of the DDF statement of DSNJU003 (Change
Log Inventory) has been enhanced to accept a numeric TCP/IP port number (value must
be greater than 0 and less than 65535) preceded by a colon (":") after any entered ALIAS
name.
If a value preceded by a colon (":") is not entered after an ALIAS name, then this ALIAS
name will be used by DB2 as an alternate LOCATION name for this DB2 subsystem.
Each ALIAS port number must be unique and must be different from the values already
specified for, or being entered for, the PORT and RESPORT parameters. Like the current processing
of the PORT and RESPORT values, only numeric port values will be supported. Service
name values are not supported. Here is a sample ALIAS statement:
ALIAS=DB2A,MQDB2Z:8002,DB2X,MYALIAS1,SIEBEL1:10001
In this example, MQDB2Z and SIEBEL1 will be the only ALIAS names to support subsets
of the data sharing group. DB2A, DB2X, and MYALIAS1 are also ALIAS names, but they
can only be used as alternate LOCATION names for this DB2 subsystem.
Example 6-1 shows how the subset location names show up in the output of the print log
map utility. In this example, location DSNT2 has two subset location names: DSNT2PROD,
listening on port 50251, and DT25, listening on port 50265.
Example 6-1. Print Log Map Output for Subset Location Names

**** DISTRIBUTED DATA FACILITY ****


COMMUNICATION RECORD
00:25:55 MARCH 30, 2004
LOCATION=DSNT2
ALIAS=DSNT2PROD:50251,DT25:50265
LUNAME=STBDT25 PASSWORD=(NULL) GENERICLU=(NULL) PORT=50200 RPORT=50205
DSNJ401I DSNUPBHR BACKUP SYSTEM UTILITY HISTORY RECORD NOT FOUND
______________________________________________________________________
Currently with both DB2 UDB for z/OS V7 and V8 (without alias port numbers), when DDF
is started for a subsystem member of a data sharing group, only a single WLM resource
group representing all of the “active” members of a DB2 data sharing group is created.
At DDF startup time, with subsetting support installed in V8, for each ALIAS name defined
to be a subset location name that also has an ALIAS port value, the subsetting support will
perform the following actions (only if DDF is being started in a member of a data sharing
group):
• Issue calls to WLM sysplex routing services to register ALIAS names in addition to the
normal call to register the DB2 data sharing group LOCATION name.
• Add the ALIAS port numbers to the TCP/IP socket SELECT call for the SQL port
listener.


To have the list of members participating in the ALIAS subset automatically managed by
the operating system, WLM sysplex routing services are used to maintain a “weighted”
server list of the DB2 data sharing group for each ALIAS name being used as a subset. (An
ALIAS name is used for subsetting if an ALIAS port value exists in the BSDS DDF record.)
If the WLM request to register is unsuccessful for any ALIAS, then (existing) error message
DSNL044I is issued to inform you of the error. DDF startup processing continues, but since
the ALIAS name was not successfully registered, this DB2 subsystem will not have its
information added to the list of members participating in the subset.
As mentioned earlier, the ALIAS port numbers are added to the TCP/IP SELECT socket
call for the SQL port listener. This will permit the use of the z/OS Sysplex Distributor (if set
up) to send requests destined for an ALIAS port to only the members participating in the
ALIAS subset. It will also permit the member to respond to requests made against the
ALIAS port.
Prior to issuing the TCP/IP SELECT socket call, DDF attempts to issue a TCP/IP bind
against each ALIAS port number. If the bind fails, then the (existing) message DSNL515I
will be issued with the ALIAS port value as part of the message. DDF continues its startup,
and continues issuing the TCP/IP binds for other ALIAS ports. DDF will then issue the
TCP/IP SELECT socket call for the SQL request listener only for its PORT and those
ALIAS ports whose TCP/IP bind call was successful. If the TCP/IP SELECT socket call is
unsuccessful, then the (existing) error message DSNL512I is issued.
Once DDF startup is complete, with subsetting support installed and active in a member of
a data sharing group, the SQL listener will now accept TCP connection requests against its
primary SQL PORT and any of the ALIAS ports that did not cause a TCP/IP bind error.
Once DDF in all the members of a data sharing group has completed its startup, multiple
WLM sysplex routing services resource groups will exist; one for the LOCATION name of
the data sharing group which should have all members registered in it, and one each for
the ALIAS names that had ALIAS ports defined in any of the members.
This visual shows how we can use the location alias support to subset two members
(DBP1 and DBP2) of the data sharing group (DBP0) for the MQ related workload (because
these are the only two members that have the DB2-MQ support installed). As shown
below, you need to define DB2 location alias names in the BSDS, as well as an additional
port number.
Thus, to define the subset for the above requirement, each of the members would have the
following definitions:
• DBP1: LOCATION=LOCDBP0,PORT=8000,RESPORT=8001,ALIAS=MQMBR:8100
• DBP2: LOCATION=LOCDBP0,PORT=8000,RESPORT=8001,ALIAS=MQMBR:8100
• DBP3: LOCATION=LOCDBP0,PORT=8000,RESPORT=8001
For DB2 Connect to access any of the subsets, one would define its node, db, and dcs db
profiles as follows:
• db2 catalog tcpip node locdbp0 remote myhostname server 8000


• db2 catalog tcpip node mqmbr remote myhostname server 8100
• db2 catalog dcs db locdbp0 as locdbp0 parms ',,,,,sysplex'
• db2 catalog dcs db mqmbr as mqmbr parms ',,,,,sysplex'
• db2 catalog db locdbp0 as locdbp0 at node locdbp0 authentication server
• db2 catalog db mqmbr as mqmbr at node mqmbr authentication server
Assuming that the subsetting support is installed and setup/activated, a DRDA application
requester connection to an alias name comes into DDF as before. DDF analyzes the
request. If the ALIAS name being referenced is being used to form a subset, and both the
WLM registering and TCP/IP bind were successful, then DDF will ask WLM for the current
“weighted” server list for the provided ALIAS name, and return it to the DRDA AR.
DDF can be stopped dynamically via the STOP DDF command, or when a new system
parameters module is loaded that specifies MAXDBAT=0. If the subsystem was previously
successfully registered in a WLM sysplex routing services resource group, DDF will
appropriately de-register the subsystem from all ALIAS resource groups.
Restriction: This enhancement is only provided for DRDA connections using TCP/IP.
Support for VTAM/APPC and DB2 private protocol is not provided. In addition, to activate
this subsetting feature, you must define a location alias with a TCP/IP alias port. The
system that you are “subsetting” must be a DB2 data sharing group, of course.



6.3 DDF and RRS Accounting Rollup


Rollup Accounting Data


• Optionally accumulate accounting data for DDF and RRSAF threads
  - New DSNZPARMs: ACCUMACC and ACCUMUID
  - Can be deactivated (and activated) dynamically
  - To allow for more detailed monitoring when required
• Accounting data accumulated by either, or a combination of:
  - User ID
  - Transaction name
  - Workstation name
• Reduces the need for CMTSTAT=ACTIVE
• Reduces the number of SMF records written

Figure 6-8. Rollup Accounting Data CG381.0

Notes:
With WebSphere, WebLogic, and other e-business application servers or distributed
applications that connect to DB2 for OS/390 via DDF, to optimize the use of DB2’s
resources, it is recommended to use CMTSTAT=INACTIVE (DSNZPARM). This normally
allows DB2 to separate the DBAT from the connection (at commit time) and reuse the
DBAT to process somebody else’s work while your connection is inactive. This is called
DB2 inactive connection support (sometimes wrongly called 'type 2 inactive' threads).
A side effect of using CMTSTAT=INACTIVE is that DB2 cuts an accounting record when
the connection becomes inactive (that is, normally on every COMMIT or ROLLBACK). If
you are running high-volume OLTP in this environment, the large volume of accounting
records can be a problem, as you can end up flooding SMF, compromising your ability to
do charge-back accounting or performance monitoring and tuning.
When using the RRS attach, you can run into a similar problem. WebSphere on z/OS
drives the RRS signon interface on each new transaction, and DB2 cuts an accounting
record when this signon happens. Some of these transactions are very short (for example,


just one SELECT statement followed by a COMMIT), but still result in a DB2 accounting
record being produced.
In Version 8, DB2 accounting data collection is enhanced to optionally accumulate
accounting data for DDF and RRSAF threads.
Two new installation options are added to activate this behavior. The DB2 installation panel
DSNTIPN - Tracing Panel, contains two new fields to accommodate for this enhancement:
• “DDF/RRSAF Accum”. This new field specifies whether DB2 accounting data should be
accumulated by end user for DDF and RRSAF threads. The related DSNZPARM
parameter is ACCUMACC.
- If NO is specified (the default), then DB2 continues (as in V7) to write an
accounting record when a DDF thread is made inactive, or when signon occurs for
an RRSAF thread.
- If 2-65535 is specified, then DB2 writes an accounting record after every 'n'
occurrences of the “end user” on the thread, where 'n' is the number specified for
this parameter. (The meaning of “end user” is discussed next.)
• “Aggregation Fields”. This new field specifies which aggregation criteria are to be used
for DDF and RRS accounting record rollup. The corresponding DSNZPARM is
ACCUMUID. This field can take a value from 0 to 6. Aggregation is based on the
following three fields that have to be provided by the application (for example,
WebSphere):
- ID of the end user (QWHCEUID, VARCHAR 128). Note that the end user ID does
not necessarily have to be the authorization ID that is used to connect to DB2, but it
will be set to it by default (especially from DB2 Connect clients).
- End user transaction name (QWHCEUTX, 32 bytes)
- End user workstation name (QWHCEUWN, 18 bytes)
These values can be set by DDF threads via “Server Connect” and “Set Client”
(SQLESETI) calls, RRSAF threads via the RRSAF SIGN, AUTH SIGNON, and
CONTEXT SIGNON functions, and as properties in Java programs when using the new
Java Universal Driver.
Aggregation is done based on any of the following fields or combinations thereof:
• 0: End user ID, AND end user transaction name, AND end user workstation
name
• 1: End user ID
• 2: End user transaction name
• 3: End user workstation name
• 4: End user ID AND end user transaction name
• 5: End user ID AND workstation name
• 6: End user transaction name AND workstation name
The default value is 0 (zero). The ACCUMUID value is ignored if ACCUMACC=NO (no
DDF/RRS rollup).


If data accumulation is activated, then, when a DDF unit of recovery (UR) ends
(end-commit or end-abort), or a RRSAF signon occurs, instead of immediately writing an
accounting record at that time, DB2 adds the accounting values to the current values for
this end user's use of the thread (“end user” is defined by the ACCUMUID aggregation criteria above). If the
thread does not already have accumulated accounting values for this end user, then a new
entry is created.
Note, however, that all of the fields that are specified as aggregation fields have to be
present for aggregation to occur. For example, assume that we have specified
ACCUMUID=0, and at the end of the transaction, no value for the end-user workstation
name has been provided. In that case, a normal accounting record is produced and this
transaction is not rolled up.
DB2 externalizes the end user's accumulated accounting data when the number of
occurrences for this “end user” value reaches the threshold value specified in
ACCUMACC.
Even when you specify a value between 2 and 65535 for ACCUMACC, DB2 may choose to
write an accounting record prior to the nth occurrence of the “end user” in the following
cases:
• When a storage threshold is reached for the accounting rollup blocks.
• When no updates have been made to the rollup block for 10 minutes, that is, the “end
user” has not performed any activity for over 10 minutes that can be detected in
accounting.
There are certain cases where detailed accounting data for the DDF and RRSAF threads
is desired, such as detailed performance monitoring. The ACCUMACC DSNZPARM can
be dynamically altered to activate or deactivate accounting data aggregation on the fly.
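Such a dynamic change can be made with the -SET SYSPARM command; a sketch, where DSNZNEW is a hypothetical name for the reassembled DSNZPARM load module:

```
-SET SYSPARM LOAD(DSNZNEW)
```

-SET SYSPARM STARTUP can later be issued to revert to the load module that was in effect at DB2 startup.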
Today, some installations avoid the extra overhead of cutting an accounting record every
time a thread becomes inactive in an environment with very high transaction rates, by
disabling thread pooling, and using CMTSTAT=ACTIVE. (If so, you should ensure that
some kind of client-side pooling is active (such as WebSphere connection pooling), so that
threads are kept open for the application connections, and the applications perform
regular commits but do not disconnect.) This enhancement to DB2 accounting will provide
some relief in this area as well. Instead of writing an accounting record every time a thread
gets pooled, an accounting record is only written after ‘n’ occurrences of ACCUMUID,
where ‘n’ is the value of the new ACCUMACC DSNZPARM.


6.4 DRDA Data Stream Encryption


DRDA Data Stream Encryption


DB2 V8 as a requester supports these authentication mechanisms
Encrypted User ID and encrypted password
Encrypted User ID and encrypted security-sensitive data
Encrypted User ID, encrypted password, and encrypted security-sensitive data

DB2 V8 as a server supports these authentication mechanisms


Encrypted User ID and password (already available in V7)
Encrypted User ID and encrypted security-sensitive data
Encrypted User ID, encrypted password, and encrypted security-sensitive data
Only if z/OS ICSF is installed, configured, and active


Figure 6-9. DRDA Data Stream Encryption CG381.0

Notes:
As a server, DB2 for z/OS and OS/390 Version 7 accepts encrypted userIDs and
passwords (the userIDs and passwords are encrypted using 56-bit DES with a shared
private key generated using the Diffie-Hellman key distribution algorithm). However, DB2
for z/OS V7 requesters always send userIDs and passwords in clear text. Also,
security-sensitive user data is always sent in the clear.
To achieve more effective security in a distributed computing environment, DB2 for z/OS
Version 8 provides the ability to authenticate via encrypted userID, or encrypted userID and
password, and provides support for encrypting security-sensitive data, based on the
security option specified.
Prior to Version 8, DB2 for z/OS servers provided software support for the DES decryption
by loading a required BSAFE service (licensed by IBM) into the distributed address space.
DDF directly invoked the BSAFE service for userid-password decryption and
Diffie-Hellman services.
In Version 8, the encrypted security mechanisms will use the z/OS Integrated
Cryptographic Service Facility (ICSF) for encryption, decryption, and Diffie-Hellman


services. If ICSF is not installed and configured properly, then DB2 will continue to use the
existing BSAFE services, but only for the security mechanisms supported by DB2 servers
in prior releases.
Two new DRDA security options are added to the SECURITY_OUT column of
SYSIBM.IPNAMES table:
• The option 'D' implies that the userID and security-sensitive data are encrypted.
• The option 'E' implies that the userID, password, and security-sensitive data are
encrypted.
The SECURITY_OUT column option 'P' (Password) is modified to encrypt userID and
password, if the server supports encryption. If the server does not support encryption, then
the userID and password will flow in the clear as before.
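For example, to request full encryption ('E') on outbound connections over a given link, the CDB row can be updated as follows; a sketch, with the LINKNAME value purely illustrative:

```sql
UPDATE SYSIBM.IPNAMES
   SET SECURITY_OUT = 'E'
 WHERE LINKNAME = 'DB8BLINK';
```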
For performance reasons, data stream encryption does not encrypt the entire network
stream. Only security-sensitive user data items are encrypted. The following DRDA
objects are encrypted:
• SQL statements that are being prepared, executed, or bound into an RDB package.
• SQL Program Variable Data consisting of input data to SQL statement during open or
execute. This also includes a description of the data.
• SQL Data Reply Data consisting of output data from RDB processing of an SQL
statement. It also includes a description of the data.
• Query Answer Set Data consisting of the answer set data resulting from a query.
• Input or output LOB data.
• Description of the data returned from the server during DESCRIBE.
The RDB name, package name, section, consistency token, etc. are not encrypted. The
SQLCA is not encrypted.
DB2 Version 8, as a DRDA application requester, supports the following DRDA
authentication mechanisms, if TCP/IP protocols are used:
• Encrypted User ID and encrypted password
• Encrypted User ID and encrypted security-sensitive data
• Encrypted User ID, encrypted password, and encrypted security-sensitive data
DB2 Version 8, as a DRDA application server, supports the following DRDA authentication
mechanisms if TCP/IP protocols are used:
• Encrypted User ID and encrypted password
• Encrypted User ID and encrypted security-sensitive data
• Encrypted User ID, encrypted password, and encrypted security-sensitive data.
Encryption Mechanism
During connect processing, requester and server connection keys are exchanged and a
shared private key is generated. The connection keys and the shared private key are


generated using the standard Diffie-Hellman key distribution algorithm. The 56-bit encryption
key is generated from the shared private key.
Integrated Cryptographic Services Facility
The Integrated Cryptographic Services Facility is a software element of z/OS that works
with a required hardware cryptographic feature and the Security Server (RACF) to provide
secure, high-speed cryptographic services in the z/OS environment. ICSF supports
cryptography by IBM's Common Cryptographic Architecture (CCA) which is based on the
DES algorithm. See the z/OS ICSF Administrator's Guide, SA22-7521, for more
information.
Restrictions:
• DRDA data stream encryption is only supported when using TCP/IP connections. The
function will not be supported over SNA connections.
• A DB2 for z/OS requester can support the DRDA data stream encryption security
mechanisms only if z/OS ICSF is installed, configured, and active. If ICSF is not
installed and configured properly, then the DB2 for z/OS server will use the existing
BSAFE services only for the encryption security mechanisms supported in prior
releases, and the DB2 z/OS server will not support data stream encryption.
• In addition, this function is only available in new function mode.


6.5 Other Network Enhancements


Other Network Enhancements


Terminology change
"Type 1 Inactive Thread" is now "Inactive DBAT"
"Type 2 Inactive Thread" is now "Inactive Connection"
"Pooled DBAT" is a DBAT that is not associated with a connection
VTAM conversation allocation requests can now timeout (like TCP/IP)
DRDA query blocks larger than 32K
Does not apply to DB2 Private Protocol

Add DDF accounting string to RRSAF


New-function mode SQL cannot be used in DB2 Private Protocol
Write accounting record if KEEPDYNAMIC(YES)
DB2 Universal Driver for SQLJ and JDBC
-DISPLAY LOCATION command change


Figure 6-10. Other Network Enhancements CG381.0

Notes:
In this topic, we list a number of other V8 enhancements that are related to network
computing.

Terminology Changes for Inactive Thread Support


DB2 Version 8 uses the term “Inactive DBAT” instead of “Type 1 Inactive Thread”, and
uses the term “Inactive Connection” instead of “Type 2 Inactive Thread”. The DBAT that is
decoupled from the connection, when the connection becomes inactive, is called a “pooled
DBAT”, as it returns to a pool of DBATs where it can be reused by other connections.
These terms are much more descriptive of the actual status of the threads and connections,
and bring the terminology more in line with DB2 UDB on other platforms.
The DB2 Version 8 installation clist panels and DB2 documentation reflect this new
terminology.


Time-out for Allocate Conversation Requests


In DB2 for z/OS Version 8, every three minutes DDF searches for threads waiting for a
VTAM allocate conversation request to complete. When an allocate request has waited for
a session for more than three minutes, DDF issues a deallocate abend conversation to
VTAM to force VTAM to abnormally terminate the request. The remote SQL statement fails
with SQLCODE -904 and reason code 00D31033.
This is an indicator that there may be a network problem, and the network administrator
should be notified of the communication failure.

Larger Query Blocks


A query block is a group of rows that fit into a (query) block and are sent as a single
network message. The default query block size used by DB2 in Version 7 is 32K. The
number of rows that DB2 puts into a query block depends on the row length, as well as
the OPTIMIZE FOR n ROWS clause. Blocking is used by both DB2 private protocol and
DRDA. When blocking is not used, DB2 sends a single row across the network.
To support larger network bandwidths and customer requirements to control the amount
of data returned on each network request, the DRDA architecture has been enhanced to
allow query blocks of up to 2 MB, so a requester can ask for a query block size of up to
2 MB. This allows a requester to better manage and control the size of blocks returned
from DB2. DB2 as a requester continues to always request a block size of 32K, but as a
server it can support any block size. DB2 Connect Version 8, by default, also continues to
use its standard RQRIOBLK size of 32767 bytes.
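On the DB2 Connect side, the query block size requested by CLI/ODBC applications is controlled by the RQRIOBLK keyword; a hedged sketch of a db2cli.ini fragment, with the value purely illustrative:

```
[COMMON]
RQRIOBLK=65535
```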

Limitations on SQL Statements Allowed in DB2 Private Protocol


It is the DB2 direction to sunset DB2 distributed private protocol and encourage
customers to use the Distributed Relational Database Architecture (DRDA) protocol. The
reasons are many; the most obvious ones are that DB2 private protocol is single-platform
only, its available functions are limited, applications are not easily portable, and its
performance is not on par with that of applications that use the DRDA protocol.
In Version 8, DB2 for z/OS limits the kinds of SQL statements allowed in an application
that uses DB2 private protocol to the SQL statements supported in V8 compatibility mode.
This is equivalent to limiting the SQL functionality to DB2 Version 7. Using SQL statements
that are only supported in new-function mode returns SQLCODE -142 when the application
is using DB2 private protocol.

Write Accounting Records at the Server when Using


KEEPDYNAMIC(YES)
In V7, there are a number of reasons that indicate to DB2 that it is time to produce a DB2
accounting record, for example, at thread deallocation time, or when a connection
becomes inactive (DDF). When using inactive connections (CMTSTAT=INACTIVE), a


connection cannot become inactive, and its associated DBAT cannot be pooled at commit
time, when it touches a package that is bound with KEEPDYNAMIC(YES). There is also no
accounting record produced, and the WLM enclave that governs the priority of the work
that the thread is doing remains active.
This is not an ideal situation for two reasons:
• An accounting record is only produced when the thread is deallocated, and it potentially
contains a large number of executions of the program. This makes monitoring and
charge-back accounting difficult. Users need an accounting record written at every
transaction to give them better granularity for monitoring and charge-back.
• The WLM enclave also remains active for a long time, which means that period goals
are not effective. Executions that happen after the thread has been running for a long
time will run in the lowest period, and do not get the service they need.
The intent of this enhancement is not to make the connection become inactive (Type 2
inactive thread). Since there are KEEPDYNAMIC sections that must be maintained and
remain associated with the client application's connection, the connection cannot become
inactive.
However, with this enhancement, when a UOW completes (after commit or rollback) in a
DRDA application that has touched a package that is bound with KEEPDYNAMIC(YES),
DB2 will now produce an accounting record and delete the WLM enclave, as if the
connection had become inactive, IF there is nothing else that would otherwise prevent the
connection from becoming inactive (like an open held cursor, or a declared temporary table
that has not been dropped). Private Protocol connections will not be affected.
When another request arrives from the client system, a new WLM enclave is created and a
new accounting interval is started.
With this enhancement, DB2 “pretends” the connection went inactive, even if there are
KEEPDYNAMIC sections present, as long as there are no open held cursors or active
declared temporary tables.
Users will notice that more accounting records are produced and more enclaves are being
established, where they were not before. When classifying DDF work in WLM, the use of
period goals for this type of applications can now be considered.

Add DDF Accounting String to RRSAF


A new accounting string parameter is provided on the RRS SIGNON, AUTH SIGNON,
CONTEXT SIGNON and the SET_CLIENT_ID function. This new parameter can be
specified with the existing client user ID, application program name, workstation name, and
accounting token parameters. This RRSAF function sets the character strings that are
passed to DB2 when the next SQL request is made.
The effect of this new parameter is that accounting strings can be set using an inexpensive
RRSAF function, and are available on the next SQL call.


DB2 UDB Universal Driver for SQLJ and JDBC


DB2 UDB for Multiplatforms Version 8 was the first member of the DB2 Family to introduce
the new JDBC driver, called the IBM DB2 Universal Driver for SQLJ and JDBC. This new
Java driver architecture is the future basis of all DB2-related Java efforts. It supports both
so-called JDBC type 2 and type 4 connectivity.
These drivers are currently supported on DB2 UDB for Linux, UNIX, and Windows and DB2
for z/OS Version 8. They are also available for DB2 for z/OS and OS/390 Version 7,
through the maintenance stream via UQ85607.
Please refer to Figure 4-2 "Universal Driver for SQLJ and JDBC" on page 4-6 for more
information.


-DISPLAY LOCATION
Prior to Version 8
Displays details for ALL Locations
-DISPLAY LOCATION
-DISPLAY LOCATION()
-DISPLAY LOCATION(*)

DB2 Version 8
Now behaves the same as -DISPLAY DATABASE command
-DISPLAY LOCATION()
Fails with message DSN9010I


Figure 6-11. -DISPLAY LOCATION CG381.0

Notes:
In DB2, prior to Version 3, the -DISPLAY LOCATION command allowed no parameters
(-DISPLAY LOCATION), and displayed all locations. In Version 3, the -DISPLAY
LOCATION command was enhanced to accept parameters and the DETAIL keyword was
added.
When this enhancement was introduced, the idea was that providing a blank parameter,
-DISPLAY LOCATION(), should provide the same output as before the parameter was
introduced. That is “-DISPLAY LOCATION()” should behave as “-DISPLAY LOCATION”.
Hence, -DISPLAY LOCATION, -DISPLAY LOCATION(), and -DISPLAY LOCATION(*)
display all locations, whereas adding a specific parameter (-DISPLAY LOCATION
(WTSCPOK)) displays only matching locations.
However, this behavior of -DISPLAY LOCATION with an empty parameter is different from
the behavior of -DISPLAY DATABASE with an empty parameter. If you do a -DISPLAY DB()
SPACENAME(), the command fails with message DSN9010I.
In an effort to make all commands behave in a more predictable and similar manner, DB2
Version 8 changes the semantics of the -DISPLAY LOCATION command with an empty


parameter. -DISPLAY LOCATION() will behave the same way as -DISPLAY DATABASE().
Both commands will fail with message DSN9010I.


Unit 7. Application Enablement

What This Unit Is About


You will learn that to enhance the application enablement capabilities
of DB2, Version 8 adds some stored procedure and UDF functions
and increases the ability to use ODBC with UNIX System Services.

What You Should Be Able to Do


After completing this unit, you should be able to:
• Describe the new functions for stored procedures and UDF
routines
• Relate the use of the new capabilities of ODBC for UNIX System
Services


List of Topics
General stored procedure and UDF enhancements
MAXFAILURES
Better WLM resource management

Deprecation of some stored procedure features


No more COMPJAVA stored procedures
No new DB2 managed stored procedures
SQL stored procedure enhancements
DB2 V8 Development Center integration
Implicit RRSAF connections
New CURRENT PACKAGE PATH and SCHEMA special register
ODBC for USS enhancements


Figure 7-1. List of Topics CG381.0

Notes:
In this unit, we discuss the DB2 for z/OS Version 8 enhancements that are related to
applications enablement. There are numerous enhancements in this area, and they usually
fit into a number of different categories. We address some specific topics in this unit and
provide pointers to other units that discuss related topics as well.


7.1 General Stored Procedure and UDF Enhancements


In this topic, we use the term stored routines. This signifies both stored procedures and
user-defined functions (UDFs).


Stored Procedure and


UDF Enhancements (1 of 2)
Specify maximum number of failures at stored routine level
More granular control over stored procedure and user-defined function
failure management
User can specify for each stored procedure or user-defined function
how many times a routine can experience failures before it is stopped
Failure is a program exception or unusual termination
Specified at routine level, instead of subsystem level
(MAX ABEND COUNT - DSNZPARM STORMXAB)
Specified on CREATE/ALTER FUNCTION/PROCEDURE
STOP AFTER SYSTEM DEFAULT FAILURES
STOP AFTER n FAILURES
CONTINUE AFTER FAILURE
Does not apply to sourced and SQL scalar UDFs


Figure 7-2. Stored Procedure and UDF Enhancements (1 of 2) CG381.0

Notes:
In V7, you have a DB2 DSNZPARM called STORMXAB that allows you to specify the
number of times a stored procedure or an invocation of a user-defined function is allowed
to terminate abnormally, after which SQL CALL statements for the stored procedure or
user-defined function are rejected. As it is a DSNZPARM, the value applies to all stored
routines.
In V8, you can specify that a stored routine is to be put in a stopped state after some
number of failures, at the stored routine level. Next we list the SQL statement options you
can use to do this.
STOP AFTER nn FAILURES
Specifies that this routine should be placed in a stopped state after nn failures. The value
nn should be an integer from 1 to 32767.
STOP AFTER SYSTEM DEFAULT FAILURES


Specifies that this routine should be placed in a stopped state after the number of failures
indicated by the system parameter MAX ABEND COUNT (DSNZPARM STORMXAB). This
is the default.
CONTINUE AFTER FAILURE
Specifies that this routine should not be placed in a stopped state after any failure.
These options must not be specified for SQL functions, or sourced functions (SQLSTATE
42849, SQLCODE -20102).
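As a sketch (the procedure name is hypothetical), the option can be set on an existing routine with ALTER PROCEDURE:

```sql
ALTER PROCEDURE MYSCHEMA.MYPROC
  STOP AFTER 5 FAILURES;
```

After the fifth failure, DB2 places the procedure in a stopped state; it can be restarted with the -START PROCEDURE command.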
To support this enhancement, a new column, MAX_FAILURE, is added to the
SYSIBM.SYSROUTINES catalog table, where:
nn        Specifies the allowable failures for this routine before it is stopped.
          The value nn can be an integer from 1 to 32767.
-1        Indicates that the DB2 DSNZPARM STORMXAB is used.
0 (zero)  The routine will never be stopped.

Adding the Number of Failures to DISPLAY Command Output


The output of the DISPLAY FUNCTION SPECIFIC and DISPLAY PROCEDURE
commands is changed to display the number of times the execution of a stored routine
failed. This makes it easier to monitor the well-being of your stored routines environment. A
sample command result is shown in Example 7-1.
Example 7-1. -DIS PROC Output with New FAIL Column

DSNX940I  -DB8A DSNX9DIS DISPLAY PROCEDURE REPORT FOLLOWS -
------- SCHEMA=SYSPROC
PROCEDURE  STATUS   ACTIVE QUED MAXQ TIMEOUT FAIL WLM_ENV
DSNWZP     STARTED       0    0    1       0    0 DB8AWLM1
DSNACCMG   STARTED       0    0    1       0    0 DB8AWLM1
DSNUTILS   STARTED       0    0    1       0    0 WLMENV1
DSNX9DIS DISPLAY PROCEDURE REPORT COMPLETE
DSN9022I -DB8A DSNX9COM '-DISPLAY PROC' NORMAL COMPLETION
______________________________________________________________________
The FAIL column indicates the number of times a stored routine has failed. DB2 resets this
value to 0 each time you execute the START FUNCTION or START PROCEDURE
command.


Stored Procedure and


UDF Enhancements (2 of 2)
Better WLM resource management for stored routines
Better resource management by exploiting capabilities of z/OS
workload manager
WLM determines appropriate resource utilization and recommends
changes in number of tasks operating in stored procedure address
space
TCBs are added/removed based on WLM recommendation
Attaching a new TCB is much cheaper than starting a new
address space
NUMTCB parm is used as a maximum
Recommendation: Specify a fairly large number for NUMTCB and
let WLM do the rest


Figure 7-3. Stored Procedure and UDF Enhancements (2 of 2) CG381.0

Notes:

Better WLM Resource Management for Stored Routines


DB2 V8 is enhanced to exploit Workload Manager (WLM) functions that will allow System
Resource Manager and Workload Manager to determine appropriate resource utilization
and recommend changes in the number of tasks operating in a WLM managed stored
procedure address space. DB2 will add or delete tasks based on WLM's
recommendations.
The NUMTCB parameter is provided to Workload Manager as a maximum task limit. We
recommend that the customer specify a reasonable number in NUMTCB, except when 1 is
required. For a full discussion of NUMTCB, see DB2 for z/OS Stored Procedures: Through
the CALL and Beyond, SG24-7083.
Note that V8 introduces an additional DSNZPARM related to stored procedures, called
MAX_ST_PROC. It specifies the maximum number of “active” stored procedures a thread
is allowed to have. The default is 2000. If you exceed the number allowed by this
DSNZPARM, an SQLCODE -904 is returned.


7.2 Deprecation of Some Stored Procedure Features


In this topic, we discuss two changes that may affect your existing stored procedures. In
V8:
• COMPJAVA stored procedures are no longer supported
• You cannot create new DB2-established stored procedures, or alter stored procedures
to become DB2-established stored procedures.


Other Stored Procedure Related Changes


LANGUAGE COMPJAVA stored procedures
COMPJAVA uses HPJ (High Performance Java compiler)
VisualAge for Java no longer supports compiled Java link library files
No longer supported in V8
Use LANGUAGE JAVA instead
With better performance as well
Deprecation of DB2-established stored procedures
Cannot create new DB2-established stored procedure in V8
The 'NO WLM ENVIRONMENT' option is removed (SQLCODE -199)
Existing DB2-established stored procs continue to run
After ALTER PROC to WLM-managed, you cannot ALTER it back
Recommendation: Migrate all your stored procs to be WLM managed


Figure 7-4. Other Stored Procedure Related Changes CG381.0

Notes:
As the HPJ (High Performance Java) compiler is no longer supported, DB2 can no longer
support LANGUAGE COMPJAVA stored procedures; V8 does not allow you to create or
run them.

Deprecation of DB2-established Stored Procedures


DB2 Version 8 removed the “NO WLM ENVIRONMENT” option on the CREATE
PROCEDURE statement. This means that you can no longer create any new
DB2-established (sometimes also called DB2-managed) stored procedures in V8.
However, if you have existing DB2-established stored procedures, they continue to run in
V8. You just cannot create any new ones (or alter WLM-managed stored procedures back
to DB2-managed).
Therefore, the recommendation is to convert to WLM-managed stored procedures as soon
as possible.


Migration to LANGUAGE JAVA
Migrate SP from LANGUAGE COMPJAVA to LANGUAGE JAVA
Ensure WLM environment is set up and required JVM installed
Use ALTER PROCEDURE to change LANGUAGE and WLM
ENVIRONMENT
(EXTERNAL NAME clause must also be specified;
DB2 must verify it, even if it is not changed)
.class file identified in EXTERNAL NAME clause must be either:
Contained in a JAR installed to DB2 with an invocation of the
INSTALL_JAR stored procedure, or
In a directory named in the CLASSPATH ENVAR of the data set
named on the JAVAENV DD card of the WLM SP JCL

Example:
ALTER PROCEDURE MINE.JAVASP
LANGUAGE JAVA
EXTERNAL NAME 'mine.display.main'
WLM ENVIRONMENT WLMENVJ;


Figure 7-5. Migration to LANGUAGE JAVA CG381.0

Notes:
As mentioned earlier, after migrating to Version 8, you can no longer define or run
COMPJAVA stored procedures. The only type of Java stored procedures that are
supported are LANGUAGE JAVA (interpreted Java) stored procedures. You can convert
LANGUAGE COMPJAVA stored procedures to LANGUAGE JAVA by following these steps:
1. Ensure that the WLM environment is configured and that the required JVM is installed.
2. Use ALTER PROCEDURE to change the LANGUAGE and the WLM ENVIRONMENT.
The EXTERNAL NAME clause must also be specified. For example:
ALTER PROCEDURE MINE.JAVASP
  LANGUAGE JAVA
  EXTERNAL NAME 'mine.display.main'
  WLM ENVIRONMENT WLMENVJ;
3. Ensure that the .class file that is identified in the EXTERNAL NAME clause of the
ALTER PROCEDURE is present in one of the following places:
- In a JAR that was installed to DB2 by an invocation of the INSTALL_JAR stored
procedure


- In a directory in the CLASSPATH ENVAR of the data set that is named on the
JAVAENV DD statement of the WLM stored procedures address space JCL


Migrating to WLM-managed SP
Define JCL procedures for the WLM-managed SP address spaces
Define application environments in WLM for these SPs
Stop currently running DB2-managed SP (-STO PROC(...))
ALTER PROCEDURE ... WLM ENVIRONMENT ...
Relink your stored procedures with the RRS attach DSNRLI
(Manually start WLM-managed SP address space in case you are
still running in WLM compatibility mode)
Remember that z/OS V1.3 requires WLM goal mode
Restart the procedure -STA PROC(...)


Figure 7-6. Migrating to WLM-managed SP CG381.0

Notes:
If you have existing stored procedures that use DB2-established address spaces, you need
to move as many as possible to a WLM environment. To move stored procedures from a
DB2-established environment to a WLM-established environment, follow these steps:
1. Define JCL procedures for the WLM stored procedures address spaces. Member
DSNTIJMV of data set DSNxxx.SDSNSAMP contains sample JCL procedures for
starting WLM-established address spaces.
2. Define WLM application environments for groups of stored procedures and associate a
JCL startup procedure with each application environment.
3. Enter the DB2 command STOP PROCEDURE(proc-name) to stop the DB2-established
stored procedure that you are about to change.
4. For each stored procedure, execute ALTER PROCEDURE with the WLM
ENVIRONMENT parameter to specify the name of the application environment.
5. WLM managed stored procedures use Resource Recovery Services attachment facility
(RRSAF). Therefore you must relink all of your existing DB2-established stored


procedures with DSNRLI, the RRSAF language interface module. See the linkage
editor control statements below for an example:
//SYSLIN DD *
ENTRY MYSTPROC
REPLACE DSNALI(DSNRLI)
INCLUDE SYSLMOD(MYSTPROC)
NAME MYSTPROC(R)
6. If WLM is operating in compatibility mode, start the new WLM-established stored
procedures address spaces by using this z/OS command:
START address-space-name

Note: If you make these changes under DB2 V7, and z/OS is at V1.2 or below and is
running in WLM compatibility mode, you must manually start the new address spaces. If
you do this conversion when you are already running in V8, you will be in goal mode,
because DB2 V8 requires z/OS V1.3 (or above) and z/OS V1.3 only runs in goal mode.
7. If WLM is operating in goal mode, the address spaces start automatically.
8. Restart the stored procedure in DB2, using the -START PROCEDURE(proc-name)
command.
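Pulled together, the DB2 statements and commands from the steps above look like this sketch (MYPROC and WLMENV1 are placeholder names, not from the course material):

```sql
-- Step 3: stop the DB2-established stored procedure
-STOP PROCEDURE(MYPROC)

-- Step 4: associate the procedure with a WLM application environment
ALTER PROCEDURE MYPROC
  WLM ENVIRONMENT WLMENV1;

-- Step 8: make the procedure available again
-START PROCEDURE(MYPROC)
```

The -STOP PROCEDURE and -START PROCEDURE lines are DB2 commands, not SQL; they are shown here only to keep the whole sequence in one place.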


Unit 7.3 SQL Stored Procedure Enhancements


SQL Stored Procedure Enhancements


Benefits
Enhances usability and power of SQL procedure language (PSM)
DB2 Family compatibility
Conforms to SQL standards

V8 Enhancements
New SQL procedure statements to return status information
from SQL procedure to the calling application
RETURN statement
SIGNAL/RESIGNAL support
GET DIAGNOSTICS
ITERATE
Enhanced LOB and variable support
With V8 SQL statement limit extended to 2 MB
(SQL procedure must be completely stated in a single SQL statement)
Integrated debugger


Figure 7-7. SQL Stored Procedure Enhancements CG381.0

Notes:
DB2 Version 8 introduces many enhancements to SQL stored procedures, sometimes
called PSM (Persistent Stored Modules). Numerous enhancements were introduced to
enhance DB2 Family compatibility and conformance to the SQL standards. This will help
people to port existing SQL stored procedures to the zSeries platform.
These enhancements are discussed in more detail in the following topics:
• RETURN statement
• SIGNAL/RESIGNAL support
• GET DIAGNOSTICS support
• ITERATE statement support
• Enhanced LOB and variable support
• Long SQL statements (up to 2 MB)
• Support for using the integrated debugger with SQL stored procedures


RETURN Statement (1 of 2)
Before Version 8, two methods available for returning status
information from an SQL procedure
Define extra parameter for status - CREATE PROCEDURE statement
includes an additional parameter (OUT or INOUT) for status information
as part of parameter list. Invoking application must provide this
additional parameter on CALL statement.
Leave conditions (errors or warnings) unhandled - Any conditions not
handled in procedure are returned to caller in SQLCA
V8 provides an additional method to return status information to
caller
Return an integer value to the invoking application via new RETURN
statement that you can code in the SQL procedure
Compatibility with other platforms
GET DIAGNOSTICS statement (extended) to return status information
from a RETURN statement of an SQL procedure


Figure 7-8. RETURN Statement (1 of 2) CG381.0

Notes:
Before V8, there were two methods for returning error information from an SQL stored
procedure.
The first method was to use an additional parameter (OUT or INOUT) that gets passed
back from the stored procedure to the caller. Invoking applications must be aware of the
existence of this extra parameter.
The second approach was to leave error or warning conditions unhandled. Any conditions
not handled by the procedure are returned to the caller via the SQLCA, and the calling
program must deal with the problem. (This support was introduced via PQ56323 for V6 and
V7. Before this APAR, unhandled errors on SQL statements issued inside SQL procedures
were not returned to the caller.)
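As an illustration of the first method (the procedure name, table, and logic here are invented for the example), the status travels back through an extra OUT parameter:

```sql
-- Hypothetical pre-V8 style: the last OUT parameter carries the status.
CREATE PROCEDURE UPDATE_SALARY
      (IN  P_EMPNO  CHAR(6),
       IN  P_RAISE  DECIMAL(5,2),
       OUT P_STATUS INTEGER)       -- 0 = OK, -1 = employee not found
  LANGUAGE SQL
BEGIN
  DECLARE CONTINUE HANDLER FOR NOT FOUND
    SET P_STATUS = -1;
  SET P_STATUS = 0;
  UPDATE EMP SET SALARY = SALARY * (1 + P_RAISE / 100)
    WHERE EMPNO = P_EMPNO;
END
```

The invoking application must supply the extra parameter on its CALL statement, for example CALL UPDATE_SALARY('000010', 5.00, :status).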


RETURN Statement (2 of 2)
For SQL procedure, it returns an integer status value to the
invoking application

Example: RETURN value of 0 if successful, and -2 if not
BEGIN
  ...
  GOTO FAIL;
  ...
  SUCCESS: RETURN;
  FAIL: RETURN -2;
END

The caller may access the value using
RETURN_STATUS of GET DIAGNOSTICS
Directly from SQLCA in SQLERRD(0)
Return value parameter marker in the escape clause CALL syntax in
CLI or ODBC applications

Additional GET DIAGNOSTICS information
MESSAGE_TEXT clause provides the current message text
MESSAGE_LENGTH clause used to obtain length of the current
message text

For SQL scalar functions, it returns the result of the function (V7)


Figure 7-9. RETURN Statement (2 of 2) CG381.0

Notes:
DB2 Version 8 introduces a third method: the use of the RETURN statement. The
statement is also available on other platforms.
You can use the RETURN statement in an SQL procedure to return an integer status value.
If you include a RETURN statement, DB2 sets the SQLCODE in the SQLCA to 0 and the
caller must retrieve the return status of the procedure in either of the following ways:
• By using the RETURN_STATUS item of GET DIAGNOSTICS statement to retrieve the
return value of the RETURN statement
• By retrieving SQLERRD(0) of the SQLCA, which contains the return value of the
RETURN statement
If you do not include a RETURN statement in an SQL procedure, by default, DB2 sets the
return status to 0 for an SQLCODE that is 0 or positive and sets it to -1 for a negative
SQLCODE.
The MESSAGE_TEXT clause of GET DIAGNOSTICS can be used to obtain additional
information about the current message text (the first 70 bytes are also available in the
SQLERRMC field of the SQLCA).
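As a sketch of the V8 approach (the procedure, table, and variable names are made up for the example):

```sql
-- Procedure returns an integer status with the new RETURN statement.
CREATE PROCEDURE CHECK_STOCK (IN P_PARTNO INTEGER)
  LANGUAGE SQL
BEGIN
  DECLARE V_QTY INTEGER DEFAULT 0;
  SELECT QUANTITY INTO V_QTY FROM STOCK WHERE PARTNO = P_PARTNO;
  IF V_QTY > 0 THEN
    RETURN 0;       -- in stock
  ELSE
    RETURN -2;      -- out of stock
  END IF;
END

-- In the calling SQL procedure, the status is retrieved like this:
-- CALL CHECK_STOCK(12345);
-- GET DIAGNOSTICS V_STATUS = RETURN_STATUS;
```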


Note that the RETURN statement is also used in SQL scalar functions. In that case, it is not
used to pass back a return code, but to pass back the result of the SQL scalar function.


SIGNAL/RESIGNAL Statement
Allow notification of an application when an exception occurs in
SQL procedure so that program can take action (that is, terminate)
Other DB2 platforms support SIGNAL/RESIGNAL statements within SQL
procedures
Part of the SQL standard
Before Version 8
SQL stored procedure cannot signal to the calling program that an exception
occurred
With Version 8
SIGNAL/RESIGNAL statements allow SQL procedure to specify a specific
SQLSTATE and message text to raise a condition within the SQL procedure
Existing support for SIGNAL statement within trigger body unchanged


Figure 7-10. SIGNAL/RESIGNAL Statement CG381.0

Notes:
Before V8, SQL procedures did not have an easy way to signal the calling program with a
specific message and SQLSTATE that an error occurred (unless you use a separate
parameter, of course).
DB2 V8 allows the calling application to be notified with a specific SQLSTATE and provide
a message text, when an error occurs in the SQL procedure. To enable this feature, two
new statements are introduced for the SQL procedure language:
• SIGNAL
• RESIGNAL
Both statements are explained in more detail in the following topics.
Note: The existing support for the SIGNAL statement inside a trigger body is unchanged.


SIGNAL Statement (1 of 3)
Used to signal an exception condition (error or warning condition)
Causes an error or warning to be returned with the specified
SQLSTATE, along with optional message text
SQLCODE depends on the specified SQLSTATE
SQLCODE set to +438, for SQLSTATE class of '01' or '02'
SQLCODE set to -438, otherwise
An optional MESSAGE_TEXT can be specified
String returned in SQLERRMC of SQLCA (first 70 bytes)
If string > 70 bytes, truncated without warning
Untruncated message text available with GET DIAGNOSTICS statement
GET DIAGNOSTICS statement
MESSAGE_TEXT clause provides the current message text
MESSAGE_LENGTH clause to obtain length of the current message text


Figure 7-11. SIGNAL Statement (1 of 3) CG381.0

Notes:
In DB2 for z/OS Version 8, you can use the SIGNAL statement anywhere in an SQL
procedure to set a specific SQLSTATE along with an optional message text.
In the following code fragment, DB2 generates an SQLSTATE 23503 (SQLCODE -530)
when you attempt to insert an order for a missing customer. You can intercept the error in a
handler, and pass a more meaningful message back to the caller as the code fragment in
Example 7-2 shows.
Example 7-2. Using SIGNAL

DECLARE EXIT HANDLER FOR SQLSTATE VALUE '23503'
  SIGNAL SQLSTATE '75001'
  SET MESSAGE_TEXT = 'Customer is unknown';

INSERT INTO ORDERS (....)
  VALUES (....);
______________________________________________________________________


As shown in the example above, instead of returning a general SQLSTATE 23503, the
procedure returns a specific SQLSTATE 75001, and a more meaningful message
indicating that the customer is not found. The provided message text is returned in the
SQLCA SQLERRMC field (up to 70 bytes), or the full message can be obtained by using
the MESSAGE_TEXT clause on the GET DIAGNOSTICS statement.
The capability to set a specific SQLSTATE in case of an error is useful for packaged
applications such as DB2 extenders, which have their own SQLSTATEs that they want to
return to the invoking application.
Note that when you use SIGNAL (or RESIGNAL) to set the SQLSTATE, the value of
SQLCODE returned to the invoking application is a constant (you cannot set it yourself). It
is based on the class code (first 2 bytes) of the SQLSTATE:
• Class code 00 is not allowed
• Class code 01 or 02 causes SQLCODE +438
• All other class codes cause SQLCODE -438
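To illustrate the mapping (the SQLSTATE values and message texts here are arbitrary choices for the example):

```sql
SIGNAL SQLSTATE '01999'            -- class '01': caller sees SQLCODE +438 (warning)
  SET MESSAGE_TEXT = 'Order accepted with remarks';

SIGNAL SQLSTATE '75001'            -- class '75': caller sees SQLCODE -438 (error)
  SET MESSAGE_TEXT = 'Customer is unknown';
```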


SIGNAL Statement (2 of 3)
SIGNAL statement - schematic flow

Is the SIGNAL in the procedure body (not in a handler)?
  Yes: Does a handler exist for the condition?
    Yes: Activate the handler.
    No: Warning? Yes: continue with next statement. No: terminate the procedure.
  No (SIGNAL is in a handler):
    Warning? Yes: continue with next statement. No: terminate the procedure.


Figure 7-12. SIGNAL Statement (2 of 3) CG381.0

Notes:
The figure above shows the schematic flow of what happens depending on where the
SIGNAL statement appears in the SQL procedure and whether it signals an error or a
warning:
• If the SIGNAL is in the procedure body, but not part of a handler, and a handler exists,
the handler is activated.
• If the SIGNAL is in the procedure body and there is no handler defined for this
condition, then the procedure:
- Continues if a warning is signaled
- Exits if an exception is signaled
• If SIGNAL is part of a handler, then the procedure:
- Continues if a warning is signaled
- Exits if exception is signaled


SIGNAL Statement (3 of 3)
Example:

CREATE PROCEDURE SUBMIT_ORDER
      (IN ONUM INTEGER, IN CNUM INTEGER, IN PNUM INTEGER,
       IN QNUM INTEGER)
  LANGUAGE SQL
  MODIFIES SQL DATA
BEGIN
  DECLARE EXIT HANDLER FOR SQLSTATE VALUE '23503'
    SIGNAL SQLSTATE '75002'
    SET MESSAGE_TEXT = 'Customer number is not known';
  INSERT INTO ORDERS (ORDERNO, CUSTNO, PARTNO, QUANTITY)
    VALUES (ONUM, CNUM, PNUM, QNUM);
END


Figure 7-13. SIGNAL Statement (3 of 3) CG381.0

Notes:
In the figure above, we provide a small example of the usage of the SIGNAL statement.
Assume that a PK-FK relationship exists between ORDERS and CUSTOMERS.
SQLSTATE 23503 indicates that there is no entry in the PK table (CUSTOMERS) for the FK
table (ORDERS) row that we are inserting. Instead of returning SQLSTATE 23503, we
return an SQLSTATE 75002, and a helpful message, “Customer number is not known”.
Note: The SQLSTATE ‘value’ can be both an sqlstate-string-constant, as in the visual
above, or an SQL variable-name, declared within the compound-statement. The value of
the SQL variable has to be a valid 5-byte SQLSTATE.


RESIGNAL Statement (1 of 2)
Used within a handler to resignal an exception condition
Causes an error or warning to be returned with the specified
SQLSTATE, along with optional message text
Set SQLSTATE to a specific value
SQLCODE set to +438 if SQLSTATE class is '01' or '02'
SQLCODE set to -438 otherwise
An optional MESSAGE_TEXT can be specified
String returned in SQLERRMC of SQLCA (first 70 bytes)
If string > 70 bytes, truncated without warning
Untruncated message text available with GET DIAGNOSTICS statement
GET DIAGNOSTICS statement
MESSAGE_TEXT clause provides the current message text
MESSAGE_LENGTH clause to obtain length of the current message text


Figure 7-14. RESIGNAL Statement (1 of 2) CG381.0

Notes:
The RESIGNAL statement is (only) used inside a handler. It can be used to change a
previously encountered SQLSTATE into a new one that is more meaningful in the context,
or specific to a certain product. You can use the RESIGNAL statement within the body of
a handler as shown in Example 7-3.
Example 7-3. Using RESIGNAL

DECLARE OVERFLOW CONDITION FOR SQLSTATE VALUE '22003';

DECLARE EXIT HANDLER FOR OVERFLOW
  RESIGNAL SQLSTATE '22375'
  SET MESSAGE_TEXT = 'Attempt to divide by zero';
______________________________________________________________________
Note that when you use RESIGNAL (or SIGNAL) to set the SQLSTATE, the value of
SQLCODE returned to the invoking application is a constant (you cannot set it) based on
the class code (first 2 bytes) of the SQLSTATE:
• Class code 00 is not allowed


• Class code 01 or 02 causes SQLCODE +438
• All other class codes cause SQLCODE -438
As with the SIGNAL statement, you can also provide a message text. The provided
message text is returned in the SQLCA SQLERRMC field (up to 70 bytes), or the full
message can be obtained by using the MESSAGE_TEXT clause on the GET
DIAGNOSTICS statement.


RESIGNAL Statement (2 of 2)
Example:

CREATE PROCEDURE divide ( IN numerator INTEGER,
                          IN denominator INTEGER,
                          OUT divide_result INTEGER)
  LANGUAGE SQL CONTAINS SQL
BEGIN
  DECLARE overflow CONDITION FOR SQLSTATE '22003';
  DECLARE EXIT HANDLER FOR overflow
    RESIGNAL SQLSTATE '22375';
  IF denominator = 0 THEN
    SIGNAL overflow;
  ELSE
    SET divide_result = numerator / denominator;
  END IF;
END


Figure 7-15. RESIGNAL Statement (2 of 2) CG381.0

Notes:
In the visual above, we use the RESIGNAL statement in an EXIT handler. We can also use
it in a CONTINUE handler.
Note: The SQLSTATE ‘value’ can be both an sqlstate-string-constant, as in the visual
above, or an SQL variable-name, declared within the compound-statement. The value of
the SQL variable has to be a valid 5-byte SQLSTATE.
The full power of the RESIGNAL statement comes into its own when you use multiple
nested compound statements in your SQL procedure. Error codes raised at a deeper
nesting level can be analyzed and changed (via RESIGNAL) at a higher level. For more
elaborate examples, see DB2 SQL Procedural Language for Linux, UNIX, and Windows,
ISBN 0-13-100772-6.
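A sketch of that pattern, with invented SQLSTATE values: a handler in the inner compound statement raises a condition, and a handler at the outer level translates it with RESIGNAL:

```sql
OUTER_BLOCK: BEGIN
  DECLARE EXIT HANDLER FOR SQLSTATE '75001'
    RESIGNAL SQLSTATE '75099'
    SET MESSAGE_TEXT = 'Order processing failed';
  INNER_BLOCK: BEGIN
    DECLARE EXIT HANDLER FOR SQLSTATE VALUE '23503'
      SIGNAL SQLSTATE '75001';     -- raised at the inner level
    INSERT INTO ORDERS (....)
      VALUES (....);
  END INNER_BLOCK;
END OUTER_BLOCK
```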


ITERATE Statement
ITERATE statement now also supported in DB2 for z/OS (V8)
Already supported in DB2 for iSeries and DB2 for LUW
ITERATE statement causes the program to return to the beginning
of a labeled loop
The label you specify must reference a FOR, LOOP, REPEAT, or WHILE
statement
ITERATE is now a reserved word in SQL statements

Example:
ins_loop:
LOOP
  FETCH hv_dept...
  IF hv_dept ^= 'D11' THEN
    ITERATE ins_loop;
  ELSEIF ...


Figure 7-16. ITERATE Statement CG381.0

Notes:
Example 7-4 shows the use of the ITERATE statement. An ITERATE statement causes the
flow of control to be passed back to the top of the LOOP statement, so you do not have to
use a GOTO statement.
Example 7-4. Using the ITERATE Statement

CREATE PROCEDURE ITERATOR ()
  LANGUAGE SQL MODIFIES SQL DATA
BEGIN
  DECLARE v_dept CHAR(3);
  DECLARE v_deptname VARCHAR(29);
  DECLARE v_admdept CHAR(3);
  DECLARE at_end INTEGER DEFAULT 0;
  DECLARE not_found CONDITION FOR SQLSTATE '02000';
  DECLARE c1 CURSOR FOR
    SELECT deptno, deptname, admrdept FROM department ORDER BY deptno;
  DECLARE CONTINUE HANDLER FOR not_found SET at_end = 1;
  OPEN c1;
  ins_loop:
  LOOP

    FETCH c1 INTO v_dept, v_deptname, v_admdept;
    IF at_end = 1 THEN LEAVE ins_loop;
    ELSEIF v_dept = 'D11' THEN ITERATE ins_loop;
    END IF;
    INSERT INTO department (deptno, deptname, admrdept)
      VALUES ('NEW', v_deptname, v_admdept);
  END LOOP;
  CLOSE c1;
END
______________________________________________________________________


Enhanced Label, LOB and Variable Support


In DB2 for z/OS V8, you can have a statement label at the
beginning of any statement within an SQL procedure
Enhances functionality of SQL procedures
Enhances DB2 Family compatibility (with LUW and iSeries)
SQL control statement

Label: SQL statement

LOB SQL variables are supported in DB2 for z/OS V8


LOB parameters already supported in V7
Now also as a variable, for example, DECLARE hv_book CLOB (2M);

Relaxing restrictions on variable names


In V8, SQL variables and parameters can have the same name
If SQL var and parm have same name, an unqualified reference to the
name is assumed to be the variable (since declared more locally than
parm)
In V8, SQL variables can be reserved words
When ambiguous, use delimited version, such as SET "PATH" = 'ABC';


Figure 7-17. Enhanced Label, LOB and Variable Support CG381.0

Notes:
In the following topics, we discuss support for enhanced labels, LOBs, and variables.

Enhanced Label Support


When SQL stored procedures were introduced in DB2 for OS/390, they included limited
support for statement labels within SQL procedures. More specifically, V6 and V7 only
support labels on the assignment statement, the compound statement, and the LOOP,
REPEAT, and WHILE statements within an SQL procedure.
The label provides a destination for the GOTO statement, for example:
label1: SET x = y;
...
IF y = 1 THEN GOTO label1; END IF;
With this enhancement, DB2 for z/OS introduces support for a statement label at the
beginning of any statement within an SQL procedure.
This enhances compatibility with both DB2 LUW and DB2 for iSeries.
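For example (a small sketch; the variable names are invented), a label can now also precede statements such as IF, which earlier releases did not allow:

```sql
check: IF v_qty < 0 THEN     -- label on an IF statement (new in V8)
  SET v_qty = 0;
END IF;
...
IF v_retry = 1 THEN
  GOTO check;
END IF;
```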


LOB SQL Variables


The initial support for SQL procedures in DB2 for OS/390 did not include support for either
LOB parameters or SQL variables. Version 7 added support for LOB parameters; however,
there was still no support for defining a LOB SQL variable within an SQL procedure. V8
adds support for LOB SQL variables. You can now declare variables in an SQL procedure
as CLOB, BLOB, or DBCLOB, as shown in Example 7-5.
Example 7-5. Declaring LOB SQL Variables

CREATE PROCEDURE PROCESS_BOOK (IN bookNumber INTEGER, OUT bookPages INTEGER)
  LANGUAGE SQL
BEGIN
  DECLARE v_numRecords INTEGER DEFAULT 1;
  DECLARE v_counter INTEGER DEFAULT 0;
  DECLARE deptNumber INTEGER DEFAULT 0;
  DECLARE v_book CLOB(2M);
  DECLARE v_binary BLOB(2M);
  DECLARE v_dbcs DBCLOB(2M);
  .....
END
______________________________________________________________________
DB2 LUW and DB2 for iSeries also support LOB data types in the definition of SQL
variables within SQL procedures.

Relaxed Restrictions on Variable Names


In the following topics, we describe some restrictions that have been relaxed in V8.
SQL Variable and Parameter Can Have the Same Name
An additional change is to remove a restriction that an SQL variable cannot have the same
name as a parameter for that procedure. This restriction was previously only enforced by
DB2 for OS/390; it was not enforced by the other DB2 platforms.
If an SQL variable has the same name as an SQL parameter, then an unqualified reference
to the name is assumed to be a variable. This makes sense because the SQL variable is
declared more locally than the parameter.
In the following procedure (Example 7-6), the SQL variable P1 takes precedence over the
SQL parameter of the same name because it is declared more locally than the parameter.
Example 7-6. Using the Same Name for a Variable and a Parameter

CREATE PROCEDURE CLAIRE2 (OUT P1 INTEGER, OUT P2 INTEGER) LANGUAGE SQL
RICK1: BEGIN
  DECLARE P1 INT;
  SET CLAIRE2.P1 = 5;
  SET RICK1.P1 = 7;
  SET P1 = 10;
  SET P2 = P1;
END
______________________________________________________________________


The values returned from this procedure are:

• P1: 5
• P2: 10

The results indicate that the SQL parameter P1 was only set by the assignment statement
that referred to it by the qualified name CLAIRE2.P1. The unqualified assignment of the
value 10 (implicitly qualified by RICK1) overwrote the value 7 that the qualified assignment
RICK1.P1 gave to the SQL variable P1; that value of 10 is then assigned to P2.
SQL Procedure Variable Names Can Be Reserved Words
In DB2 V7, the names of variables and parameters within SQL procedures cannot be
reserved words.
This enhancement increases DB2 Family compatibility with DB2 for LUW and DB2 for
iSeries.
In DB2 for z/OS V8, if an SQL variable with a name of a reserved word is used in a context
where its use would be ambiguous, then you specify the name of the variable as a
delimited identifier. The following coding (Example 7-7) is now allowed.
Example 7-7. Using Reserved Words in SQL Procedures

CREATE PROCEDURE P1 LANGUAGE SQL
BEGIN
  DECLARE PATH CHAR(8);
  SET PATH = 'ABC';   <-- This refers to the special register
  SET "PATH" = 'ABC'; <-- This refers to the SQL variable
END
______________________________________________________________________

Potential SET CURRENT PATH and CONNECT Behavior Change


Before V8, SET CURRENT PATH and CONNECT statements were not behaving as
documented in the manuals in some cases.
In V8, DB2 behaves as described in the SQL Reference regarding an unresolved name,
and whether it resolves to an identifier, SQL parameter, or SQL variable.
• In the SET PATH statement, the name is checked as an SQL parameter name or SQL
variable name. If not found as an SQL variable or SQL parameter name, it will then be
used as an identifier.
• In the CONNECT statement, the name is used as an identifier.
This way DB2 for z/OS behaves as documented, and the same way as DB2 for LUW and
DB2 for iSeries.
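A sketch of the V8 name resolution in SET PATH (the variable names are illustrative):

```sql
BEGIN
  DECLARE MYPATH VARCHAR(254) DEFAULT 'MYSCHEMA';
  SET PATH = MYPATH;      -- MYPATH is declared as an SQL variable,
                          -- so CURRENT PATH becomes 'MYSCHEMA'
  SET PATH = OTHERPATH;   -- no SQL variable or parameter OTHERPATH exists,
                          -- so the name is used as an identifier:
                          -- CURRENT PATH becomes "OTHERPATH"
END
```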


Unit 7.4 DB2 V8 Development Center Integration


DB2 UDB V8 Development Center (1 of 3)


Functionality
Evolution of Stored Procedure Builder
Support for entire family of DB2 servers
From mainframe point of view, it can be used
to create stored procedures
Enhanced z/OS support including specialized SQL IDs (package owner,
build owner, secondary ID and advanced build options)
Support for developing SQL and JAVA stored procedures on zSeries
Support for viewing live database tables, views, triggers, stored procedures
and user-defined functions
and more....


Figure 7-18. DB2 UDB V8 Development Center (1 of 3) CG381.0

Notes:
The DB2 Development Center (DC), included in the V8.1 UDB Application Development
Client (ADC) component, is the follow-on product to the DB2 Stored Procedure Builder
(SPB) in the DB2 V7.2 UDB Application Development Client.
Development Center supports the entire family of DB2 servers using the DRDA
architecture. It communicates with the DB2 UDB V8.1 distributed servers (Linux, UNIX, and
Windows), DB2 UDB for OS/390 V6 and V7 and DB2 UDB for z/OS V8, as well as currently
supported DB2 UDB releases on iSeries.
From a mainframe perspective, the tool can be used to build stored procedures.
Development Center supports creating SQL stored procedures on all supported versions of
DB2 for OS/390 and z/OS (currently V6, V7, and V8). Java stored procedures can be
created with Development Center on DB2 V7 and V8.
The Development Center Online Help is an excellent source for additional information on
the Development Center.


DB2 UDB V8 Development Center (2 of 3)
Testing and debugging of SP on zSeries
Test (invoke) stored procedure and user-defined function written in any
language
Saved object test settings, including parameter values and pre-execution
and post-execution SQL scripts
Via a wizard, you can create and generate code for new SQL and Java
stored procedures
Enhanced debugging of SQL stored procedures with variable value
change support using an integrated SQL debugger
New feature of wizard
Ability to insert code fragments into the generated code


Figure 7-19. DB2 UDB V8 Development Center (2 of 3) CG381.0

Notes:
Using the Development Center (DC), you can invoke and test stored procedures written in
any language. To make it easier to test, you can save object test settings, including parameter
values. You can also use pre- and post-execution SQL scripts.
Via a wizard, you can create and generate code for new SQL and Java stored procedures.
You can also import existing SQL and Java stored procedures.
You can remotely (from within Development Center) debug SQL stored procedures that
execute on DB2 UDB for z/OS servers using the SQL Debugger. The SQL Debugger is
integrated into various client development platforms including DB2 Development Center.
With the SQL Debugger, you can observe the execution of SQL procedure code, set break
points for lines, and view or modify variable values.
With the new wizard you have the ability to insert code fragments into the generated code.


DB2 UDB V8 Development Center (3 of 3)


Including SQL code fragments
Code fragments
User-defined sections of source code or comments inserted at
predefined locations in the generated source code
Text files that you create, which you and your team members can reuse
Especially useful when standard set of error handling functions, headers,
variable declarations, etc. included in all stored procedures for a given
project
Debugging
Tightly integrated into Development Center
Implementation of the SQL debugger includes:
Greater stability
Support for variable value change while debugging
Viewing sections of large variables, such as large objects (LOBs)


Figure 7-20. DB2 UDB V8 Development Center (3 of 3) CG381.0

Notes:
The new DC allows you to include your own code fragments into the generated code. You
can insert user-defined sections of source code (or comments) at pre-defined locations in
the generated code. These are text files that you create, and can be shared among a work
group, such as all developers. A good example of this would be a standard set of error
handling functions, headers, or variable declarations. This way, they can be coded once
and included by everybody.
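As a sketch of what such a reusable fragment might contain (the variable names here are hypothetical, not part of the product), a team-wide error-handling fragment for SQL stored procedures could look like this:

```sql
-- Hypothetical reusable code fragment: standard error-handling declarations
-- that Development Center would insert into each generated SQL procedure body.
DECLARE SQLCODE INTEGER DEFAULT 0;
DECLARE retcode INTEGER DEFAULT 0;
-- On any SQL error, remember the failing SQLCODE and continue, so the
-- procedure can return it in an output parameter.
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
  SET retcode = SQLCODE;
```

Because the fragment is a plain text file, every developer on the project picks up the same declarations without retyping them.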

Debugging
As mentioned before, you can use the integrated debugger to debug SQL stored
procedures on z/OS. The prerequisites and setup for debugging SQL stored procedures on
z/OS are:
• Workstation:
- DB2 Connect
- DB2 Development Center UDB V8.1.4 (FixPak 4)


• z/OS:
- DB2 UDB for z/OS Version 8
- Run installation job <hlq>.SDSNSAMP(DSNTPSMP)
- C Compiler
- Run installation job <hlq>.SDSNSAMP(DSNTIJSD).
This job provides six new DB2-supplied stored procedures. It is recommended to run
these stored procedures in the same WLM application environment (AE) as the
DB2-supplied stored procedures used by the SQL Assist set up in <hlq>.SDSNSAMP(DSNTIJMS).
This job includes support for the debugger providing:
• New DDL definitions
• New BINDs
• New authorizations
- A WLM procedure must be defined for executing the SQL stored procedure. This
WLM procedure optionally includes a //PSMDEBUG DD statement used to collect
information when debugging problems with the SQL Debugger. The //PSMDEBUG
statement defines a physical sequential data set with RECFM=VBA, LRECL=4096.
This data set should only be included in the WLM procedure when requested by IBM
Level 2, because the presence of the //PSMDEBUG statement causes SQL Debugger
diagnostic records to be written to it, which impacts performance.
Here are some reference Web documents:
• DB2 Development Center — The Next-Generation AD Tooling for DB2
http://www7b.software.ibm.com/dmdd/library/techarticle/0207alazzawe/0207alazzawe.html
• DB2 Integrated SQL Debugger IBM DB2 Stored Procedure Builder V7.2
http://www7b.software.ibm.com/dmdd/library/techarticle/alazzawe/0108alazzawe.html



7.5 Implicit RRSAF Connections


Implicit RRSAF Database Connections


Allow RRSAF applications to exploit implicit connections to
DB2 for z/OS
Similar to implicit connection support in CAF

Today only explicit RRSAF connection


Issue IDENTIFY, CREATE THREAD, SIGNON to establish a database
connection
(TERMINATE THREAD - TERMINATE IDENTIFY to end)

V8 implicit RRSAF connection


Just start issuing SQL statements and IFI calls
RRSAF will set up the connection for you
Subsystem name used depends on SSID in DSNHDECP
Plan name used depends on DBRM issuing first SQL call
Authorization ID used is that of the address space or ACEE
(if exists)


Figure 7-21. Implicit RRSAF Database Connections CG381.0

Notes:
With this enhancement, DB2 applications using RRSAF (RRS attach facility) can make
implicit connections to DB2, simply by including SQL statements or IFI calls. RRSAF will
make the required connection to DB2.

Comparing with CAF


A CAF (call attach facility) application can use two ways to connect to DB2, using an
explicit, or an implicit connection.
• When using an explicit connection, you issue a CONNECT and OPEN to connect to the
DB2. To disconnect, you issue a CLOSE and DISCONNECT.
• When using an implicit connection, a CAF application just issues SQL statements or IFI
calls. CAF establishes the implicit connection to DB2 using default values for
subsystem name (from DSNHDECP) and plan name (DBRM name associated with the
first SQL call).


In V7, an RRSAF application can only use explicit connections. It issues an IDENTIFY,
optionally a SIGNON, and a CREATE THREAD. To end a connection explicitly, you use
TERMINATE THREAD, TERMINATE IDENTIFY.
This enhancement changes RRSAF to allow the application to implicitly connect, just by
issuing SQL statements or IFI calls (similar to CAF). When an implicit connect is requested,
RRSAF will issue an IDENTIFY and CREATE THREAD using default values for the
subsystem name and the plan name.
• The default value for the subsystem name is the name specified by the SSID parameter in
DSNHDECP. RRSAF uses the installation default DSNHDECP, unless your own
DSNHDECP is in a library provided in a STEPLIB or JOBLIB concatenation, or in the
link list. In a data sharing group, the default subsystem name is the group attachment
name.
• The default value for the plan name will be the name of the database request module
(DBRM) associated with the module making the (first) SQL call. If your program can
make its first SQL call from different modules with different DBRMs, then you cannot
use a default plan name. You must use an explicit call using the CREATE THREAD
function.
• The authorization ID is set from the 7-byte user ID associated with the address space,
unless an authorized function has built an ACEE for the address space. If an authorized
function has built an ACEE, DB2 passes the 8-byte user ID from the ACEE.
If your application includes both SQL and IFI calls, you must issue at least one SQL call
before you issue any IFI calls. This ensures that your application uses the correct plan, as
described above for the plan name.
As before, you must make sure the RRSAF language interface load module, DSNRLI, is
available.
Tip: As with CAF, using an explicit RRSAF connection gives you more control over the
behavior of the database connection. If this is important for your application, you
should not use implicit connections.



7.6 New CURRENT PACKAGE PATH and SCHEMA Special Register


SET [CURRENT] SCHEMA


Problem: Today use CURRENT SQLID as implicit qualifier for dynamic SQL
However, using CURRENT SQLID also affects authorization:
CURRENT SQLID must be authorized to do dynamic CREATE, ALTER, GRANT, REVOKE
CURRENT SQLID must be primary or secondary auth ID (if not SYSADM or SYSCTRL)
Static SQL uses the BIND option QUALIFIER for this purpose
Only affects unqualified SQL

Solution: SET SCHEMA / CURRENT SCHEMA special register for dyn SQL
CURRENT SCHEMA is a special register of VARCHAR(128) with same value as
CURRENT SQLID at initialization time
Use SET SCHEMA to change
Only used as qualifier
Can be any value (not limited to primary or secondary auth ID like CURRENT SQLID)

SET {[CURRENT] SCHEMA | CURRENT_SCHEMA} [=]
    {schema-name | USER | host-variable | string-constant | DEFAULT}


Figure 7-22. SET [CURRENT] SCHEMA CG381.0

Notes:
Customers often run applications with different qualifiers in effect for unqualified names,
without having to have separate copies of the application code itself. They then deploy the
application in different environments (sometimes on the same DB2 subsystem) without
having to change any of the application code.
With DB2 for z/OS and OS/390 V7, an application can be coded without qualifiers for object
names (also known as unqualified SQL), and an implicit qualifier will be used for
unqualified names.
For static SQL statements, the DB2 QUALIFIER option for BIND can be used, to specify
the implicit qualifier for unqualified object names (except for those contexts which use the
SQL PATH to resolve the name of an object). The QUALIFIER bind option affects the
implicit qualification of unqualified object names. If the QUALIFIER bind option is not
specified, the OWNER of the plan or package is used as the implicit object qualifier for
static SQL statements.
For dynamic SQL statements, the CURRENT SQLID special register can be used to
specify the implicit qualifier for unqualified object names (except for those contexts which


use the SQL PATH to resolve the name of an object). However, unlike the QUALIFIER bind
option, the use of the CURRENT SQLID special register has other effects, as it is also used
for authorization checking on dynamic CREATE, ALTER, GRANT, and REVOKE
statements, and the value is also used as the owner (or definer) of objects created with
dynamic CREATE statements.
The current (V7) support for specifying an implicit qualifier for dynamic SQL statements
(CURRENT SQLID) is not quite the same as the support for static SQL statements (the
QUALIFIER bind option).
The bundling of qualification of object name with the authorization checking, and ownership
of objects is not an ideal solution for dynamic SQL statements.
Again DB2 for z/OS Version 8 comes to the rescue. V8 introduces a new SET (CURRENT)
SCHEMA SQL statement, and a new special register CURRENT SCHEMA. It is only used
to qualify object names in unqualified dynamic SQL statements (when
DYNAMICRULES(RUN) is in effect, the default).
The CURRENT SCHEMA special register is initialized with the value of the CURRENT
SQLID at initialization time. You can then change it in your application using the SET
SCHEMA SQL statement. You can set the CURRENT SCHEMA special register to any
valid string (as long as the string you provide is a VARCHAR(128)). This is different from
changing the CURRENT SQLID. There you have to specify your primary authorization
ID or one of your secondary authorization IDs (unless you are SYSADM or SYSCTRL). The
CURRENT SCHEMA special register is only used for qualifying unqualified SQL in your
program, nothing else.
Note: The QUALIFIER BIND option is used to qualify unqualified static SQL statement,
or dynamic statement when DYNAMICRULES(BIND) is in effect.
Currently the SET SCHEMA statement is not supported in the SQL procedure language.
Setting the CURRENT SCHEMA special register does not affect any other special register.
Therefore, the CURRENT SCHEMA is not to be included in the SQL path that is used to
resolve the schema name for unqualified references to function, procedures, and
user-defined types in dynamic SQL statements. To include the current schema value in the
SQL path, whenever the SET SCHEMA statement is issued, also issue the SET PATH
statement including the schema name from the SET SCHEMA statement.
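For example, a dynamic SQL application might combine the two statements as follows (the schema name PRODSCH is invented for illustration):

```sql
SET SCHEMA = 'PRODSCH';              -- unqualified table/view names in dynamic
                                     -- SQL now resolve as PRODSCH.xxxx
SET PATH = CURRENT PATH, 'PRODSCH';  -- also resolve unqualified UDFs,
                                     -- procedures, and distinct types there
SELECT COUNT(*) FROM EMP;            -- resolved as PRODSCH.EMP
```

Note that the SET SCHEMA statement changes only the implicit qualifier; authorization checking still uses CURRENT SQLID, exactly as the text above describes.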
When the name of the object to be created is specified as an unqualified name, in case of a
dynamic CREATE statement, the value of CURRENT SCHEMA must be the same as the
CURRENT SQLID special register. Otherwise an SQLCODE -20283 is issued, as shown in
Example 7-8.


Example 7-8. SQLCODE -20283

---------+---------+---------+---------+---------+---------+---------+---------+-
SET SCHEMA = 'TEST' ;
---------+---------+---------+---------+---------+---------+---------+---------+-
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0
---------+---------+---------+---------+---------+---------+---------+---------+-
CREATE TABLE TESTTB (COLA CHAR(10));
---------+---------+---------+---------+---------+---------+---------+---------+-
DSNT408I SQLCODE = -20283, ERROR: A DYNAMIC CREATE STATEMENT CANNOT BE
PROCESSED WHEN THE VALUE OF CURRENT SCHEMA DIFFERS FROM CURRENT SQLID
DSNT418I SQLSTATE = 429BN SQLSTATE RETURN CODE
DSNT415I SQLERRP = DSNXODDL SQL PROCEDURE DETECTING ERROR
DSNT416I SQLERRD = 2 0 0 -1 0 0 SQL DIAGNOSTIC INFORMATION
DSNT416I SQLERRD = X'00000002' X'00000000' X'00000000' X'FFFFFFFF'
X'00000000' X'00000000' SQL DIAGNOSTIC INFORMATION
---------+---------+---------+---------+---------+---------+---------+---------+-
______________________________________________________________________

CURRENT PACKAGE PATH Special Register
What is it? .....
Used for package (collection) resolution
Means for an application to specify a list of collections to
DB server as a search sequence (similar to PKLIST on
the BIND PLAN)
DB server (rather than application requester) can search
through list and find first package that exists with
specified package name
Control for applications that do not run under a DB2 plan
Benefits .....
Reduce network traffic and improve CPU/elapsed time
for applications
Allows nested procedure, user-defined function to be
implemented without concern for invoker's runtime
environment and allows multiple collections to be
specified
Easier to switch to/from JDBC and SQLJ


Figure 7-23. CURRENT PACKAGE PATH Special Register CG381.0

Notes:
CURRENT PACKAGE PATH is a new special register in DB2 V8. It is used when DB2 is
looking for a matching package (same name and version number as in the invoking program)
to load. Today, a plan can specify a search sequence of collections for DB2 to look in to
find a matching package, via the PKLIST BIND option. In V7, however, there is no PKLIST
equivalent for applications that do not run under a DB2 plan.
With the CURRENT PACKAGE PATH special register, DB2 V8 introduces a way for the
application to specify a list of collections for DB2 to look for a certain package. This
enhancement is especially important for SQLJ applications to provide a list of collections to
look for a matching package.
The advantage of the CURRENT PACKAGE PATH is that it contains a list of collections.
This is in contrast to the CURRENT PACKAGESET special register that can only contain
the name of a single collection. In the case where the package is not found in that
collection, the application has to change the CURRENT PACKAGESET special register
and try again.


By using CURRENT PACKAGE PATH, you can specify a list of collections for DB2 to look
in to find the package. Only after all collections have been searched unsuccessfully is an
SQLCODE -805 returned. This can be especially important in a distributed environment
where every time you need to change the CURRENT PACKAGESET special register, it
requires a trip across the wire. Because the CURRENT PACKAGE PATH can contain a list
of collections, only a single trip is required.

SET CURRENT PACKAGE PATH

SET CURRENT PACKAGE PATH [=] item [, item]...

    where each item is one of:
    collection-id | USER | CURRENT PACKAGE PATH | CURRENT PATH |
    CURRENT_PATH | host-variable | string-constant

USER, CURRENT PACKAGE PATH, and CURRENT PATH can only be specified once
Note that you can specify the existing CURRENT PACKAGE PATH
and concatenate additional collections, ahead or after
SET :oldCPP = CURRENT PACKAGE PATH;
SET CURRENT PACKAGE PATH = CURRENT PACKAGE PATH, prodcoll ;
CALL PRODSP (:hv1, :hv2);
SET CURRENT PACKAGE PATH = :oldCPP;


Figure 7-24. SET CURRENT PACKAGE PATH CG381.0

Notes:
The SET CURRENT PACKAGE PATH statement assigns a value to the CURRENT
PACKAGE PATH special register. The statement must be embedded in an application. It
is an executable statement that cannot be dynamically prepared. The CURRENT
PACKAGE PATH is a VARCHAR(4096) value.
No validation that the collections exist is made at the time that the CURRENT PACKAGE
PATH special register is set. For example, a collection ID that is misspelled is not detected,
and this could affect the way subsequent SQL operates. At package execution time,
authorization to the specific package is checked, and if this authorization check fails, the
next collection is checked.
The SET CURRENT PACKAGE PATH statement is executed by the database server to
which the application is currently connected, and is therefore classified as a non-local SET
statement in DRDA. The SET CURRENT PACKAGE PATH statement requires a new level
of DRDA support:


• If the application is connected to the local server when the SET CURRENT PACKAGE
PATH statement is issued, the CURRENT PACKAGE PATH special register at the local
server is set.
• Otherwise, when the application is connected to a remote server when the SET
CURRENT PACKAGE PATH is issued, the CURRENT PACKAGE PATH special
register at the remote server is set.

Combining CURRENT PACKAGE PATH and CURRENT PACKAGESET


These are the rules for combination:
• If you set the special register CURRENT PACKAGE PATH or CURRENT
PACKAGESET, DB2 skips the check for programs that are part of a plan and uses the
values in these registers for package resolution.
When CURRENT PACKAGE PATH is set, the server that receives the request ignores
the collection that is specified by the request and instead uses the value of CURRENT
PACKAGE PATH at the server to resolve the package. Specifying a collection list with
the CURRENT PACKAGE PATH special register can avoid the need to issue multiple
SET CURRENT PACKAGESET statements to switch collections for the package
search, as you would have to in V7.
• If you set CURRENT PACKAGE PATH, DB2 uses the value of CURRENT PACKAGE
PATH as the collection name list for package resolution. For example, if CURRENT
PACKAGE PATH contains the list COLL1, COLL2, COLL3, COLL4, then DB2 searches
for the first package that exists in the following order:
COLL1.PROG1.timestamp
COLL2.PROG1.timestamp
COLL3.PROG1.timestamp
COLL4.PROG1.timestamp
• If you set CURRENT PACKAGESET and not CURRENT PACKAGE PATH, DB2 uses
the value of CURRENT PACKAGESET as the collection for package resolution. For
example, if CURRENT PACKAGESET contains COLL5, then DB2 uses
COLL5.PROG1.timestamp for the package search.
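The difference can be sketched as follows (collection names COLL1 through COLL3 are invented for illustration; this follows the style of the slide example above, not a verbatim product sample):

```sql
-- V7 style: probe one collection at a time; on SQLCODE -805,
-- switch collections and re-execute the statement.
SET CURRENT PACKAGESET = 'COLL1';
-- ...execute; if the package is not found (-805), then:
SET CURRENT PACKAGESET = 'COLL2';
-- ...and try again.

-- V8 style: hand the whole search list to the server in one statement;
-- DB2 searches COLL1, then COLL2, then COLL3 for the package.
SET CURRENT PACKAGE PATH = COLL1, COLL2, COLL3;
```

In a distributed environment, the V7 pattern costs one network trip per collection switch, while the V8 pattern needs only one.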
Table 7-1 shows examples of the relationship between the CURRENT PACKAGE PATH
special register and the CURRENT PACKAGESET special register.
Table 7-1 Scope of SET CURRENT PACKAGE PATH statement

SQL Statements                              What happens
------------------------------------------  -----------------------------------------------
SET CURRENT PACKAGESET                      The collections in PACKAGESET determine
SELECT ...FROM T1 ...                       which package is invoked.

SET CURRENT PACKAGE PATH                    The collections in PACKAGE PATH determine
SELECT ...FROM T1 ...                       which package is invoked.

SET CURRENT PACKAGESET                      The collections in PACKAGE PATH determine
SET CURRENT PACKAGE PATH                    which package is invoked.
SELECT ...FROM T1 ...

SET CURRENT PACKAGE PATH                    The local server sends one collection at a
CONNECT TO S2 ...                           time from PACKAGE PATH at the local server
SELECT ...FROM T1 ...                       to remote server S2 that is to be used for
                                            package resolution. (See note 1.)

SET CURRENT PACKAGE PATH = 'A,B'            The collections in PACKAGE PATH that are
CONNECT TO S2 ...                           set at server S2 determine which package is
SET CURRENT PACKAGE PATH = 'X,Y'            invoked.
SELECT ...FROM T1 ...

SET CURRENT PACKAGE PATH                    Three-part table name. On implicit connection
SELECT ...FROM S2.QUAL.T1 ...               to server S2, PACKAGE PATH at server S2 is
                                            inherited from the local server. The collections
                                            in PACKAGE PATH at server S2 determine
                                            which package is invoked.

Note 1: When CURRENT PACKAGE PATH is set at the requester (and not at the remote
server), DB2 passes one collection at a time from the list of collections to the remote
server until a package is found or until the end of the list. Each time a package is not
found at the server, DB2 returns an error to the requester. The requester then sends the
next collection in the list to the remote server.

CURRENT PACKAGE PATH and Stored Procedures and UDFs


When a stored procedure calls another program, DB2 determines to which collection the
called program’s package belongs, in one of the following ways:
• If the stored procedure executes SET CURRENT PACKAGE PATH, the called
program’s package comes from the list of collections in the CURRENT PACKAGE
PATH special register. For example, if CURRENT PACKAGE PATH contains the list
COLL1, COLL2, COLL3, COLL4, then DB2 searches for the first package (in the order
of the list) that exists in these collections.
• If the stored procedure does not execute SET CURRENT PACKAGE PATH and instead
executes SET CURRENT PACKAGESET, the called program’s package comes from
the collection that is specified in the CURRENT PACKAGESET special register.
• If the stored procedure does not execute SET CURRENT PACKAGE PATH or SET
CURRENT PACKAGESET:
- If the stored procedure definition contains NO COLLID, DB2 uses the collection ID
of the package that contains the SQL statement CALL.
- If the stored procedure definition contains COLLID collection-id, DB2 uses
collection-id.
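The first case above can be sketched as follows (the collection and procedure names are invented, and the syntax is a hedged sketch rather than a complete procedure):

```sql
-- Inside an SQL procedure body: direct DB2 to search SPCOLL first,
-- then COMMON, when resolving the package of any program this
-- procedure subsequently calls.
SET CURRENT PACKAGE PATH = SPCOLL, COMMON;
CALL HELPER_PROC(parm1);  -- DB2 looks for SPCOLL.HELPER_PROC's package
                          -- first, then COMMON.HELPER_PROC's
```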


7.7 ODBC for USS Enhancements


In this topic, we describe the ODBC/CLI enhancements in DB2 V8:
• ODBC SQLConnect user and password support
• ODBC support for long names
• ODBC support for SQL statements up to 2 MB
• SQLCancel support
• ODBC Unicode support


ODBC for USS Enhancements


We discuss enhancements to ODBC for programs running in UNIX
System Services on z/OS
Userid and password authentication (validation, not just syntax checking)
on SQLConnect and SQLDriverConnect
ODBC support for long names
ODBC support for SQL statements up to 2 MB
SQLCancel() support
ODBC Unicode support
Update, insert, delete and fetch Unicode data through ODBC
application variables
Unicode strings within the ODBC application programming
interface (which allow you to use Unicode SQL statements
in your ODBC application)


Figure 7-25. ODBC for USS Enhancements CG381.0

Notes:

ODBC SQLConnect User and Password Support


When you run an ODBC application on z/OS, you use RRS or CAF to connect to the
database. (In the case of DB2 UDB for z/OS, the database to which you connect represents
an entire DB2 subsystem.)
Actually, the DB2 thread is created when allocating the connection handle. After you have
created the connection handle, the ODBC application can start the DRDA communication
with the DB2 subsystem. To do this, you must use the SQLConnect or SQLDriverConnect
API.
You have always been able to specify a user ID and password argument on the
SQLConnect and SQLDriverConnect API calls. However, they were not passed to the DB2
UDB for z/OS system. These keywords were only checked to validate if they were
syntactically correct, which basically means that they were not allowed to exceed the
length restrictions for user ID and password and that they did not contain blank values.


The user ID that was used to establish the thread and checked for authorization in DB2
was the user ID that was used to logon to the system. At that time, the user ID and
password were verified with the system’s security software, for example, RACF.
In terms of compatibility with other DB2 platforms, this behavior has been changed in DB2
V8. Now the values for the user ID and password arguments on the input to SQLConnect
and SQLDriverConnect APIs are propagated to the target DB2 system. For compatibility
with existing application programs, the user authentication is only performed when both a
user ID and password are provided on the API call.
Attention: Applications which try to connect to a local DB2 system with an invalid user ID
or password fail with SQLCODE -922. That means that if you used values in your existing
applications that did not represent real user IDs, because they have not been checked
until now, you must make sure that those values are either valid or set to blank or NULL.
If you connect to a remote DB2 and specify a user ID but no password, you receive
SQLCODE -1403; if the user ID or password is wrong, you receive SQLCODE -30082.

This function has been made available in DB2 V7 through the maintenance stream, via
APAR PQ58787 (PTF UQ67626).

ODBC Long Name Support


ODBC for z/OS has been enhanced to support the long names introduced in V8
new-function mode. This means changes to the following ODBC functions.
Support for Longer Names in the INI File Variables
The keyword strings for the following initialization keywords will be extended to support
longer names:
• CLISCHEMA
• COLLECTIONID
• CURRENTFUNCTIONPATH
• CURRENTSQLID
• SCHEMALIST
• SYSSCHEMA

ODBC Catalog Functions


As a result of the name changes in the catalog, the ODBC catalog API needs to change the
lengths of the host variables and data types in the SQLDA when fetching result sets from
catalog tables. However, most of these changes are transparent to the external API. Only
the data type of the REMARKS column returned by SQLColumns() and SQLTables() is
changed to VARCHAR(762).


Support for Statements up to 2 MB


ODBC also supports long SQL statements, up to 2 MB.

ODBC for USS Unicode Support
The following DB2 ODBC elements support this new functionality:
A new initialization keyword CURRENTAPPENSCH (in the .INI file) to
specify the current encoding scheme (EBCDIC, ASCII, or Unicode).
When you set this keyword to Unicode, generic ODBC APIs support
UTF-8 data.
New APIs with the suffix W, called wide APIs, are introduced to support
UCS-2 data (enabled in V6 via PTF UQ60475 and in V7 via PTF UQ60476)
Wide APIs accept Unicode UCS-2 string arguments only, and require
that the CURRENTAPPENSCH keyword is set to Unicode.
For example: The equivalent wide API for the SQLConnect ()
function call is SQLConnectW().
The non-wide functions, for example SQLColumnPrivileges, have been
changed to accept UTF-8 string arguments and return all character string
data in the result set in the UTF-8 encoding scheme.
New SQL_C_WCHAR data type to support UCS-2 data
Additional SQLGetInfo() attributes to query the CCSID settings of the DB2
subsystem in each encoding scheme, for example, SQL_ASCII_SCCSID


Figure 7-26. ODBC for USS Unicode Support CG381.0

Notes:
As we mentioned before, one of the major enhancements of DB2 V8 is the exploitation of
Unicode in many different areas. ODBC is one of the important interfaces to DB2.
Up to DB2 V7, only the EBCDIC encoding scheme was fully supported for ODBC. There
was no support for Unicode and only partial support for the ASCII encoding scheme. DB2 V8
now provides you with the ability to:
• Update, insert, delete, and fetch Unicode data through ODBC application variables.
• Use Unicode strings within the ODBC application programming interface (which allow
you to use Unicode SQL statements in your ODBC application)
The following DB2 ODBC elements support this new functionality:
• DB2 V8 introduces a new initialization keyword CURRENTAPPENSCH (in the .INI file)
to specify the current encoding scheme (EBCDIC, ASCII, or Unicode). When you set
this keyword to Unicode, generic ODBC APIs support UTF-8 data.
• In addition to the normal ODBC API calls, a set of new APIs with the suffix W, called
wide APIs, are introduced to support UCS-2 data.


Wide APIs accept Unicode UCS-2 string arguments only. The equivalent wide API for
the SQLConnect () function call is SQLConnectW(). Wide APIs were enabled in V6 with
PTF UQ60475 and in V7 with PTF UQ60476, and are available in the V8 base code.
(The use of the wide API does not require that the CURRENTAPPENSCH keyword is
set to Unicode. Actually, when using the wide API or SQL_C_WCHAR as the symbolic
C data type, CURRENTAPPENSCH is not checked. When using these, UCS-2 is
always assumed.)
• The non-wide functions, for example, SQLColumnPrivileges, have been changed to
accept UTF-8 string arguments and return all character string data in the result set in
UTF-8 encoding scheme.
• A new SQL_C_WCHAR data type to support UCS-2 data is now available.
• Additional SQLGetInfo() attributes to query the CCSID settings of the DB2 subsystem in
each encoding scheme, for example, SQL_ASCII_SCCSID, are provided.
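The practical difference between the generic and the wide APIs is simply the byte encoding of their string arguments. The following Python sketch is purely illustrative (it is not ODBC code): it shows the byte form a generic API would exchange (UTF-8) versus the byte form a wide API such as SQLConnectW() would exchange (UCS-2, which for characters in the Basic Multilingual Plane equals big-endian UTF-16):

```python
# Byte forms of the same string under the two Unicode encodings used by
# DB2 ODBC: UTF-8 for the generic APIs, UCS-2 for the wide (W-suffix) APIs.
text = "Müller"

utf8 = text.encode("utf-8")      # variable length: 'ü' takes two bytes
ucs2 = text.encode("utf-16-be")  # fixed two bytes per BMP character

print(utf8.hex())  # 4dc3bc6c6c6572
print(ucs2.hex())  # 004d00fc006c006c00650072
```

Note that the UCS-2 form is always twice as long as the character count, while the UTF-8 form grows only where non-ASCII characters appear.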


CCSID Precompiler Option
Good news
You can code all valid characters for a CCSID

You can write:              Instead of:

    struct {                    struct ??<
      short len;                  short len;
      char data[10];              char data??(10??);
    };                          ??>

or:

    SELECT C1 FROM T1           SELECT C1 FROM T1
    WHERE C1 ¬= 'A';            WHERE C1 <> 'A';

© Copyright IBM Corporation 2004

Figure 7-27. CCSID Precompiler Option CG381.0

Notes:
In DB2 Version 8, the precompiler works in Unicode, irrespective of the mode DB2 is
running in. Therefore, if the SQL statements of your source program are not in Unicode
UTF-8 (which is most likely the case in most traditional programming languages), the DB2
Version 8 precompiler converts them to UTF-8 for parsing.
A new precompiler keyword CCSID(n) tells the precompiler the CCSID the source program
is written in, so the precompiler can convert from that CCSID to CCSID 1208 (UTF-8). (If
you want to prepare a source program that is written in a CCSID that cannot be directly
converted to or from CCSID 1208, you must create an indirect conversion. For information
about indirect conversions, see z/OS Support for Unicode: Using Conversion Services,
SA22-7649.)

Coding Characters in Your “own” CCSID


Using the new CCSID(n) option during precompilation allows you to specify the numeric
value “n” of the CCSID in which the source program is written. The number “n” must be
either 65535, or in the range 1 through 65533.


The default setting is the EBCDIC system CCSID (SCCSID DSNDECP value) as specified
on the panel DSNTIPF during installation.
Your source program is converted by the precompiler from the CCSID value that you
specify to CCSID 1208 (UTF-8), which is used by the precompiler in V8. After the
precompilation, the program is converted back to the original CCSID. In general, the
precompiler produces the following output:
• An output listing (SYSPRINT data set) in the CCSID of the source program.
• A modified source program (written to SYSCIN, and input to the compiler or assembler),
written in the CCSID of the source program.
• A DBRM, where the SQL statements and the list of host variable names use the
following character encoding schemes:
- EBCDIC, for the result of a DB2 Version 8 precompilation with NEWFUN NO or a
precompilation in an earlier release of DB2.
- Unicode UTF-8, for the result of a DB2 Version 8 precompilation with NEWFUN
YES.
The advantage of this option is that you can code your source program in the CCSID of
your terminal emulator. As shown in the figure above, you no longer have to use the
trigraph "??<" to indicate a curly brace in a C program, but can code the "{" directly.
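This is also why the precompiler must be told the source CCSID: the same character sits at different code points in different EBCDIC CCSIDs. A small Python sketch, illustrative only (Python's cp037 and cp500 codecs correspond to CCSIDs 37 and 500), shows this for the not sign used in predicates:

```python
# The "not" sign used in predicates such as  WHERE C1 ¬= 'A'
# has a different code point in each EBCDIC CCSID:
not_sign = "¬"

print(not_sign.encode("cp037").hex())  # 5f  (CCSID 37)
print(not_sign.encode("cp500").hex())  # ba  (CCSID 500)
# With CCSID(n), the precompiler maps each source byte through the
# right code page before converting to UTF-8 for parsing.
```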


CCSID Precompiler Option - Considerations
Be careful!
Some compilers expect the source to be in a certain CCSID
COBOL and PL/I expect the source to be in CCSID 37

EXEC SQL
  WHENEVER SQLWARNING
    GO TO ERR_HANDLE;

        --- CCSID(500) precompiler produces --->

IF SQLCODE > 0 &
   SQLCODE ¬= 100 |
   SQLWARN0 = 'W' THEN
  GO TO ERR_HANDLE;

        --- read by the CCSID(37) PL/I compiler as --->

IF SQLCODE > 0 &
   SQLCODE [= 100 ]
   SQLWARN0 = 'W' THEN
  GO TO ERR_HANDLE;

© Copyright IBM Corporation 2004

Figure 7-28. CCSID Precompiler Option - Considerations CG381.0

Notes:
However, you have to be careful when using the CCSID precompiler option.
In the figure above, we code our program in CCSID 500. When invoking the precompiler,
we specify the CCSID(500) option. The precompiler does its job and produces a modified
source that looks like the box on the left. Note that the precompiler substituted the
“WHENEVER SQLWARNING” statement with a number of PL/I statements. In there, we
see the “¬ =” (not equal) and “ | “ (or) symbols, all produced in CCSID(500).
The PL/I compiler assumes CCSID(37) when compiling a program. The special characters
mentioned above are at different code points in CCSID(37), so the compiler produces an
error. The statements are read by the PL/I compiler as shown in the box on the right, and
the characters "[" and "]" are not valid in PL/I.
For example, when using CCSID(500) to compile sample program DSNTEP2, you get the
following error (Example 7-9).


Example 7-9. PL/I Compilation Error

Compiler Messages
Message Line.File Message Description
IBM1357I E 1687.1 Character with decimal value 186 does not belong to
the PL/I character set. It is assumed to be a NOT
symbol.
Failing statement
1687.1 IF SQLCODE>0 & SQLCODE =100 SQLWARN0='W'

Failing statement in hex


1687.1 IF SQLCODE>0 & SQLCODE =100 SQLWARN0='W'
FFFF4F44444444CC4EDDCDCC6F454EDDCDCCB7FFF4B4EDDECDDF77E7
1687B1000000009602833645E00002833645AE1000B028361950ED6D
______________________________________________________________________
Note that in the hex representation of the statement, we find x’BA’ and x’BB’. They are the
“¬” (not) sign and “ | “ (or sign) in CCSID(500), but not in CCSID(37) that is used by the PL/I
compiler to interpret these statements, and hence an error is produced.
When you use precompiler services, the compiler calls the DB2 precompiler directly; the
COBOL compiler, for example, passes the CCSID of the source to DB2.
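The mismatch can be reproduced off the host. In this Python sketch, which is illustrative only (Python's cp500 and cp037 codecs correspond to CCSIDs 500 and 37), the same two bytes, X'BA' and X'BB', decode to different characters under the two CCSIDs:

```python
# X'BA' and X'BB' are the "not" and "or" signs in CCSID 500,
# but the brackets '[' and ']' in CCSID 37.
raw = bytes([0xBA, 0xBB])

print(raw.decode("cp500"))  # ¬|  - what the CCSID(500) precompiler wrote
print(raw.decode("cp037"))  # []  - what a compiler assuming CCSID 37 reads
```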


Unit 8. Utility Enhancements

What This Unit Is About


DB2 UDB for z/OS Version 8 has enhanced several utility functions to
support the new availability enhancements and to meet customer
requirements for these utilities.

What You Should Be Able to Do


After completing this unit, you should be able to:
• Describe the new features of LOAD and UNLOAD
• Discuss the methods for rebalancing partitions
• Explain the changes to the REORG, COPY, REPAIR, and
DSN1COPY utilities
• Relate the utility changes to support DPSIs


List of Topics
New utilities BACKUP SYSTEM and RESTORE SYSTEM
Delimited data support for LOAD and UNLOAD
RUNSTATS enhancements
Defaults for better performance
REORG TABLESPACE enhancements
REBUILD INDEX enhancements
COPY enhancements
REPAIR enhancements
Changes to utilities to support online schema evolution
Offline utility (DSN1*) enhancements
Unicode utility statements


Figure 8-1. List of Topics CG381.0

Notes:
This unit describes the enhancements to DB2 utilities. It consists of the following topics:
• New utilities:
- BACKUP SYSTEM
- RESTORE SYSTEM
• Delimited data support for:
- LOAD
- UNLOAD
• RUNSTATS enhancements
• Defaults for better performance
• REORG TABLESPACE enhancements
• REBUILD INDEX enhancements
• COPY enhancements


• REPAIR enhancements


• Changes to utilities to support:
- Online schema evolution
- Multi-level security



8.1 New Utilities (BACKUP SYSTEM and RESTORE SYSTEM)


System Level Point-in-Time Recovery


Easier, more flexible, less disruptive, faster backup and recovery
Handle large numbers of table spaces and indexes
Two new utilities are introduced
BACKUP SYSTEM for fast volume-level backups
DB2 databases and logs
Data sharing group scope
z/OS V1R5 required for new COPYPOOL function and fast replication
RESTORE SYSTEM
To an arbitrary point-in-time
Handles CREATEs, DROPs, LOG NO events
Data sharing group scope


Figure 8-2. System Level Point-in-Time Recovery CG381.0

Notes:
Enhancements to system level point-in-time recovery for DB2 provide improved usability,
more flexibility, and faster backup and recovery. You can recover your data to any
point-in-time, regardless of whether you have uncommitted units of work. Data recovery
time improves significantly for large DB2 systems that contain many thousands of objects.
Two new utilities are used for system level point-in-time recovery:
• The BACKUP SYSTEM utility provides fast volume-level copies of DB2 databases and
logs for an entire DB2 subsystem or DB2 data sharing group. It relies on new
DFSMShsm services in z/OS Version 1 Release 5 that allow for fast volume level
backups. BACKUP SYSTEM is less disruptive than using the SET LOG SUSPEND
command for copy procedures. An advantage for data sharing environments is that
BACKUP SYSTEM has a group scope (compared to SET LOG SUSPEND, which has a
member scope).
• The RESTORE SYSTEM utility recovers a complete DB2 system or a data sharing
group to an arbitrary point-in-time. RESTORE SYSTEM automatically handles any


creates, drops, and LOG NO events that may have occurred between the point the
backup is taken and the point-in-time that you recover to.
More details on this feature can be found in Figure 2-87, "System Level Point-In-Time
Recovery", on page 2-128.



8.2 Delimited Data Support for LOAD and UNLOAD


LOAD / UNLOAD Delimited Input / Output


LOAD / UNLOAD utilities will accept / produce delimited files
Benefits of these enhancements include:
Eases the import / export of large amounts of data from DB2 for z/OS
to other operating system platforms and vice versa
Eliminates the requirement to write a program to convert non-z/OS
platform data into the positional format for the DB2 for z/OS LOAD
utility, or to use INSERT processing
Unloads data from DB2 for z/OS in delimited file format and loads /
imports it into another RDBMS


Figure 8-3. LOAD / UNLOAD Delimited Input/Output CG381.0

Notes:
Most relational database management systems, including DB2 on Linux, UNIX, and
Windows (LUW) platforms, are capable of unloading data in delimited format, where each
record is a row, columns are separated by commas, and character strings are optionally
delimited with double quote (") marks, for example. On the other hand, most other systems cannot unload
data into the positional format required by the DB2 for z/OS and OS/390 Version 7 LOAD
utility. If you want to move data to DB2 for z/OS and OS/390, you must therefore either
write a program to convert the data into the positional format that the LOAD utility
understands, or use insert processing, thereby not exploiting the performance advantages
of the LOAD utility.
The DB2 for z/OS Version 8 LOAD utility is enhanced to accept data from a delimited file.
The UNLOAD utility is also enhanced to produce a delimited file when unloading the data.
These enhancements help to simplify the process of moving/migrating data into and out of
DB2 for z/OS.


For example, you can save the data from your spreadsheet as a comma separated value
(CSV) file, and load the saved data into a DB2 for z/OS table using the FORMAT
DELIMITED option.
This function is totally compatible with other members of the DB2 family.


Delimited Files - Reminder


A delimited file is a sequential file with row and column delimiters
Is a string of characters consisting of cell values ordered by row,
then by column
Row (Record) delimiters not needed since the end of record is
inherent in the file structure
Columns are separated by column delimiters
Character strings are delimited by character delimiters
In z/OS, a row is a single BSAM record
Examples: "Bart", "Steegmans", "ITSO"
"Ravi", "Kumar", "ITES"


Figure 8-4. Delimited Files - Reminder CG381.0

Notes:
A delimited file, in general, is a sequential file with row and column delimiters. Each
delimited file is a string of characters consisting of cell values ordered by row, and then by
column. Columns within each row are separated by column delimiters. Rows are separated
by row delimiters. The beginning and ending of each individual cell value may be indicated
by character delimiters. In z/OS a row is a BSAM record.


LOAD/UNLOAD Delimited Input/Output Syntax

LOAD ... FORMAT DELIMITED
         [COLDEL coldel] [CHARDEL chardel] [DECPT decpt]

UNLOAD ... DELIMITED
           [COLDEL coldel] [CHARDEL chardel] [DECPT decpt]


Figure 8-5. LOAD/UNLOAD Delimited Input/Output Syntax CG381.0

Notes:
This visual shows the changes to the syntax of LOAD and UNLOAD utilities to support this
enhancement.
Note: It is interesting to note that LOAD uses FORMAT DELIMITED, whereas UNLOAD
uses only the DELIMITED keyword.


Delimited Files - LOAD / UNLOAD


LOAD / UNLOAD allow specification of new keywords
[FORMAT] DELIMITED
COLDEL - Column delimiter (default is comma)
CHARDEL - Character delimiter (default is quotation marks)
DECPT - Decimal point (default is period)
These are characters to be found in the input file,
or to be produced in output file
Double character delimiter recognition is supported
(see example)
Applies to CHAR, CLOB, and VARCHAR only

"what a ""nice"" day"
  LOADs as -> what a "nice" day

I am 6" tall.
  UNLOADs as -> "I am 6"" tall."


Figure 8-6. Delimited Files - LOAD / UNLOAD CG381.0

Notes:
A delimited file on z/OS is a sequential file consisting of one or more fixed or variable length
records. Since the end of the record is inherent in the file structure, record delimiters, such
as CRLF (carriage return line feed), are not used. The LOAD utility syntax has been
changed and an additional option DELIMITED is added for the keyword FORMAT, or just
the DELIMITED keyword during UNLOAD.
The DELIMITED option specifies that the input file is a delimited file. This is a BSAM file
with column and character data string delimiters. In this format, all fields in the input data
set are character strings or numeric data in external format. Each column value is
separated from the next by a column delimiter character (the default is a comma).
When you specify DELIMITED, you can optionally specify COLDEL, CHARDEL and
DECPT to indicate the delimiter characters that are different from the defaults.
• COLDEL specifies a single column delimiter character (the default is comma) that is
used in the input file for LOAD, or in the output file for UNLOAD when DELIMITED is
specified.


• CHARDEL specifies a single character string delimiter (the default is double quote) that
is used in the input file for LOAD or in the output file for UNLOAD when DELIMITED is
specified. The character string delimiter is permitted within character string input fields.
Two successive character delimiters within the enclosing character delimiters are
interpreted as a single character that is part of the character string.
For example, when using the default double quote as a character delimiter:
- The LOAD utility loads "what a ""nice"" day" as: what a "nice" day
- The UNLOAD utility unloads I am 6" tall as: "I am 6"" tall"
• DECPT specifies a single decimal point character (the default is period) that is used in
the input file for LOAD or in the output file for UNLOAD when DELIMITED is specified.
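The doubled-delimiter rule is the same convention that CSV tooling in general follows, so it can be demonstrated with Python's csv module. This is illustrative only; the module stands in for the LOAD and UNLOAD utilities:

```python
import csv
import io

# Reading: two successive character delimiters inside a delimited string
# collapse to a single literal character.
row = next(csv.reader(io.StringIO('"what a ""nice"" day",D10')))
print(row[0])  # what a "nice" day

# Writing: a literal delimiter character inside a value is doubled.
out = io.StringIO()
csv.writer(out, quoting=csv.QUOTE_ALL).writerow(['I am 6" tall.'])
print(out.getvalue().strip())  # "I am 6"" tall."
```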
UNLOAD / LOAD (FORMAT) DELIMITED only supports unloading data from, and loading
data into a single table. This is compatible with the rest of the DB2 Family. Thus,
(FORMAT) DELIMITED specified with multiple FROM TABLE specifications, or not
specifying FROM TABLE with the table space specification when the table space contains
multiple tables, results in a syntax error.
For example, consider the following situations:
• A segmented table space DELIMITD.DELIMITS has two tables, EMP1 and EMP2, and
you want to unload data from table EMP1.
You can unload data from table EMP1 in one of the following two ways:
UNLOAD DATA FROM TABLE EMP1 DELIMITED ... or
UNLOAD TABLESPACE DELIMITD.DELIMITS FROM TABLE EMP1 DELIMITED ...
The UNLOAD TABLESPACE specification results in a syntax error if FROM TABLE EMP1 ... is
not included in the utility control statement.
• A segmented table space DELIMITD.DELIMITS has two tables, EMP1 and EMP2, and
you want to unload data from both tables EMP1 and EMP2.
You can unload data from tables EMP1 and EMP2 in one of the following two ways:
UNLOAD DATA FROM TABLE EMP1 DELIMITED ...
UNLOAD DATA FROM TABLE EMP2 DELIMITED ... or

UNLOAD TABLESPACE DELIMITD.DELIMITS FROM TABLE EMP1 DELIMITED ...


UNLOAD TABLESPACE DELIMITD.DELIMITS FROM TABLE EMP2 DELIMITED ...
Both forms result in a syntax error if, within a single UNLOAD statement, there is more
than one DELIMITED or FROM TABLE specification.

Considerations when Using DELIMITED LOAD/UNLOAD


You should be aware of the following considerations when using the delimited file format:


• LOAD:
- When you specify the DELIMITED option, the utility ignores the POSITION keyword.
The utility overrides field data type specifications according to the specifications of
the delimited format. (For example, length values for CHAR, VARCHAR, GRAPHIC,
VARGRAPHIC, CLOB, DBCLOB, and BLOB data are the delimited lengths of each
field in the input data set, and the utility expects all numeric types in external format.)
- There is no length field associated with a VARCHAR column when using a delimited
input file. Only the actual unload value appears in the input file. When loading the
data into a VARCHAR column, DB2 will calculate the length of the field during load.
- The keyword MIXED can be specified for CHAR, VARCHAR, and CLOB data types
to indicate that the input field contains mixed (SBCS and DBCS) data. If MIXED is
specified, any required CCSID conversions use the mixed CCSID for the input data.
If MIXED is not specified, any such conversions use the SBCS CCSID for the input
data.
- If no field specifications are supplied, the input data is assumed to be in the mixed
CCSID if any columns in the table are FOR MIXED. Otherwise it is assumed to be
SBCS.
- For Unicode input, the input data must be in CCSID 1208, UTF-8.
- CONTINUEIF is not allowed with FORMAT DELIMITED.
- INCURSOR is not allowed with FORMAT DELIMITED.
- The WHEN keyword is not allowed with FORMAT DELIMITED.
- As described above, with FORMAT DELIMITED, multiple INTO TABLE statements
are not allowed. FORMAT DELIMITED can only be used to load a single table at a
time.
• UNLOAD:
- For delimited output, UNLOAD does not add trailing padded blanks to variable
length columns, even if you do not specify the NOPAD option. For fixed length
columns, the normal padding rules apply.
For example, if a VARCHAR(10) field contains ABC, UNLOAD DELIMITED unloads
the field as "ABC". However, for a CHAR(10) field that contains ABC, UNLOAD
DELIMITED unloads it as "ABC       " (ABC followed by seven blanks).
Also note that there is no length field associated with an unloaded VARCHAR
column. Only the actual unload value appears in the output file.
- The default for HEADER is HEADER NONE.
- HEADER OBID and ROWID are not valid output fields for the delimited output file
format. Neither OBID nor ROWID are generated in the delimited output file. Since
the header is not allowed, output must be from a single table.


- When you specify the DELIMITED option, the utility ignores the POSITION keyword.
The utility overrides field data type specifications according to the specifications of
the delimited format. (For example, length values for CHAR, VARCHAR, GRAPHIC,
VARGRAPHIC, CLOB, DBCLOB, and BLOB data are the delimited lengths of each
field in the output data set, and the utility unloads all numeric types in external
format.)
- A NULL value is indicated by the absence of a cell value where one would normally
occur (that is, two successive column delimiters, or missing columns at the end of a
record). There is no NULL indicator byte present.

Delimited File Data Type Forms for LOAD and UNLOAD


Table 8-1 shows the acceptable data type forms for the delimited file format.
Table 8-1. Acceptable data type forms for the delimited file format

CHAR, VARCHAR
  LOAD accepts:    a delimited or non-delimited character string
  UNLOAD creates:  character data enclosed by character delimiters; there are
                   no length bytes preceding the data in the string for VARCHAR
GRAPHIC (any type)
  LOAD accepts:    a delimited or non-delimited character stream
  UNLOAD creates:  data unloaded as a delimited character string; there are no
                   length bytes preceding the data in the string for VARGRAPHIC
INTEGER (any type)
  LOAD accepts:    a stream of characters representing a number in external format
  UNLOAD creates:  numeric data in external format
DECIMAL (any type)
  LOAD accepts:    a character string that represents a number in external format
  UNLOAD creates:  a string of characters representing a number
FLOAT(1-21) or REAL
  LOAD accepts:    representation of a number in single precision in external format
  UNLOAD creates:  a string of characters representing a number in floating point notation
FLOAT(22-53) or DOUBLE
  LOAD accepts:    representation of a number in double precision in external format
  UNLOAD creates:  a string of characters representing a number in floating point notation
BLOB, CLOB
  LOAD accepts:    a delimited or non-delimited character string
  UNLOAD creates:  character data enclosed by character delimiters; there are no
                   length bytes preceding the data in the string
DBCLOB
  LOAD accepts:    a delimited or non-delimited character string
  UNLOAD creates:  character data enclosed by character delimiters; there are no
                   length bytes preceding the data in the string
DATE
  LOAD accepts:    a delimited or non-delimited character string containing a date
                   value in external format
  UNLOAD creates:  character string representation of a date
TIME
  LOAD accepts:    a delimited or non-delimited character string containing a time
                   value in external format
  UNLOAD creates:  character string representation of a time
TIMESTAMP
  LOAD accepts:    a delimited or non-delimited character string containing a
                   timestamp value in external format
  UNLOAD creates:  character string representation of a timestamp
Note: All numeric fields are in EXTERNAL format for LOAD. Field specifications of
INTEGER or SMALLINT will be treated as if they were INTEGER EXTERNAL,
specifications of DECIMAL, DECIMAL PACKED, or DECIMAL ZONED are treated as
DECIMAL EXTERNAL, and specifications of FLOAT, REAL, or DOUBLE are treated as
FLOAT EXTERNAL.


Delimited File - Restrictions
A delimiter cannot be binary zero
A default decimal point (.) cannot be a string delimiter
The default delimiter value differs by encoding scheme
For ASCII / Unicode
Double quote (") for character string X'22'
Period (.) for decimal X'2E'
Comma (,) for column delimiter X'2C'
For EBCDIC
Double quote (") for character string X'7F'
Period (.) for decimal X'4B'
Comma (,) for column delimiter X'6B'
For Unicode input / output, data must be in CCSID 1208, UTF-8
The delimiter must be specified in code page of source / target data
It can be a character or hex constant


Figure 8-7. Delimited Files - Restrictions CG381.0

Notes:
Note that the default character used for DECPT cannot be specified as either CHARDEL or
COLDEL, even if DECPT is specified with some other character.
In most EBCDIC code pages, the hex values that are specified on the visual are a double
quotation mark (") for the character string delimiter, a period (.) for the decimal point
character, and a comma (,) for the column delimiter.
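The default delimiters and their code points in each encoding scheme can be checked programmatically. The Python sketch below is illustrative only; the cp037 codec stands in for an EBCDIC system CCSID:

```python
# Default COLDEL, CHARDEL, and DECPT characters and their code points
# under ASCII and under EBCDIC CCSID 37.
for name, ch in (("COLDEL", ","), ("CHARDEL", '"'), ("DECPT", ".")):
    print(f"{name}: ASCII X'{ch.encode('ascii').hex().upper()}'"
          f"  EBCDIC X'{ch.encode('cp037').hex().upper()}'")
```

This reproduces the values on the slide: X'2C'/X'6B' for the comma, X'22'/X'7F' for the double quote, and X'2E'/X'4B' for the period.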

Student Notebook

Example of UNLOAD Statement

UNLOAD TABLESPACE databasename.tablespacename
  DELIMITED CHARDEL '#' COLDEL ';' DECPT '.'
  PUNCHDDN SYSPUNCH          <-- optional keywords
  UNLDDN SYSREC
  EBCDIC
  FROM TABLE tablename
    (LNAME      POSITION(*) VARCHAR(15),
     DEPTNO     POSITION(*) CHAR(8),
     SEX        POSITION(*) CHAR(1),
     COUNTRY    POSITION(*) DBCLOB(11),
     SALARY     POSITION(*) DECIMAL(8,2),
     SALARYRATE POSITION(*) FLOAT)

The POSITION(*) keywords and the character field lengths are optional.

Unloaded data looks like (no trailing blanks for VARCHAR):
#warren#; #D10 #; #M#; # U S A #; #6500.00#; #.5E+1#

Note that field lists are optional for LOAD / UNLOAD and are primarily used
for selecting a subset of columns or selecting columns in a different order.


Figure 8-8. Example of UNLOAD Statement CG381.0

Notes:
Field list is optional for LOAD/UNLOAD. If you don't specify it, the utility loads/unloads all
valid columns from the table. Specifying a field list is primarily used to selectively
load/unload columns and data in any order you choose.
When delimited is specified for UNLOAD, the NOPAD option is in effect for variable length
column output, even if NOPAD is not specified. Therefore, trailing padded blanks are not
used for variable length columns. For example, if a VARCHAR (100) field contains the data
ABC in it, UNLOAD DELIMITED unloads it as “ABC”.
For fixed length columns, the normal padding rules apply. For example, a CHAR(100)
column containing ABC is unloaded by UNLOAD DELIMITED as "ABC<97 blanks>" (with
97 blanks actually following ABC).
The UNLOAD utility generates the delimited LOAD utility control statement in the data set
specified in PUNCHDDN.
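A record with these non-default delimiters parses the way any CSV-style reader would handle it. The Python sketch below is illustrative only (it assumes the blanks shown between fields on the slide are presentation, so the record is written without them), with COLDEL ';' and CHARDEL '#' as in the example above:

```python
import csv
import io

# One unloaded record: CHAR(8) is blank-padded, FLOAT is in external format.
line = "#warren#;#D10     #;#M#;#6500.00#;#.5E+1#"

fields = next(csv.reader(io.StringIO(line), delimiter=";", quotechar="#"))
print(fields)             # ['warren', 'D10     ', 'M', '6500.00', '.5E+1']
print(float(fields[-1]))  # 5.0
```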


Delimited Input/Output Case Study
A spreadsheet has 54 cells in each row
Moving data from spreadsheet to DB2 for z/OS
Save the data in comma separated value (CSV) format
Upload the saved data to the host, either in binary (that is, no conversion
of data), or in text (that is, data is converted)
Create a DB2 table either in EBCDIC (default option), or in ASCII with the
appropriate number of columns and data types
Use LOAD utility with FORMAT DELIMITED specified
Moving data from DB2 to spreadsheet
Use UNLOAD utility with DELIMITED specified
Download the unloaded data from the host to spreadsheet


Figure 8-9. Delimited Input/Output Case Study CG381.0

Notes:
A Lotus 1-2-3 spreadsheet (on a Windows machine) is used in this case study.
To move the data from the spreadsheet to DB2 for z/OS, the following steps are required:
• Save the spreadsheet data in comma separated value (CSV) format.
• Upload the saved data to the host in one of the following formats:
- Binary (the data is stored on the host in ASCII, exactly as it is on the
Windows workstation where the spreadsheet runs, without any conversion).
- Text (the data is converted from ASCII to EBCDIC before it is stored on
the host).
To upload the data, you can either use your terminal emulator’s file transfer program, or
use FTP.

Student Notebook

• Create a DB2 table with the appropriate number of columns and data type (consistent
with the number of cells and the type of data stored in the spreadsheet), either as an
EBCDIC (default option), or an ASCII table.
• Use LOAD utility with FORMAT DELIMITED specified as follows:
- If the spreadsheet data is uploaded to the host in text format:
LOAD DATA INTO TABLE tablename
FORMAT DELIMITED
- If the spreadsheet data is uploaded to the host in binary format:
LOAD DATA INTO TABLE tablename
FORMAT DELIMITED ASCII
If you do not specify the COLDEL, CHARDEL, and DECPT delimiters in the LOAD utility
control statement, the data is moved to the table using the default values for those
delimiters, in the encoding (ASCII or EBCDIC) you specified in the LOAD utility control
statement.
As mentioned before, the default character for COLDEL is a comma (,). In ASCII, this is
X'2C', and in EBCDIC, this is X'6B'.
The default character for CHARDEL is a double quotation mark ("). In ASCII, this is X'22',
and in EBCDIC, this is X'7F'.
The default character for DECPT is a period (.). In ASCII, this is X'2E', and in EBCDIC, this
is X'4B'.
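These code points can be spot-checked with a short sketch. It uses Python's cp037 codec as a stand-in for the host EBCDIC code page; that choice is an assumption — your subsystem's CCSID may differ:

```python
# Default LOAD/UNLOAD DELIMITED delimiters and their code points.
# cp037 is used as a representative EBCDIC code page (assumption).
DEFAULTS = {"COLDEL": ",", "CHARDEL": '"', "DECPT": "."}

for name, ch in DEFAULTS.items():
    ascii_hex = ch.encode("ascii").hex().upper()
    ebcdic_hex = ch.encode("cp037").hex().upper()
    print(f"{name} {ch!r}: ASCII X'{ascii_hex}', EBCDIC X'{ebcdic_hex}'")
```

Running this prints the same X'2C'/X'6B', X'22'/X'7F', and X'2E'/X'4B' pairs listed above.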
To move the data from DB2 to a spreadsheet, the following steps are required:
• Use the UNLOAD utility with DELIMITED specified as follows:
- If you want the unloaded data to be in ASCII format:
UNLOAD DATA FROM TABLE tablename
ASCII DELIMITED
The data and the delimiters are unloaded in ASCII. Transmit the data to the
spreadsheet in binary format.
- If you want the unloaded data to be in EBCDIC format:
UNLOAD DATA FROM TABLE tablename
EBCDIC DELIMITED
The data and the delimiters are unloaded in EBCDIC. Transmit the data to the
workstation to be used by the spreadsheet in text format.
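If the spreadsheet program expects different delimiters — for example, a semicolon-separated file in a European locale — you can override the defaults on UNLOAD as well. This is only a sketch; the table name is a placeholder:

```
UNLOAD DATA FROM TABLE tablename
ASCII DELIMITED COLDEL ';' CHARDEL '"' DECPT ','
```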


8.3 RUNSTATS Enhancements


RUNSTATS Enhancements
Non-uniform distribution statistics on non-indexed columns
Ability to update statistics history tables with the latest
information without updating the statistics used by the
optimizer
Facilitates monitoring and analysis
RUNSTATS TABLESPACE DB1.TS1
UPDATE NONE HISTORY ALL
RUNSTATS with UPDATE NONE REPORT NO to invalidate
dynamic SQL cache without collecting any statistics


Figure 8-10. RUNSTATS Enhancements CG381.0

Notes:
When the DB2 optimizer can calculate more accurately the filter factor of the predicates of
a query, this normally leads to better optimization choices, and helps to improve query
performance.
RUNSTATS has been enhanced in Version 8 to gather the following additional statistics,
allowing the optimizer to do a better job:
• Frequency value distributions for non-indexed columns or groups of columns
• Cardinality values for groups of non-indexed columns
• LEAST frequently occurring values, along with MOST for both index and non-indexed
column distributions
This information could not be gathered by RUNSTATS in previous versions of DB2. More
details are discussed in the Performance unit, more precisely in Figure 9-51, "The Need
for Extra Statistics", on page 9-88.
Version 8 relaxes the Version 7 requirement that history statistics can only be collected if
the main catalog statistics (used by the optimizer) are also updated. This greater flexibility

allows the user to keep track of statistics changes over time, without the concern that the
main optimizer statistics change as well and can result in access path changes, especially
for dynamic SQL. Thus, in Version 8, you can specify RUNSTATS UPDATE NONE
HISTORY ALL. This way, all statistics are collected (ALL), but only the history tables
are updated (UPDATE NONE).
In Version 7, the easiest way to invalidate statements in the dynamic statement cache that
reference a certain object is to use RUNSTATS REPORT YES UPDATE NONE for that
object. However, using this statement, the statements in the dynamic statement cache
referencing the RUNSTATSed object are invalidated, but RUNSTATS still goes out and
collects all the statistics for the report.
In Version 8, you can also specify RUNSTATS object REPORT NO UPDATE NONE. This
causes invalidation of those statements in the dynamic statement cache that reference the
table space and/or index space specified in RUNSTATS utility control statement, but no
statistics are gathered by RUNSTATS. Therefore, using RUNSTATS to invalidate
statements in the dynamic statement cache is much cheaper. RUNSTATS will use hardly
any CPU, as it only invalidates the referenced objects in the dynamic statement cache.
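For example, to invalidate cached statements that reference table space DB1.TS1 (the name used on the slide) without collecting any statistics, the V8 control statement looks like this:

```
RUNSTATS TABLESPACE DB1.TS1
REPORT NO UPDATE NONE
```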



8.4 Defaults for Better Performance


In this topic, we discuss some changes to utility defaults, providing more usability and
better performance of DB2 utilities.


New Defaults for Better Performance


RESTART is new default for Utilities (also in V7 with PQ72337)
SORTKEYS is used by default for LOAD, REORG, and REBUILD
SORTDATA is used by default for REORG
SORTDATA now allowed for 32K records with DFSORT
REORG will use implicit clustering index
If no clustering index, first index defined is used
If table space has no indexes, SORTDATA operates as in pre-V8
releases


Figure 8-11. New Defaults for Better Performance CG381.0

Notes:

Automatic Utility Restart


When a DB2 utility runs into a problem, it normally abends and the utility is put into a
stopped state (when you do a -DIS UTILITY(utility-id)). After correcting the problem, you
normally want to restart that DB2 utility (from where it left off, using a phase or current
restart).
Currently, users must modify their job control statements (JCL) or DSNUTILS stored
procedure parameters, to indicate that a stopped DB2 utility is to be restarted.
DB2 V8 has been enhanced to allow DB2 for z/OS online utilities to be restarted without
having to specify the RESTART, RESTART(PHASE) or RESTART(CURRENT) keyword.
When a utility job is started, DB2 will check if it is an initial invocation or a restart, by looking
into SYSUTIL for a matching utility-id:
• If it is an initial invocation and the RESTART keyword is specified, the RESTART
keyword is ignored.


• If the utility needs to be restarted, and the RESTART keyword is present, the specified
keyword is used.
• If the utility needs to be restarted, and no RESTART keyword is present, DB2 will set a
default RESTART value based on the utility type, the utility phase, and the specified
keywords. As a general rule, DB2 will try to do current restart whenever possible, or
revert to phase restart in the cases where current restart is not possible. For more
details, please refer to the DB2 for z/OS Version 8 Utility Guide and Reference,
SC18-7427.
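The restart decision described above can be paraphrased as a small sketch; the function and its return values are illustrative only, not an actual DB2 interface:

```python
def effective_restart(initial_invocation, restart_keyword, current_restart_possible):
    """Mirror the V8 implicit utility restart rules (illustrative sketch)."""
    if initial_invocation:
        return None                     # any RESTART keyword is ignored
    if restart_keyword is not None:
        return restart_keyword          # explicit keyword is honored
    # No keyword on a restart: DB2 picks a default, preferring current restart.
    return "CURRENT" if current_restart_possible else "PHASE"

print(effective_restart(True, "PHASE", True))   # initial run: None (keyword ignored)
print(effective_restart(False, None, True))     # restart, no keyword: CURRENT
```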
Even though with this enhancement the RESTART keyword is optional, you still need to
address the problem that initially caused the utility execution to stop, before resubmitting
the utility job. For example, if the utility abended because of an out-of-space condition, you
must make your data sets larger before restarting the utility.
You also have to make sure to use proper data set disposition parameters that allow for
proper restart. To avoid data set disposition problems, the use of TEMPLATEs is highly
recommended.
Tip: This enhancement has also been made available through the PTF for APAR
PQ72337. In DB2 V7 the implicit utility restart feature is controlled by a new DSNZPARM
UTLRSTRT. The default is OFF. The DSNZPARM does not exist in V8. The feature is
always enabled.

SORTKEYS and SORTDATA Used by Default in V8


Starting with Version 5, the SORTKEYS keyword has been added to the LOAD, REORG
TABLESPACE and REBUILD INDEX utilities, and SORTDATA and NOSYSREC keywords
to the REORG TABLESPACE utility. These keywords activate methods providing better
performance. To achieve this better performance, it is necessary to explicitly specify these
keywords in the utility control statements. (The only exception to this is REORG
TABLESPACE SHRLEVEL CHANGE, which does not require you to specify the
parameters SORTDATA, NOSYSREC, or SORTKEYS. It always operates as if these
keywords are specified.)
However, changing hundreds of utility jobs to add these keywords is impractical in many
installations. Also, it was not obvious to new utility users that these keywords should be
specified for optimal performance.
In Version 8, SORTKEYS is the default for LOAD, REORG TABLESPACE, and REBUILD
INDEX, and SORTDATA is the default for REORG TABLESPACE. NOSYSREC is the
default for REORG TABLESPACE (SHRLEVEL CHANGE) as in prior versions.
In Version 8, you can use SORTDATA for 32K records, which was not always possible in
prior versions. Using SORTDATA requires adding the clustering key to the record to be
sorted. In previous versions, if the row length is close to 32K, adding the clustering key to
the sort record can lead to a record that is greater than 32K. You would receive a message
(DSNU291I for REORG SHRLEVEL NONE, ignoring the SORTDATA keyword, or

DSNU294I for REORG SHRLEVEL REFERENCE or CHANGE, ending the utility execution).
Now that DFSORT supports larger sort records (provided that the proper maintenance PTF
UQ57144 is installed), this is no longer a problem for DB2, and SORTDATA can be used on
any 32K table space.

REORG’s Usage of the Implicit Clustering Index


REORG TABLESPACE no longer requires an index defined CLUSTER YES (explicit
clustering index) to order the data. If no clustering index exists, the first index created is
used, as is the case for DB2 inserts. Along with this, the Version 8 online schema support
provides the ability to alter the cluster attribute of indexes, with an ALTER INDEX
statement. Refer to Figure 2-38, "Online Schema Changes", on page 2-55 for more
information on online schema evolution.
DFSORT is ALWAYS shipped with z/OS, even though you may not be licensed for it. DB2
V8 provides a “special” license for DB2 to use DFSORT, without the user needing to
acquire an actual DFSORT license. In V8, users will always see DFSORT messages in
DB2 utility outputs rather than other OEM messages. This enhancement allows DB2 to
exploit particular functions and features of DFSORT without having to support a more
generic sort interface. This means better sort performance and more robust DB2 utilities.

Review of SORTKEYS, SORTDATA, and NOSYSREC


The following paragraphs provide a review of the SORTKEYS, SORTDATA, and
NOSYSREC keywords.

SORTKEYS
The SORTKEYS keyword is a performance-related option that can improve the performance
of the LOAD and REORG utilities in two ways:
• It reduces the index key sort elapsed time — by reducing I/O and overlapping phases
• It reduces the index load elapsed time — by activating index load parallelism
When using SORTKEYS, during the index key sort, index keys are passed in memory
rather than written to the SYSUT1 and SORTOUT work files. Avoiding this I/O to the work
files improves REORG/LOAD performance. It also reduces disk space requirements for the
SYSUT1 and SORTOUT data sets. Using the SORTKEYS option reduces the elapsed time
from the start of the reload phase to the end of the build phase.
Of course, if the index keys are already in sorted order, or there are no indexes,
SORTKEYS does not provide any advantage. Remember that if SORTKEYS is activated
and the job abends, during the reload, sort, or build phase, it will always need to restart
from the beginning of the reload phase.


More information on the usage and the performance of SORTKEYS for this functionality is
reported in the standard DB2 manuals and in the Redbooks DB2 for OS/390 Version 5
Performance Topics, SG24-2213, and DB2 for z/OS and OS/390 Version 7 Utilities Suite,
SG24-6289.
You can reduce the elapsed time of a LOAD job for a table space or partition with more
than one defined index by invoking parallel index build. We have seen that DB2 V5
introduced the SORTKEYS option to eliminate multiple I/Os to access the keys that are
needed to build the indexes. The keys are passed in storage to the sort process, and then
directly to the build phase. But, since there is only a single sort and build subtask, the
indexes are built serially.
With DB2 V6, when SORTKEYS is specified, DB2 provided multiple pairs of sort and build
subtasks so that indexes are built in parallel, thereby improving the elapsed time of LOAD
and REORG.
You can use dynamic allocation (SORTDEVT and SORTNUM keywords) to allocate the
sort work data sets, or you can allocate them by specifying the DDNAMEs in the form
SWnnWKmm, where nn is the subtask pair number and mm is the number of data sets for
that subtask pair. Using manual allocation of sort work data sets is a way to control and
limit the amount of parallelism, by restricting the number of these data sets.
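The SWnnWKmm naming convention can be illustrated with a small helper; this function is purely hypothetical, just to show how the DD names are formed:

```python
def sort_work_ddname(subtask_pair, dataset_number):
    """Build a SWnnWKmm DD name: nn = subtask pair, mm = data set number."""
    return f"SW{subtask_pair:02d}WK{dataset_number:02d}"

# Two sort work data sets for subtask pair 1: SW01WK01 and SW01WK02
print([sort_work_ddname(1, m) for m in (1, 2)])
```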
More information on the usage and the performance of SORTKEYS for this functionality is
reported in the standard DB2 manuals and in the Redbook DB2 for OS/390 Version 6
Performance Topics, SG24-5351.

SORTDATA
The SORTDATA parameter of the REORG utility invokes an external sort (not a DB2 sort)
of the data, using the columns of the clustering index as the sort key. This allows a
consistent execution time for REORG, and better performance. Without the SORTDATA
keyword, the data is unloaded through the clustering index. This means that the more
disorganized the data is, the greater the benefit of using SORTDATA, since unloading
through the clustering index takes progressively longer as the rows become more
disorganized.

NOSYSREC
After unloading and sorting the rows by the UNLOAD phase, the rows of the table space
are normally contained in the unload data set (SYSREC). The next phase of the REORG
utility, the RELOAD phase, must retrieve the rows (again) from the unload data set to load
them back into the table space. Storing the unloaded rows in the SYSREC data set and
retrieving them again requires I/O operations, and raises the question of whether the two
phases could cooperate and avoid the intermediate storage of the rows.
The intermediate storage of the rows can be avoided by specifying NOSYSREC for the
REORG utility.


As a consequence of NOSYSREC, the UNLOAD phase does not use the unload data set
(SYSREC). Rows that must be sorted are passed to the RELOAD phase via (DF)SORT
exits, after they have been sorted by DFSORT (or an equivalent sort utility). Rows that do
not need to be sorted, can immediately be passed to the RELOAD phase.
NOSYSREC eliminates the I/O for the unload data set. It does not cause additional I/O by
DFSORT (or an equivalent sort utility). The fact that different sort work data sets are used
does not change the I/O behavior of DFSORT. Thus, NOSYSREC represents a true
performance improvement.
However, you must be aware that the REORG utility cannot be restarted if NOSYSREC
has been specified.


REORG REBALANCE
Sets new partition boundaries for even distribution of rows across
the partitions being reorganized

Before REORG TABLESPACE: data partitions Part 1 through Part 4 (each with a DPSI
part), with limit keys including LK='50000' and LK='80000'.

After REORG TABLESPACE ... REBALANCE: the same partitions, with the LK='50000'
boundary lowered to LK='30000' (LK='80000' unchanged), so that the rows are evenly
redistributed.


Figure 8-12. REORG REBALANCE CG381.0

Notes:
In this topic, we discuss the new REBALANCE option that you can use to have REORG
automatically rebalance the data in your partitions.

Rebalancing Overview
Since DB2 V6, you can alter the partition boundaries of a partitioned table space by using
the ALTER INDEX index-name PART x VALUES (‘new-limit-key’) SQL statement. This puts
partition x and the next partition in REORG-pending (REORP) state. A subsequent
execution of the REORG utility redistributes the rows between both partitions according to
the new limit-key.
In V8, the REORG TABLESPACE utility has a new keyword, REBALANCE, which indicates
that the rows in the table space or the partition ranges being reorganized, should be evenly
distributed for each partition range when they are reloaded.
REBALANCE specifies that REORG TABLESPACE should set new partition boundaries so
that all the rows participating in the reorganization are evenly distributed across the


partitions being reorganized. The SYSTABLEPART and SYSINDEXPART tables are
updated during this process so that they contain the new limit key values. This way, you no
longer have to figure out what value to specify as the new limit-key to obtain an even
distribution among the partitions involved.


What Does Rebalance Do?
1. Unload rows from the table space or partition range
2. Sort rows by partitioning column(s) and divide by number of
parts
Is not perfect if lots of duplicate keys exist
3. Reload the data
4. Update limit key values in the catalog
5. Invalidate plans, packages and dynamic statement cache
When clustering does not match partitioning, REORG must be
run twice:
First to move rows to the right partition
Second to sort in clustering sequence


Figure 8-13. What Does Rebalance Do? CG381.0

Notes:
As during a normal REORG, the data is unloaded. The data is sorted by the partitioning
column(s) of the partitioned table space. (Remember that in V8, partitioning columns no
longer have to be supported by an index, and the partitioning index does not have to be
the clustering index.)
Before reloading the data, REORG TABLESPACE with REBALANCE calculates the
approximate number of pages expected to be populated across all the partitions, taking
into account the percent free space allowed on each page as well as the free pages
specified (as that can differ between partitions). During reload, REORG TABLESPACE
notes the value of the partitioning columns as each partition reaches the page count
threshold. At the end of a successful REORG, the catalog and directory are updated with
the new partition boundaries.
Perfect rebalancing is not always possible if the columns used in defining the partition
boundaries have many duplicate values within the data row. As all keys with a certain value
have to go into the same partition, a key with many duplicates can lead to a partition that is
bigger than the other partitions.
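The effect of duplicates on the new boundaries can be sketched as follows. This is a simplified model of the redistribution — DB2 actually works with expected page counts and free space, not raw row counts — and the function name is hypothetical:

```python
def rebalance_limit_keys(keys, num_parts):
    """Pick new limit keys so rows are spread evenly across num_parts.
    Rows with the same key value must land in the same partition, so a
    boundary is pushed past any run of duplicates (simplified model)."""
    keys = sorted(keys)                  # sort by the partitioning column
    per_part = len(keys) / num_parts     # ideal rows per partition
    limits = []
    for part in range(1, num_parts):
        target = round(part * per_part)
        while target < len(keys) and keys[target] == keys[target - 1]:
            target += 1                  # keep duplicates together
        limits.append(keys[target - 1])
    return limits

print(rebalance_limit_keys(list(range(1, 9)), 4))   # even split: [2, 4, 6]
print(rebalance_limit_keys([1, 1, 1, 1, 2, 3], 2))  # duplicates skew: [1]
```

In the second call, all four rows with key 1 must stay in partition 1, which is why that partition ends up larger than the other.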


You cannot specify the keyword REBALANCE together with SHRLEVEL CHANGE, nor
with SCOPE PENDING, OFFPOSLIMIT, INDREFLIMIT, REPORTONLY, UNLOAD ONLY,
and UNLOAD EXTERNAL keywords.
When using REBALANCE on a table where the clustering index does not match the
partitioning key, REORG must be run twice on the partition range to ensure that the rows
are in optimal clustering order. The first reorganization (with the REBALANCE option)
moves data rows to the appropriate partition.
After the first reorganization, the table space is put into an AREO* state to indicate that
another reorganization is recommended. The second reorganization (without the
REBALANCE keyword) orders each data row in clustering order (based on the partitioning
index) within the appropriate partition.
You cannot specify the REBALANCE option for partitioned table spaces with LOB columns.
Be aware that when the clustering sequence does not match the partitioning sequence,
REORG must be run twice; once to move rows to the correct partition and again to sort in
clustering sequence. DB2 will leave the table space in AREO* (Advisory Reorg Pending)
state after the first reorganization, indicating that a second one is recommended.
Note: The partition range that you specify on the REORG REBALANCE statement is a
range of physical partitions. When rebalancing, DB2 unloads, sorts, and reloads the data
based on logical partition numbers. If, because of earlier partition rotations or adding
additional partitions, logical and physical partitions no longer match up, REORG
REBALANCE may not be possible. When reorganizing, the physical partition numbers
associated with the logical partition number that DB2 uses to reload data into, must be
within the physical partition range that you specify on the REORG REBALANCE
statement; otherwise you receive a DSNU1129I message, and REORG terminates.

8-36 DB2 UDB for z/OS V8 Transition © Copyright IBM Corp. 2004
Course materials may not be reproduced in whole or in part without the prior
written permission of IBM.
V3.1
Student Notebook

REORG Rebalance and
ALTER + REORG Comparison
REORG TABLESPACE ... REBALANCE works out new partitioning
values for you
One step automated process
Supported with SHRLEVEL REFERENCE and NONE
Data available (read only) almost all of the time
Not supported for table spaces with LOB columns
ALTER INDEX or ALTER TABLE ALTER PART
Gives you more control to allow for future skewed growth
Leaves affected partitions in REORP
Data unavailable until REORG completes


Figure 8-14. REORG Rebalance and ALTER + REORG Comparison CG381.0

Notes:
Assume that a table space that contains a transaction table named TRANS is divided into
10 partitions, and each partition contains one year of data. Partitioning is defined on the
transaction date, and the limit key value is the end of the year.

Manually Changing the Partition Boundaries


Assume that the year 2003 resulted in more data than was projected, so that the allocation
for partition 10 almost reaches its maximum of 4 GB. The year 2002, on the other hand,
resulted in less data than was projected. Therefore, we want to change the boundary
between partition 9 and partition 10 so that some of the data in partition 10 becomes part of
the data in partition 9.
To change the boundary, issue the following statement:
ALTER TABLE TRANS ALTER PART 9 VALUES ('03/31/2003');


Now the data in the first quarter of the year 2003 will be part of partition 9. The partitions on
either side of the new boundary (partitions 9 and 10) are placed in REORG-pending
(REORP) status and are not available until the partitions are reorganized.
Note that we use a table controlled partitioned table space in this example. When using an
index controlled partitioned table space, you could have used this statement as well:
ALTER INDEX IXTRANS PART 9 VALUES('03/31/2003');

Using REORG REBALANCE to Spread Data


Alternatively, you can rebalance the data in partitions 9 and 10 by using the REBALANCE
option of the REORG utility:
REORG TABLESPACE dbname.tsname PART(9:10) REBALANCE
This method avoids putting the partitions in a REORP state, and making the data
unavailable for applications until the REORG has completed. When you use REORG
SHRLEVEL REFERENCE to rebalance the partitions, the data is available for readers
almost all of the time; the data is unavailable only during the switch phase. It is also
during the switch phase that DB2 updates the limit keys to reflect the new partition
boundaries.
Note: When altering the partition boundaries manually, it is your job to determine the new
partition boundaries. When using REBALANCE, DB2 will figure out the new partition
boundaries for you.

8-38 DB2 UDB for z/OS V8 Transition © Copyright IBM Corp. 2004
Course materials may not be reproduced in whole or in part without the prior
written permission of IBM.
V3.1
Student Notebook

8.5 REORG TABLESPACE Enhancements


REORG TABLESPACE - SCOPE PENDING


Reorganizes only the table space part(s) that are in
REORG-pending (REORP) or advisory REORG-pending state
(AREO*)
When specifying a partition range, the adjacent high and low
parts that are not included in the range must not be in REORP
SYSCOPY records are only written for those partitions that are
actually reorganized


Figure 8-15. REORG TABLESPACE - SCOPE PENDING CG381.0

Notes:
As mentioned before, many enhancements have been implemented across various utilities
to improve utility usability. This is another example.
REORG TABLESPACE is extended with the new keyword SCOPE to indicate the scope of
the reorganization for the table space or partition range. The default is SCOPE ALL, which
results in the reorganization of the entire table space or the partition range. When you
specify SCOPE PENDING, you indicate that only the partitions in a REORP or AREO*
state for a specified table space or partition range are to be reorganized.
If you want to reorganize a range of partitions and specify SCOPE PENDING, make sure
that the adjacent partitions, outside the specified range, are not in a REORP state. The
REORG terminates with an error otherwise.
Rows are inserted into the SYSCOPY catalog table only for those partitions that are
reorganized.
You cannot specify SCOPE PENDING together with the REBALANCE, OFFPOSLIMIT,
INDREFLIMIT, REPORTONLY, UNLOAD ONLY, or UNLOAD EXTERNAL keywords.
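For instance, to reorganize only the pending partitions in a range while keeping the data readable, the control statement could look like this sketch (the database and table space names are placeholders):

```
REORG TABLESPACE dbname.tsname PART 2:14
SHRLEVEL REFERENCE SCOPE PENDING
```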


REORG TABLESPACE -
SCOPE PENDING Example 1
REORG TABLESPACE ... SCOPE PENDING
ts       DBET state
Part 1   Jan_1998   (none)
Part 2   Feb_1998   AREO*
Part 3   Mar_1998   AREO*
Part 13  Oct_2002   REORP
Part 14  Nov_2002   REORP
Part 15  Dec_2002   (none)

Parts 2, 3, 13, and 14 will be reorganized.


Figure 8-16. REORG TABLE - SCOPE PENDING Example 1 CG381.0

Notes:
In this example, SCOPE PENDING causes only partitions 2, 3, 13, and 14, which are either
in REORP or AREO* state, to be reorganized.


REORG TABLESPACE -
SCOPE PENDING Example 2
REORG TABLESPACE ... SCOPE PENDING PART 2:14
ts       DBET state
Part 1   Jan_1998   (none)
Part 2   Feb_1998   AREO*
Part 3   Mar_1998   AREO*
Part 13  Oct_2002   REORP
Part 14  Nov_2002   REORP
Part 15  Dec_2002   REORP

Adjacent Part 15 in REORP blocks the REORG: message DSNU271I is issued for Part 15.


Figure 8-17. REORG TABLESPACE - SCOPE PENDING Example 2 CG381.0

Notes:
In this example, partitions 2, 3, 13, 14, and 15 are either in REORP or AREO* state.
Specifying REORG TABLESPACE ... SCOPE PENDING would cause all these partitions to
be reorganized. However, we specify SCOPE PENDING, PART 2:14 instead. Since the
adjacent partition 15 is not included in the partition range to be reorganized (2:14), but has
a REORP state, REORG terminates with DSNU271I message and a return code of 8.
DSNU271I ... REORG PENDING ON FOR TABLE SPACE ...PART 15 PROHIBITS PROCESSING
The message indicates that an attempt was made to execute a REORG utility to
redistribute data in a partitioned table space. The partition number stated in the message
was found to have the REORP state on, but was not specified on the PART 2:14 partition
range parameter of the REORG utility.
It is, of course, not required that you reorganize all partitions that are in AREO* or REORP
state at the same time. In this example, there are two such ranges: partitions 2 to 3 and
partitions 13 to 15. You can specify only the first range, for example, REORG
TABLESPACE ... SCOPE PENDING PART 2:3, without reorganizing partitions 13 to 15 at
the same time.


REORG TABLESPACE ... DISCARD
Now supported by REORG ... SHRLEVEL CHANGE
During the window when discarding the data rows in REORG,
these data rows cannot be modified
If a data row that matches the discard criteria gets updated
while REORG is in the process of discarding, REORG stops
with RC=8 and a DSNU1127I message


Figure 8-18. REORG TABLESPACE ... DISCARD CG381.0

Notes:
Since DB2 V5, you can specify the DISCARD keyword as part of the REORG
TABLESPACE utility control statement, to indicate that rows that meet the specified WHEN
conditions should be discarded during the reorganization. However, specifying the
DISCARD keyword is only allowed with REORG SHRLEVEL NONE and SHRLEVEL
REFERENCE.
In Version 8, you can also specify the DISCARD keyword with REORG SHRLEVEL
CHANGE. However, if you use discard processing with a SHRLEVEL CHANGE REORG,
while DB2 is discarding data rows (during the UNLOAD phase), modifications to data rows
that match the discard criteria are not permitted. When REORG TABLESPACE encounters
such a situation, the utility execution is terminated with an error (condition code 8) and a
DSNU1127I message.
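A control statement combining both options might look like the following sketch; the object names and the WHEN condition are placeholders, and remember that SHRLEVEL CHANGE also requires a mapping table:

```
REORG TABLESPACE dbname.tsname
SHRLEVEL CHANGE MAPPINGTABLE maptbl
DISCARD FROM TABLE tablename
WHEN (TRANS_DATE < '2003-01-01')
```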



8.6 REBUILD INDEX Enhancements


REBUILD INDEX - SCOPE PENDING


Rebuilds only the index or part(s) that are in rebuild-pending
(RBDP), recovery pending (RECP), or advisory REORG pending
(AREO*) state
Unlike REORG TABLESPACE, the adjacent high and low parts not
included in the range are not checked for RBDP

You can specify the index space name, instead of the index name
in the following utilities:
REBUILD INDEX ... or REBUILD INDEXSPACE ...
REORG INDEX ... or REORG INDEXSPACE...
RECOVER INDEX ... or RECOVER INDEXSPACE ...


Figure 8-19. REBUILD INDEX - SCOPE PENDING CG381.0

Notes:
In DB2 V8, similar to REORG, the REBUILD INDEX syntax has been extended with the
new SCOPE keyword to indicate the scope of the rebuild. The default is SCOPE ALL,
which rebuilds all the specified indexes. You specify SCOPE PENDING to indicate that
the specified indexes should be rebuilt only if they are in RBDP, RECP, or AREO*
state.
Unlike REORG TABLESPACE, the adjacent high and low parts not included in the range
are not checked for the RBDP state.
In V8, REBUILD INDEX also accepts index space names, instead of just index names, as
the name of the object to be rebuilt. This enhancement also applies to RECOVER
INDEX and REORG INDEX.


REBUILD INDEX - SCOPE PENDING Example

REBUILD INDEX ... SCOPE PENDING PART 2:14

   Partition   Limit key   DBET state
   Part 1      Jan_1998    -
   Part 2      Feb_1998    RBDP   <-- rebuilt
   Part 3      Mar_1998    RBDP   <-- rebuilt
   ...
   Part 13     Oct_2002    RBDP   <-- rebuilt
   Part 14     Nov_2002    RBDP   <-- rebuilt
   Part 15     Dec_2002    RBDP       (not in range, not rebuilt)

   REBUILD rebuilds parts 2, 3, 13, and 14


Figure 8-20. REBUILD INDEX - SCOPE PENDING Example CG381.0

Notes:
In this example, we use the SCOPE PENDING keyword with the REBUILD INDEX utility. As
a result, only partitions 2, 3, 13, and 14, which are in RBDP state, are rebuilt. Note
that, unlike SCOPE PENDING with REORG, REBUILD INDEX ignores partition 15, which is
in RBDP but was not included in the range of partitions to rebuild.



8.7 COPY Enhancements


COPY Enhancements

COPY  { LIST listdef-name
      | table-space-spec  [ IC-spec ] [ DSNUM-spec ] [ data-set-spec ]
      | index-name-spec }
      copy-options-spec

copy-options-spec:

      SYSTEMPAGES  { YES | NO }        (YES is the default)


Figure 8-21. COPY Enhancements CG381.0

Notes:
In V8, the DB2 COPY utility (full image copy and incremental image copy) is extended with
a new keyword called SYSTEMPAGES. You can specify either YES (this is the default) or
NO.
The SYSTEMPAGES option applies to both table spaces and indexes (defined with
COPY YES).
The visual shows the syntax.


COPY Utility SYSTEMPAGES Option
Indicates whether the dictionary and version system pages are
copied at the beginning of the object to be copied. Especially
important when:
Copying a piece, or a single data set of a table space or index
Using incremental image copy and those system pages have not changed
With SYSTEMPAGES YES
Dictionary pages for the compression dictionary are included
V8 system pages that contain version information are included
Both are included at the beginning of the image copy
Version pages can occur multiple times in the image copy
With SYSTEMPAGES NO
Copy pages as they appear in the object (pre-V8)
Header page is always included no matter what SYSTEMPAGES
option is used
Note that for SYSTEMPAGES YES, the UNLOAD utility can
process image copies with data versioning


Figure 8-22. COPY Utility SYSTEMPAGES Option CG381.0

Notes:
System pages are pages within a table space that describe the data in that table space,
partition, or index.
Specifying SYSTEMPAGES YES ensures that any header, dictionary, and version system
pages are copied at the beginning of the image copy data set, so that the image copy
contains the system pages that subsequent UNLOAD utility jobs need to correctly
format and unload all data rows from the image copy.
This is especially important when using incremental image copies, in cases where the
system pages have not changed, or when copying only a single data set or piece.
Specifying SYSTEMPAGES NO does not ensure that the dictionary and version system
pages are copied at the beginning of the image copy data set. The COPY utility copies
the pages in the order in which they occur in the table space or index, including the
header pages.


Irrespective of the SYSTEMPAGES option, in V8, the header page is always included in
the image copy.
The CHECKPAGE option also validates system pages.


8.8 REPAIR Enhancements


In this topic, we discuss the changes to the REPAIR utility.


REPAIR - Switch Off New Status

To switch off the new AREO* status:

REPAIR SET TABLESPACE table-space-spec NOAREORPENDSTAR
REPAIR SET INDEX ( index-name ) NOAREORPENDSTAR
REPAIR SET INDEX ( ALL ) TABLESPACE table-space-spec NOAREORPENDSTAR

Example:

REPAIR SET TABLESPACE DELIMITD.DELIMITS NOAREORPENDSTAR


Figure 8-23. REPAIR - Switch Off New Status CG381.0

Notes:
The REPAIR utility has been enhanced to reset the new advisory REORG pending state
(AREO*) for table spaces and index spaces. The new keyword introduced to accomplish
this is NOAREORPENDSTAR. For more information about AREO*, see Figure 2-85,
"DBET States Used by Online Schema Evolution", on page 2-126.


REPAIR - Use for Versions
Use for
Moving objects across subsystems with DSN1COPY
Recycling versions for objects that are not reorganized using the IBM
REORG utility

REPAIR VERSIONS  { TABLESPACE [ database-name. ] table-space-name
                 | index-name-spec }

Examples:

REPAIR VERSIONS TABLESPACE SYSTEMBD.SYSTEMBS
REPAIR VERSIONS INDEXSPACE SYSTEMBD.PIX


Figure 8-24. REPAIR - Use of Versions CG381.0

Notes:
REPAIR VERSIONS updates the version information in the catalog and directory from the
information in the table space or index. Use REPAIR VERSIONS in the following
situations:
• After you use DSN1COPY to move objects from one system to another, or within a
subsystem.
• As part of version number management for objects that are not reorganized with the
IBM REORG utility. Whether this is necessary depends on how the non-IBM utilities
handle versions and what information they record in the DB2 catalog tables. You may
have to use REPAIR VERSIONS to update the versions in the catalog and directory in
such cases.
REPAIR VERSIONS also writes a SYSCOPY record with an STYPE value of 'V' that the
MODIFY utility can use for reclaiming version numbers. In the SYSCOPY entry, the
OLDEST_VERSION column contains the lowest version found within the active object.


DSN1COPY Processing with Versions


1. Ensure object definitions are the same in source and target
2. REORG if necessary
3. Ensure there are enough versions available on the target
4. Run DSN1COPY with OBIDXLAT
5. Run REPAIR VERSIONS on target object
Updates CURRENT_VERSION with
MAX(target.CURRENT_VERSION, source.CURRENT_VERSION)
Updates OLDEST_VERSION with
MIN(target.OLDEST_VERSION, source.OLDEST_VERSION)


Figure 8-25. DSN1COPY Processing with Versions CG381.0

Notes:
When objects are moved from one system to another and contain version system pages,
the version information on the target system’s catalog must match the source versions in
the physical objects for the data to be accessible. You should follow the process outlined
below:
1. Ensure that the current object definitions on the source and target systems are defined
the same. For table spaces, each table must have the same number of columns, and
each column must be of the same data type. This includes things like compression and
SEGSIZE as well. In addition, if a non-version generating ALTER TABLE ADD
COLUMN has been executed, an ALTER TABLE ADD COLUMN must also be
performed on the target object. (This is no different from V7 DSN1COPY processing.)
Indexes that are copied may or may not have been altered in V8. If not altered in V8,
CURRENT_VERSION and OLDEST_VERSION contain zeros for both the source and
target systems.
2. Reorganize any object that has OLDEST_VERSION of 0 and CURRENT_VERSION
that is greater than 0 so that the versions match. This is to remove so-called V0 rows


from the source table. Their description is not in the system pages, but in SYSOBDS.
However, when copying between subsystems using DSN1COPY, only the physical data
set is copied, not the SYSOBDS information.
3. Ensure that there are enough versions available on the target. For a table space, the
combined active number of versions for the object on both the source and target
systems must be less than 255. For an index, the combined active number of versions
must be less than 16.
The active number of versions can be calculated as follows:
For the object on both source and target systems:
If the CURRENT_VERSION is less than the OLDEST_VERSION, add the maximum
number of versions (255 or 16) to CURRENT_VERSION. (This takes into account the
fact that the version number just cycled past the maximum.)
Calculate the number of active versions as follows:
#active_versions = MAX(target.CURRENT_VERSION,source.CURRENT_VERSION) -
MIN(target.OLDEST_VERSION,source.OLDEST_VERSION) + 1;
If the number of active versions is too high, first reorganize the entire source and
target objects, take image copies, and run MODIFY to reclaim versions.
4. Run DSN1COPY, most likely using the OBIDXLAT option, specifying the proper
mapping of table OBIDs from the source to the target system.
5. When on the target system, run REPAIR VERSIONS specifying the object which was
copied over.
- For table spaces, this utility updates:
• OLDEST_VERSION and CURRENT_VERSION in SYSTABLESPACE
• OLDEST_VERSION in SYSTABLEPART
• VERSION in SYSTABLES.
- For indexes, this utility updates:
• OLDEST_VERSION and CURRENT_VERSION in SYSINDEXES
• OLDEST_VERSION in SYSINDEXPART.
REPAIR uses the following formulas to update the version numbers:
CURRENT_VERSION = MAX(target.CURRENT_VERSION,source.CURRENT_VERSION)
OLDEST_VERSION = MIN(target.OLDEST_VERSION,source.OLDEST_VERSION)
More information about versioning can be found in Figure 2-57, "Versioning", on page 2-86.
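The version arithmetic in steps 3 and 5 can be sketched in Python. This is purely an illustrative sketch of the formulas above, not IBM-supplied code; the function and parameter names are invented for this example.

```python
# Illustrative sketch (not IBM code) of the version arithmetic above.
# max_versions is 255 for table spaces and 16 for indexes.

def active_versions(src_cur, src_old, tgt_cur, tgt_old, max_versions=255):
    """Combined number of active versions on source and target (step 3).

    Must stay below max_versions before running DSN1COPY.
    """
    # Account for version numbers that cycled past the maximum
    if src_cur < src_old:
        src_cur += max_versions
    if tgt_cur < tgt_old:
        tgt_cur += max_versions
    return max(tgt_cur, src_cur) - min(tgt_old, src_old) + 1

def repair_versions(src_cur, src_old, tgt_cur, tgt_old):
    """(CURRENT_VERSION, OLDEST_VERSION) that REPAIR VERSIONS
    records on the target system (step 5)."""
    return max(tgt_cur, src_cur), min(tgt_old, src_old)
```

For example, with a source object at CURRENT_VERSION=10 and OLDEST_VERSION=5, and a target at CURRENT_VERSION=100 and OLDEST_VERSION=15, the combined count is 100 - 5 + 1 = 96 active versions (safely below 255), and REPAIR VERSIONS records CURRENT_VERSION=100 and OLDEST_VERSION=5 on the target.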


REPAIR Versions Example

SYSTEMA                               CATALOG SYSTEMA
  Table STATE                           SYSTABLESPACE  OLDEST_VERSION=5
    NAME        CHAR(20)                               CURRENT_VERSION=10
    CAPITAL     CHAR(18)                SYSINDEXES     OLDEST_VERSION=5
    AREA        INTEGER                                CURRENT_VERSION=5
    AREA_RANK   SMALLINT                VERSION (SYSTABLES)
    POPULATION  INTEGER
    LOCATION    CHAR(6)
    CLIMATE     CHAR(8)

SYSTEMB                               CATALOG SYSTEMB
  Table STATE                           SYSTABLESPACE  OLDEST_VERSION=15
    (same column definitions                           CURRENT_VERSION=100
    as on SYSTEMA)                      SYSINDEXES     OLDEST_VERSION=10
                                                       CURRENT_VERSION=10
                                        VERSION (SYSTABLES)


Figure 8-26. REPAIR Versions Example CG381.0

Notes:
Notice that the current object definitions on both the target and source systems SYSTEMA
and SYSTEMB are defined the same.


8.9 Utility Changes to Support Online Schema Evolution


Utility Changes to
Support Informational RI Constraints
LOAD and CHECK DATA do not check informational RI
constraints
REPORT TABLESPACESET also reports all table spaces
related to the named table space through informational RI
constraints
QUIESCE TABLESPACESET also quiesces all table spaces
related to the named table space through informational RI
constraints
LISTDEF also includes all table spaces related through
informational RI constraints when the keyword RI is
specified


Figure 8-27. Utility Changes to Support Informational RI Constraints CG381.0

Notes:
Informational RI is introduced in DB2 Version 8 as part of the Materialized Query Tables
implementation. It allows you to define referential integrity (RI) relationships that exist in the
DB2 catalog, but are not enforced by DB2. You do so by defining foreign key relationships
with the NOT ENFORCED keyword. For more details, see “Informational RI Syntax” on
page 9-41.
Example 8-1 illustrates an informational RI constraint.
Example 8-1. Defining Informational RI

CREATE DATABASE INFORIDB;


CREATE TABLE INFORIT1(C1 INTEGER NOT NULL PRIMARY KEY,
C2 INTEGER)
IN DATABASE INFORIDB;
CREATE UNIQUE INDEX INFORIX1 ON INFORIT1(C1);
CREATE TABLE INFORIT2(C1 INTEGER,
C2 INTEGER REFERENCES INFORIT1 NOT ENFORCED)
IN DATABASE INFORIDB;
INSERT INTO INFORIT1 VALUES(10,20);
INSERT INTO INFORIT1 VALUES(30,40);


INSERT INTO INFORIT2 VALUES(50,60);


INSERT INTO INFORIT2 VALUES(70,80);
______________________________________________________________________
The INSERT statements run successfully because of the keyword NOT ENFORCED.
Instead of using INSERT statements, the LOAD utility can also be used. A LOAD will also
run successfully since the LOAD utility does not check informational RI constraints.
Note that if the keyword ENFORCED is specified (the default), the INSERT statements and
the LOAD utility check the RI constraints, and the inserts into INFORIT2 would have
failed, because no primary key entry exists in INFORIT1 for the values 60 and 80.
Now we add a check constraint on each of these two tables as follows:
ALTER TABLE INFORIT1 ADD CHECK (C1 > 1)
ALTER TABLE INFORIT2 ADD CHECK (C1 > 1)
This causes the table spaces for the two tables to be placed in CHKP state.
Run the CHECK DATA utility with the following control statement:
CHECK DATA TABLESPACE INFORIDB.INFORIT1
TABLESPACE INFORIDB.INFORIT2
The CHECK DATA utility runs successfully and resets the CHKP state because it does not
check informational RI constraints. In this case, CHECK DATA only evaluates the check
constraints, which all rows satisfy.
When you run the REPORT utility with the following control statement:
REPORT TABLESPACESET TABLESPACE INFORIDB.INFORIT1
The REPORT utility reports on both tables INFORIT1 and INFORIT2, taking
informational RI into account.
You can run the QUIESCE utility with the following control statement:
QUIESCE TABLESPACESET TABLESPACE INFORIDB.INFORIT1
This establishes the quiesce point for the table spaces of both tables. An
application that performs its own RI checking can still take advantage of defining
informational RI in DB2: using informational RI and QUIESCE TABLESPACESET gives you
an easy way to quiesce a set of application-related objects, without having to track
them individually and make sure you include all of them in the QUIESCE utility.
LISTDEF can also exploit informational RI. You can run the LISTDEF utility with the
following control statement:
LISTDEF LIST1 INCLUDE TABLESPACE INFORIDB.INFORIT1 RI
And LISTDEF includes the table spaces for the two tables in the generated list.


Utility Changes to Support DPSI (1 of 6)


CHECK DATA
When run against entire partitioned table space scans DPSI, extracts keys, and sorts
keys
When run against a partition scans partition of DPSI corresponding to table partition,
extract keys, and skips sort
CHECK INDEX
When run with PART keyword checks specified partition of DPSI
COPY
Supports specifying a partition of DPSI with DSNUM keyword
LISTDEF
PARTLEVEL keyword specifies the partition granularity for partitioned objects has been
extended to DPSIs in V8
TEMPLATE
Templates created for DPSIs may wish to make use of the &PA OBJECT variable
QUIESCE
Drain classes and restrictive states for DPSIs mirror PIs:
WRITE YES: Partitions are DW / UTRO
WRITE NO: No drains or restrictive states
NPSIs are DW / UTRO during WRITE YES for table space/partition

Figure 8-28. Utility Changes to Support DPSI (1 of 6) CG381.0

Notes:
In most cases, DPSIs are treated like partitioning indexes in V7 (or partitioned
partitioning indexes in V8). We now look at each of the DB2 utilities in more detail.

CHECK DATA
The SCANTAB phase of CHECK DATA extracts foreign keys:
• When run against an entire partitioned table space, if the foreign key index is a
data-partitioned secondary index (DPSI), the index is scanned and foreign keys
extracted. The extracted foreign keys are then sorted in a SORT phase, before the keys
can be used to check for a matching primary key in the parent table.
• When run against a partition of a partitioned table space, if the foreign key index is a
data-partitioned secondary index, the partition of the index corresponding to the target
data partition is scanned and the foreign keys are extracted. The SORT phase is
skipped, as it is not required. All extracted keys are in the correct order since they
all come from a single part of the DPSI.


CHECK INDEX


If the PART keyword is specified with the CHECK INDEX utility, and the index is a
data-partitioned secondary index, the specified physical index partition is tested for
consistency.

COPY
The DSNUM keyword of the COPY utility may specify a partition of a data-partitioned
secondary index (provided that the index is copy-enabled, of course).

LISTDEF
The PARTLEVEL keyword of the LISTDEF utility statement specifies the partition
granularity for partitioned objects in LISTDEF's list of objects. With the introduction of
data-partitioned secondary indexes, this keyword now extends to those objects as well.
The keyword continues to be ignored for non-partitioned objects.

TEMPLATE
This utility statement is unaffected by the introduction of DPSIs, but note that templates
created for DPSIs may wish to make use of the &PA OBJECT variable.

QUIESCE
The drain classes and restrictive states used for DPSIs mirror those of PIs. That is,
pertinent partition(s) are DW/UTRO during QUIESCE WRITE YES, and there are no drains
or restrictive states during QUIESCE WRITE NO. For non-partitioned secondary indexes
(NPSIs), the entire index is DW/UTRO during QUIESCE WRITE YES operations against
the table space or any partition of the table space.
DW (Drain Writers) means draining the write class, permitting concurrent access for SQL
readers, but not allowing any updaters to access the object. UTRO represents the utility
restrictive state allowing read only access on the target object.


Utility Changes to Support DPSI (2 of 6)


LOAD
LOAD loads records into tables and builds or extends any
indexes defined on the tables.
SORT phase is skipped if all of the following apply:
No more than one key per table
All keys are the same type:
Index key
Indexed foreign key
Foreign key
Data is loaded in key order
Data being loaded is grouped by table and each input record
is loaded into one table only


Figure 8-29. Utility Changes to Support DPSI (2 of 6) CG381.0

Notes:
The LOAD utility loads records into tables and builds or extends any indexes defined on the
tables accordingly. LOAD proceeds in a series of phases. The sort phase is the only phase
impacted by the data-partitioned secondary indexes.

SORT PHASE
During the sort phase, temporary file records are sorted in preparation for index
maintenance or referential constraint enforcement, if indexes or foreign keys exist. The
SORT phase is skipped if all of the following apply:
• There is not more than one key per table
• All keys are of the same type:
- Index key only
- Indexed foreign key
- Foreign key only
• The data being loaded is in key order (if a key exists):


Uempty - If the key in question is an index key only and the index is a data-partitioned
secondary index, the data is considered to be in order if the data is grouped by
partition, and ordered within partition by key value.
- If the key in question is an indexed foreign key and the index is a data-partitioned
secondary index, the data is never considered to be in order.
• The data being loaded is grouped by table and each input record is loaded into one
table only.
Because of the changes to the SORT phase, the calculations to estimate the size of
LOAD's work data sets are modified.

LOAD's Work Data Sets:


When calculating the size of work data sets for LOAD, you have to perform the following
calculations:
• Calculating the key, k
• Calculating the number of keys extracted
The presence of data-partitioned secondary indexes on any table targeted by a given
LOAD influences both these calculations.
Calculating the key, k:
If there is a mix of DPSIs and non-partitioned indexes on the table being LOADed, or if
there is a foreign key that is exactly indexed by a DPSI, take:
Max(longest index key + 15, longest foreign key + 15) x (number of keys extracted)
Otherwise, take:
Max(longest index key + 13, longest foreign key + 13) x (number of keys extracted)
Calculating the number of keys extracted:
For each foreign key that is exactly indexed (that is, where foreign key and index definitions
correspond identically):
• Count 0 for the first relationship in which the foreign key participates if the index is not a
DPSI. Count 1 if the index is a DPSI.
• Count 1 for subsequent relationships in which the foreign key participates (if any).
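The sort key size and extracted-key counting rules above can be sketched in Python. This is an illustrative aid only; the function and parameter names are invented for this example and do not appear in the LOAD utility.

```python
# Illustrative sketch (not IBM code) of the LOAD work data set
# arithmetic described above.

def sort_key_bytes(longest_index_key, longest_foreign_key,
                   keys_extracted, dpsi_involved):
    """Size of the sort key, k.

    dpsi_involved is True when there is a mix of DPSIs and
    non-partitioned indexes on the table being loaded, or a foreign
    key exactly indexed by a DPSI (pad of 15); otherwise the pad is 13.
    """
    pad = 15 if dpsi_involved else 13
    return max(longest_index_key + pad,
               longest_foreign_key + pad) * keys_extracted

def exact_fk_keys(num_relationships, index_is_dpsi):
    """Keys extracted for one exactly indexed foreign key: the first
    relationship counts 0 (non-DPSI index) or 1 (DPSI); each
    subsequent relationship counts 1."""
    if num_relationships == 0:
        return 0
    first = 1 if index_is_dpsi else 0
    return first + (num_relationships - 1)
```

For instance, with a longest index key of 10 bytes, a longest foreign key of 8 bytes, 1,000 extracted keys, and a DPSI involved, the sort key space is (10 + 15) x 1000 = 25,000 bytes.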


Utility Changes to Support DPSI (3 of 6)


LOAD - 2

LOAD with an index in PAGE SET REBUILD PENDING


Page set REBUILD PENDING (PSRBD) on a non-partitioning IX blocks
LOAD REPLACE . . . PART on any partition
PSRBD applies only to non-partitioned indexes, not to DPSIs
Concurrency: DPSIs support complete concurrency between
partitions for LOAD PART jobs

Inline statistics

When loading by partition, a separate RUNSTATS INDEX can be


avoided by using inline statistics for DPSIs
(Extra RUNSTATS still needed if you have NPSIs)


Figure 8-30. Utility Changes to Support DPSI (3 of 6) CG381.0

Notes:
Other changes brought about by DPSIs that affect LOAD are discussed next.

LOAD with RECOVER PENDING or Page Set REBUILD PENDING Status


Prior to Version 8, a page set REBUILD PENDING status on a non-partitioning index
blocked the ability to LOAD REPLACE any partition of the table space. In V8, this
restriction applies only to non-partitioned indexes: partitioned indexes, including
DPSIs, are never placed in a page set REBUILD PENDING status.

Concurrency and Compatibility


Today's documented concurrency and compatibility of LOAD with respect to secondary
indexes applies only to non-partitioned secondary indexes (NPSIs). DPSIs support total
concurrency between partitions. The physical partitions of DPSIs are drained in the same
manner as the physical partitions of partitioned partitioning indexes, and there are no
logical claims or drains of RR applications from DPSIs during LOAD utility operation. Thus,


data-partitioned secondary indexes are a big boon to processing multiple LOAD PART
jobs concurrently.

Collecting Inline Statistics while Loading a Table


When you run the LOAD utility with the PART keyword, you can collect inline
statistics on the partitions of the DPSI corresponding to the partitions being loaded,
just as you do for partitioning index partitions in V7. There is no need to run
RUNSTATS separately. However, you cannot collect inline statistics on NPSIs when you
run the LOAD utility with the PART keyword, and therefore you still have to run
RUNSTATS separately to collect statistics for NPSIs.


Utility Changes to Support DPSI (4 of 6)


RECOVER
DSNUM may specify:
A partition of a partitioned table space or partitioned index
A data set within a non-partitioned table space
DSNUM may not specify a single data set or logical partition
of a non-partitioned index
REPAIR
PART may now specify a partition of a DPSI
REPORT
DSNUM may now specify a partition of a DPSI
RUNSTATS
May be run against single partitions of table spaces or indexes
(including DPSIs). Partition-level statistics are used to update
aggregate statistics for the entire object.


Figure 8-31. Utility Changes to Support DPSI (4 of 6) CG381.0

Notes:

RECOVER
In DB2 Version 8, the DSNUM option of the RECOVER utility may be used to specify:
• A partition of a partitioned table space
• A partition of a partitioned index, including a data-partitioned secondary index (this is
new)
• A data set within a non-partitioned table space
However, DSNUM may not be specified (at the index level) for:
• A single data set of a non-partitioned index
• A logical partition of a non-partitioned index

Concurrency and Compatibility


When partitions of a DPSI are recovered, each partition being recovered is placed in the
restrictive state DA/UTUT — where DA (Drain All) causes draining of all claim classes (that


is, no concurrent SQL access is possible), and UTUT represents the utility restrictive state
whereby the utility has exclusive control on the target object. The concurrency and
compatibility characteristics for DPSIs are the same as those for PIs.

REPAIR
You can use the PART keyword with the REPAIR utility to specify a partition of a DPSI.

REPORT
In V8, the DSNUM keyword of REPORT utility statement may specify a partition of a DPSI.

RUNSTATS
There are two formats for the RUNSTATS utility: RUNSTATS TABLESPACE and
RUNSTATS INDEX:
• RUNSTATS TABLESPACE gathers statistics on a table space and, optionally, indexes
or columns.
• RUNSTATS INDEX only gathers statistics on indexes.
You can run RUNSTATS against a single partition of a partitioned table space or
partitioned index (including DPSIs). When you run RUNSTATS against a single partition
of an object, the partition-level statistics that are gathered are used to update the
aggregate statistics for the entire object.


Utility Changes to Support DPSI (5 of 6)


REBUILD INDEX
Recreates indexes / index partitions from the table / table partitions
that they reference
REBUILD INDEX . . . PART
PI or DPSI: Recreates the physical partition
NPSI: Recreates the logical partition
Multiple partitions of DPSI can be rebuilt in parallel
Concurrency:
For a DPSI, each partition being rebuilt is DA / UTUT
For a NPSI, each (logical) partition being rebuilt is DR


Figure 8-32. Utility Changes to Support DPSI (5 of 6) CG381.0

Notes:
The REBUILD INDEX utility recreates indexes or index partitions from the table (or table
partitions) that they reference.
Physical partitions are recreated when the PART option is used and the target of the
REBUILD operation is a (partitioned) partitioning index or a DPSI. Logical partitions are
recreated when the PART option is used and the target of the REBUILD operation is a
non-partitioned secondary index.
REBUILD INDEX can rebuild one or more partitions of a DPSI through the use of the PART
keyword. By using the PART keyword, when only certain partitions of the index need to be
rebuilt, REBUILD INDEX avoids unnecessarily scanning the entire table space and
unnecessarily rebuilding undamaged partitions.

Estimating the Work File Size for Parallel Index Build


If you choose to provide work file data sets for the REBUILD INDEX utility (that is, not allow
them to be dynamically allocated), you need to estimate the size, and number of keys that


are present in all of the indexes or index partitions being processed by the subtask in order
to calculate the size of each sort work file.
When you have determined which indexes or index partitions are assigned to which
subtask pairs, use the following formula to calculate the space required:
2 x (longest index key + c) x (number of keys extracted)
Where:
- Longest index key: Determined as in prior versions
- Value for c: If a mix of DPSIs and non-partitioned indexes are being processed, c is
10; otherwise, c is 8.
- Number of keys extracted: Determined as in prior versions.
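The formula above can be sketched in Python as a quick planning aid. This is an illustration only, not IBM-supplied code; the names are invented for this example.

```python
# Illustrative sketch (not IBM code) of the sort work file estimate
# for a REBUILD INDEX subtask pair.

def rebuild_sort_work_bytes(longest_index_key, keys_extracted,
                            mixed_dpsi_and_npi):
    """2 x (longest index key + c) x (number of keys extracted),
    where c is 10 when the subtask processes a mix of DPSIs and
    non-partitioned indexes, and 8 otherwise."""
    c = 10 if mixed_dpsi_and_npi else 8
    return 2 * (longest_index_key + c) * keys_extracted
```

For example, a subtask pair processing a 20-byte key with 100 extracted keys and a mix of DPSIs and non-partitioned indexes needs 2 x (20 + 10) x 100 = 6,000 bytes of sort work space.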

REBUILD PENDING Statuses


The RBDP* and PSRBD statuses do not apply to DPSIs. They apply only to non-partitioned
secondary indexes.

Concurrency and Compatibility


When the partitions of a DPSI are rebuilt, each partition being rebuilt is DA/UTUT — where
DA (Drain All) causes draining of all claim classes (that is, no concurrent SQL access is
possible) and UTUT represents the utility restrictive state whereby the utility has exclusive
control on the target object.
When the logical partitions of a NPSI are rebuilt, each partition being rebuilt is DR — where
DR (Drain Repeatable Read claimers) causes draining of repeatable read claim classes
(that is, limited concurrent SQL access is possible).


Utility Changes to Support DPSI (6 of 6)


REORG ... PART
Reorganizes a table space partition (or a range), or indexspace
partition
REORG TABLESPACE PART n
REORG TABLESPACE PART n:m
REORG INDEX PART n
REORG phases affected:
SORT and SORTBLD require more space if there is a mix of DPSIs
and NPSIs
BUILD -- REORG PART SHRLEVEL(NONE)
DPSIs are rebuilt -- no contention
NPSIs are corrected -- contention is possible
BUILD2 only for NPSIs, not used with DPSIs


Figure 8-33. Utility Changes to Support DPSI (6 of 6) CG381.0

Notes:
REORG TABLESPACE PART n reorganizes the data for part n, reorganizes part n of all
partitioned indexes (including all DPSIs), and index entries for logical part n in all
non-partitioned indexes.
REORG TABLESPACE PART n:m reorganizes data for part n through part m, reorganizes
parts n through m of all partitioned indexes (including all DPSIs), and index entries for
logical parts n through m in all non-partitioned indexes.
REORG INDEX PART n reorganizes the part n of the index. REORG INDEX specifying the
PART keyword is only allowed when the index is physically partitioned, either a partitioned
partitioning index, or a DPSI.
The REORG utility operates in a series of phases. Of these, the SORT, BUILD, SORTBLD
and BUILD2 phases are impacted by the use of data-partitioned secondary indexes.


SORT, SORTBLD, and BUILD


The impact on the SORT and SORTBLD phases is with regard to the size required for the
work data sets used for sorting index entries. To calculate the approximate size required for
the work data set, follow these steps:
1. For each table or partition, multiply the number of records in the table or partition by the
number of indexes being rebuilt.
2. Add all the products obtained in step 1.
3. Multiply the sum (from step 2) by the largest key length plus a constant, c:
- If the indexes being rebuilt are a mix of DPSIs and non-partitioned indexes, c is 10.
- Otherwise, c is 8.
If you choose to provide work file data sets to build indexes in parallel (that is, not allow
them to be dynamically allocated), you need to know the size and number of keys that are
present in all of the indexes or index partitions being processed by the subtask in order to
calculate each sort work file size. When you have determined which indexes or index
partitions are assigned to which subtask pairs, use the following formula to calculate the
space required:
• 2 x (longest index key + c) x (number of keys extracted)
- Longest index key: Determined as in prior versions
- Value for c:
• If a mix of DPSIs and non-partitioned indexes are being processed, c is 10.
Otherwise, c is 8.
• Number of keys extracted: Determined as in prior versions.
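As a worked example with assumed numbers (two partitions of 1,000,000 and 1,500,000 rows, three indexes being rebuilt, a 40-byte longest key, and no mix of DPSIs and NPSIs, so c is 8):

```
Step 1: 1,000,000 x 3 = 3,000,000 and 1,500,000 x 3 = 4,500,000
Step 2: 3,000,000 + 4,500,000 = 7,500,000
Step 3: 7,500,000 x (40 + 8) = 360,000,000 bytes, roughly 344 MB
```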
The other impact on the SORTBLD and BUILD phases (depending on whether multiple
indexes are involved and parallel index build is invoked) concerns the performance of
secondary index processing during REORG PART. For DPSIs, the index parts are
rebuilt. For non-partitioned secondary indexes, the indexes are corrected (index key
update-like processing). Rebuilding a part of a partitioned index (partitioning or DPSI) is
much faster than index correction, and avoids any contention between parallel REORG
PART jobs.

BUILD2
Another very positive effect of using DPSIs is the impact on the BUILD2 phase during
online REORG of a partition. When you use REORG TABLESPACE on a partition, or a
partition range, with SHRLEVEL(REFERENCE) or SHRLEVEL(CHANGE), and you have a
non-partitioned index (or indexes) on the table space, you have a BUILD2 phase.
This phase corrects the index entries for the keys of the reorganized part(s) in the
non-partitioned index(es); while it runs, the entire non-partitioned index is unavailable.
For DPSIs, there is no BUILD2 phase when doing online REORG of a partition or a partition
range. Therefore, online REORG of a partition (or range) of a table space with only
partitioned indexes is much faster and causes no contention between multiple REORG jobs.

Concurrency and Compatibility


The concurrency and compatibility characteristics of DPSIs are the same as those of
partitioning indexes (PIs), and those of NPSIs are the same as those of the former NPIs.


8.10 Offline Utility (DSN1*) Enhancements


Stand-Alone Utility Changes


DSN1COMP
Retrieve row "as is" instead of converting to current version
DSN1PRNT
Recognize a system page and print in hex for the format option
Basic support for ASCII or UNICODE data conversion

PRINT [( hex-constant , hex-constant )] [EBCDIC | ASCII | UNICODE]
(EBCDIC is the default)

DSN1COPY
Handling of version system pages
CHECK option also checks system pages


Figure 8-34. Stand-Alone Utility Changes CG381.0

Notes:
Here we discuss stand-alone utility changes for DSN1COMP and DSN1PRNT.

DSN1COMP
DSN1COMP retrieves a row “as-is” when estimating the effects of compression on a table
space. There is no attempt to convert data to the latest version before compressing rows
and deriving a savings estimate.

DSN1PRNT
DSN1PRNT recognizes the table space’s new system pages (related to versioning). When
the FORMAT option is specified, details of fields within system pages are not identified with
formatted output. Rows on system pages are simply printed in a hex format. Page ranges
specified as input identify physical pages and may still be specified even when physical
partitions do not match the logical ordering.


In V7, DSN1PRNT (and DSN1COPY) do not support displaying (on the right hand side of
the printed report) ASCII or Unicode data. With V8, DB2 has been enhanced to include
support for displaying of data in ASCII or Unicode on the right hand side to assist in
problem determination. As the goal here is to assist in problem determination, full
translation support is not implemented. Accented characters or Unicode characters that are
greater than x’80’ are not translated. In addition, the same translate table is used for both
ASCII and Unicode (because they are the same for this range of code points). But even
with this limited support, rows are much easier to read than before, as illustrated in
Example 8-4. In the example we use a Unicode table in which we inserted a few simple
rows (Example 8-2).
Example 8-2. A Unicode Table with Data

CREATE DATABASE BSDBUNI;


CREATE TABLESPACE BSTSUNI IN BSDBUNI CCSID UNICODE;
CREATE TABLE BSDBUNI.TESTA
(COLA VARCHAR(200) NOT NULL)
CCSID UNICODE
IN BSDBUNI.BSTSUNI;
INSERT INTO BSDBUNI.TESTA VALUES(X'C2A7C2A7C2A7C2A7'); -- Paragraph signs in UTF-8
INSERT INTO BSDBUNI.TESTA VALUES('JÜRGEN'); -- Inserting accented character
INSERT INTO BSDBUNI.TESTA VALUES('THIS IS A TEST FOR UPPER CASE');
INSERT INTO BSDBUNI.TESTA VALUES('This is a test for lower case');
SELECT * FROM BSDBUNI.TESTA;
---------+---------+---------+---------+---------+---------+---------+-
COLA
---------+---------+---------+---------+---------+---------+---------+-
§§§§
JÜRGEN
THIS IS A TEST FOR UPPER CASE
This is a test for lower case
______________________________________________________________________
Then we display the data using DSN1PRNT with the EBCDIC option (this is what you
would see with previous DB2 versions). The result is shown in Example 8-3. As you can
see, it is not very readable.
Example 8-3. DSN1PRNT PARM=(PRINT,EBCDIC,FORMAT)

PAGE: # 00000002 -----------------------------------------------------------------------------------------


DATA PAGE: PGCOMB='10'X PGLOGRBA='00004D5E2DA9'X PGNUM='00000002'X PGFLAGS='00'X PGFREE=3961
PGFREE='0F79'X PGFREEP=125 PGFREEP='007D'X PGHOLE1='0000'X PGMAXID='04'X PGNANCH=4
PGTAIL: PGIDFREE='00'X PGEND='N'
ID-MAP FOLLOWS:
01 0014 0024 0033 0058

RECORD: XOFFSET='0014'X PGSFLAGS='02'X PGSLTH=16 PGSLTH='0010'X PGSOBD='0003'X PGSBID='01'X


0008C2A7 C2A7C2A7 C2A7 ..B.B.B.B.

RECORD: XOFFSET='0024'X PGSFLAGS='02'X PGSLTH=15 PGSLTH='000F'X PGSOBD='0003'X PGSBID='02'X


00074AC3 9C524745 4E ...C....+

RECORD: XOFFSET='0033'X PGSFLAGS='02'X PGSLTH=37 PGSLTH='0025'X PGSOBD='0003'X PGSBID='03'X


001D5448 49532049 53204120 54455354 20464F52 20555050 45522043 415345 ..................|...&&.......

RECORD: XOFFSET='0058'X PGSFLAGS='02'X PGSLTH=37 PGSLTH='0025'X PGSOBD='0003'X PGSBID='04'X


001D5468 69732069 73206120 74657374 20666F72 206C6F77 65722063 617365 ........../.......?..%?...../..


______________________________________________________________________
However, using the Unicode option on the DSN1PRNT execution, the result looks much
better (Example 8-4).
Example 8-4. DSN1PRNT PARM=(PRINT,UNICODE,FORMAT)

PAGE: # 00000002 -------------------------------------------------------------------------------------------


DATA PAGE: PGCOMB='10'X PGLOGRBA='00004D5E2DA9'X PGNUM='00000002'X PGFLAGS='00'X PGFREE=3961
PGFREE='0F79'X PGFREEP=125 PGFREEP='007D'X PGHOLE1='0000'X PGMAXID='04'X PGNANCH=4
PGTAIL: PGIDFREE='00'X PGEND='N'
ID-MAP FOLLOWS:
01 0014 0024 0033 0058

RECORD: XOFFSET='0014'X PGSFLAGS='02'X PGSLTH=16 PGSLTH='0010'X PGSOBD='0003'X PGSBID='01'X


0008C2A7 C2A7C2A7 C2A7 ..........@@

RECORD: XOFFSET='0024'X PGSFLAGS='02'X PGSLTH=15 PGSLTH='000F'X PGSOBD='0003'X PGSBID='02'X


00074AC3 9C524745 4E ..J..RGEN@@@

RECORD: XOFFSET='0033'X PGSFLAGS='02'X PGSLTH=37 PGSLTH='0025'X PGSOBD='0003'X PGSBID='03'X


001D5448 49532049 53204120 54455354 20464F52 20555050 45522043 415345 ..THIS IS A TEST FOR UPPER CASE@

RECORD: XOFFSET='0058'X PGSFLAGS='02'X PGSLTH=37 PGSLTH='0025'X PGSOBD='0003'X PGSBID='04'X


001D5468 69732069 73206120 74657374 20666F72 206C6F77 65722063 617365 ..This is a test for lower case@
______________________________________________________________________
The result is not perfect, but definitely much better than before.
Tip: To be as helpful as possible, DB2 tries to determine the best formatting option by
itself. If the first page in the input data set is a header page, DSN1PRNT uses the format
information in the header page as the default format. Therefore, if you do not specify the
EBCDIC, ASCII, or Unicode option, DB2 tries to determine the option itself (based on the
header page information). However, if you specify the option, it is honored.
The DSN1COPY print option is identical to the one used by DSN1PRNT. Therefore, this
enhancement also applies to DSN1COPY.

DSN1COPY
DSN1COPY tolerates the existence of table space system (versioning) pages. When the
PRINT option is specified, the pages are printed in hexadecimal format.
When you use DSN1COPY (with the OBIDXLAT option) to copy data between objects or
subsystems, you must use the REPAIR utility with the VERSIONS keyword to update the
version information in the DB2 catalog of the target object.
The CHECK option also validates system pages.

Miscellaneous Enhancements
Work data sets may require more space if there is a mixture of
DPSIs and NPSIs
In V8, Online REORG with SHRLEVEL REFERENCE is allowed
on ALL catalog tables, including those with links
CHECK LOB sort enhancement
No more SYSUT1 and SORTOUT
Performance improvement for RECOVER using concurrent
copies
CURRENTCOPYONLY option


Figure 8-35. Miscellaneous Enhancements CG381.0

Notes:
Note the requirement for more space for work data sets if you have a mixture of DPSIs and
NPSIs.

Online REORG of All Catalog Table Spaces


In Version 7, the following catalog table spaces cannot be specified in REORG SHRLEVEL
CHANGE or REFERENCE, although they can be REORGed with SHRLEVEL NONE:
• DSNDB06.SYSDBASE
• DSNDB06.SYSDBAUT
• DSNDB06.SYSGROUP
• DSNDB06.SYSPLAN
• DSNDB06.SYSVIEWS
• DSNDB01.DBD01
As part of the V8 migration process, the DB2 catalog is converted to Unicode. To do that,
DB2 uses online REORG SHRLEVEL REFERENCE. To allow this, online REORG had to
be enhanced to handle the DB2 catalog tables that contain links. This capability is not
limited to the migration process: in V8, users can run online REORG on all catalog table
spaces, including those with links, which was not possible in prior versions.
In V8, as in previous versions, you cannot REORG DSNDB01.SYSUTILX.
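For example, a catalog table space with links, such as DSNDB06.SYSDBASE, can now be reorganized online. This is only a sketch; options such as the inline image copy data set or a template are omitted:

```sql
REORG TABLESPACE DSNDB06.SYSDBASE SHRLEVEL REFERENCE
```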

CHECK LOB Sort Enhancement


In V8, the SYSUT1 and SORTOUT DD statements for sort input and output are no longer
needed. (The WORKDDN keyword remains, but is ignored.) The CHECK LOB utility
now uses a sort subtask. In the CHECKLOB phase, all active pages of the LOB table space
are scanned, and up to four records can be generated per LOB page. These records are
passed directly into the sort (via a so-called sort pipe), and after sorting, the sorted
records are passed to the REPRTLOB phase in the same way.
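A minimal control statement therefore needs no SYSUT1 or SORTOUT DD statements at all (the database and LOB table space names here are invented):

```sql
CHECK LOB TABLESPACE MYDB.MYLOBTS
```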

Performance Improvements for RECOVER with Concurrent Copies


DB2 Version 8 introduces a new keyword for the RECOVER utility to enable faster recovery
when the image copies were taken using the concurrent copy feature. The new keyword is
CURRENTCOPYONLY.
It specifies that RECOVER is to improve the performance of restoring concurrent copies
(copies that were made by the COPY utility with the CONCURRENT option) by using only
the most recent primary copy for each object in the list. When you specify
CURRENTCOPYONLY for a concurrent copy, RECOVER builds a DFSMSdss RESTORE
command for each group of objects that is associated with a concurrent copy data set
name.
To avoid the 255 object limit per DFSMSdss RESTORE command, the FILTERDDN option
is used “under the covers” whenever required. For that purpose, a temporary file is
allocated by the RECOVER utility, taking advantage of the new VOLTDEVT DSNZPARM, a
unit name that is to be used for temporary allocations.
If the RESTORE fails, RECOVER does not automatically use the next most recent copy or
the backup copy, and recovery of the object fails. If you specify DSNUM ALL with
CURRENTCOPYONLY and one partition fails during the restore process, the entire utility
job on that object fails. If you specify CURRENTCOPYONLY and the most recent primary
copy of the object to be recovered is not a concurrent copy, DB2 ignores this keyword.
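For example, to restore all partitions using only the most recent concurrent copies (the object names are invented for illustration):

```sql
RECOVER TABLESPACE MYDB.MYTS DSNUM ALL CURRENTCOPYONLY
```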


8.11 Unicode Utility Statements


Utility Unicode Statements


Utility control statements may be specified in Unicode or EBCDIC
DB2 detects which encoding scheme is being used
Must be all UTF-8 or EBCDIC -- no mixing!
Object names in messages will be in EBCDIC
New utility stored procedure interface DSNUTILU for Unicode


Figure 8-36. Utility Unicode Statements CG381.0

Notes:
In V8, the utility control statement parser can parse Unicode control statements, specifically
UTF-8. You can provide utility control statements, either entirely in EBCDIC characters or
entirely in Unicode characters.
All utility control statement input data sets which begin with any of these characters are
processed as Unicode.
• '20'x - Unicode blank
• '2D'x - Unicode dash (the utility comment delimiter)
• '41'x through '5A'x inclusive - upper case Unicode letters
Utility control statement input data sets are those provided to the DSNUTILB program with
DD names SYSIN, SYSLISTD or SYSTEMPL, or the contents of the UTSTMT field passed
to the DSNUTILU stored procedure created for this purpose.
All output to the SYSPRINT data set and the MVS console continue to be in EBCDIC with
translation taking place as required.

DSNUTILU Stored Procedure
Identical to DSNUTILS except:

Inputs are in Unicode


UTILITY_NAME parameter dropped
Data set DYNALLOC keywords dropped
Use TEMPLATE for all data sets


Figure 8-37. DSNUTILU Stored Procedure CG381.0

Notes:
The DSNUTILU stored procedure is identical to DSNUTILS stored procedure introduced in
Version 7 with two exceptions:
• All input parameters to the procedure are in Unicode. UTILITY_ID and RESTART inputs
are translated to EBCDIC by the stored procedure for processing. UTSTMT input is
stored in a temporary SYSIN data set and is processed in Unicode as outlined above.
• The dynamic allocation of data sets is removed. As of Version 7, this function is
performed by the TEMPLATE control statement. In order to eliminate dynamic
allocation, the following DSNUTILS keywords are not supported by DSNUTILU:
- UTILITY, xxxxDSN, xxxxDEVT, xxxxSPACE for all values of xxxx.


CREATE PROCEDURE DSNUTILU


CREATE PROCEDURE DSNUTILU
( IN UTILITY_ID VARCHAR(16) CCSID UNICODE
, IN RESTART VARCHAR(8) CCSID UNICODE
, IN UTSTMT VARCHAR(32704) CCSID UNICODE
, OUT RETCODE INTEGER)
EXTERNAL NAME DSNUTILU
LANGUAGE ASSEMBLE
WLM ENVIRONMENT WLMENV1
COLLID DSNUTILU
RUN OPTIONS 'TRAP(OFF)'
PROGRAM TYPE MAIN
MODIFIES SQL DATA
ASUTIME NO LIMIT
STAY RESIDENT NO
COMMIT ON RETURN NO
PARAMETER STYLE GENERAL
RESULT SETS 1
SECURITY USER;


Figure 8-38. CREATE PROCEDURE DSNUTILU CG381.0

Notes:
The DSNUTILU stored procedure enables you to provide control statements in Unicode
UTF-8 characters instead of EBCDIC characters to execute DB2 utilities from a DB2
application program.
When called, DSNUTILU performs the following actions:
• It translates the values specified for utility_id and restart parameters into EBCDIC.
• It creates the utility input (SYSIN) stream for control statements that use Unicode
characters.
• It deletes all the rows currently in the created temporary table (SYSIBM.SYSPRINT).
• It captures the utility output stream (SYSPRINT) into a created temporary table
(SYSIBM.SYSPRINT).
• It declares the following cursor to select from the temporary SYSPRINT table, as
follows:


DECLARE SYSPRINT CURSOR WITH RETURN FOR
    SELECT SEQNO, TEXT FROM SYSPRINT ORDER BY SEQNO
• It opens the SYSPRINT cursor and returns.
The calling program then fetches from the returned result set to obtain the captured utility
output. All output to SYSPRINT, and to the operator console, is in EBCDIC format.
DSNUTILU always uses DFSORT “under the covers”.
Like DSNUTILS, DSNUTILU must also run in a WLM environment defined with NUMTCB=1.
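Putting this together, a call from an application might look like the following sketch. The utility ID, template, and object names are invented for illustration; the fourth parameter is the RETCODE output. Because DSNUTILU performs no dynamic allocation, the control statement uses a TEMPLATE for its data set:

```sql
CALL DSNUTILU('COPYJOB1', 'NO',
     'TEMPLATE TCOPY DSN(&DB..&TS..T&TIME.)
      COPY TABLESPACE MYDB.MYTS COPYDDN(TCOPY)', ?);
```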


Unit 9. Performance Enhancements

What This Unit Is About


Each new version of DB2 includes enhancements that provide
performance improvements for user queries, batch applications, and
administration processes. Learn about the specific improvements in
Version 8.

What You Should Be Able to Do


After completing this unit, you should be able to:
• Describe the new index support functions
• Discuss the new capabilities for obtaining distribution statistics
• Relate the implications of volatile tables
• Explain the new capabilities for processing star joins
• Describe the new functions for Visual Explain


List of Topics
Materialized query tables
Indexing enhancements
Stage 1 and indexable predicates
Table UDF cardinality option and block fetch
Trigger enhancements
Distribution statistics on non-indexed columns
Cost-based parallel sort for single and multiple tables
Performance of multi-row operations
Volatile table support
Data caching and sparse index for star join
Miscellaneous performance enhancements
Visual Explain enhancements


Figure 9-1. List of Topics CG381.0

Notes:
This unit describes the performance enhancements. It consists of the following topics:
• Materialized query tables
• Index only access for VARCHAR columns
• Stage 1 and indexable predicates
• Table UDF cardinality and block fetch
• Trigger enhancements
• Distribution statistics on non-indexed columns
• Cost-based parallel sort for single and multiple tables
• Volatile table support
• Data caching and sparse index for star join


9.1 Materialized Query Tables


Data Warehousing Issues


To improve query performance especially for Data Warehousing
Summary tables are often created manually for users
To improve the response time
To avoid redundant work of scanning, aggregation and joins of the
detailed base tables (for example, history)
To simplify SQL to be coded
User needs to be aware of summary tables and know whether to
use them or the base tables depending on the query.


Figure 9-2. Data Warehousing Issues CG381.0

Notes:
Decision-support system queries typically operate over huge amounts of data. They
perform multiple joins and complex aggregation operations. In addition, these
decision-support queries are becoming increasingly interactive, which means that
expectations for response times have also risen sharply. Traditional optimization techniques
often fail to meet these new requirements. In some cases, the only solution is to
pre-compute the whole or parts of each query in order to avoid redundant work and to
simplify SQL to be coded, and use these pre-computed results to be able to provide a
timely answer when queries are submitted to the system.
In the past, this had to be done manually. The disadvantage of this manual solution is that
users must be aware of the existence of the summary tables and know exactly whether
they can be more helpful than the base tables depending on the query.
In DB2 for z/OS Version 8, such pre-computed results are known as materialized query
tables (MQTs). A materialized query table is called an automatic materialized query table
(AMQT) if it is automatically considered by the database management system (DBMS) to
answer a query. This means that the user no longer has to be aware of its existence. The DBMS
takes care of that for them. The success of this approach is based on the fact that
expected queries usually share a number of common sub-operations.


What Is a Materialized Query Table (MQT)?


Table containing materialized data derived from one or more
source tables specified by a fullselect
Source tables can be base tables, views, table expressions or
user-defined table expression
MQTs can be accessed directly via SQL or
Chosen by the optimizer (through automatic query rewrite) when
a base table or view is referenced
Two types: Maintained by system or by user
Synchronization between base table(s) and MQT
Using 'REFRESH TABLE' statement
Batch update, triggers, etc. for user-maintained MQTs
Can provide significant query performance improvement


Figure 9-3. What Is a Materialized Query Table (MQT)? CG381.0

Notes:
As mentioned above, a materialized query table (MQT) contains pre-computed data. The
pre-computed data is the result of a query, that is a fullselect associated with the table,
specified as part of the CREATE/ALTER TABLE statement.
The source for a materialized query table can be base tables, views, table expressions, or
user-defined table functions.
MQTs can either be accessed directly via SQL, as has long been common practice in data
warehouse systems, or be chosen by the optimizer through automatic query rewrite. The
second case uses the DBMS’s optimization technology to determine whether or not to use
the MQT, and is the real power of the feature: users no longer have to be aware of the
existence of the MQT; the DBMS takes care of that. In the second
case, we are talking about automatic materialized query tables, which are also known as
automatic summary tables (ASTs) or materialized views. Throughout the rest of the
publication we use the term materialized query tables or MQTs.
MQTs can be classified in a couple of different ways. One classification is between
so-called system-maintained and user-maintained MQTs. The default is MAINTAINED
BY SYSTEM (system-maintained MQT), which means that these MQTs cannot be updated
by the LOAD utility, nor by INSERT, UPDATE, or DELETE SQL statements. The only way
to update system-maintained MQTs is through the (new) REFRESH TABLE SQL
statement. MQTs created as MAINTAINED BY USER (user-maintained MQTs), can be
updated via the LOAD utility, INSERT, UPDATE or DELETE SQL statements, or any other
way to update a table, for example, using triggers.


Without MQT, Each Query Re-Computes!

[Diagram: queries Q11, Q12, ... and Q21, Q22, ... each drive their own join and
aggregation operations against the data warehouse; the same results are computed
many times.]


Figure 9-4. Without MQT, Each Query Re-Computes! CG381.0

Notes:
As mentioned before, if you are running queries against your data warehouse, each time
that you submit a query against the DBMS, it has to (re)compute the result set. This can
mean scanning an enormous amount of data, doing all kinds of groupings and calculations,
consuming the system’s resources every single time you request complex information from
your data warehouse. This usually results in a lot of system resources (CPU and I/O) being
used, and prolonged response times.

With MQT, Avoid Redundant Computation

[Diagram: the joins and aggregations are precomputed once into MQTs; queries
Q11, Q12, ... and Q21, Q22, ... then reuse the materialized results many times.]


Figure 9-5. With MQT, Avoid Redundant Computation CG381.0

Notes:
As shown on the visual above, if you use MQTs, the costly computations do not have to be
performed every single time you run a query against your warehouse tables. The query to
build the MQT is executed only once. From then on, the data is available in the MQT, and
can be used instead of the underlying tables.


Creating an MQT
CREATE TABLE or ALTER TABLE (to register an existing table)
Specify a fullselect clause
DATA INITIALLY DEFERRED, REFRESH DEFERRED
Two types of MQTs can be defined in the CREATE/ALTER TABLE
MAINTAINED BY SYSTEM
Data can only be updated by REFRESH TABLE statement
MAINTAINED BY USER
SELECT/INSERT/UPDATE/DELETE and LOAD operations are permitted
Automatic Query Rewrite exploitation
ENABLE/DISABLE QUERY OPTIMIZATION
May wish to create user-maintained tables with DISABLE option and alter
to ENABLE once table has been populated
Optimizer only uses system-maintained if a REFRESH has occurred
New special registers to govern MQT usage as well


Figure 9-6. Creating an MQT CG381.0

Notes:
Creating a materialized query table is similar to creating a view. The difference is that a
view is only a logical definition, while a materialized query table contains the materialized
data of the query result. Because of this similarity, some IT books and other vendors also
use the term materialized views.
You can either create an MQT from scratch using the CREATE TABLE statement, or
register existing tables, which you previously used as “manual” MQTs, as “official” MQTs
(by using the ALTER TABLE statement), and enable them for automatic query rewrite.
Refer to Figure 9-17, "The Basics of Automatic Query Rewrite", on page 9-29 for a more
detailed consideration of the query rewrite.
Note that the CREATE TABLE syntax has also been enhanced to allow you specify the
WITH NO DATA clause (see syntax diagram on the next visual). This enables you to create
a table where the column definitions are inherited from the columns specified in the
fullselect. This clause was already available when creating a declared temporary table. In
V8, the syntax is extended to the CREATE TABLE statement.


Using the WITH NO DATA clause leads to the creation of a (normal) table that is not populated
with data. The specified fullselect is not executed. With this option, you do not create an
MQT. The object created is registered in the DB2 catalog table SYSIBM.SYSTABLES as
type ‘T’. (Instead of using WITH NO DATA, you can also use DEFINITION ONLY. The effect
of both syntax options is exactly the same. Using WITH NO DATA is preferred because it is
the SQL standard syntax.)
The copy options shown in the syntax diagram specify how the column identity and default
attributes are inherited. The rules are the same as those in DECLARE GLOBAL
TEMPORARY TABLE WITH NO DATA.
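The combination of these clauses can be sketched as follows. This is only an illustration: TRANS_SHELL is a hypothetical table name, while SCNDSTAR.TRANS is the sample table used elsewhere in this unit.

```sql
-- Create an empty table whose column definitions are inherited
-- from the fullselect; the fullselect itself is NOT executed,
-- so no rows are inserted (this is a normal table, not an MQT)
CREATE TABLE TRANS_SHELL AS
   (SELECT TRANSID, PDATE
      FROM SCNDSTAR.TRANS)
   WITH NO DATA
   INCLUDING IDENTITY COLUMN ATTRIBUTES
   INCLUDING COLUMN DEFAULTS;
```

The two INCLUDING clauses are the copy options discussed above; they control whether identity and default attributes are inherited from the source columns.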


CREATE TABLE Syntax Extension

CREATE TABLE table-name
   ... materialized-query-definition ...

materialized-query-definition:
   [ ( column-name, ... ) ] AS ( fullselect )
      { WITH NO DATA [ copy-options ]
      | refreshable-table-options }

copy-options:
   { INCLUDING | EXCLUDING } IDENTITY [ COLUMN ATTRIBUTES ]
   { INCLUDING COLUMN DEFAULTS | EXCLUDING COLUMN DEFAULTS
     | USING TYPE DEFAULTS }

refreshable-table-options: (1)
   DATA INITIALLY DEFERRED REFRESH DEFERRED
   [ MAINTAINED BY SYSTEM | MAINTAINED BY USER ]
   [ ENABLE QUERY OPTIMIZATION | DISABLE QUERY OPTIMIZATION ]

Note:
(1) The same clause must not be specified more than once.


Figure 9-7. CREATE TABLE Syntax Extension CG381.0

Notes:
The visual above shows extracts of the CREATE TABLE statement syntax, which is used to
create MQTs. As you can see from this diagram, a few new syntax blocks are available that
are related to the definition of MQTs. When creating an MQT, it is mandatory to specify
DATA INITIALLY DEFERRED followed by REFRESH DEFERRED, as shown in the
refreshable table options block. These two parameters define a table as a materialized
query table.
DATA INITIALLY DEFERRED means that when a materialized query table is created, the
MQT is not populated instantly. In order to have the MQT populated, you must use the
REFRESH TABLE statement (for system-maintained MQTs) or any other allowed SQL
statement (for user-maintained MQTs).
REFRESH DEFERRED means that the data in the MQT is not refreshed immediately when
its underlying base tables are updated and can be refreshed at any time using the
REFRESH TABLE statement (for system-maintained MQTs — MAINTAINED BY SYSTEM)
or via any other allowed SQL statement (for user-maintained MQTs — MAINTAINED BY


USER). Note that (unlike DB2 for LUW) you can populate a user-maintained MQT using the
REFRESH TABLE statement as well.
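For example, a full refresh of the MQT defined later in Figure 9-8 would simply be:

```sql
-- Works for system-maintained MQTs, and (on DB2 for z/OS)
-- for user-maintained MQTs as well
REFRESH TABLE MQT1;
```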
Another choice you have to make when creating an MQT is between ENABLE QUERY
OPTIMIZATION and DISABLE QUERY OPTIMIZATION. If you choose ENABLE QUERY
OPTIMIZATION, you allow this MQT to be exploited by automatic query rewrite. In
contrast, specifying DISABLE QUERY OPTIMIZATION causes this MQT not to be
considered by the automatic query rewrite process. In addition to specifying this option at
CREATE/ALTER TABLE time, there are also two new special registers that govern the
selection of the MQT by automatic query rewrite at run time. For more information, see
Figure 9-14, "Controlling MQT for Automatic Query Rewrite", on page 9-24.


Creating an MQT - Example

CREATE TABLE MQT1 AS (
   SELECT T.PDATE, T.TRANSID,
          SUM(QTY * PRICE) AS TOTVAL,
          COUNT(QTY * PRICE) AS CNT
   FROM SCNDSTAR.TRANSITEM TI, SCNDSTAR.TRANS T
   WHERE TI.TRANSID = T.TRANSID
   GROUP BY T.PDATE, T.TRANSID)
   DATA INITIALLY DEFERRED
   REFRESH DEFERRED
   MAINTAINED BY SYSTEM
   ENABLE QUERY OPTIMIZATION
   IN MYDBMQT.MYTSMQT;


Figure 9-8. Creating an MQT - Example CG381.0

Notes:
The visual above shows a simple example of the creation of an MQT. As you can see, the
AS keyword followed by a fullselect introduces the definition of an MQT. The other four
parameters that are special to the creation of an MQT in this example are:
• DATA INITIALLY DEFERRED
• REFRESH DEFERRED
• MAINTAINED BY SYSTEM (default)
• ENABLE QUERY OPTIMIZATION (default)
MQTs are registered in the DB2 catalog table SYSIBM.SYSTABLES. The identifier in
column TYPE is ‘M’ for MQTs.
As you can see from the sample above, as for any other table that physically exists, you
can decide which table space you want to use to store your MQT data.

Changing the Attributes of an MQT
ALTER TABLE statement to update the attributes of an MQT
To register/unregister an MQT, use:
ADD/DROP MATERIALIZED QUERY clause
Can alter between system-maintained and user-maintained
Control the types of operations permitted on MQT
System-maintained during online day to prevent updates and User
maintained during off-line data loading
To enable or disable automatic query rewrite option on an
MQT, use:
Enable/disable query optimization clause
Use to temporarily disable query optimization during table maintenance


Figure 9-9. Changing the Attributes of an MQT CG381.0

Notes:
As for almost all other DB2 objects, you can change the attributes of your MQTs using an
ALTER SQL statement. Since your MQT is considered to be a table, the appropriate
statement is ALTER TABLE.
The next visual shows the enhanced ALTER TABLE syntax diagram. You can perform the
following changes regarding MQTs using the ALTER TABLE statement:
• ALTER TABLE ADD MATERIALIZED QUERY
This option converts a base table to an MQT. This option is of interest if today you are
already working with tables that hold aggregated, pre-computed data, and you want to
register these tables as MQTs.
• ALTER TABLE DROP MATERIALIZED QUERY
This option converts an MQT into a (normal) base table.


• ALTER TABLE ALTER MATERIALIZED QUERY


This option lets you change all parameters which can be used to define the
characteristics of an MQT, as we discussed in Figure 9-6, "Creating an MQT", on page
9-10.

ALTER TABLE Syntax Extension

ALTER TABLE table-name
   ADD MATERIALIZED QUERY ( fullselect ) refreshable-table-options
 | DROP MATERIALIZED QUERY
 | ALTER MATERIALIZED QUERY materialized-query-table-alteration

refreshable-table-options: (1)
   DATA INITIALLY DEFERRED REFRESH DEFERRED
   [ MAINTAINED BY SYSTEM | MAINTAINED BY USER ]
   [ ENABLE QUERY OPTIMIZATION | DISABLE QUERY OPTIMIZATION ]

materialized-query-table-alteration: (1)
   MAINTAINED BY { SYSTEM | USER }
   { ENABLE | DISABLE } QUERY OPTIMIZATION

Note:
(1) The same clause must not be specified more than once.


Figure 9-10. ALTER TABLE Syntax Extension CG381.0

Notes:
The visual above shows the enhanced ALTER TABLE syntax. As you can see, you can
change an existing table into an MQT or vice versa.
You can also switch an MQT between a system-maintained and user-maintained MQT, or
enable or disable the MQT for query optimization.
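As a sketch, using the MQT1 name from the earlier example:

```sql
-- Switch an MQT to user-maintained for an off-line load window ...
ALTER TABLE MQT1 ALTER MATERIALIZED QUERY MAINTAINED BY USER;

-- ... and back to system-maintained afterwards
ALTER TABLE MQT1 ALTER MATERIALIZED QUERY MAINTAINED BY SYSTEM;

-- Convert the MQT back into an ordinary base table
-- (the table and its data remain; only the MQT registration is dropped)
ALTER TABLE MQT1 DROP MATERIALIZED QUERY;
```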


Alter Base Table to MQT - Example

ALTER TABLE T1 ADD MATERIALIZED QUERY (
   SELECT T.PDATE,
          SUM(QTY * PRICE) AS TOTVAL,
          COUNT(QTY * PRICE) AS CNT
   FROM SCNDSTAR.TRANSITEM TI, SCNDSTAR.TRANS T
   WHERE TI.TRANSID = T.TRANSID
   GROUP BY T.PDATE)
   DATA INITIALLY DEFERRED
   REFRESH DEFERRED
   MAINTAINED BY USER
   ENABLE QUERY OPTIMIZATION;


Figure 9-11. Alter Base Table to MQT - Example CG381.0

Notes:
The example assumes that there is a table T1 that already exists. The data in table T1 was
generated using the fullselect shown above. That is, the data stored in T1 is the result of a
pre-computation, which performs some scalar functions, a join, GROUP BY, and so on,
which make up the SELECT statement. When altering an existing table into an MQT, it is
the user’s responsibility to make sure that the data in the table matches the result of the
query that makes up the MQT.
If a user is aware of table T1 and of what data it contains, they can use this table instead of
recomputing the result from tables TI (TRANSITEM) and T (TRANS). However, if —
and this is very likely the case — the user does not know that this pre-computation has
been done already, and submits the fullselect against TI and T instead of T1, all the
computation has to be done again, which is a great waste of system resources. In addition,
it is very often not a trivial task to figure out whether the data you are looking for is actually
fully available from T1.
For these reasons, you may want to turn an existing table (T1) into an MQT, managed by
DB2.


Starting with DB2 V8, you can ALTER existing tables into MQTs. As shown in the example
above, we use the ENABLE QUERY OPTIMIZATION option. This means that when we
submit a select statement against TI and T instead of selecting from T1, the optimizer
knows that the subset of data that you need already exists in table T1, and uses T1 instead
of accessing TI and T.
Altering a WITH NO DATA Table Into an MQT
Assume that you have created the following definition-only table, using the following
syntax:
CREATE TABLE MQT1
AS(SELECT * FROM SYSIBM.SYSDUMMY1)
WITH NO DATA
If you want to change the characteristics of this table using the ALTER TABLE statement
and turn it into an MQT, you cannot use the following statement:
ALTER TABLE MQT1
ADD MATERIALIZED QUERY DATA INITIALLY DEFERRED REFRESH DEFERRED
Nor this one:
ALTER TABLE MQT1
ALTER MATERIALIZED QUERY MAINTAINED BY USER
Instead, you must code:
ALTER TABLE MQT1
ADD MATERIALIZED QUERY (SELECT * FROM SYSIBM.SYSDUMMY1)
DATA INITIALLY DEFERRED REFRESH DEFERRED
After you use the last of the statements above, the WITH NO DATA table is converted to an
MQT, which you can also verify from the entry in TYPE column of SYSIBM.SYSTABLES,
because it changes from ‘T’ to ‘M’.


Fullselect Considerations
Base table or views can be referenced subject to a number of
conditions
No references to ROWIDs, LOB types or remote objects
To enable query optimization (which is done at the QBLOCK
level)
Fullselect must be a subselect
Single select after view and table expression merge
No outer joins or subselects are allowed
Has other specific rules regarding the contents of the fullselect
Error message returned when fullselect does not satisfy the rules


Figure 9-12. Fullselect Considerations CG381.0

Notes:
There are a few considerations for the fullselect you specify as part of the
materialized-query-definition block.
When you specify WITH NO DATA or DEFINITION ONLY (which does NOT define an
MQT):
• The fullselect must not refer to host variables, or include parameter markers.
• The fullselect must not reference a remote object
• The fullselect must not result in a column having a ROWID data type, because a
ROWID requires a generated attribute
• The fullselect must not result in a column having a LOB data type, because a ROWID is
needed for LOB data types, but ROWID is restricted.
• The fullselect must not contain PREVIOUS VALUE FOR and NEXT VALUE FOR
expressions.


When you create an MQT by specifying REFRESH DEFERRED and use the DISABLE
QUERY OPTIMIZATION option, the following additional restrictions apply:
• The fullselect cannot contain a reference to a created global temporary table or a
declared global temporary table.
• The fullselect cannot reference another materialized query table
When you specify ENABLE QUERY OPTIMIZATION, your fullselect must adhere to the
following additional restrictions:
• The fullselect must be a subselect.
• The subselect cannot reference a user-defined scalar or table function with the
EXTERNAL ACTION or NON-DETERMINISTIC attributes, or the built-in function RAND.
• The subselect cannot contain:
- Any predicates that include subqueries
- A nested table expression or view that requires temporary materialization
- A join using the INNER JOIN syntax
- An outer join
- A special register
- A scalar fullselect
- A row expression predicate
- Sideway references
- Table objects with multiple CCSID sets
When the fullselect does not satisfy these restrictions, an error is returned. Note also that
when defining a materialized query table, the column attributes, such as DEFAULT and
IDENTITY, are not inherited from the fullselect.


Populating and Refreshing an MQT


Ensure data currency meets user requirements to avoid out-dated
results
Issue REFRESH TABLE statement periodically
Deletes all rows in the MQT
Executes the fullselect as defined in the original MQT definition
Inserts calculated data into the MQT
Updates the catalog with refresh timestamp and cardinality information
Executes in a single UOW
Consider logging and performance impact
For user-maintained MQTs, other population methods can also be
used, for example, LOAD, Inserts, Updates, DPROP Apply


Figure 9-13. Populating and Refreshing an MQT CG381.0

Notes:
You can use the REFRESH TABLE mq_table SQL statement to populate a
system-maintained, or user-maintained MQT.
Whenever you issue the REFRESH TABLE statement, the following actions are performed:
1. All rows are deleted from the MQT. This is a mass delete when the MQT physically
resides in a segmented table space. As for regular tables, mass deletes are much faster
if the data is stored in a segmented table space instead of a simple table space.
2. The MQT’s fullselect is executed to recalculate the data from the tables that are
specified in this fullselect. The isolation level used for this execution is the one that
belongs to the MQT (the isolation level that was in effect when the MQT was created).
Access to the MQT itself is blocked during the execution of the REFRESH TABLE
statement.
3. The calculated data is then inserted into the MQT.


4. The catalog is updated with the refresh timestamp and cardinality of the MQT. After
successful execution of a REFRESH TABLE statement, the SQLCA field SQLERRD(3)
also contains the number of rows inserted into the materialized query table.
The four steps described above are all done within a single commit scope. In DB2 for z/OS
V8, only full refresh is supported.
The REFRESH TABLE statement is an explainable statement. The EXPLAIN output
contains rows for INSERT with the fullselect in the MQT definition.
Query rewrite avoids using locked MQTs. It will use those MQTs not locked, or if no MQTs
are available, base tables instead.
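For a user-maintained MQT, population does not have to go through REFRESH TABLE. As an illustrative sketch, the MQT from Figure 9-8 (had it been defined MAINTAINED BY USER) could be populated directly:

```sql
-- Populate a user-maintained MQT with INSERT; the user is
-- responsible for keeping the contents consistent with the
-- MQT's defining fullselect
INSERT INTO MQT1
   SELECT T.PDATE, T.TRANSID,
          SUM(QTY * PRICE), COUNT(QTY * PRICE)
   FROM SCNDSTAR.TRANSITEM TI, SCNDSTAR.TRANS T
   WHERE TI.TRANSID = T.TRANSID
   GROUP BY T.PDATE, T.TRANSID;
```

The same applies to the LOAD utility or replication apply programs such as DPROP.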


Controlling MQT for Automatic Query Rewrite


Two special registers to control if automatic query rewrite is
enabled
CURRENT REFRESH AGE (0 or ANY)
0 : No MQTs considered for automatic query rewrite
ANY: All MQTs considered for automatic query rewrite
CURRENT MAINTAINED TABLE TYPES (FOR OPTIMIZATION)
(ALL/NONE/SYSTEM/USER)

SET CURRENT REFRESH AGE [=] { numeric-constant | ANY | host-variable }

SET CURRENT MAINTAINED [TABLE] TYPES [FOR OPTIMIZATION] [=]
   { ALL | NONE | SYSTEM | USER | host-variable }

Initial values specified on installation panel DSNTIP8


Defaults are 0 and SYSTEM, respectively

At MQT level ENABLE/DISABLE QUERY OPTIMIZATION



Figure 9-14. Controlling MQT for Automatic Query Rewrite CG381.0

Notes:
The process of recognizing whether an MQT can be used in answering a query, and
rewriting the query accordingly, is called automatic query rewrite.
Two new special registers, CURRENT REFRESH AGE and CURRENT MAINTAINED
TABLE TYPES FOR OPTIMIZATION, control whether or not an MQT is considered by
automatic query rewrite for a dynamically prepared query. The default value for both
parameters is specified on panel DSNTIP8 during installation time. The default value for
CURRENT REFRESH AGE ends up as DSNZPARM parameter REFSHAGE, and the one
for CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION goes into DSNZPARM
parameter MAINTYPE.
You can think of the refresh age of an MQT as the duration between the current timestamp
and the time when the MQT was last refreshed using the REFRESH TABLE statement. In
V8, the CURRENT REFRESH AGE can only have two possible values:


• 0 (zero)
A value of 0 means that no MQTs are considered for automatic query rewrite by this
application.
• ANY
This means that all MQTs are considered for automatic query rewrite by this application.
If you check the current setting of the special register after you have set it to ANY, you see
a value of 99999999999999.000000. This represents 9999 years, 99 months, 99 days, 99
hours, 99 minutes, 99 seconds. The six zeros after the decimal point represent
microseconds, which are ignored.
Because user-maintained MQTs can be updated by using INSERT, UPDATE, or DELETE
SQL statements, or the LOAD utility, the refresh age of a user-maintained materialized
query table cannot truly represent the “freshness” of the data in a user-maintained MQT.
This is why a second new special register is used in conjunction with MQTs, called
CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION. You can set this special
register to ALL, NONE, SYSTEM, or USER. Only those MQTs which belong to the group
that matches the value of the special register, are eligible for automatic query rewrite. That
is, if you decide to set this value to NONE, no MQT is considered for automatic query
rewrite.
Note that in addition to these special registers that govern whether or not MQTs are
considered by automatic query rewrite at the application level, you also have the
ENABLE/DISABLE QUERY OPTIMIZATION option on the CREATE/ALTER TABLE
statement. Specifying DISABLE QUERY OPTIMIZATION disables the selection of the MQT
by automatic query rewrite. The MQT must be defined with ENABLE QUERY
OPTIMIZATION, before automatic query rewrite can consider the MQT.
If a system-maintained materialized query table has not been populated with data (when
the REFRESH_TIME column in SYSIBM.SYSVIEWS is equal to the default timestamp
’0001-01-01.00.00.00.000000’), the MQT is not considered by automatic query rewrite. For
a user-maintained materialized query table, the refresh timestamp in the system catalog
table is not maintained, therefore users should not use the value.
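For example, to make all populated, optimization-enabled MQTs (system- and user-maintained) eligible for rewrite in the current session:

```sql
SET CURRENT REFRESH AGE = ANY;
SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION = ALL;
```

These statements affect only the current session; the system-wide defaults come from the REFSHAGE and MAINTYPE DSNZPARM parameters mentioned above.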


Relationship Between
Two Special Registers for MQTs

 CURRENT          CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION
 REFRESH     -----------------------------------------------------------------
 AGE         SYSTEM              USER                ALL                 NONE
 ----------  ------------------  ------------------  ------------------  ----
 ANY         All system-         All user-           All query-          None
             maintained, query-  maintained, query-  optimization-
             optimization-       optimization-       enabled MQTs
             enabled MQTs        enabled MQTs
 0           None                None                None                None


Figure 9-15. Relationship Between Two Special Registers for MQTs CG381.0

Notes:
Use the table above to find out which types of MQTs are considered for automatic query
rewrite, depending on the actual settings of the special registers, CURRENT MAINTAINED
TABLE TYPES FOR OPTIMIZATION and CURRENT REFRESH AGE in your application,
or as defaulted by their respective DSNZPARM values.
As you can see, setting the special register CURRENT REFRESH AGE to 0, or just
accepting the default value for DSNZPARM REFSHAGE (0-zero), prevents DB2 from using
MQTs during automatic query rewrite.

Preparation Steps for Using MQTs
Before an MQT can be considered for automatic query rewrite
Ensure the CURRENT REFRESH AGE is set to ANY
(Default is 0 on install panel DSNTIP8)
Ensure the CURRENT MAINTAINED TABLE TYPES FOR
OPTIMIZATION is set properly (Default is SYSTEM on the install
panel DSNTIP8)
Perform initial data refresh for system-maintained MQTs
Ensure user-maintained MQTs have been populated
Ensure the query optimization is enabled for the MQT
Default is enabled but you may have disabled the use until the
MQT was populated (especially for user-maintained MQTs)
Ensure RUNSTATS has been run to assist the Optimizer

© Copyright IBM Corporation 2004

Figure 9-16. Preparation Steps for Using MQTs CG381.0

Notes:
The visual above summarizes the steps that you must take care of, in order to prepare your
MQT for being considered by automatic query rewrite.
If you just accept the system wide defaults, which are set during the installation/migration of
your DB2 subsystem, no MQT will ever be considered during automatic query rewrite. You
must set the value for CURRENT REFRESH AGE to ANY. This can either be done system
wide, by specifying REFSHAGE=ANY in your DSNZPARM, or as a session parameter by
using the SET CURRENT REFRESH AGE = ANY statement.
Once you have enabled automatic query rewrite, you may also have to change the default
for the CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION special register. If
you accept the default, only system-maintained MQTs are taken into account.
Apart from setting the appropriate values for the aforementioned special registers, you
must also make sure that your MQTs themselves have been created or altered specifying
the ENABLE QUERY OPTIMIZATION clause. If QUERY OPTIMIZATION is DISABLED, your
MQTs are not affected by the special registers described above.


As mentioned before, system-maintained MQTs that have not been populated with data are
not considered for automatic query rewrite.
Also make sure to run RUNSTATS against the MQT after refreshing the content of the
MQT, to ensure that the optimizer has accurate statistics when determining the access path
for your queries. Otherwise, DB2 uses default or out-of-date statistics. The estimated
performance of queries that are generated by automatic rewrite might inaccurately
compare less favorably to the original query. When you run the REFRESH TABLE
statement, the only statistic that DB2 updates for the MQT is the cardinality statistic
(CARDF).
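Because REFRESH TABLE updates only CARDF, a RUNSTATS run for the table space holding the MQT keeps the remaining statistics current. A sketch, using the database and table space names from the Figure 9-8 example (exact options depend on your site's conventions):

```
RUNSTATS TABLESPACE MYDBMQT.MYTSMQT
   TABLE(ALL) INDEX(ALL)
```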
Another thing you must be aware of, is that only MQTs with an isolation level equal to, or
higher than the query isolation level, will be considered by automatic query rewrite.

The Basics of Automatic Query Rewrite
State of the MQT (that is, must be ready for query optimization and
not locked)
Whether the query result can be derived from or directly use the MQT
There are no predicates in the MQT subselect that are not in the query
Each column in the result table can be derived from one or more columns
in the MQT
The GROUP BY clauses are compatible
Estimated cost of the queries
MQTs are not used for short-running queries
The original and rewritten costs are compared and the lower cost chosen
Only considered for read-only queries
Dynamically prepared queries only
Not for static SQL or static SQL with REOPT(VARS)
Note that optimization is considered at the query block level


Figure 9-17. The Basics of Automatic Query Rewrite CG381.0

Notes:
Automatic query rewrite is the general process that examines an SQL statement which
references one or more base tables, and, if appropriate, rewrites the query so that it
performs better. This process also determines whether to rewrite a query so that it refers to
one or more materialized query tables that are derived from the source tables. (Another
form of query rewrite is the generation of additional predicates that can be derived from
predicates coded in the SQL statement; this is also known as predicate transitive closure.)
A user query can contain multiple query blocks. Examples of query blocks are: subselects
of a UNION or UNION ALL statement, temporarily materialized views, materialized table
expressions, and subquery predicates. Automatic query rewrite is generally considered at
query block level.
The qualified query block in the user query and the subselect in the MQT definition are
analyzed to determine whether the query can be rewritten to an equivalent one using the
MQT and provide the same results with better performance. Here the overriding principle is
that the MQT must contain the source table data needed to satisfy the query.
Furthermore, the following conditions are checked:


• In general, there cannot be any predicates in the MQT subselect, which are not in the
query, because in this case, it can be assumed that these predicates may have resulted
in discarded rows as the MQT was refreshed.
• GROUP BY clauses are compared to determine if a query GROUP BY clause will result
in a subset of the rows that are in the MQT.
• The query select list is examined to determine if each column in the result table can be
derived from one or more columns from the MQT.
If the characteristics described above apply for both the MQT and the query, the query is
rewritten so that all, or parts of the references to base tables are replaced by references to
the MQT. If the query rewrite process is successful, DB2 determines the cost and the
access path of the new (rewritten) query. Only if the cost of the new query is less than the
cost of the original query, will the new query be submitted.
In addition, MQTs are not used for short-running queries. It would take the optimizer more
time to figure out whether or not the MQT can be used, than to actually execute the query.
Apart from all the things mentioned above, automatic query rewrite is only supported for
dynamically prepared queries that are read-only. It is not supported for statically bound
queries, including REOPT(VARS) packages. You can, however, use an MQT in either a
statically bound query, or a dynamically prepared query to improve the response time, if
appropriate, by coding them directly in the SQL statement (“bypassing” automatic query
rewrite done by the optimizer).
To determine whether an MQT is used instead of a base table, you can use the EXPLAIN
statement. The EXPLAIN output shows the plan for the rewritten query. When used, the
name of the MQT is shown in the TNAME column, and a TABLE_TYPE of 'M' is used.
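A sketch of such a check (it assumes a PLAN_TABLE exists under your authorization ID and that the special registers enable rewrite; the query reuses the sample tables from this unit):

```sql
EXPLAIN PLAN SET QUERYNO = 100 FOR
   SELECT T.PDATE, SUM(QTY * PRICE)
   FROM SCNDSTAR.TRANSITEM TI, SCNDSTAR.TRANS T
   WHERE TI.TRANSID = T.TRANSID
   GROUP BY T.PDATE;

-- If automatic query rewrite was used, TNAME names the MQT
-- and TABLE_TYPE contains 'M'
SELECT QBLOCKNO, TNAME, TABLE_TYPE
   FROM PLAN_TABLE
   WHERE QUERYNO = 100;
```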

Automatic Query Rewrite Table Comparison
Source tables referenced in the fullselect are compared to the
set of source tables referenced by the query
If there are source tables in common, the materialized query
table is a possible candidate for query rewrite

[Visual: source tables T1, T2, and T3. The query (Q), "SELECT ... FROM T1, T2 ...",
and the MQT (M), created with the fullselect (F) "SELECT ... FROM T1, T2, T3",
reference source tables in common, so the MQT is a possible candidate for
query rewrite.]

Figure 9-18. Automatic Query Rewrite Table Comparison CG381.0

Notes:
The following pages give a high level step-by-step overview of the rules used by automatic
rewrite to determine eligibility of MQTs.
We use the following abbreviations:
• M to denote the MQT
• Q to reference the user query
• F for the fullselect that makes up the MQT’s content
The first and probably easiest check that is done pertains to the source tables that are
referenced in the query.
As you can see from the visual above, tables T1 and T2, which are referenced in the
SELECT statement, have been used in the definition of the MQT.


Automatic Query Rewrite


Predicate Comparison
In general, predicates in the MQT fullselect should match the query's
predicates or be less restrictive than them
Exception: If additional join predicates exist in the MQT that result
in a "lossless" join (a unique column exists)

[Visual: source tables T1, T2, and T3. The query (Q) is "SELECT ... FROM T1, T2
WHERE T1.C1 = T2.C1 ...". The MQT (M) fullselect (F) is "SELECT ... FROM T1, T2, T3
WHERE T2.C2 = T3.UCO AND T1.C1 = T2.C1 ...". The extra predicate T2.C2 = T3.UCO
results in a lossless join because T3.UCO is a unique column and T2.C2 is a
NOT NULL column.]


Figure 9-19. Automatic Query Rewrite Predicate Comparison CG381.0

Notes:
In general, the materialized query table fullselect (F) should contain the same predicates as
the user query (Q). If the MQT fullselect (F) contains predicates that are not in the user
query (Q), DB2 assumes that these predicates may result in discarded rows when the
materialized query table was refreshed. Thus, any rewritten query that makes use of the
materialized query table might not give the correct results. The query is not a candidate for
query rewrite.
However, DB2 behavior differs if a predicate joins a common base table (in both Q and F)
to an extra table that is unique to the materialized query table fullselect (F). The predicate
does not result in discarded data if DB2 can determine that the join is lossless, for example,
through the existence of a referential constraint (PK-FK) between the two base tables.
However, the materialized query table fullselect must not have any local predicates that
reference this extra table; otherwise, it could again reduce the number of rows returned by
the MQT.
In the visual above, the T2.C2 = T3.UCO predicate only occurs in the MQT fullselect (F),
not in the user query (Q). However, assuming that T2.C2 is a NOT NULL column and T3.UCO is a unique


column, having this extra join predicate (and no additional filtering predicates on T3) does
not reduce the number of rows in the result set of the MQT. It is a so-called lossless join,
and therefore the MQT can be substituted for the underlying base tables. The optimizer can
determine that this is a lossless join when an RI relationship is defined between T2.C2 and
T3.UCO. More information on the importance of RI constraints when using MQTs can be
found in Figure 9-25, "The Role of Constraints", on page 9-40.
It is recommended that the predicates in the query be coded in exactly the same way as
they are in the MQT subselect, because otherwise the matching may fail on some complex
predicates.
For example, the matching between the simple equal predicates such as COL1=COL2 and
COL2=COL1 will be successful. Extra blanks are also ignored. In contrast to that, the
matching between (COL1+3)*10=COL2 and COL1*10+30=COL2 will fail.
The IN-list predicates are an exception; the items in the IN-list need not appear in
exactly the same order.
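The predicate rules above can be sketched with a pair of SQL statements. The table and column names below (T1, T2, T3, T3.UCO) mirror the visual and are purely illustrative, not part of any sample schema:

```sql
-- MQT fullselect (F): contains an extra join to T3.
-- The join is lossless because T3.UCO is unique and T2.C2 is NOT NULL.
CREATE TABLE MQTX AS (
  SELECT T1.C1, T1.C3, T2.C2
  FROM T1, T2, T3
  WHERE T1.C1 = T2.C1          -- join predicate also present in the query
    AND T2.C2 = T3.UCO         -- extra lossless join predicate
)
DATA INITIALLY DEFERRED
REFRESH DEFERRED
MAINTAINED BY SYSTEM
ENABLE QUERY OPTIMIZATION;

-- Query (Q): references only T1 and T2, and adds a local predicate.
-- Extra predicates in Q are fine; extra (non-lossless) predicates in F
-- would disqualify the MQT.
SELECT T1.C1, T2.C2
FROM T1, T2
WHERE T1.C1 = T2.C1
  AND T1.C3 > 100;
```

If the fullselect also had a local predicate on T3 (for example, T3.STATUS = 'A'), the join could discard rows and the rewrite would not take place.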


Automatic Query Rewrite GROUP BY Matching


If a GROUP BY clause exists, it is compared to determine if the
query GROUP BY clause will result in a subset (not necessarily
proper) of the rows that are in the MQT
If so then the MQT remains a candidate for query rewrite

Source Tables

Query (Q):
... GROUP BY T.PDATE;

MQT (M) fullselect (F):
... GROUP BY T.PDATE, T.CUSTID;


Figure 9-20. Automatic Query Rewrite GROUP BY Matching CG381.0

Notes:
In a subsequent step, DB2 checks if the GROUP BY clause used in the query will result in
a subset of the rows that are in the MQT.
The visual above shows an MQT which fulfills this requirement. T.PDATE is used in the
query and in the MQT’s fullselect. As the MQT’s GROUP BY clause is more granular (at a
lower level, since it also includes the T.CUSTID column), grouping by T.PDATE can be
derived from the MQT’s content.
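As a sketch, the rollup in the visual could look like this in SQL; table T and its columns are the illustrative names from the visual, not a real sample table:

```sql
-- MQT fullselect (F): grouped at the finer (PDATE, CUSTID) level
CREATE TABLE MQTG AS (
  SELECT T.PDATE, T.CUSTID, SUM(T.AMT) AS TOTAMT
  FROM T
  GROUP BY T.PDATE, T.CUSTID)
DATA INITIALLY DEFERRED
REFRESH DEFERRED
MAINTAINED BY SYSTEM
ENABLE QUERY OPTIMIZATION;

-- Query (Q): groups by PDATE only
SELECT T.PDATE, SUM(T.AMT)
FROM T
GROUP BY T.PDATE;

-- Conceptual rewrite: regroup (roll up) the finer MQT groups
SELECT PDATE, SUM(TOTAMT)
FROM MQTG
GROUP BY PDATE;
```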

Automatic Query Rewrite
Expression Derivation
Determine if the column(s) in the result table requested by the query
can be derived from one or more columns in the MQT
The columns must be equivalent, that is, T1.C1 = T2.C1 or
Column references to the base table can be derived from the MQT
For example, SUBSTR(C, 5, 10) can be derived from column C
in the MQT (M)
C*(A+B) can be derived from (B+A) column and C column in M

Source Tables

Query (Q):
SELECT T.C*(A+B), SUBSTR(T.C,5,10) ...

MQT (M) fullselect (F):
SELECT (B+A), T.C ...


Figure 9-21. Automatic Query Rewrite Expression Derivation CG381.0

Notes:
Automatic query rewrite also checks whether the columns in the select list of the query can
be derived from the data in the MQT.
In the visual, C*(A+B) in the user query can be derived from the result column of the
expression B+A and column C, and SUBSTR(C,5,10) can be derived from column C in the
MQT.
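A minimal sketch of the two derivations, again using the illustrative names from the visual:

```sql
-- MQT fullselect (F): materializes the expression (B+A) and column C
CREATE TABLE MQTE AS (
  SELECT T.K, (B + A) AS SUMAB, C
  FROM T)
DATA INITIALLY DEFERRED
REFRESH DEFERRED
MAINTAINED BY SYSTEM
ENABLE QUERY OPTIMIZATION;

-- Query (Q): both select-list items are derivable from the MQT:
--   C*(A+B)        -> C * SUMAB
--   SUBSTR(C,5,10) -> SUBSTR applied to the MQT's C column
SELECT T.C * (A + B), SUBSTR(T.C, 5, 10)
FROM T;
```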


Automatic Query Rewrite


Heuristics of Selection
If there are multiple MQTs that match the query(Q), heuristic rules
are applied
1st choice: MQTs that involve no regrouping, no residual joins (that is,
table left in Q), and no rejoins
2nd choice: MQTs that involve no regrouping and no residual joins
Final criteria: The MQT with the largest reduction power:
|T1|*...*|Tn| / |M|, where T1, ..., Tn are base tables in F
Certain restrictions apply to the query (Q) and the MQT
fullselect (F) for automatic query rewrite
See next visual for 'Restrictions on F' and 'Restrictions on Q'


Figure 9-22. Automatic Query Rewrite Heuristics of Selection CG381.0

Notes:
If there are multiple MQTs that match the query but cannot be used simultaneously,
heuristic rules are used to choose one of them.
In general, if multiple MQTs apply, the optimizer chooses the one(s) that require the least
amount of additional processing after using the data from the MQT.

Diagrammatic Overview of Automatic Query Rewrite

Restrictions on Query (subselect that can be rewritten):
Read-only query
Contains base tables and table functions only, no outer joins
No ROWID, LOB types

Table mapping:
Common table: table in both Q and F
Residual table: table in Q only
Extra table: table in F only
Rejoin table: common table joins with M to derive non-key columns

Grouping matching:
Functional dependency: column K determines C
Matching requirement: grouping column in F determines grouping column in Q
No regrouping requirement: grouping column in Q determines grouping column in F

Predicate matching:
Column equivalence: T1.C1 = T2.C1
Join predicates: exact match in both Q and M
Local predicates: P in Q subsumes P in M
(C > 5 subsumes C > 0; C IN ('A', 'B') subsumes C IN ('A', 'B', 'C'))
In other words, M contains data that Q needs.

Expression derivation:
T2.C1 can be used for T2.C1
Arithmetic expressions: C*(A+B) can be derived from B+A and C
Scalar functions: SUBSTR(C, 5, 10) can be derived from C
Set functions: AVG(C) = SUM(C)/COUNT(*) if C NOT NULL; VAR(C) = ...
Rejoin table columns: from key column T.K derive non-key column T.C

Heuristics of Selection:
Match with no regrouping, no residual joins, and no rejoins
Match with no regrouping and no residual joins
Match with the largest reduction ratio:
|T1|*...*|Tn|/|M|, where T1, ..., Tn are base tables in F

Restrictions on Fullselect:
Base tables and table functions
Single SELECT after view and table expression merge
No ROWID, LOB types
No outer joins


Figure 9-23. Diagrammatic Overview of Automatic Query Rewrite CG381.0

Notes:
The visual above summarizes the checking that is performed for a query, if automatic query
rewrite is generally enabled in your DB2 subsystem.


MQT Exploitation - Simple Example

QUERY:
SELECT T.PDATE, AVG (QTY * PRICE) AVGAMT
FROM SCNDSTAR.TRANSITEM TI, SCNDSTAR.TRANS T
WHERE TI.TRANSID = T.TRANSID AND
T.PDATE >= '2001-01-01'
GROUP BY T.PDATE;

AFTER QUERY REWRITE:


SELECT PDATE, CASE WHEN SUM(CNT) = 0 THEN NULL
ELSE SUM(TOTVAL)/SUM(CNT) END AVGAMT
FROM MQT1
WHERE PDATE >= '2001-01-01'
GROUP BY PDATE;


Figure 9-24. MQT Exploitation - Simple Example CG381.0

Notes:
The example above is used to give you an impression of what a rewritten query may look
like.
The example refers to the sample MQT that we introduced earlier in this unit. For ease of
use, the CREATE TABLE statement is repeated in Example 9-1.
Example 9-1. Sample MQT Creation Statement

CREATE TABLE MQT1 AS (
  SELECT T.PDATE, T.TRANSID,
         SUM(QTY * PRICE) AS TOTVAL,
         COUNT(QTY * PRICE) AS CNT
  FROM SCNDSTAR.TRANSITEM TI, SCNDSTAR.TRANS T
  WHERE TI.TRANSID = T.TRANSID
  GROUP BY T.PDATE, T.TRANSID)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  MAINTAINED BY SYSTEM
  ENABLE QUERY OPTIMIZATION
  IN MYDBMQT.MYTSMQT;
______________________________________________________________________


The prerequisites for automatic query rewrite, described in Figure 9-17, "The Basics of
Automatic Query Rewrite", on page 9-29, are fulfilled for the initial query. Let us look at it in
detail now:
• The query uses the same tables as the MQT.
• The only predicate in the MQT is TI.TRANSID = T.TRANSID. This one is also present in
the query. The query has additional predicates, which is fine, as it does not preclude the
use of the MQT.
• The way the GROUP BY clause is coded in the query indicates that the query will result
in a subset of the rows that are in the MQT. The GROUP BY in the MQT is more
granular than the query.
• Each column of the query can be derived from, or is a column of, the MQT. The
AVG(QTY * PRICE) can be derived from SUM(QTY * PRICE)/COUNT(QTY * PRICE),
that is, SUM(TOTVAL)/SUM(CNT) in the rewritten query.


The Role of Constraints


Constraints are very important in determining whether a
materialized query table can be used for a query
Referential constraints (to allow extra lossless joins in MQT, or
some dimensions missing in a query)
Functional dependencies (primary keys and unique indexes, to
allow more group-by columns in a query)
V8 allows informational referential constraints to be specified
for base tables, and avoids the enforcement overhead (in a
data warehouse environment)


Figure 9-25. The Role of Constraints CG381.0

Notes:
Referential constraints between base tables are an important factor in determining whether
an MQT can be used for a query. For this reason, informational referential integrity
constraints are introduced in DB2 V8. They allow you to declare referential constraints
(primary - foreign key relationships), but avoid the overhead of enforcing the referential
constraints by DB2 at the same time. DB2 can take advantage of the referential constraints
during automatic query rewrite.

Informational Constraints
Informational RI is not enforced by database manager and is
ignored by most utilities
Except LISTDEF RI, QUIESCE/REPORT TABLESPACESET
Informational RIs are always used by the optimizer in query
rewrite

REFERENCES table-name [( column-name, ... )]
  [ON DELETE {RESTRICT | NO ACTION | CASCADE | SET NULL}]
  [{ENFORCED | NOT ENFORCED}]
  [ENABLE QUERY OPTIMIZATION]

(Defaults: ENFORCED and ENABLE QUERY OPTIMIZATION)


Figure 9-26. Informational Constraints CG381.0

Notes:

Informational RI Syntax
The extract of the CREATE or ALTER TABLE syntax diagram above shows that V8 allows
you to use the NOT ENFORCED clause to define informational RI on a table.
As stated in the visual, most utilities are not affected by the new ability to define
informational constraints on your tables. For more details on the use of informational RI by
utilities, see Figure 8-27, "Utility Changes to Support Informational RI Constraints", on page
8-60.


Informational RI Example

CREATE TABLE SCNDSTAR.TRANS
  (TRANSID CHAR(10) NOT NULL PRIMARY KEY,
   ACCTID CHAR(10) NOT NULL,
   PDATE DATE NOT NULL,
   STATUS VARCHAR(15),
   LOCID CHAR(10) NOT NULL,
   CONSTRAINT ACCTTRAN FOREIGN KEY (ACCTID)
     REFERENCES SCNDSTAR.ACCT NOT ENFORCED,
   CONSTRAINT LOC_ACCT FOREIGN KEY (LOCID)
     REFERENCES SCNDSTAR.LOC NOT ENFORCED
  )
  IN DBND0101.TLND0101;


Figure 9-27. Informational RI Example CG381.0

Notes:
The example in the visual above demonstrates the usage of the NOT ENFORCED
parameter in a CREATE TABLE statement, to define two informational constraints
ACCTTRAN and LOC_ACCT on table SCNDSTAR.TRANS.

MQT Administration and Considerations
Information about MQTs is stored in SYSIBM.SYSVIEWS
No primary keys, unique indexes or triggers are allowed on
MQTs
All the MQTs and their indexes are dropped if their associated
base tables are dropped
Specialized MQTs or generalized MQTs?
Design issue as to whether to have a few generic MQTs or lots
of more specialized MQTs to help with particular queries
Trade off between performance and maintenance


Figure 9-28. MQT Administration and Considerations CG381.0

Notes:
After the successful creation of an MQT, the fullselect used to define it is stored in
SYSIBM.SYSVIEWS.
The REFRESH_TIME column contains the default timestamp at creation time.
It contains the value of the CURRENT TIMESTAMP special register after the REFRESH
TABLE statement finishes successfully (or after altering the MQT to a system-maintained
MQT).
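For reference, refreshing the sample MQT from this unit is a single statement; after it completes successfully, REFRESH_TIME is updated:

```sql
REFRESH TABLE MQT1;
```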
Since you cannot create a primary key for an MQT and no unique index can be created, an
MQT can never be a parent table in a referential constraint. Note that you can create a
primary key and unique indexes on the underlying base tables of an MQT. Index
uniqueness is derived from definition when processing a query that references an MQT.
Consider creating your MQTs using the DISABLE QUERY OPTIMIZATION option.
Otherwise, it is possible that queries are rewritten to use the empty MQT when using
user-maintained MQTs.
If you drop a base table, all associated MQTs and their indexes are dropped as well.


The design of MQTs involves trade-offs between conflicting design objectives. On the one
hand, MQTs that are specialized to a particular query or a set of queries can lead to the
greatest performance benefits. This approach can also lead to a proliferation of MQTs,
since many are needed to support a wide variety of queries. Since such MQTs can be
expensive to define and keep current, this approach can be costly.
On the other hand, MQTs whose purpose is more general, that is, which support a large
number of submitted queries, will often tend to provide less performance improvement, but
easier maintenance, because there will be fewer of them.
In order to be able to make accurate decisions on which MQTs are needed, you need to
fully understand the query workload that is run against the underlying base tables in your
system.

Summary of Changes
SQL Statements
CREATE TABLE / ALTER TABLE (extended)
REFRESH TABLE (new)
Special Registers
CURRENT REFRESH AGE (new)
CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION (new)
Catalog Changes
New table/view type 'M' for MQT in many catalog tables
New columns for informational RIs in SYSIBM.SYSRELS
New column NUM_DEP_MQTS in SYSIBM.SYSTABLES and
SYSIBM.SYSROUTINES
New columns REFRESH_TIME, REFRESH, ENABLE, ISOLATION,
MAINTENANCE, and SIGNATURE in SYSIBM.SYSVIEWS


Figure 9-29. Summary of Changes CG381.0

Notes:
Type ‘M’ for MQTs is stored in SYSIBM.SYSTABLES, SYSIBM.SYSVIEWS,
SYSIBM.SYSVIEWDEP, SYSIBM.SYSPLANDEP, SYSIBM.SYSPACKDEP,
SYSIBM.SYSVTREE, and SYSIBM.SYSVLTREE.
There are two new columns in catalog table SYSIBM.SYSRELS; they are ENFORCED and
CHECKEXISTINGDATA. If the value of ENFORCED is set to N, the entry belongs to an
informational RI constraint. CHECKEXISTINGDATA basically contains the same
information. If ENFORCED is set to N, CHECKEXISTINGDATA is also always set to N.
Both SYSIBM.SYSTABLES and SYSIBM.SYSROUTINES are expanded by column
NUM_DEP_MQTS, which contains the information about how many MQTs are dependent
on a table or a table UDF respectively.
In addition to that, table SYSIBM.SYSVIEWS has six new columns that contain information
related to MQTs:
• REFRESH - ‘D’ for deferred refresh mode; or blank, which means the row does not
belong to an MQT.


• ENABLE - ‘Y’ or ‘N’ for QUERY OPTIMIZATION enablement, or blank for a view.
• MAINTENANCE - ‘S’ for system-maintained, ‘U’ for user-maintained or blank for view.
• REFRESH_TIME - Only used by system-maintained MQTs. It indicates the timestamp
of last REFRESH TABLE statement.
• ISOLATION - Isolation level when MQT is created or altered from a base table.
• SIGNATURE - Contains an internal description of the MQT.
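These catalog columns can be queried directly. The following sketch assumes the V8 column names listed above and the new table/view type 'M':

```sql
-- List all MQTs with their maintenance type, optimization setting,
-- and time of last refresh
SELECT CREATOR, NAME, MAINTENANCE, ENABLE, REFRESH_TIME
FROM SYSIBM.SYSVIEWS
WHERE TYPE = 'M';
```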


9.2 Indexing Enhancements


Index Support Improvements


Varying-length index keys (NOT PADDED)
Backward index scan


Figure 9-30. Index Support Improvements CG381.0

Notes:
In this topic, we discuss two new functions:
• Varying-length index keys
• Backward index scan

Varying-Length Index Keys (1 of 2)
Prior to V8, data is varying length in the table but padded to the
maximum length in indexes
In V8, padding or not padding varying-length columns to the
maximum length in index keys is an option that you can choose
Performance Benefits:
"Index-only access" for VARCHAR data
Normally reduces the storage requirements for indexes since only
actual data is stored


Figure 9-31. Varying-Length Index Keys (1 of 2) CG381.0

Notes:
Prior to DB2 V8, VARCHAR and VARGRAPHIC columns are padded to their maximum
lengths when they are part of an index, but remain in their variable length format in the
tables. This facilitates fast key comparisons since these comparisons are between equal
length columns. The disadvantage to this approach is that index only access is not allowed
when retrieving a varying length key column.
Prior to V8, you can use the RETVLCFK=YES DSNZPARM (Panel DSNTIP4 Install DB2 -
Application Programming Defaults Panel 2, field VARCHAR FROM INDEX). This allows
you to use VARCHAR columns of an index and still have index only access. However,
when one of the columns in the SELECT list of the query is retrieved from the index when
using index-only access, the column is padded to the maximum length, and the actual
length of the variable length column is not provided. Therefore, an application must be able
to handle these “full length” variable length columns. DB2 V7 enhances this feature by
allowing index-only access against variable length columns in an index, even with
RETVLCFK=NO, if no variable length column is present in the SELECT list of the query.


DB2 V8 supports true varying-length key columns in an index. Varying-length columns are
not padded to their maximum lengths, if you choose that option. This reduces the storage
requirements for this type of index, since only actual data is stored. Furthermore, this
allows for index-only access to index key columns of varying-length in all cases, and since
the length of the variable length column is stored in the index, it can potentially improve
performance.
Indexes can be created or altered to contain true varying-length columns in the keys.
Padding of both VARCHAR and VARGRAPHIC data to their maximum length can now be
controlled.
You can continue to use existing indexes that contain padded varying-length columns.
However, with DB2 V8, you have the ability to convert padded indexes to varying-length
indexes and also to convert varying-length indexes back to padded indexes.

Varying-Length Index Keys (2 of 2)
New Keywords in CREATE/ALTER INDEX
NOT PADDED
Require additional bytes to store the length information for the
variable length columns within the key
PADDED
Varying-length columns are padded to maximum length
May be a better option for performance at the expense of storage
because of fast key comparisons since all keys have same length

Indexes are not automatically converted to NOT PADDED in V8


ALTER INDEX is required
The default is controlled by the PADIX DSNZPARM
CREATE UNIQUE INDEX INDEX1
ON TABLE(COL01,VARCOL02) NOT PADDED


Figure 9-32. Varying-Length Index Keys (2 of 2) CG381.0

Notes:
The new keywords NOT PADDED and PADDED on CREATE INDEX and ALTER INDEX
statements specify how varying-length columns are stored in the index.
• NOT PADDED specifies that varying-length columns are not padded to their maximum
length in the index. If there exists at least one varying-length column within the key,
length information is stored with the key. For indexes composed only of fixed-length
columns, there is no length information added to the key.
The default on the CREATE INDEX statement can be controlled through the new
DSNZPARM PADIX (Panel DSNTIPE Install DB2 - Thread Management, field PAD
INDEXES BY DEFAULT). A sample create of a non-padded index is:
CREATE UNIQUE INDEX DSN8810.XDEPT1
ON DSN8810.DEPT (DEPTNO ASC)
NOT PADDED
USING STOGROUP DSN8G810
PRIQTY 512
SECQTY 64

ERASE NO
BUFFERPOOL BP1
CLOSE YES
PIECESIZE 1 M;
• PADDED specifies that varying-length columns within the index are always padded with
the default pad character to their maximum length. All indexes prior to DB2 V8
new-function mode are padded by default. A sample create of a padded index is:
CREATE UNIQUE INDEX DSN8810.XDEPT1
ON DSN8810.DEPT (DEPTNO ASC)
PADDED
USING STOGROUP DSN8G810
PRIQTY 512
SECQTY 64
ERASE NO
BUFFERPOOL BP1
CLOSE YES
PIECESIZE 1 M;
When comparisons are made between keys with varying-length columns, the keys have to
match in length. This requires that like columns of different sizes have the smaller column
padded to the size of the larger column. Key comparison is left to right and column by
column. The following example illustrates this using a single column index:
Key entry 1 length=4 Value= x'F1F2F3F4'
Key entry 2 length=3 Value= x'F1F2F3'
Pad character = x'40'

After padding, Key 2 = x'F1F2F340'
When Key entry 1 and Key entry 2 are compared, Key value 1 > Key value 2
Indexes are not automatically converted to NOT PADDED, except for the system defined
indexes on the DB2 catalog. They are converted to NOT PADDED as part of the enable
new function mode processing.

ALTER INDEX Changes
Alter index from PADDED to NOT PADDED
Index placed in RBDP state
No access to data through this index
Reset by REBUILD INDEX
ALTER INDEX INDEX1 NOT PADDED

Alter index from NOT PADDED to PADDED


Index placed in RBDP state
No access to data through this index
Reset by REBUILD INDEX
ALTER INDEX INDEX1 PADDED


Figure 9-33. ALTER INDEX Changes CG381.0

Notes:
Indexes from a prior release do not automatically convert to NOT PADDED, even if an
ALTER TABLE ALTER COLUMN SET DATATYPE statement is executed and the altered
column is part of the index. You have to use the ALTER INDEX statement to change a
PADDED index to NOT PADDED.
After an index has been altered to NOT PADDED, the index is placed in rebuild pending
state, if there exists at least one varying-length column in the index. A REBUILD of the
index is necessary to realize the full benefit of a NOT PADDED index.
Altering a PADDED index to a NOT PADDED index can be done as shown in the following
example:
ALTER INDEX DSN8810.XDEPT1 NOT PADDED
When altering a NOT PADDED index to a PADDED index, the index is placed in rebuild
pending state, if there exists at least one varying-length column in the index.
Altering a NOT PADDED index to a PADDED index can be done as shown below:
ALTER INDEX DSN8810.XDEPT1 PADDED;


Note that DB2 tries to mitigate the effect of having an index in RBDP status. See Figure
2-83, "RBDP Index Avoidance", on page 2-121.

Performance Expectation
Factors affecting performance of PADDED / NOT PADDED index:
NOT PADDED indexes can provide a true index-only access path
Size of the index pages
In general, number of index pages for NOT PADDED index are fewer
than for the PADDED index, thus less index getpages and index page
I/Os
Potentially fewer index levels
Smaller index size favors the index to be chosen for a query
(Index-only access)
Extra comparison
NOT PADDED indexes that contain multiple VARCHAR columns
require more than one comparison


Figure 9-34. Performance Expectation CG381.0

Notes:
NOT PADDED indexes can provide a number of advantages over PADDED indexes:
• NOT PADDED indexes allow for true index-only access. If you have applications that
today cannot take advantage of index-only access because they contain VARCHAR
columns in the index, NOT PADDED indexes can allow for index-only access and may
improve performance for those queries.
• NOT PADDED indexes are smaller in size than their PADDED counterparts. Smaller
indexes can mean fewer index pages to scan, and potentially fewer index levels.
The disadvantage of using NOT PADDED indexes is that they are more complex to
process. DB2 has to read the length of the NOT PADDED column in the index in order to
determine where the next column starts. With fixed-length columns, this is much easier.
Processing a NOT PADDED index with VARCHAR keys can require a significant amount of
additional CPU time, just like VARCHAR columns in a data record.
NOT PADDED index performance is heavily dependent on the number and size of
VARCHAR columns in the index key, as column comparisons are more expensive.


Backward Index Scan


In V8, DB2 selects an ascending index and can use a backward
scan to avoid the sort for the descending order
In V8, DB2 uses the descending index to avoid the sort and can
scan the descending index backwards to provide the ascending
order
To be able to use an index for backward scan,
Index must be defined on the same columns as ORDER BY and
Ordering must be exactly opposite of what is requested in
ORDER BY
If index defined as DATE DESC, TIME ASC, can do:
Forward scan for ORDER BY DATE DESC, TIME ASC
Backward scan for ORDER BY DATE ASC, TIME DESC
But must sort for:
ORDER BY DATE ASC, TIME ASC
ORDER BY DATE DESC, TIME DESC


Figure 9-35. Backward Index Scan CG381.0

Notes:
With the enhancements introduced to support dynamic scrollable cursors, DB2 also
provides the capability for backward index scans. This allows DB2 to avoid a sort and/or
allows you to define fewer indexes. With this enhancement it is no longer necessary to
create an ascending and descending index on the same table columns. The visual shows
an example.
For another example, if you create an ascending index (the default) on the ACCT_NUM,
STATUS_DATE and STATUS_TIME columns of the ACCT_STAT table, DB2 can use this
index for backward index scanning for the following SQL statement:
SELECT STATUS_DATE, STATUS
FROM ACCT_STAT
WHERE ACCT_NUM = :HV
ORDER BY STATUS_DATE DESC, STATUS_TIME DESC
DB2 can use the same index for forward index scan for the following SQL statement:


SELECT STATUS_DATE, STATUS
FROM ACCT_STAT
WHERE ACCT_NUM = :HV
ORDER BY STATUS_DATE ASC, STATUS_TIME ASC
This is true also for static scrollable cursors and non-scrollable cursors. In V7 you have to
create two indexes for the above to avoid a sort for both queries.
To be able to use the backward index scan, you have to create the index on the same
columns as the ORDER BY and the ordering must be exactly opposite of what is requested
in the ORDER BY.
For example, if you create the index as ACCT_NUM, STATUS_DATE DESC,
STATUS_TIME ASC, then DB2 can do a:
• Forward index scan for ORDER BY on STATUS_DATE DESC, STATUS_TIME ASC
• Backward index scan for ORDER BY STATUS_DATE ASC, STATUS_TIME DESC.
DB2 has to perform a sort for:
• ORDER BY STATUS_DATE DESC, STATUS_TIME DESC
• ORDER BY STATUS_DATE ASC, STATUS_TIME ASC
A backward index scan takes advantage of sequential detection to trigger dynamic prefetch
to read 32 index pages backward as needed. This can improve I/O performance by an
order of magnitude compared to a synchronous read of index pages one page at a time.



9.3 Stage 1 and Indexable Predicates


Stage 1 and Indexable Predicates


DB2 determines if a predicate is Stage 1, based on predicate
syntax, type and lengths of constants in the predicate, whether
predicate evaluation is done before or after a join operation
Prior to V8, if data types and lengths do not match, the predicate
is evaluated at Stage 2
Major performance enhancement in DB2 V8 for queries involving
predicates with mismatched data types and length comparison
Certain Stage 2 predicates become Stage 1, and possibly
Indexable
Eliminates table space scan when mismatch between host variable and
DB2 column
Improved situation in unknown join sequence

© Copyright IBM Corporation 2004

Figure 9-36. Stage 1 and Indexable Predicates CG381.0

Notes:
Stage 1 predicates are simple predicates evaluated by the Data Manager (DM). They are
evaluated first to reduce processing cost and eliminate the number of rows to be evaluated
by the complex predicates. They are also known as “sargable” predicates.
Stage 2 predicates are complex predicates evaluated by the Relational Data System
(RDS). They are also known as “nonsargable” or residual predicates.
An indexable predicate is a predicate that can use a (matching) index access as an access
path. Indexable predicates are always Stage 1, but not all stage 1 predicates are indexable.
DB2 determines if a predicate is Stage 1 based on:
• Predicate syntax
• Predicate type and lengths of constants
• Predicate evaluation done before or after join operation
DB2 V8 has introduced enhancements to facilitate major performance enhancement for
queries involving predicates with mismatched data types and length comparison and also
in joins.


Mismatched Data Types
Becoming more common as not all programming languages
support the full range of SQL data types, for example
C/C++ has no DECIMAL data type
Java has no fixed character data type
The following predicate types become stage 1 and also
indexable for unlike data types
Col op expression
Expression op col
Col BETWEEN expression1 AND expression2


Figure 9-37. Mismatched Data Types CG381.0

Notes:
When you do a database and application design, you normally make sure that the data
type of the columns in your tables match with the data types used by the host variables in
your programs. This has always been a good design rule (and still is) because it allows
DB2 to use certain techniques (like using an index) to boost performance.
However, it has become more and more difficult to apply this rule in all situations,
especially when you build new applications to access existing data (with an existing
design) on a DB2 for z/OS system. For example, when your application is coded in C, that
language does not have a DECIMAL data type, although some of the existing tables might
have columns defined as DECIMAL(p,s). Note that the C/C++ compiler for z/OS supports
fixed-point (packed) decimal data type.
Another case is Java. The Java language does not have a fixed length character string
data type; every string is variable length. In DB2, on the other hand, in most cases, fixed
length character columns defined as CHAR(n) are used.


Prior to DB2 V8, for many types of predicates, if the data types of the predicate operands
do not match, then the predicate is considered residual, also known as Stage 2, and its
treatment can have a negative effect on the performance of the query.
So when you run a simple SELECT statement in a Java application as shown below:
SELECT RESOURCE_GROUP,RESOURCE_OPTION,INTVAL,CHARVAL
FROM Q.RESOURCE_TABLE
WHERE RESOURCE_GROUP = :hv_res_gr
DB2 cannot use an index on RESOURCE_GROUP because of the mismatch in data type
of the column and the host variable. The data type of the RESOURCE_GROUP column is
CHAR, and that of the :hv_res_gr host variable is VARCHAR (since that is the only string
data type supported by Java).
In addition, it is sometimes also necessary to join tables on columns with different data
types, also resulting in not maximizing performance. (Joining on CHAR and VARCHAR
columns does not cause performance problems.)
DB2 V8 provides improved performance of queries that involve predicates with
mismatched data types. Now those predicates can be processed at Stage 1, and can
possibly also use an index (subject to certain restrictions).
Processing the following types of predicates is improved by this enhancement:
• col op expression
• expression op col
• col BETWEEN expression1 AND expression2
• col IN (list)
In these expressions:
• 'col' is the column name of a table.
• 'expression' is any expression. It may contain constants, host variables, special
registers, parameter markers or columns. The expression can be a simple column. For
example, T1.col = T2.col, or T1.col > T2.col. If it contains a column, the column must
not be in the same table as the other predicate operand.
• 'op' is either =, <, <=, >, >= or <> (note that '<>' is not indexable, but can be processed
at stage 1).
• ‘list’ items have to meet all of the following criteria:
- list items are only elements from the following list:
• Constants
• Host variables
• Special registers
• Session variables
• Parameter markers


- The predicate that contains the list is not in the WHEN clause of a trigger
- For every element in list, column = list-element must be stage 1 and indexable
When each predicate operand is a simple column from different tables (for example,
T1.col = T2.col), then the join sequence determines which predicate operand is considered
the 'column' and which is considered the 'expression'. The inner table is considered to be
the ‘column’ and the outer table in the join the ‘expression’.
For example, consider the following predicate:
• T1.col > T2.col
If T1 is the inner table of the join, then T1.col is considered the 'column' and T2.col is
considered the 'expression'. Likewise, if T2 is the inner table of the join, then T2.col is
considered the 'column' and T1.col is considered the 'expression'.
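The "which side is the column" rule can be stated as a tiny function. This is an illustrative sketch only — the function name and the string-based table encoding are invented, not DB2 code:

```python
def column_and_expression(left, right, inner_table):
    """For a join predicate 'left op right' between simple columns of two
    tables, the operand belonging to the inner table of the join is treated
    as the 'column'; the other operand plays the 'expression' role."""
    if left.split(".")[0] == inner_table:
        return left, right          # (column, expression)
    return right, left

# T1.col > T2.col: the roles depend purely on the join sequence.
print(column_and_expression("T1.col", "T2.col", inner_table="T1"))
print(column_and_expression("T1.col", "T2.col", inner_table="T2"))
```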
All predicates of the form listed above are now indexable and processed during Stage 1,
subject to certain conditions. Let us now look at a few examples to illustrate these
enhancements.


Mismatched Operands Numeric Types Comparison

EMP( NAME   CHAR(20),
     SALARY DECIMAL(12,2),
     DEPTID CHAR(3) );

SELECT * FROM EMP
WHERE SALARY > :HV_FLOAT ;

Prior to V8:    Stage-2 predicate; table space scan
V8 and beyond:  Stage-1 predicate; could use index on SALARY column


Figure 9-38. Mismatched Operands Numeric Types Comparison CG381.0

Notes:
Assume that we have a table EMP defined as shown in the visual.
This example shows how the SALARY column (decimal data type) is compared with a float
host variable. In this case the predicate can be processed during Stage 1 and, assuming
an index exists on SALARY, is also indexable. Note that salary has a precision less than
16.
This would not be the case if SALARY is defined as DECIMAL(16,2).
Numeric Types Comparison
All numeric type comparisons are Stage 1 and indexable except the following ones:
• REAL -> DEC(p,s) where p > 15
• FLOAT -> DEC(p,s) where p > 15
Note that in the comparison notation above, the REAL or FLOAT “value” refers to the
“right-hand side” of the predicate, or the outer table in a join. For example, the restriction
applies to the following predicate: DEC_column > REAL_hostvar (if the precision of the
DEC_column is greater than 15).


In the case above, the decimal value is the indexed value, so the comparison must be done
on the decimal value. However, REAL and FLOAT values cannot be converted to decimal
with precision > 15 without possibly changing the collating sequence. Consequently, these
are Stage 2 (residual) predicates.
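The precision-15 cutoff mirrors the 53-bit mantissa of FLOAT: every value with up to 15 significant decimal digits survives a round trip through a double, but distinct 16-digit values can collapse to the same FLOAT, which would change the collating sequence. A quick check in Python, whose floats are double precision (standing in for DB2 FLOAT, which is an assumption of the sketch):

```python
# All 15-digit integers are below 2**53, so a double represents them exactly:
exact = float(999_999_999_999_999) == 999_999_999_999_999

# With 16 significant digits, two distinct DECIMAL values can map to the
# same FLOAT (2**53 and 2**53 + 1 collide), so a FLOAT-side comparison can
# no longer probe an index ordered on the DECIMAL column:
collide = float(9_007_199_254_740_993) == float(9_007_199_254_740_992)
print(exact, collide)
```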


Mismatched Operands String Types

SELECT * FROM EMP
WHERE DEPTID = '6S5A' ;   -- DEPTID is CHAR(3)

or

SELECT * FROM EMP
WHERE DEPTID = '6S5' ;

Prior to V8:    Stage-2 predicate; table space scan
V8 and beyond:  Stage-1 predicate; could use index on DEPTID column


Figure 9-39. Mismatched Operands String Types CG381.0

Notes:
This example shows how the DEPTID column (character data type) is compared with a
character host variable of longer length. In this case the predicate can be processed during
stage 1 and, assuming an index exists on DEPTID, is also indexable.
String Types Comparison
We now consider several types of string comparisons.
Same CCSID String Comparisons
All predicates comparing string types with the same CCSID are stage 1 and indexable
except the following ones:
• graphic/vargraphic -> char/varchar
In general, predicates comparing graphic/vargraphic to char/varchar are not indexable.
However, if the char/varchar is Unicode mixed and the predicate is an '=' predicate, then
the predicate is indexable.


• char/varchar(n1) -> char/varchar(n2), where n1 > n2 and not an '=' predicate
• graphic/vargraphic(n1) -> graphic/vargraphic(n2), where n1 > n2 and not an '=' predicate
• char/varchar(n1) -> graphic/vargraphic(n2), where n1 > n2 and not an '=' predicate
Here the indexed value is the right hand side of “->”, and so the comparison must be done
with that data type and length. However, when the left hand side value in these cases is
cast to the right hand side data type and length, truncation may occur. Consequently, these
cases are stage 1 but not indexable.
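Why truncation rules out index use for the non-equality cases: blank padding (the correct CHAR comparison) and truncation can disagree. A small Python illustration of SQL blank-padded semantics — the column value and literal below are made up for the sketch:

```python
col = "ABB"          # a CHAR(3) column value
literal = "ABBZ"     # a longer operand being compared against it

# Correct CHAR comparison: blank-pad the shorter operand to equal length.
padded_result = col.ljust(len(literal)) >= literal   # 'ABB ' >= 'ABBZ'
# Casting the literal down to the index's CHAR(3) length truncates it,
# and the truncated comparison gives a different answer:
truncated_result = col >= literal[:3]                # 'ABB' >= 'ABB'
print(padded_result, truncated_result)               # the two answers disagree
```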


Mismatched Operands Transitive Closure

SELECT DEPT.NAME, EMP.NAME
FROM EMP, DEPT
WHERE EMP.DEPTID = ? AND        -- EMP.DEPTID is CHAR(4)
      EMP.DEPTID = DEPT.ID AND  -- DEPT.ID is CHAR(3)
      DEPT.ID = ? ;             -- predicate generated by transitive closure
Prior to V8:    Stage-2 predicate; table space scan
V8 and beyond:  Stage-1 predicate; could use index on DEPT.ID column


Figure 9-40. Mismatched Operands Transitive Closure CG381.0

Notes:
Assume that table DEPT exists with column ID defined as CHAR(3). Remember
EMP.DEPTID is defined as CHAR(4). In DB2 V8, predicate transitive closure is done even
though EMP.DEPTID and DEPT.ID are of different lengths. The new predicate is stage 1
and is indexable. The generated parameter marker has the same size as the parameter
marker in the EMP.DEPTID predicate.
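Predicate transitive closure itself is mechanical: from col1 = literal and col1 = col2, derive col2 = literal. A minimal sketch of that derivation — the names and data shapes are invented, and DB2's real implementation additionally handles ranges and the data-type rules discussed above:

```python
def transitive_closure(local_preds, join_preds):
    """local_preds: {column: comparison value}; join_preds: (colA, colB)
    equality join predicates. Returns the extra local predicates implied
    by transitivity."""
    derived = {}
    for a, b in join_preds:
        if a in local_preds and b not in local_preds:
            derived[b] = local_preds[a]
        elif b in local_preds and a not in local_preds:
            derived[a] = local_preds[b]
    return derived

# EMP.DEPTID = ?  plus  EMP.DEPTID = DEPT.ID  yields  DEPT.ID = ?
generated = transitive_closure({"EMP.DEPTID": "?"}, [("EMP.DEPTID", "DEPT.ID")])
print(generated)
```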


Unknown Join Sequences
Where both operands are columns from different tables, whether
the predicate is Stage 1 or Stage 2 is determined by:
• The join sequence
• Which column is considered the column and which the expression
A join predicate between a user-defined table function and a base table
is Stage 1 if the base table is the inner table of the join and the other
sargability conditions are met


Figure 9-41. Unknown Join Sequences CG381.0

Notes:
In the case of a join where the operands are columns from different tables, whether the
predicate is Stage 1 or not is determined by the following:
• Whether DB2 evaluates the predicate before or after a join operation:
A predicate that is evaluated after a join operation is always a Stage 2 predicate.
• Join sequence:
The same predicate might be Stage 1 or Stage 2, depending on the join sequence. Join
sequence is the order in which DB2 joins tables when it evaluates a query. This is not
necessarily the same as the order in which the tables appear in the predicate.
For example, the following predicate might be Stage 1 or Stage 2:
T1.C1=T2.C1+1
If T2 is the first table in the join sequence, the predicate is Stage 1, but if T1 is the first
table in the join sequence, the predicate is Stage 2.


Unknown Join Sequence Column Expression

SELECT E1.*
FROM EMP E1, EMP E2, DEPT
WHERE E1.DEPTID = DEPT.ID AND
      DEPT.MGR = E2.NAME AND
      E1.SALARY > E2.SALARY * 1.10 ;   -- SALARY is DECIMAL(12,2)
Prior to V8 (w/o PQ54042):  Stage-2 predicate
V8 and beyond:
  If E2 is inner table: Stage-2 predicate
  If E1 is inner table: Stage-1 predicate; could use index on E1.SALARY


Figure 9-42. Unknown Join Sequence Column Expression CG381.0

Notes:
A performance improvement that impacts joins when column expressions are used in
predicates was introduced in DB2 V7 with the APAR PQ54042 and extended to V8.
Assume that table DEPT with column ID exists.
Whether the predicate E1.SALARY > E2.SALARY * 1.10 is considered stage 1 or not is
determined by the sequence in which the tables are joined.


Unknown Join Sequence - BETWEEN Predicates

SELECT EMP.*
FROM EMP, SALRANGE S
WHERE EMP.LEVEL = S.LEVEL AND
EMP.SALARY BETWEEN S.LOW AND S.MID;

Prior to V8 (w/o PQ54042):  Stage-2 predicate
V8 and beyond:
  If SALRANGE is inner table: Stage-2 predicate
  If EMP is inner table: Stage-1 predicate; could use index on EMP.SALARY


Figure 9-43. Unknown Join Sequence - BETWEEN Predicates CG381.0

Notes:
A performance improvement that impacts joins when BETWEEN is used with column
names in predicates was introduced in DB2 V7 with the APAR PQ54042 and extended to
V8.
Assume that table EMP has been altered to include column LEVEL and table SALRANGE
containing columns LOW and MID exists.
Whether the predicate EMP.SALARY BETWEEN S.LOW AND S.MID is considered Stage
1 or not is determined by the sequence in which the tables are joined.


Unknown Join Sequence - Table UDFs

BOOK( ID     INTEGER,
      TITLE  CHAR(60),
      AUTHOR CHAR(30) );

SELECT BOOK.ID, BOOK.TITLE
FROM BOOK, TABLE (tf_contain('Computer')) tf(id)   -- tf_contain is a table UDF
WHERE BOOK.ID = tf.id ;

Prior to V8 (w/o PQ54042):  Stage-2 predicate; table space scan on BOOK
V8 and beyond:
  If tf_contain is inner table: Stage-2 predicate
  If BOOK is inner table: Stage-1 predicate; could use index on BOOK.ID column


Figure 9-44. Unknown Join Sequence - Table UDFs CG381.0

Notes:
A performance improvement that impacts table UDFs was introduced in DB2 V7 with the
APAR PQ54042 and extended to V8 to unlike data types.
With this enhancement, the book.ID=tf.ID predicate is Stage 1 and indexable provided the
table UDF is accessed first (outer table).
If you join a base table with a user-defined table function, the sequence in which the tables
are joined determines whether the predicate is Stage 1 or not.


Unknown Join Sequence - Different CCSIDs

SELECT ...
FROM SUPPLIER, PRODUCT                     -- SUPPLIER is Unicode, PRODUCT is EBCDIC
WHERE SUPPLIER.ID = PRODUCT.SUPPLIERID ;   -- both columns are CHAR type

Prior to V8:   Not allowed
V8 and beyond:
  If PRODUCT (EBCDIC) is inner table: Stage-1 predicate; not indexable
  If SUPPLIER (Unicode) is inner table: Stage-1 predicate; could use index
  on SUPPLIER.ID column


Figure 9-45. Unknown Join Sequence - Different CCSIDs CG381.0

Notes:
DB2 UDB for z/OS and OS/390 is increasingly being used as a part of large client server
systems, for example, in data centers of multinational companies and e-commerce. DB2
V7 introduced Unicode support to store data in Unicode. However, being able to store data
in Unicode is not enough to solve all code page related problems. For example, DB2 V7
does not allow joining tables with different encoding schemes. Thus, an EBCDIC table
cannot be joined with a Unicode table.
DB2 V8 enhances support for Unicode and allows joining of tables with different encoding
schemes. Thus, an EBCDIC table can be joined with a Unicode table.
The join predicate in this situation is considered Stage 1. However, if the join predicate is
indexable or not depends on the sequence in which the tables are joined.
The same restrictions as for string comparisons between the same CCSID also apply here.
Besides that, in order to be Stage 1 and indexable, the inner table column, or the “col” side
of the predicate, has to be Unicode. Otherwise, the predicates are Stage 1 but not
indexable. The reason is that all predicates comparing unlike CCSID are evaluated in the
Unicode encoding scheme.
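The indexability asymmetry follows from collation: an index on an EBCDIC column is ordered by EBCDIC byte values, which sort differently from Unicode code points, so a predicate evaluated in the Unicode encoding scheme cannot probe it. Python's cp037 codec (one common EBCDIC code page — using it here is an assumption for illustration) shows the two orders disagreeing:

```python
import codecs

words = ["a", "A", "1"]
unicode_order = sorted(words)                                          # code-point order
ebcdic_order = sorted(words, key=lambda s: codecs.encode(s, "cp037"))  # EBCDIC byte order
print(unicode_order)   # digits sort before uppercase before lowercase
print(ebcdic_order)    # lowercase sorts before uppercase before digits
```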


In all cases involving “column = column” comparisons (same or unlike CCSID) with
columns from different tables, the optimizer considers a merge scan join as a potential
access path.


9.4 Table UDF Cardinality Option and Block Fetch

Student Notebook

Table UDF Cardinality and Materialized FETCH


What is it? .....
User-defined table function cardinality option indicates the total number
of rows returned by a user-defined table function reference
Used by DB2 at bind time to evaluate table function access cost
A host variable or parameter marker cannot be used
Using materialized FETCH, rows returned from a user-defined function
are pre-fetched into a work file in its first invocation
Feature is enabled by DB2 based on the access cost estimation
Benefits .....
Better performance for text search with Text Extender using UDF
Enhances ability to tune performance of queries containing
user-defined table function
Performance improvement to move data between table functions and
DB2 using block data movement
Best performance when combined with V8 capability to make
predicates with unlike data types Stage 1


Figure 9-46. Table UDF Cardinality and Materialized FETCH CG381.0

Notes:
Today, for a user-defined table function, you can specify the CARDINALITY option to
specify an estimate of the expected number of rows that the function returns. The number
is used for optimization purposes. This is fine as long as each invocation of the UDF
returns more or less the same number of rows. However, that is not always the case.
Subsequent invocations of the table UDF, depending on the input parameters, can return a
totally different answer set size.
In Version 8, DB2 allows you to specify the cardinality option when you reference a
user-defined table function in an SQL statement, for example, in a SELECT. With this
option, users have the capability to better tune the performance of queries that contain
user-defined table functions.
The user-defined table function cardinality option indicates the total number of rows
returned by a user-defined table function reference. The option is used by DB2 at bind time
to evaluate the table function access cost.


UDF Cardinality SELECT Statement Syntax Changes

table-function-reference / table-spec:

   TABLE ( function-name ( expression, ... ) )  correlation-clause  table-UDF-cardinality-clause
   TABLE ( transition-table-name )              correlation-clause

table-UDF-cardinality-clause:

   CARDINALITY integer-constant
   CARDINALITY MULTIPLIER numeric-constant


Figure 9-47. UDF Cardinality SELECT Statement Syntax Changes CG381.0

Notes:
A cardinality clause can be specified to each user-defined table function reference within
the table specification of the FROM clause in a subselect. This option indicates the
expected number of rows to be returned by referencing the function in a particular query.
The cardinality clause comes in two flavors, as shown in the visual.
• The CARDINALITY keyword, followed by an integer that represents the expected
number of rows returned by the user-defined table function.
This keyword specifies an estimate of the expected number of rows returned by
user-defined table function reference.
Example: DB2 expects the number of rows returned by the user-defined table
function to be 30, regardless of the value in the CARDINALITY column in
SYSIBM.SYSROUTINES for this function.
SELECT * FROM TABLE (TUDF(1) CARDINALITY 30) AS X;
• The CARDINALITY MULTIPLIER keyword, followed by a numeric constant.


The expected number of rows returned by the table function is computed by multiplying
the given number with the reference cardinality value that is retrieved from the
CARDINALITY column of SYSIBM.SYSROUTINES for the corresponding table function
name, that was specified when the user-defined table function was created.
Example 1: If SYSIBM.SYSROUTINES.CARDINALITY = 1 for the user-defined table
function, DB2 assumes the expected number of rows to be returned is 30 (30 * 1) for
this invocation of the function.
SELECT * FROM TABLE(TUDF(2) CARDINALITY MULTIPLIER 30) AS X;
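The two options reduce to a simple rule: CARDINALITY replaces the catalog estimate for the query, while CARDINALITY MULTIPLIER scales it. A sketch of that arithmetic (the function name is invented for illustration):

```python
def expected_rows(catalog_cardinality, cardinality=None, multiplier=None):
    """Per-query row estimate for one table UDF reference."""
    if cardinality is not None:    # CARDINALITY n: catalog value ignored for this query
        return cardinality
    if multiplier is not None:     # CARDINALITY MULTIPLIER m: catalog value scaled
        return catalog_cardinality * multiplier
    return catalog_cardinality     # no clause: catalog value used as-is

print(expected_rows(1000, cardinality=30))   # SELECT ... CARDINALITY 30
print(expected_rows(1, multiplier=30))       # SELECT ... CARDINALITY MULTIPLIER 30
```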


Cardinality Option and Cardinality in Catalog
Cardinality-clause is not SQL standard syntax
Cardinality option
Cardinality and cardinality multiplier cannot co-exist
Cardinality is integer only
Value of integer_constant must range from 0 to 2147483647
Cardinality multiplier can be integer, decimal, floating point

Cardinality in catalog
Table UDF cardinality in table SYSIBM.SYSROUTINES
Column CARDINALITY populated by CREATE TABLE UDF statement

Cardinality option and cardinality in catalog comparison


Cardinality multiplier is multiplied with cardinality in catalog
Cardinality option applies to current query only, it does not overwrite
cardinality in catalog


Figure 9-48. Cardinality Option and Cardinality in Catalog CG381.0

Notes:
The cardinality clause is a non-standard SQL feature, specific to DB2 for z/OS
implementation.
The cardinality clause allows you to specify either the CARDINALITY option or the
CARDINALITY MULTIPLIER option. These keywords are mutually exclusive.
Specifying the CARDINALITY option when referencing a user-defined table function in a
SELECT statement does not change the corresponding CARDINALITY column value in
SYSIBM.SYSROUTINES. When you specify the CARDINALITY option when referencing a
user-defined table function, the value only applies for that particular query, and the value in
the CARDINALITY column in SYSIBM.SYSROUTINES is ignored for that particular query.
Specifying the CARDINALITY MULTIPLIER option when referencing a user-defined table
function in a SELECT statement does not change the CARDINALITY column value in
SYSIBM.SYSROUTINES. When you specify the CARDINALITY MULTIPLIER option when
referencing a user-defined table function, the value only applies for that particular query.
However, the value in the CARDINALITY column in SYSIBM.SYSROUTINES is not
ignored.

Student Notebook

The CARDINALITY column value in SYSIBM.SYSROUTINES can only be initialized by the
CARDINALITY option in the CREATE FUNCTION statement when the user-defined table
function is created. It can be changed by the CARDINALITY option in the ALTER
FUNCTION statement.
The following example illustrates a case where the CARDINALITY MULTIPLIER option for
a user-defined table function can influence the query optimization process of DB2.
SELECT *
FROM BOOKS B,
TABLE(CONTAINS(1,'cs') CARDINALITY MULTIPLIER 15.0) AS X1(ID),
TABLE(CONTAINS(2,'database') CARDINALITY MULTIPLIER 2.0) AS X2(ID),
TABLE(CONTAINS(3,'Clark') CARDINALITY MULTIPLIER 0.03) AS X3(ID)
WHERE B.ID = X1.ID AND B.ID = X2.ID AND B.ID = X3.ID;
In this example, we assume that, for a user-defined table function CONTAINS, the
CARDINALITY column in SYSIBM.SYSROUTINES is 1000. The table function CONTAINS
searches a string in a column of the BOOKS table and returns ID numbers of the matching
BOOKS rows. The first argument of CONTAINS indicates the column number of BOOKS
and the second argument is the search string.
The first reference to CONTAINS searches a string 'cs' in the category of books (which is
column 1 of the BOOKS table). We expect that 15000 books will meet the condition.
The second reference indicates that we expect to find 2000 books that contain a string
'database' in their abstracts (column 2).
The third reference indicates that there are probably around 30 books written by authors
called 'Clark' (the authors column is column 3 in the BOOKS table).
The following example shows that, instead of using the CARDINALITY MULTIPLIER option,
the same query can be written using the CARDINALITY option.
SELECT *
FROM BOOKS B,
TABLE(CONTAINS(1,'cs') CARDINALITY 15000) AS X1(ID),
TABLE(CONTAINS(2,'database') CARDINALITY 2000) AS X2(ID),
TABLE(CONTAINS(3,'Clark') CARDINALITY 30) AS X3(ID)
WHERE B.ID = X1.ID AND B.ID = X2.ID AND B.ID = X3.ID;
When you estimate the number of rows returned by each reference of the CONTAINS
function, DB2 can evaluate the access cost more accurately based on the specified
cardinality option, and a more appropriate join sequence and join type can be chosen by
the query optimization process. The effectiveness of the option depends on the access
cost of the user defined table function computed by DB2, relative to the access costs of the
other tables in the query.


Sequential Fetch versus Materialized Fetch
Sequential fetch
Each returned row needs a UDF fetch call
Each call needs a context switch, also known as suspend/resume
Materialized fetch
All rows are returned in one UDF (open) call and stored in work file
Saves (#rows -1) context switch
Adds one-time create work file, insert rows into work file and deallocate work
file cost
PLAN_TABLE: materialized fetch, a new table access type for table UDFs, is
displayed as 'RW'
Sequential fetch compared to materialized fetch
When the number of rows reaches a threshold, materialized fetch performs
better than sequential fetch
Depends on other UDF parameters, such as CPU time per row, I/O cost per row
Depends also on system parameters, such as MIPS
DB2 decides which is better, based on access cost estimation
The feature of materialized fetch is independent of the cardinality option. If no
cardinality option is specified, the cost estimation is done using the cardinality
data in the catalog (or using the default value)


Figure 9-49. Sequential Fetch versus Materialized Fetch CG381.0

Notes:
The performance improvement of queries using table UDFs is achieved in conjunction with
another new feature introduced in DB2 V8, called “Materialized fetch” or “Table UDF block
fetch”.
Before this enhancement, each returned row from a user-defined table function needs a
UDF fetch call, and each call needs a context switch, since a UDF runs in a WLM managed
address space, like a stored procedure. Context switches can become expensive,
especially when the table UDF returns many rows.
To reduce the amount of context switches required, DB2 V8 uses a technique called
materialized fetch or block fetch.
All rows are returned during the first invocation of the UDF, prefetched and stored in DB2
workfile. By using this technique, the savings are #rows_in_table_UDF_result - 1 context
switches, at the cost of a one-time workfile creation, the cost to insert the rows into the
workfile and the cost of deallocating the workfile.
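The trade-off can be framed as a small cost model. All unit costs below are hypothetical placeholders (DB2's optimizer actually uses catalog columns such as IOS_PER_INVOC plus processor speed); the sketch only shows the shape of the break-even between per-row context switches and one-time workfile overhead:

```python
def sequential_cost(n_rows, switch_cost):
    # one context switch per fetched row
    return n_rows * switch_cost

def materialized_cost(n_rows, switch_cost, wf_create, wf_insert_per_row, wf_dealloc):
    # one switch for the single invocation, plus one-time workfile overhead
    return switch_cost + wf_create + n_rows * wf_insert_per_row + wf_dealloc

# Hypothetical unit costs, chosen only to expose the crossover.
COSTS = dict(switch_cost=100, wf_create=500, wf_insert_per_row=1, wf_dealloc=200)

small = sequential_cost(1, COSTS["switch_cost"]) < materialized_cost(1, **COSTS)
large = sequential_cost(10_000, COSTS["switch_cost"]) > materialized_cost(10_000, **COSTS)
print(small, large)   # sequential wins for tiny results, materialized for big ones
```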


Note that this type of block fetch is not to be confused with block fetching in a distributed
environment.
During access path selection, the optimizer decides whether or not to use this new block
fetching technique, depending on the following criteria:
• The estimated #rows returned by the user-defined table function is considered.
• SYSIBM.SYSROUTINES information: When available, the values in the
IOS_PER_INVOC, INSTS_PER_INVOC, INITIAL_IOS, INITIAL_INSTS columns are
taken into consideration. Note that you have to supply this information by manually
updating the catalog. RUNSTATS has no way of collecting this information. When the
information is not available, default values are used.
• The processor speed of the machine you are running on is also taken into
consideration, since the INST_PER_INVOC and INITIAL_INSTS columns are
expressed in number of instructions, and not in service units or CPU seconds.
You can determine if DB2 has used the table UDF block fetch feature by using EXPLAIN.
When used, the ACCESSTYPE field in the PLAN_TABLE contains “RW.” Equivalent
information is available in the mini-plan trace record IFCID 22.


9.5 Trigger Enhancements


Trigger Enhancements
TRIGGER work file creation avoided
When the WHEN clause (conditional trigger) evaluates false (and the
trigger is not invoked)
When old/new transition variables/tables can fit in a memory buffer
Very significant performance enhancement
When few or no triggers fired
When the transition variable information fits into the memory buffer
Applies to both BEFORE and AFTER triggers


Figure 9-50. Trigger Enhancements CG381.0

Notes:
Prior to DB2 V8, each time an AFTER trigger with a WHEN clause (also known as a
conditional trigger) is invoked, a work file is created for the old and new transition variables.
The work file is always created, even when the trigger is not activated because the WHEN
condition evaluates false.
For example, let us say that you insert 1000 rows into a table that has a trigger, and
assume that only three of those rows actually satisfy the WHEN clause and invoke the
trigger. Since a transition table (work file) is created for each change/insert, the transition
table is created 1000 times but used by the trigger manager only three times. The other
997 times, the work file is created and deleted needlessly.
In the following example, the insert of the first row causes the row to be inserted into a
transition table, but the trigger is not invoked, because the values NAME = 'TASHA' and
POUNDS = 10 do not match the WHEN clause for trigger NEWCAT. The transition table is
deleted after the statement is completed.


The trigger is also not invoked for the second row, because the values NAME = 'BLACKIE'
and POUNDS = 9 do not match the WHEN clause for trigger NEWCAT. The transition table
is deleted after the statement is completed.
However, the trigger is invoked for the third row because the values NAME = ‘SUNSHINE’
and POUNDS = 12 match the WHEN clause for trigger NEWCAT. So the row that is
inserted into the transition table is actually used to process the AFTER trigger.
Here is the coding for our example:
CREATE TRIGGER NEWCAT
AFTER INSERT ON CATS
REFERENCING NEW AS NROW
FOR EACH ROW MODE DB2SQL
WHEN (NROW.NAME = 'SUNSHINE' AND NROW.POUNDS = 12)
INSERT INTO PETS(COL1,COL2,COL3,COL4)
VALUES (0, 1, NROW.NAME, 'INSERTED SUNSHINE');

INSERT INTO CATS(ID,NAME,POUNDS,C4,C5,C6,C7,C8,C9,C10)


VALUES (1, 'TASHA', 10, '001',4, 2, 2, 4342, 'PURINA CAT CHOW', 'ANN')
INSERT INTO CATS(ID,NAME,POUNDS,C4,C5,C6,C7,C8,C9,C10)
VALUES (2, 'BLACKIE', 9, '001', 4, 2, 2, 3023, 'KAL KAN', 'BETH')
INSERT INTO CATS(ID,NAME,POUNDS,C4,C5,C6,C7,C8,C9,C10)
VALUES (3, 'SUNSHINE', 12, '001', 4, 2, 2, 1000, 'FRISKIES BUFFET', 'BETH')
In addition, V8 uses a memory buffer to store a small number of rows with the transition
variables, to avoid creating and deleting the work files. If more information needs to be
stored than fits into the buffer, a work file is created.
Note that this enhancement applies to ALL triggers, not just AFTER triggers.
This enhancement can represent a very significant performance improvement, especially
when the trigger is fired only a few times compared to the number of times it is evaluated,
or when only a small amount of transition variable information needs to be kept and it fits
in the memory buffer, so no work file needs to be created.



9.6 Distribution Statistics on Non-Indexed Columns


The Need for Extra Statistics


Distribution statistics are currently collected for indexed columns
only
Non-uniform distribution statistics for non-leading indexed
columns are not collected by RUNSTATS, which can result in
non-optimal performance
Less efficient join sequences
Inappropriate table join method
Increase in the number of rows that need to be processed


Figure 9-51. The Need for Extra Statistics CG381.0

Notes:
With DB2 V7, RUNSTATS may only collect frequencies on the leading column and leading
concatenated column groups of indexed columns. Data correlation and skew may occur on
any column or column group. When a query contains predicates on columns which are
correlated and/or skewed, and the optimizer does not have correlation and/or skew
statistics, the optimizer may incorrectly estimate the filtering for those predicate(s).
Inaccurate filter factors can manifest themselves in many ways: an inefficient join
sequence, an inefficient join method, or inefficient single-table access. Also, access paths
for queries where insufficient statistics information is available tend to be more unstable.
Inaccurate filter factor estimation can result in inaccurate query cost, so efficient and
inefficient access paths may end up having similar cost estimates. This increases the
probability that an insignificant costing change results in an access path change, and
possibly a severe regression. When the optimizer is accurately estimating filter factors, the
estimated costs more accurately reflect actual costs. So inefficient and efficient access
paths are unlikely to be close in cost estimate, which should result in more efficient and
stable access paths.
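As an illustration (a made-up sketch using the DB2 sample table, not taken from an actual workload), consider a query with predicates on skewed, correlated, non-indexed columns; with only uniform-distribution assumptions, the optimizer may badly misestimate the filtering:

```sql
-- Assume JOB and EDLEVEL are non-indexed, skewed, and correlated.
-- Without distribution statistics, DB2 estimates the filter factor
-- as 1/COLCARDF per predicate, as if values were evenly spread.
SELECT EMPNO, LASTNAME
  FROM DSN8810.EMP
 WHERE JOB = 'CLERK'       -- might match most of the table
   AND EDLEVEL = 12;       -- largely implied by JOB = 'CLERK'
```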


With DB2 V8, RUNSTATS can collect the most and least frequently occurring values
on almost any column or column group. RUNSTATS can also collect multi-column
cardinality on almost any column group. In many cases, customers have predicates on
some matching and some screening columns. Now customers can collect statistics to
better reflect the total index filtering. When customers have correlation and skew on some
indexed and some non-indexed columns, customers can collect statistics so the optimizer
more accurately estimates the number of rows which qualify from the table. The number of
rows qualified from a table can be critical in accurately costing joins — such as how many
probes will be done to the inner table for nested loop join, and in deciding which table
should be the outer table.

DSTATS (Distribution Statistics for DB2 for OS/390) Tool


Before DB2 V8, users could download a program called DSTATS (Distribution Statistics for
DB2 for OS/390). DSTATS is a standalone DB2 application program containing embedded
dynamic and static SQL statements. This tool is aimed to address the issue by collecting
additional statistics on column distributions that were not being collected by RUNSTATS
before DB2 V8. The DSTATS tool can be downloaded using the following link:
http://www-1.ibm.com/support/docview.wss?uid=swg24001598
The RUNSTATS utility is enhanced to collect additional distribution statistics on virtually
any column (or group of columns) that would likely be used in a predicate. The collection of
this information is especially important when the data distribution in those columns is
skewed.
Note: This enhancement to the RUNSTATS utility eliminates the need to use the
DSTATS tool for DB2 V8. In addition, the DSTATS program has not been enhanced to
support DB2 V8.


RUNSTATS Enhancement
Can collect distribution statistics on any column, or group(s) of
columns, indexed and non-indexed, specified at the table level
Frequency distributions for (non-indexed) columns or groups of
columns
Cardinality values for groups of (non-indexed) columns
LEAST frequently occurring values, along with MOST for both
indexed and non-indexed column distributions
New keywords: COLGROUP MOST LEAST BOTH
SORTNUM SORTDEVT


Figure 9-52. RUNSTATS Enhancement CG381.0

Notes:
In DB2 Version 8, the RUNSTATS utility has incorporated the functionality of the DSTATS
program. You can now use RUNSTATS to collect distribution statistics on any column in
your tables, whether or not they are part of an index, and whether or not they are the
leading columns of an index (when collecting statistics for column groups).
When customers have correlation and skew on certain columns, collecting these additional
statistics can help the optimizer to more accurately estimate the number of rows which
qualify from the table (filter factor). More accurate filter factor computations should lead to
better optimization choices. Thus the query performance improves with better filter factor
information in the DB2 catalog.
These additional statistics can only be gathered by the “stand-alone” RUNSTATS utility.
They cannot be gathered when collecting statistics as part of another utility’s execution,
so-called inline statistics.
To summarize, RUNSTATS enhancements provide the following functionality:
• Frequency value distributions for non-indexed columns or groups of columns.


• Cardinality values for groups of non-indexed columns.


• LEAST frequently occurring values, along with MOST frequently occurring values, for
both index and non-indexed column distributions. (DB2 V7 only gathers the most
frequently occurring values, and therefore does not require you to specify a keyword to
indicate which statistics you want.)


RUNSTATS Utility Syntax Changes

RUNSTATS TABLESPACE [database-name.]table-space-name | LIST listdef-name
    [PART integer]
    TABLE (table-name) [SAMPLE integer]        (SAMPLE default: 25)
        column-spec  colgroup-spec
    [SORTDEVT device-type] [SORTNUM integer]

colgroup-spec:

    COLGROUP ( column-name, ... )
        [FREQVAL COUNT integer {MOST | BOTH | LEAST}]


Figure 9-53. RUNSTATS Utility Syntax Changes CG381.0

Notes:
The visual shows the changes to the RUNSTATS utility syntax to collect cardinality and
distribution statistics on any column or group of columns in a table.
The colgroup-spec allows you to specify the COLGROUP and associated keywords to
collect cardinality and distribution statistics on any column or group of columns in a table.
The sort-spec specifies the device type that allows DFSORT to dynamically allocate the
sort work data sets that are required.
The correlation-stats-spec block (Figure 9-1) in V7 allows you to specify what distribution
statistics to gather at the index level. This block is enhanced in DB2 V8 to include the
keywords MOST, BOTH, and LEAST. Their meaning in this block is the same as in the
colgroup-spec block. The same block can also be used in the specification of a RUNSTATS
INDEX statement.


correlation-stats-spec:

    [KEYCARD]
    [FREQVAL NUMCOLS integer COUNT integer {MOST | BOTH | LEAST}]
        (defaults: NUMCOLS 1, COUNT 10, MOST)

Figure 9-1 Correlation-stats-spec Syntax Block

Collecting Cardinality and Distribution Statistics


To enable the collection of cardinality and distribution statistics on any table column, a new
colgroup-spec block is introduced. New keywords COLGROUP, LEAST, MOST and BOTH
are introduced in this block. In addition, the existing keywords FREQVAL, and COUNT can
also be used. Cardinality and distribution statistics are collected only on the columns
explicitly specified. Cardinality and distribution statistics are not collected if you specify
COLUMN ALL.
COLGROUP
When the keyword COLGROUP is specified, the set of columns specified within the
COLGROUP keyword is treated as a group. The cardinality values are collected on the
column group. You can specify the COLGROUP keyword multiple times (with different sets
of columns) for the same table in your RUNSTATS statement.
The cardinality statistics for the specified group(s) of columns are collected in
SYSCOLDIST, and if the table space is partitioned also in SYSCOLDISTSTATS, catalog
tables.
The COLUMN keyword works the same as in previous releases. The cardinality statistics
of individual columns are collected in the SYSCOLUMNS catalog table, as in versions prior
to DB2 V8.
FREQVAL
This keyword controls the collection of frequent value statistics. These are collected either
on the column group or on individual columns, depending on what you specified on the
COLGROUP keyword. If FREQVAL is specified, then it must be followed by the keyword
COUNT. When specified for table-level statistics, FREQVAL can only be specified together
with the COLGROUP keyword. (No frequent value statistics are collected for columns
specified together with the COLUMN keyword.)
If the FREQVAL keyword is specified together with the COLGROUP keyword, the
distribution statistics are collected in SYSCOLDIST and if the table space is partitioned
also in SYSCOLDISTSTATS catalog tables.


COUNT integer
COUNT indicates the number of frequent values to be collected. Specifying an integer
value of 20 means to collect 20 frequent values for the specified columns. No default value
is assumed for COUNT. The keyword COUNT integer is followed by the keyword MOST,
LEAST, or BOTH.
MOST
The most frequent values are collected when the keyword MOST is specified.
LEAST
The least frequent values are collected when the keyword LEAST is specified.
BOTH
The most frequent values and the least frequent values are collected when the keyword
BOTH is specified.
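Putting the keywords together, a statement such as the following sketch (the column groups and sort device are illustrative choices against the sample table) collects cardinality plus frequency statistics for two column groups in one run:

```sql
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
  TABLE(DSN8810.EMP)
    COLGROUP(WORKDEPT) FREQVAL COUNT 10 BOTH
    COLGROUP(EDLEVEL,JOB,SALARY) FREQVAL COUNT 10 MOST
  SORTDEVT SYSDA SORTNUM 4
```

The first COLGROUP collects the 10 most and 10 least frequent WORKDEPT values; the second collects only the 10 most frequent combinations of (EDLEVEL,JOB,SALARY).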

Collecting Column Correlation Statistics on Indexed Columns


The keywords FREQVAL, NUMCOLS, and COUNT in the correlation-stats-spec block can
be used to collect, by default, the most frequent value statistics for the specified index as in
prior versions. In V8, this block is enhanced to include specification of the keyword MOST,
LEAST, or BOTH, as shown before, in Figure 9-1 on page 9-145.
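For example, the following sketch (DSN8810.XEMPX is a hypothetical index with at least two key columns) collects both the most and the least frequent values for the first two concatenated key columns:

```sql
RUNSTATS INDEX (DSN8810.XEMPX)
  KEYCARD
  FREQVAL NUMCOLS 2 COUNT 10 BOTH
```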

Using Work Data Sets for Collecting Frequency Statistics


If you are collecting frequency statistics, for example, for a data-partitioned secondary
index, DB2 sorts the statistics once for each RUNSTATS job. You need to specify how to
sort the statistics by using temporary work data sets and a sort message data set. You can
use the SORTDEVT option to specify the device type for temporary data sets that DFSORT
can use for sorting.
You can also use the SORTNUM option to specify the number of temporary data sets to
use. The DD name STnnWKmm defines the sort work data sets that are used during utility
processing. The value of nn identifies one or more data sets that are to be used by the
subtask invocation of DFSORT. You can dynamically allocate the work data sets by using
the TEMPLATE utility, or you can define the data sets through JCL statements.

Estimating the Size of the STnnWKmm Data Sets


If you define the data sets through JCL, you need to determine the size and number of
records that RUNSTATS is to create and process. You can use the following formula to
calculate the size of the data sets:
2 * (maximum record length * numcols * (count + 2) * number of indexes)
The values in the formula are as follows:


Maximum record length   The maximum length of the SYSCOLDISTSTATS record on which
                        RUNSTATS is to collect frequency statistics. You can obtain this
                        value from the RECLENGTH column in SYSTABLES.
Numcols                 The number of key columns to concatenate when you collect
                        frequent values from the specified index.
Count                   The number of frequent values that RUNSTATS is to collect.
For example, with a maximum record length of 70, NUMCOLS 3, COUNT 10, and one index,
each sort work data set should hold at least 2 * (70 * 3 * (10 + 2) * 1) = 5040 bytes.


Example 1
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
TABLE(DSN8810.EMP)
COLGROUP(EDLEVEL,JOB,SALARY)

SYSIBM.SYSCOLDIST
NAME TYPE NUMCOLUMNS COLGROUPCOLNO CARDF

EDLEVEL C 3 00090008000C +0.330000E+02

SYSIBM.SYSCOLDISTSTATS
PARTITION NAME TYPE NUMCOLUMNS COLGROUPCOLNO CARDF

1 EDLEVEL C 3 00090008000C +0.320000E+02

2 EDLEVEL C 3 00090008000C +0.000000E+00

3 EDLEVEL C 3 00090008000C +0.100000E+02

4 EDLEVEL C 3 00090008000C +0.000000E+00

5 EDLEVEL C 3 00090008000C +0.000000E+00

EDLEVEL,JOB,SALARY are non-indexed columns


Cardinality value of the column group is stored in the catalog tables


Figure 9-54. Example 1 CG381.0

Notes:
In this example, the cardinality is collected for the column group (EDLEVEL, JOB,
SALARY) by specifying the COLGROUP keyword.
The cardinality value is stored in column CARDF in SYSCOLDIST catalog table. Table
space DSN8D81A.DSN8S81E has five partitions to hold the data for DSN8810.EMP and
therefore the cardinality values are stored for each partition in column CARDF in
SYSCOLDISTSTATS catalog table.
The name of only the first column (EDLEVEL) in the column group is recorded in column
NAME in the catalog tables. The value 3 in column NUMCOLUMNS indicates that there are
three columns in the column group. COLGROUPCOLNO shows the column numbers for
the three columns in the colgroup.
The CARDF value in SYSCOLDIST catalog table indicates that there are 33 distinct values
in DSN8810.EMP table for the colgroup (EDLEVEL,JOB,SALARY).
The CARDF value in the SYSCOLDISTSTATS catalog table indicates that there are 32 distinct
values in partition 1 and 10 distinct values in partition 3 of the DSN8810.EMP table for the


colgroup (EDLEVEL,JOB,SALARY). Partitions 2, 4, and 5 do not have any rows in the table.


Example 2
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E TABLE(DSN8810.EMP)
COLGROUP(EDLEVEL, JOB, SALARY)
FREQVAL COUNT 10 MOST

SYSIBM.SYSCOLDIST
NAME     TYPE  NUMCOLUMNS  COLGROUPCOLNO  CARDF         COLVALUE  FREQUENCYF
EDLEVEL  C     3           00090008000C   +0.33000E+02  -         -
EDLEVEL  F     3           00090008000C   -0.10000E+01  .Ø....    +0.9523809523809523E-01
EDLEVEL  F     3           00090008000C   -0.10000E+01  .Ø....    +0.4761904761904762E-01
EDLEVEL  F     3           00090008000C   -0.10000E+01  .Ø....    +0.2380952380952381E-01
(seven more identical rows, each with FREQUENCYF +0.2380952380952381E-01)

The cardinality value of the column group and 10 most frequent values are
collected for the column group specified (EDLEVEL, JOB, SALARY)


Figure 9-55. Example 2 CG381.0

Notes:
In this example, the cardinality is collected for the column group (EDLEVEL, JOB,
SALARY) by specifying the COLGROUP keyword.
In addition to this, the 10 most frequently occurring values for the colgroup are also
collected by specifying FREQVAL COUNT 10 MOST.
The values are stored in SYSCOLDIST and also in SYSCOLDISTSTATS catalog tables
since the table space DSN8D81A.DSN8S81E is partitioned. The entries in
SYSCOLDISTSTATS catalog table are not shown on the visual.
Column TYPE indicates whether the row has the cardinality value or frequency value.
• If TYPE has the value ‘C’, CARDF contains the cardinality value, and FREQUENCYF is
not relevant.
• If TYPE has the value ‘F’, FREQUENCYF contains the fraction of rows in the table that
have the value specified in column COLVALUE (multiply the number by 100 to get a
percentage), and CARDF is not relevant.


Note that columns COLVALUE and COLGROUPCOLNO are VARCHAR columns FOR BIT
DATA. Moreover, if the value has a non-character data type, the data might not be
printable.
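To inspect the collected values, you could query the catalog directly; a sketch (HEX is used because COLVALUE is VARCHAR FOR BIT DATA and may not be printable):

```sql
SELECT NAME, TYPE, NUMCOLUMNS, CARDF,
       HEX(COLVALUE) AS COLVALUE_HEX, FREQUENCYF
  FROM SYSIBM.SYSCOLDIST
 WHERE TBOWNER = 'DSN8810'
   AND TBNAME  = 'EMP'
 ORDER BY TYPE, FREQUENCYF DESC;
```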


Performance Expectation
Performance gain in certain queries (such as queries that use
non-leading indexed columns and/or where non-indexed
columns are used as predicates, and that have skewed
(non-uniform) data)
CPU and elapsed times for the RUNSTATS utility increase if the new
functions are invoked
Actual amount of increase depends on the number of columns
specified in the COLGROUP keyword
Number of COLGROUPs specified
Amount of sorting that needs to be done by RUNSTATS
Note:
Do not forget to allocate sort workfiles for RUNSTATS
when collecting statistics on non-indexed columns


Figure 9-56. Performance Expectation CG381.0

Notes:
When the new RUNSTATS functionality is invoked, the user can expect to see an increase
in CPU and elapsed time for the utility. The amount of increase depends on the number of
columns specified with the COLGROUP keyword at the table level, as well as the number
of column groups (multiple COLGROUP keywords) specified.
More information about other RUNSTATS enhancements can be found in Figure 8-10,
"RUNSTATS Enhancements", on page 8-24.


9.7 Cost-Based Parallel Sort for Single and Multiple Tables


Cost-Based Parallel Sort


DB2 V8 introduces cost-based consideration to determine if
parallel sort for single or multiple (composite) tables should
be disabled
Sort data size < 2 MB (500 pages)
Sort data per parallel degree < 100 KB (25 pages)
Hidden DSNZPARM OPTOPSE control - default is ON
ON - Cost-based parallel sort considerations for both single table and
multi-table
OFF - Not cost-based - sort parallelism behavior same as V7, that is,
Single (composite) table: Parallel sort
Multiple tables: Sequential sort only
Considerations:
Elapsed time improvement
More usage of workfiles and virtual storage


Figure 9-57. Cost-Based Parallel Sort CG381.0

Notes:
Currently (V7), DB2 tries to do as much sorting in parallel as possible. However, there are
instances where it may not be cost effective to execute the sort process in parallel, one
typical case being a small data sort. In DB2 V8 not all sorts are done in parallel. A cost
model is used to decide whether or not to run the sorts in parallel, for both single-table as
well as multi-table sorts.
The hidden DSNZPARM OPTOPSE is provided so that an installation can go back to the
prior behavior of sort parallelism.
• ON — cost-based — parallel sort for both single table and multi-table
Parallel sort is disabled under the following conditions:
- Sort data size < 2MB (500 pages)
- Sort data per parallel degree < 100 KB (25 pages)
• OFF — not cost-based — sort parallelism behavior same as V7

Query Example - Plan Table
EXPLAIN output to determine if parallel sort is executed or not

SELECT * from Ta, Tb, Tc where Ta.a2 = Tb.b2 and Tb.b3 = Tc.c3;

SMJ_1 SMJ_2
Ta --------> Tb ---------> Tc (Merge scan join used to join the tables)

Prior to DB2 V8                         DB2 V8 (hidden DSNZPARM OPTOPSE set to ON:
                                        the post-optimizer decides to do a parallel sort)
+-------------------------------+ +-------------------------------+
|PLAN|TNAME|ACC|SORTC|SORTN|JOIN| |PLAN|TNAME|ACC|SORTC|SORTN|JOIN|
| | |PGR|_PGR |_PGR |_PGR| | | |PGR|_PGR |_PGR |_PGR|
| | |ID |_ID |_ID |_ID | | | |ID |_ID |_ID |_ID |
+-------------------------------+ +-------------------------------+
| 1 | Ta | 1 | ? | ? | ? | | 1 | Ta | 1 | ? | ? | ? |
| 2 | Tb | 2 | 1 | 2 | 3 | | 2 | Tb | 2 | 1 | 2 | 3 |
| 3 | Tc | 4 | ? | 4 | 5 | | 3 | Tc | 4 | 3 | 4 | 5 |
+-------------------------------+ +-------------------------------+

SORTC is executed in sequential mode SORTC is executed in parallel mode


Figure 9-58. Query Example - Plan Table CG381.0

Notes:
In data warehousing environments, it is often good practice to utilize as many resources as
are available, in order to reduce the elapsed time of critical queries. Prior to DB2 V8, in
some situations it is not possible to fully utilize the CPU when parallel sort is involved. This
is mainly due to the fact that sort-composite is not pushed down for parallelism if the
composite involves more than one table.
We use the following terminology:
• Single composite table: For example, a 2-way merge scan join sorts the table(s) in
join column sequence before joining. The result of the sort is a single composite table.
The sort of single composite table can be done in parallel in V7.
• Multi-table composite: This is the case, for example, in a 3-way merge scan join (see
the visual). After Ta and Tb have been joined, the result is resorted before being joined
to Tc. The result of that sort is a multi-table composite. We cannot use parallel sort in V7
for a multi-table composite table.


In DB2 V8, the sort process has been enhanced to be able to run the multi-table sorts in
parallel. In this case, CPU resources can be better exploited and elapsed time can be
reduced. As mentioned in the previous topic, the optimizer decides whether the sort is done
in parallel or not.
To illustrate this enhancement, let us assume that the access path for the following query (a
three-table join) uses two merge scan joins, also known as sort merge join (SMJ):
SELECT * from Ta, Tb, Tc where Ta.a2 = Tb.b2 and Tb.b3 = Tc.c3;
Prior to DB2 V8, the “sort composite” for SMJ_2 (the output of SMJ_1 involving Ta and Tb,
and the input to SMJ_2) is executed in the parent task (performed as a sequential sort). In
DB2 V8, this sort may be pushed down to the child task and performed in parallel.
You can tell whether a sort is executed in parallel or not by examining the EXPLAIN output.
In PLAN_TABLE output, SORTC_PGR_ID reflects the parallel_group_id if a "sort
composite" is executed in parallel. Similarly, SORTN_PGR_ID reflects the
parallel_group_id if a "sort new" is executed in parallel.
For PLAN 2, SORTC_PGR_ID, which involves only Ta, reflects a parallel_group_id of 1,
both prior to DB2 V8 and in DB2 V8.
For PLAN 3, SORTC_PGR_ID, which involves Ta and Tb, reflects a parallel_group_id of 3;
that is, this SORTC is executed in parallel in DB2 V8. Prior to this enhancement,
SORTC_PGR_ID is "?", meaning that this SORTC is executed in sequential mode.
Note that in the PLAN_TABLE entries, "?" indicates a null value.
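To check this yourself, you could EXPLAIN the query and look at the sort-related parallel group columns; a sketch (the QUERYNO value is arbitrary):

```sql
EXPLAIN PLAN SET QUERYNO = 101 FOR
  SELECT * FROM Ta, Tb, Tc
   WHERE Ta.a2 = Tb.b2 AND Tb.b3 = Tc.c3;

-- Non-null SORTC_PGROUP_ID / SORTN_PGROUP_ID values mean the
-- corresponding sort runs in parallel.
SELECT PLANNO, TNAME, ACCESS_PGROUP_ID,
       SORTC_PGROUP_ID, SORTN_PGROUP_ID, JOIN_PGROUP_ID
  FROM PLAN_TABLE
 WHERE QUERYNO = 101
 ORDER BY PLANNO;
```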


9.8 Performance of Multi-Row Operations


Multi-Row Fetch - Local Application

[Diagram: with single-row fetch, the application (Appl) issues FETCH three times and
DB2 returns row 1, row 2, and row 3 in three separate API crossings; with multi-row
fetch, one FETCH returns rows 1-3 to the application in a single API crossing.]


Figure 9-59. Multi-Row Fetch - Local Application CG381.0

Notes:
DB2 Version 8 introduces multi-row operations. This way you reduce the number of trips
over the API from the application into DB2. Multi-row fetch and insert are described in great
detail in Figure 3-17, "Multi-Row FETCH and INSERT", on page 3-32.
This topic discusses some of the performance aspects and early performance
measurements of using multi-row operations. We discuss the performance impact of:
• Multi-row fetch in local applications
• Multi-row insert in local applications
• Positioned update and delete with multi-row operations in local applications
• Multi-row operations in a distributed environment
- When using rowsets in your DB2 for z/OS applications
- Exploitation by DB2 clients on Linux, UNIX and Windows
- Automatic exploitation of multi-row operations by DDF


Multi-row Fetch in Local Applications


When using multi-row fetch operations, you normally fetch a set of rows, a rowset in a
single fetch operation, as shown below:
FETCH NEXT ROWSET FROM my-cursor FOR 3 ROWS INTO :hva1, :hva2, :hva3
In the example, we fetch 3 rows (to be in line with the figure above) in a single API
(Application Program Interface) crossing (going from the application to DB2 and back)
instead of 3 API crossings without rowsets.
Using multi-row fetch, local applications can reduce their CPU time by up to 50% by
avoiding API overhead for each row fetched. The percentage of improvement to be
expected is lower if more columns are fetched (making each fetch more expensive) and/or
fewer rows are fetched per call (making the reduction in API crossings less significant).
Remember also that fetch intensive workloads experience a higher impact of turning on the
class 2 accounting trace. Therefore, a higher percentage of CPU time improvement can be
expected by using multi-row fetch operations, if accounting trace class 2 is active.
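A fuller sketch of rowset fetch in an embedded program (the host variable arrays :HVA-EMPNO and :HVA-NAME are assumed to be declared with at least 10 elements):

```sql
DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
  SELECT EMPNO, LASTNAME FROM DSN8810.EMP;

OPEN C1;

-- One API crossing returns up to 10 rows into the arrays;
-- SQLERRD(3) in the SQLCA tells how many rows were returned.
FETCH NEXT ROWSET FROM C1 FOR 10 ROWS
  INTO :HVA-EMPNO, :HVA-NAME;
```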

Multi-row Insert in a Local Environment


When using multi-row insert operations, you insert a set of rows, a rowset, in a single insert
operation, as shown below:
INSERT INTO my-table FOR 20 ROWS VALUES(:hva1,:hva2,...)
In this example we insert 20 rows in a single insert operation. Again this saves trips across
the API between the application and DB2. Up to 30% CPU time reduction has been
achieved in lab performance measurements by avoiding the overhead of crossing the API
for each individual row insert. Percentage-wise, the improvement is less for multi-row insert
than for multi-row fetch, because in general an insert operation is more expensive than a
fetch, and therefore the overhead of crossing the API is less significant.
The percentage of improvement that can be obtained is higher on tables with fewer
indexes, fewer columns, and/or when you insert more rows per call.
Note that when using multi-row insert, you have an ATOMIC clause (the default) that
specifies that, if the insert of any row fails, all changes made by the multi-row insert are
undone. In order to provide this functionality, DB2 takes a SAVEPOINT at the start of a
multi-row insert using the ATOMIC clause, which typically takes about 15µs on a z900
processor. This contributes less than 5% overhead when using a 2-row insert, and is
completely negligible for a many-row insert operation.
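If you do not want one failing row to undo the whole rowset, the ATOMIC default can be overridden; a sketch (the table and host variable arrays are illustrative):

```sql
-- NOT ATOMIC CONTINUE ON SQLEXCEPTION: DB2 continues past rows that
-- fail to insert; use GET DIAGNOSTICS to find out which ones failed.
INSERT INTO MY_TABLE (COL1, COL2)
  FOR 20 ROWS
  VALUES (:hva1, :hva2)
  NOT ATOMIC CONTINUE ON SQLEXCEPTION;
```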

Multi-row Update and Delete Operations in a Local Environment


Similar improvement can be expected for multi-row cursor update and delete operations.
However, remember that in most cases you want to delete or update individual rows from a
rowset and not the entire rowset. In that case, no performance improvement is to be
expected.
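For illustration, a rowset cursor that updates the whole rowset, or deletes a single row within it, might look like this (the table, cursor, and host-variable names are illustrative):

```sql
DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
  SELECT ORDER_ID, STATUS
  FROM MY_ORDERS
  FOR UPDATE OF STATUS;

OPEN C1;
FETCH NEXT ROWSET FROM C1 FOR 10 ROWS INTO :hva_id, :hva_status;

-- Update all rows of the current rowset in one operation:
UPDATE MY_ORDERS
  SET STATUS = 'SHIPPED'
  WHERE CURRENT OF C1;

-- Or act on a single row of the rowset -- the more common case, for
-- which no multi-row performance gain is to be expected:
DELETE FROM MY_ORDERS
  WHERE CURRENT OF C1 FOR ROW 3 OF ROWSET;
```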


Multi-row Operations in a Distributed Environment


Here we discuss multi-row operations in several situations.

Between DB2 for z/OS Systems


Using multi-row operations, a dramatic reduction in network traffic and response time is
possible by avoiding a message send/receive operation for each row in the following
cases:
• Fetch when not [read-only or [CURRENTDATA NO and ambiguous cursor]]
• Update and/or delete with cursor
• Insert
These cases prevent DB2 from using block fetch in a distributed environment. An
updateable cursor, an ambiguous cursor bound with CURRENTDATA YES, and an
INSERT statement each send a single row across the wire.
When using multi-row operations, DB2 always sends a rowset in a single network
message, even for an updateable cursor, an ambiguous cursor, or a (multi-row) insert
operation. Therefore, by using multi-row operations, you can achieve an effect on network
traffic similar to that of block fetching, and reduce the network time dramatically. (However,
be aware that you potentially hold locks on all rows of the rowset when doing rowset
operations.)

Between DB2 Distributed Clients and DB2 for z/OS


When executing INSERT statements on the distributed platform, DB2 ODBC clients on
Linux, UNIX, and Windows already provide support for so-called “array input”. When
talking to a DB2 for z/OS and OS/390 V7 server, all INSERT statements are bundled by
the driver into a single network message when they are sent to the DB2 for z/OS system.
At the server side, the message is taken apart and multiple INSERT statements are
executed to insert all the rows that make up the input array.
This functionality saves on network traffic, as fewer messages are sent. When going to
DB2 for z/OS V8, “array input” can take advantage of DB2’s capability to use multi-row
INSERT. A single message is still sent (as before), but it now carries a single (multi-row)
INSERT statement. Because only one INSERT statement is executed, the number of API
crossings between DDF and DBM1 is reduced, and with it the CPU and elapsed time.
In addition, when a DB2 ODBC/CLI client on the distributed platforms uses dynamic
scrollable cursors, it also uses multi-row fetch operations when communicating with a
DB2 for z/OS V8 server. This way, it is possible to retrieve multiple rows (a rowset) in a
single network operation. Remember that non-rowset dynamic scrollable cursors only send
single rows across the wire because of the semantics of the “dynamic” keyword. (This also
applies to sending rows between DB2 for z/OS systems when using non-rowset dynamic
scrollable cursors.)

DDF and Multi-row FETCH
DDF automatically uses multi-row FETCH for read-only queries
Reduces CPU cost of read-only queries significantly

[Diagram: a DB2 client issues SELECT * FROM ... and receives blocks of rows via
block fetch. With DB2 for OS/390 V7, the xxxDIST address space issues a separate
FETCH into xxxDBM1 for every row of the block (OPEN, FETCH, FETCH, FETCH, ...).
With DB2 for z/OS V8, xxxDIST builds all rows for a buffer in one API crossing.]

Figure 9-60. DDF and Multi-row Fetch CG381.0

Notes:
In most cases, you must change your applications in order to exploit multi-row fetch
operations. For example, you have to set up your applications to use host variable arrays.
However, when you use a remote client to connect to DB2, for example, a Java application
using a Type 4 connection with the Universal Driver, DB2 will automatically use multi-row
fetch “under the covers” when fetching rows from the tables while building a block that will
be sent back to the client.
In order for DB2 to be able to automatically exploit multi-row fetch for distributed
applications, the cursor has to be read-only or you must be using CURRENTDATA NO with
an ambiguous cursor. When these conditions are satisfied, DB2 can enable block fetching.
When DB2 is putting together a block of data to be sent to the client inside the DDF
address space, DDF issues FETCH statements (like any other application). When you are
using block fetching against a V8 DB2 system, the DDF address space will use multi-row
fetch to build the blocks of data to be sent to the client. This is completely transparent to the
requester.


Early measurements have shown significant CPU savings by using this feature, up to 50%
in a case where many rows are fetched.
Note that this enhancement does not require the client application to use the FOR n ROWS
clause, or host variable arrays, and that this enhancement has no effect on the blocking
done by DDF. It only affects the number of API crossings between DDF and DBM1.
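To illustrate the conditions under which DDF can do this, here is a sketch of a cursor that DB2 treats as read-only (the cursor, table, and column names are illustrative):

```sql
-- FOR FETCH ONLY makes the cursor unambiguously read-only, so DDF can
-- use block fetch and, against a V8 server, build each block with
-- multi-row fetch between DDF and DBM1:
DECLARE C1 CURSOR FOR
  SELECT COL1, COL2
  FROM MY_TABLE
  FOR FETCH ONLY;

-- An ambiguous cursor (no FOR FETCH ONLY or FOR UPDATE clause) also
-- qualifies if the plan or package is bound with CURRENTDATA(NO).
```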

9.9 Volatile Table Support


Volatile Table Enhancement


What is a volatile table?
A table whose contents can vary from empty to very large at run time
Generating an access plan based on the point-in-time captured
statistics can result in incorrect or poorly performing plan
For example, if statistics are gathered when the volatile table is empty,
the optimizer tends to favor accessing the volatile table using a table
scan rather than an index
Declaring a table as "volatile" influences the optimizer to favor
matching index access rather than depend on the existing
statistics for that table
CREATE TABLE ... VOLATILE


Figure 9-61. Volatile Table Enhancement CG381.0

Notes:
A volatile table is a table whose contents can vary from zero to very large at run time. DB2
often does a table space scan or non-matching index scan when the data access statistics
indicate that a table is small, even though matching index access is possible. This is a
problem if the table is small or empty when statistics are collected, but the table is large
when it is queried. In that case, the statistics are not accurate and can lead DB2 to pick an
inefficient access path. Favoring index access may be desired for tables whose size can
vary greatly.

CREATE TABLE/ALTER TABLE Syntax

CREATE TABLE table-name ...
   [ NOT VOLATILE | VOLATILE ] [ CARDINALITY ]     (NOT VOLATILE is the default)

ALTER TABLE table-name ...
   [ NOT VOLATILE | VOLATILE ] [ CARDINALITY ]

Figure 9-62. CREATE TABLE/ALTER TABLE Syntax CG381.0

Notes:
DB2 V8 adds two new keywords to the CREATE TABLE statement: VOLATILE (to favor
index access whenever possible), and NOT VOLATILE (to allow any type of access to be
used). Here are their effects:
VOLATILE: Specifies that for SQL operations, index access is to be used
on this table whenever possible. However, be aware that by
specifying this keyword, list prefetch and certain other
optimization techniques are disabled.
NOT VOLATILE: Specifies that SQL access to this table should be based on the
current statistics. This is the default.
CARDINALITY: An optional keyword expressing the fact that the table can have
frequently changing cardinality; it can have only a few rows at
times, and thousands or millions of rows at other times. This
keyword is allowed for DB2 family compatibility, but will serve
no additional function in DB2 for z/OS.
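Putting the keywords together, the statements might look as follows (the table is hypothetical, for illustration only):

```sql
-- Favor index access regardless of the current statistics:
CREATE TABLE WORK_QUEUE
  (ITEM_ID  INTEGER      NOT NULL,
   PAYLOAD  VARCHAR(200))
  VOLATILE CARDINALITY;

-- Revert to cost-based access path selection:
ALTER TABLE WORK_QUEUE NOT VOLATILE;
```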


SAP R/3 Cluster Table Concurrency Issue


Cluster tables contain groups - or clusters - of rows
The rows in a cluster logically belong together
The clusters are defined by the primary key of the index on the table
There is a high incidence of lock contention when the same
cluster table is being accessed concurrently due to access
path patterns
VOLATILE option can be used to force index access for these
tables to reduce chance of contention
Significant performance improvement for some SAP applications


Figure 9-63. SAP R/3 Cluster Table Concurrency Issue CG381.0

Notes:
One common database design involves tables that contain groups of rows that logically
belong together. Within each group, the rows should be accessed in the same sequence
every time. The sequence is determined by the primary key on the table. Lock contention
can occur when DB2 chooses different access paths for different applications that operate
on a table with this design.
To minimize contention among applications that access tables with this design, specify the
VOLATILE keyword when you create or alter the tables. A table that is defined with the
VOLATILE keyword is known as a volatile table. When DB2 executes queries that include
volatile tables, DB2 uses index access whenever possible. As well as minimizing
contention, using index access preserves the access sequence that the primary key
provides.
Defining a table as volatile has a similar effect on a query to setting the NPGTHRSH
subsystem parameter (introduced in DB2 V7) to favor matching and non-matching index
access for all tables.

The effect of NPGTHRSH is subsystem-wide, whereas VOLATILE applies at the table
level. In DB2 V8, you can forget about DSNZPARM NPGTHRSH and use volatile tables instead.

Review of DSNZPARM NPGTHRSH


In DB2 V7, the best solution to the problem is to run RUNSTATS after the table is
populated. However, if it is not possible to do that, you can use subsystem parameter
NPGTHRSH to cause DB2 to favor matching index access over a table space scan and
over non-matching index access.
The value of NPGTHRSH is an integer that indicates the tables for which DB2 favors
matching index access. Values of NPGTHRSH and their meanings are:
0 DB2 selects the access path based on cost, and no tables qualify for special handling.
This is the default.
n The value you set depends on the following:
- DB2 favors matching index access for tables for which the total number of pages on
which rows of the table appear (NPAGES) is less than n. This is the situation when
data access statistics have been collected for all tables and NPAGES is updated.
The recommended value for n is a small value, such as 10.
- DB2 favors matching index access for tables for which NPAGES=-1. This is the
situation when data access statistics have not been collected for some tables,
(NPAGES=-1 for those tables).
The recommended value for n is a high value, such as 500.
-1 DB2 favors matching index access for all tables; this is not recommended.


Cluster Table Concurrency Example

First name, Last name, Seq. # form the primary key
George Bush, Gray Davis, and Ron Gonzales are each clusters
Lock contention is minimized when rows are always accessed in
Sequence # order within each cluster
Specifying VOLATILE favors index access and reduces chance of deadlock

First Name   Last Name   Sequence #   City
George       Bush        1            D.C.
George       Bush        2            Houston
George       Bush        3            Austin
Gray         Davis       1            Sacramento
Gray         Davis       2            Los Angeles
Ron          Gonzales    1            San Jose

Figure 9-64. Cluster Table Concurrency Example CG381.0

Notes:
The volatile table enhancement provides a way in DB2 to indicate that a given table is
made up of logical rows, with each logical row consisting of multiple physical rows from that
table. A logical row is identified by the primary key with a “sequence number” appended, to
provide the logical ordering of the physical rows. When accessing this type of table, the
logical rows are intended to be accessed in this order (primary key + sequence number).
This reduces the chance of deadlocks occurring when two applications want to access the
same logical row but touch the underlying physical rows in a different order. To illustrate
this enhancement, we create the following table and populate it with data:
CREATE TABLE VOLTABLE(
SEQ# SMALLINT GENERATED ALWAYS AS IDENTITY,
FIRSTNAME CHAR(10),
LASTNAME CHAR(10),
CITY CHAR(20));

CREATE UNIQUE INDEX IX1 ON VOLTABLE(FIRSTNAME,LASTNAME,SEQ#);

INSERT INTO VOLTABLE(FIRSTNAME,LASTNAME,CITY)
  VALUES('GEORGE','BUSH','D.C.');
INSERT INTO VOLTABLE(FIRSTNAME,LASTNAME,CITY)
  VALUES('GEORGE','BUSH','HOUSTON');
INSERT INTO VOLTABLE(FIRSTNAME,LASTNAME,CITY)
  VALUES('GEORGE','BUSH','AUSTIN');
INSERT INTO VOLTABLE(FIRSTNAME,LASTNAME,CITY)
  VALUES('GRAY','DAVIS','SACRAMENTO');
INSERT INTO VOLTABLE(FIRSTNAME,LASTNAME,CITY)
  VALUES('GRAY','DAVIS','LOS ANGELES');
INSERT INTO VOLTABLE(FIRSTNAME,LASTNAME,CITY)
  VALUES('RON','GONZALES','SAN JOSE');
Notice that the table VOLTABLE is created without the keyword VOLATILE. The table is
therefore created as a non-volatile table. The PLAN_TABLE entry reveals that the access
strategy is a table space scan for the following query:
SELECT * FROM VOLTABLE
The PLAN_TABLE entry reveals that the access strategy is through index IX1 with list
prefetch enabled, if the query includes a WHERE clause as in the following query:
SELECT * FROM VOLTABLE where FIRSTNAME = ‘GEORGE’ and LASTNAME = ‘BUSH’
Thus, if two applications try to access the data concurrently from this table, they do not
necessarily retrieve the rows in the same sequence and there is the possibility of deadlock.
Use the ALTER TABLE statement to alter the table VOLTABLE to be volatile as follows:
ALTER TABLE VOLTABLE VOLATILE
The PLAN_TABLE entries reveal that the access strategy is through index IX1 with no list
prefetch enabled for both the following queries:
SELECT * FROM VOLTABLE
SELECT * FROM VOLTABLE where FIRSTNAME = ‘GEORGE’ and LASTNAME = ‘BUSH’
Thus, if two applications try to access the data concurrently from this table, they retrieve
the rows in the same sequence, thus eliminating or at least reducing the possibility of
deadlock.
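One way to verify this behavior yourself is to EXPLAIN the query before and after the ALTER and compare the PLAN_TABLE rows. A sketch (it assumes a PLAN_TABLE exists under your current SQLID; the expected values simply restate the description above):

```sql
EXPLAIN PLAN SET QUERYNO = 1 FOR
  SELECT * FROM VOLTABLE
  WHERE FIRSTNAME = 'GEORGE' AND LASTNAME = 'BUSH';

SELECT QUERYNO, TNAME, ACCESSTYPE, ACCESSNAME, PREFETCH
  FROM PLAN_TABLE
  WHERE QUERYNO = 1;

-- Before ALTER ... VOLATILE: ACCESSTYPE 'I' with PREFETCH 'L'
--   (index IX1 with list prefetch)
-- After  ALTER ... VOLATILE: ACCESSTYPE 'I' with PREFETCH blank
--   (index IX1, list prefetch disabled)
```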


Volatile Table Implications


Column SPLIT_ROWS in SYSIBM.SYSTABLES indicates if the
table is a VOLATILE table
For VOLATILE tables, index access is chosen, whenever
possible, regardless of the efficiency of the available indexes
Certain types of optimization techniques are disabled when
using VOLATILE


Figure 9-65. Volatile Table Implications CG381.0

Notes:
If you create a table with the keyword VOLATILE, the new column SPLIT_ROWS in
SYSIBM.SYSTABLES contains ‘Y’; otherwise, the value is blank.
A non-volatile table can be changed to a volatile table by using the keyword VOLATILE
with ALTER TABLE; the value in column SPLIT_ROWS in SYSIBM.SYSTABLES is then
set to ‘Y’. A volatile table can be changed to a non-volatile table by using the keyword
NOT VOLATILE with ALTER TABLE; SPLIT_ROWS is then set to blank.
In both these situations, the plan or package is not invalidated. Instead, the value ‘A’ is
stored in column VALID in SYSIBM.SYSPLAN and SYSIBM.SYSPACKAGE to indicate that
the access strategy needs to be evaluated. The plan can continue to be executed with the
existing access strategy. An explicit rebind is necessary to change the access strategy and
at that time, the value in column VALID is changed from ‘A’ to ‘Y’.
Certain types of access paths are disabled for volatile tables. They are: list prefetch, hybrid
join and multi-index access.
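The catalog effects described above can be checked with queries such as the following (a sketch; adjust the predicates to your own objects):

```sql
-- SPLIT_ROWS = 'Y' means the table is volatile, blank means it is not:
SELECT CREATOR, NAME, SPLIT_ROWS
  FROM SYSIBM.SYSTABLES
  WHERE NAME = 'VOLTABLE';

-- VALID = 'A' marks packages whose access strategy should be
-- re-evaluated; an explicit REBIND changes it back to 'Y':
SELECT COLLID, NAME, VALID
  FROM SYSIBM.SYSPACKAGE
  WHERE VALID = 'A';
```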

9.10 Data Caching and Sparse Index for Star Join


Star Join Processing Enhancements

[Diagram: a star schema with a central Sales fact table surrounded by six dimension
tables: Store, Product, Region, Material, Time, and Customer.]

Figure 9-66. Star Join Processing Enhancements CG381.0

Notes:
Many kinds of data models exist to support today’s business requirements, but for data
warehousing, the design typically uses a number of highly normalized
tables (known as dimensions or snowflakes) surrounding a centralized table (known as the
fact table). This model is known as a star schema, named as such because the dimension
tables appear as points of a star, surrounding the central fact table. The visual shows a
simple star schema design.

Star Schema Data Model
Star schema data model
Often used in data warehouses, datamarts and OLAP
To minimize data redundancy
Has highly normalized tables (dimensions or snowflakes)
Has a centralized table (fact table)
Star schema query characteristics
Join predicates between the fact and dimension tables are equi-join
predicates
Existence of local predicates on the dimension tables
Large number of tables (dimensions/snowflakes and fact tables)
participating in the query


Figure 9-67. Star Schema Data Model CG381.0

Notes:
The increased demand for Decision Support Systems (DSS) and Online Analytical
Processing (OLAP) for business increases the complexity of database design, query
construction and query optimization.
The concept of a data warehouse or data mart is evolving and expanding as the business
grows. Typically, the central data warehouse needs to support several data marts, which
look at a subset of the data in their own way.
The attributes that generally dictate a star schema database model are as follows:
• Large fact table:
- The fact table often contains sales type transactions that can be in the order of
hundreds of millions, or billions of data rows.
• Highly normalized design:
- The dimension tables are highly normalized to avoid maintaining redundant
descriptive data in the central fact table. Redundant data introduces opportunities
for data inconsistencies. In addition, storing descriptive information in the fact table
would make the size of the fact table even bigger than it already is.
• Relatively small dimensions:
- Highly normalized dimension tables contain a finite number of descriptions and
detailed information for the codes stored in the fact table.
• Sparse “Hyper Cube”:
- There is a high correlation among dimensions, leading to a sparse nature of data in
the fact table. For example, product sales are dependent on the climate, therefore
the sale of shorts is more likely in a state enjoying hot weather.
• Fact table is dependent on the dimension tables:
- If the dimension tables exist due to normalization of repetitive fact table data, then
there exists a parent-child relationship, where the dimension table is the parent, and
the fact table is the child. There is no requirement however for an explicit foreign key
relationship to be defined.
Unlike OLTP queries, where a large number of short duration queries execute, OLAP
queries involve a large number of tables and immensely large volumes of data to perform
decision making tasks. Hence, OLAP queries are expected to run much longer than OLTP
queries, but are less frequent.
In a purely normalized star schema design, the fact table does not contain attribute
information, and merely contains event occurrences. To decode the fact table data, each
row must be joined to the relevant dimensions to obtain the code descriptions.
Consequently, we can characterize a typical star schema query as having the following
properties:
• Join predicates between the fact and dimension tables are equi-join predicates:
- Decoding a fact table column requires an equi-join to the relevant dimension table,
matching on the fact code value.
• Existence of local predicates on the dimension tables:
- Assuming the attribute information has been normalized within the dimension tables,
selective predicates are applied to the dimensions rather than the fact table.
• Large number of tables participating in the query:
- The star schema (and the snowflake schema) dictates that many tables participate
in a single star join query.
• No join predicates that cross dimensions.

Typical Star Schema Query

Time (dimension), 39 rows — columns id, month, qtr, year;
  for example (1, Jan, 1, 2002), (2, Feb, 1, 2002), ..., (39, Mar, 1, 2003)
Region (dimension), 1,000 rows — columns id, city, region, country;
  for example (2, Boston, East, USA), (5, Seattle, West, USA)
Product (dimension), 60,000 rows — columns id, item, class, department;
  for example (1, stereo, audio, audio-visual)
Sales (fact), 150 billion rows — columns time, locn, prod, customer, seller, ...

SELECT *
FROM SALES S, TIME T, REGION L, PRODUCT P
WHERE S.TIME = T.ID
  AND S.REGION = L.ID
  AND S.PRODUCT = P.ID
  AND T.YEAR = 2002
  AND T.QTR = 1
  AND L.CITY IN ('Boston','Seattle')
  AND P.ITEM = 'stereo';

Figure 9-68. Typical Star Schema Query CG381.0

Notes:
The sample star join query against the sample set of tables is shown on the visual.
Assume that a very expensive stereo is one of 20,000 products that are only sold at a few
of the 600 store locations, including Boston and Seattle, and these 600 locations have a 1
in 3 month rotation for a single sale.
Considering the attributes of a star schema query, and the complexity of the associated
data model, an efficient access path for such a query has three major objectives:
• Encourage a matching index scan of the fact table.
- The large fact table cannot be scanned in its entirety, unless a large percentage of
fact table rows are to be retrieved. Therefore matching index access must be
available on as many selective join predicates as possible.
• Access dimension tables before the fact table to minimize the search space.
- With the filtering provided by the intersection of dimensions, matching index access
on as many fact and dimension join columns as possible reduces the range of data
rows that must be retrieved for each fact table access. This implies that there are
selective local predicates applied to these dimensions to narrow down the search
space.
• Determine the balance between increased fact table filtering and excessive cartesian
result.
- Because there are no join predicates between dimensions (all join predicates are
between dimensions and the fact table — unless snowflakes are involved), joining
dimensions before accessing the fact table to reduce the search space, results in a
cartesian product of those tables.
- Although increasing the selective dimensions may limit the qualifying fact table
rows, it will increase the cartesian dimensions that must be joined to the fact table.
For a cartesian result of 10,000 rows, adding a further dimension with as little as 2
table rows doubles the cartesian result, thus doubling the number of rows to be
joined to the fact table.

Snowflake Star Schema

[Diagram: the time, location, and product dimensions are each normalized into
snowflakes. The Time dimension/fact (60 rows) references month (12 rows), qtr
(4 rows), and year (5 rows) tables; the Location dimension/fact (1,000 rows)
references city (1,000 rows), region (10 rows), and country (8 rows) tables; the
Product dimension/fact (60,000 rows) references item (60,000 rows), class
(100 rows), and dept (10 rows) tables. The Sales fact table (150 billion rows)
also joins to a Customer dimension (10 million rows) and a Salesperson
dimension/fact (1,500 rows).]

+: More flexible, redundant data eliminated
-: More complicated (15 tables to join in this simple example)

Figure 9-69. Snowflake Star Schema CG381.0

Notes:
The snowflake star schema takes the star schema concept to the next level, with further
normalization within the dimensions.
Probably the majority of star schema implementations include snowflakes in at least some
of their dimensions. Note that an alternative design might take different levels within what is
shown here as a single dimension (for instance, time) and implement them as separate
dimensions. Such a design would still qualify as a star schema, although its capabilities for
performance and flexibility are likely to be different from the design shown here.
In the figure above, the time, location, and product dimensions are further normalized than
they were in the previous visual. This is what we call a snowflake design. It is the further
normalization of a simple star schema. Note that in the time dimension for example, month,
qtr (quarter) and year, are only represented by a “code number” in the time dimension. The
actual values, for example, the actual year, is one more level down, in the YEAR table. The
same is true for month and qtr.


Star Join Access Path Technical Issues


Bind-time issues
Run-time issues
Run-time alternatives
"Selectively" join from the outside-in
For example, number of join columns, leading join columns of the index
on the fact tables
Efficient "Cartesian Join" from the outside-in
For example, Index key Feedback
Efficient "join back", inside-out
Efficient access of the fact table


Figure 9-70. Star Join Access Path Technical Issues CG381.0

Notes:
The technical issues surrounding the optimization of decision support and data
warehousing queries against a star schema model can be broken down into bind-time and
run-time issues.
The first consideration generally is the sheer size of the fact table, and also the large
number of tables that can be represented in the star schema.
• Bind-time issues:
For non-star join queries, the optimizer uses the pair-wise join method to determine the
join permutation order. However, the number of join permutations grows exponentially
as the number of tables being joined increases. This results in an increase in the bind
time, because of the time required to evaluate all possible join permutations.
• Run-time issues:
Star join queries are also challenging at run-time. DB2 uses either cartesian joins of
dimension tables or pair-wise joins. The pair-wise join has worked quite effectively in
the OLTP world. However, in the case of star join, since the only table directly related to
other tables is the fact table, the fact table is most likely chosen in the pair-wise join,
with subsequent dimension tables joined based on cost.
Even though the intersection of all dimensions with the fact table can produce a small
result, the predicates supplied to a single dimension table are typically insufficient to
reduce the enormous number of fact table rows. For example, a single dimension join to
the fact table may find:
- 100+ million sales transactions in the month of December
- 10+ million sales in San Jose stores
- 10+ million sales of Jeans, but
- Only thousands of rows that match all three criteria
Hence there is no one single dimension table which could be paired with the fact table
as the first join pair to produce a manageable result set.
With these difficulties in mind, the criteria for star join can be outlined as follows:
• “Selectively” join from the outside-in:
- Purely doing a cartesian join of all dimension tables before accessing the fact table
may not be efficient if the dimension tables do not have filtering predicates applied,
or there is no available index on the fact table to support all dimensions. The
optimizer should be able to determine which dimension tables should be accessed
before the fact table to provide the greatest level of filtering of fact table rows.
- For this reason, outside-in processing is also called the filtering phase. V8 enhances
the algorithms to do a better job at selecting which tables to join during outside-in
processing.
• Efficient “Cartesian Join” from the outside-in:
- A physical cartesian join generates a large number of resultant rows based on the
cross product of the unrelated dimensions. A more efficient cartesian type process is
required as the number and size of the dimension tables increase, to avoid an
exponential growth in storage requirements. Index key feedback technique is useful
for making cartesian joins efficient.
• Efficient “join back”, inside-out:
- The join back to dimension tables that are accessed after the fact table must also be
efficient. Non-indexed or materialized dimensions present a challenge for excessive
sort merge joins and workfile usage. DB2 V8 enhancements introduce support for
in-memory workfiles and sparse indexes for workfiles to meet this challenge.
• Efficient access of the fact table:
- Due to the generation of arbitrary (or unrelated) key ranges from the cartesian
process, the fact table must minimize unnecessary probes and provide the greatest
level of matching index columns based on the pre-joined dimensions.
Of course, the very first step is to determine whether the query qualifies for a star join. You
can refer to the whitepaper The Evolution of Star Join Optimization, available at:
http://www.ibm.com/software/data/db2/os390/techdocs/starjoin.pdf

© Copyright IBM Corp. 2004 Unit 6. Network Computing 9-127
Course materials may not be reproduced in whole or in part without the prior
written permission of IBM.

Index Key Feedback Technique

[Slide: dimension workfiles (Time, Product, Region, Store) generate cartesianed key
combinations that are probed against the Sales fact table index; combinations with no
possible match are skipped using the fed-back key values.]
© Copyright IBM Corporation 2004

Figure 9-71. Index Key Feedback Technique CG381.0

Notes:
For an efficient cartesian join process, DB2 employs a “logical”, rather than “physical”
cartesian join of the dimension tables. Each dimension covered by the chosen fact table
index is accessed independently before the fact table. Each qualifying dimension has all
local predicates applied, with the result sorted into join column order, and finally
materialized into its own separate workfile.
Rather than requiring the physical workfile storage involved in a physical cartesian product,
DB2 simulates a cartesian by repositioning itself within each workfile to potentially join all
possible combinations to the central fact table. The sequence of this simulated cartesian
join respects the column order of the selected fact table index.
The sparseness of data within the fact table implies a significant number of values
generated by the cartesian process are not to be found by a join to the fact table. To
minimize the CPU overhead of joining unnecessarily derived rows to the fact table, DB2
introduces an index key feedback loop to return the next highest key value whenever the
fact table is accessed.

A miss returns the next valid fact table index key so that DB2 can reposition itself within the
dimension workfiles, thus skipping composite rows with no possibility of obtaining a fact
table match.
A hit on the fact table index returns the matching fact table row. When a matching row is
found in the fact table, the next fact table key is checked to see if there is a duplicate key.
The position is moved forward in the fact table index until no more duplicates are found.
When this "not found" condition is met, the next highest key is returned.
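The feedback loop described above can be sketched in plain Python. This is an illustrative simulation, not DB2 internals: a sorted list of key tuples stands in for the fact table index, `bisect` stands in for the index probe, and the function name is invented.

```python
import bisect
from itertools import product

def star_join_probe(dim_values, fact_keys):
    """Simulated cartesian join with index key feedback.
    dim_values: one sorted key list per dimension.
    fact_keys: sorted list of fact-table index key tuples."""
    combos = sorted(product(*dim_values))    # 'logical' cartesian, in index order
    matches, probes, i = [], 0, 0
    while i < len(combos):
        probes += 1
        pos = bisect.bisect_left(fact_keys, combos[i])
        if pos == len(fact_keys):
            break                            # no higher fact key exists: done
        if fact_keys[pos] == combos[i]:
            matches.append(combos[i])        # hit: the matching row is returned
            i += 1
        else:
            # miss: the index feeds back the next valid key, so every composite
            # row below that key is skipped without a probe
            i = bisect.bisect_left(combos, fact_keys[pos], lo=i + 1)
    return matches, probes
```

With the four dimensions from the visual (32 possible combinations) and a sparse fact table, only a handful of probes are issued, because each miss repositions the cursor past all combinations that cannot match.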
The visual demonstrates the index key feedback technique.
This technique, used during the outside-in stage, is a push-down star join, which involves
an index scan on the fact table. The star join is pushed down to the Data Manager
component of DB2 (that is, Stage 1 — non-star joins are handled by DB2’s Relational Data
System (RDS) component, that is, Stage 2). In the PLAN_TABLE, the column JOIN_TYPE
has the value “S” to indicate star join.

Star Join Workfile Challenges
Where do they come from?
• Materialization of snowflakes
• Sort of dimension tables
Typically small (hundreds or thousands of rows)
Today (V7), RSCAN is the only option to access them, both during:
• Outside-in processing
• Inside-out processing (sparse index can also be used in V7)
Version 8 enhancements:
• In-memory workfiles
• Sparse index on workfiles (also during outside-in processing)
© Copyright IBM Corporation 2004

Figure 9-72. Star Join Workfile Challenges CG381.0

Notes:
Processing star join queries usually involves a lot of workfile processing. They are mainly
created because:
• The star schema uses a snowflake design. Currently (V7), snowflakes are always
materialized in workfiles before being processed.
• Dimension tables need to be sorted in the selected fact table index order. After local
predicates have been applied, the qualifying rows of dimension tables are sorted in the
join column order, and stored in a workfile.
Workfiles are used during outside-in and inside-out processing, as illustrated in the next
topics.
These workfiles are usually scanned multiple times while computing the result table for the
query.
DB2 Version 8 dramatically enhances star join workfile processing by introducing
in-memory workfiles and sparse indexes on workfiles.

Workfiles in Outside-In Processing


During 'logical' cartesian product, DB2 frequently repositions
in the dimension workfiles
Processing of the workfiles can only be done be scanning
them over and over again (RSCAN)

[Slide: join diagram. Outside-in (dimension tables to fact table): NLJ over materialized
dimension workfiles, for example D (Time) and D (Store), into the 300M-row SALES fact
table. Inside-out (fact table to dimension tables): MSJ and NLJ back to dimension
workfiles. Repositioning causes excessive RSCAN of the workfiles.]

© Copyright IBM Corporation 2004

Figure 9-73. Workfiles in Outside-In Processing CG381.0

Notes:
Workfiles are also used during the outside-in phase of star join processing. Dimension
tables that are snowflakes are materialized in a workfile. In addition, non-snowflake
dimensions are also materialized in workfiles after applying local predicates, sorted on the
key columns of the index chosen by the optimizer to access the fact table.
As mentioned before, DB2 does not actually perform a physical cartesian product when
filtering the dimensions. It “simulates” the cartesian product by repositioning in the
dimension tables. When these dimension tables are workfiles, finding matching entries
involves a relational (sequential) scan of the workfiles (as there are no indexes on
workfiles). Depending on when the dimension is joined, DB2 may have to reposition and
re-scan the workfiles multiple times. These workfile scans can negatively impact the
performance of your star join queries.

Inside-Out Stage Workfile Challenges
Typical data warehouse queries touch a significant portion of the fact table
In the inside-out stage, a sort is performed on the composite table (the result of
outside-in processing) at the start of MSJ processing (a materialized workfile has no index)
Sorting the large result set is a performance concern (time and space)
Sort of the "new" table (materialized snowflake workfile) before MSJ

Legend: WF = Work File, NLJ = Nested Loop Join, MSJ = Merge Scan Join,
S = Pushdown Star Join
[Slide: join diagram. Outside-in (dimension tables to fact table), then inside-out (fact
table to dimension tables), with sorts of the composite and of the new table before each
MSJ; 15M rows (5%) of the 300M-row SALES fact table remain after filtering during
outside-in processing.]

© Copyright IBM Corporation 2004

Figure 9-74. Inside-Out Stage Workfile Challenges CG381.0

Notes:
In the inside-out stage, the fact table (already joined with some dimension tables in the
outside-in stage) is joined with the remaining dimension tables.
Extensive workfile processing is often required in the inside-out stage, because the
intermediate result rows may still be large after the outside-in join. The lack of an index on
workfiles may lead the optimizer to select a merge scan join. In addition, the tables that
need to be joined back during inside-out processing (very often snowflakes materialized in
a workfile) need to be sorted in the merge scan join column order.
This causes increased workfile space consumption, excessive CPU and I/O
consumption, and increased parallelism overhead due to merge activity. This is a critical
storage and performance issue for large intermediate results or short-running queries.

In-memory Workfiles
In-memory data structure - above the bar
Sorted in the join column order
Containing only the join column and the selected columns
Binary search for the target row
More beneficial for large join composite
Ideal for scanning dimension workfiles

[Slide: the same workfile on DASD versus in memory. The in-memory structure keeps
only the join column (sort key) plus the selected columns, sorted on the key, and is
probed with a binary search for the target row.]

© Copyright IBM Corporation 2004

Figure 9-75. In-memory Workfiles CG381.0

Notes:
DB2 V8 supports in-memory workfiles for star join queries. This means that the normal
workfile database is not used; the complete workfile is stored in memory instead. The
in-memory workfiles contain all the columns of a workfile that are necessary to satisfy a
query. These columns are the join column and the selected columns. The rows are sorted
in join column sequence. The in-memory workfile is dense, in the sense that it contains an
entry for each workfile record. DB2 performs a binary search to find the target row. (The
entries are not stored in a B-tree structure like “normal” indexes.) As in-memory workfiles
potentially save a large number of I/O operations against workfiles, they promise a
considerable performance gain.
In-memory workfiles are stored in a new dedicated storage pool that is called a star join
pool. The DB2 DSNZPARM SJMXPOOL specifies its maximum size, which defaults to 20
MB (maximum 1GB). It resides above the 2 GB bar and is only in effect when star join
processing is enabled through DSNZPARM STARJOIN. When a query that exploits star
join processing finishes, the allocated blocks in the star join pool to process the query are
freed. More information on how to size SJMXPOOL can be found in the “Dedicated Virtual

Memory Pool for Star Join Operations” section in the DB2 Administration Guide,
SC18-7413.
The use of the star join pool is not mandatory. If it is not created, star join processing takes
place without using in-memory workfiles. Also, if the allocation of space for a workfile in the
star join pool fails, because SJMXPOOL is reached, then processing falls back to using the
new sparse index, discussed in the next topic.
The possibility of caching workfiles in memory not only helps star join performance, but
also other concurrently running work that normally uses physical workfiles, such as sorts,
merge joins, view materialization, nested table expression materialization, triggers,
created temporary tables, non-correlated subqueries, and table UDFs. Since star join is
now capable of caching its data in memory instead of using workfiles, this work no longer
has to compete for workfile storage in the buffer pool and the table spaces used for
workfile processing, resulting in less contention for workfile access.
The use of in-memory workfiles, instead of traditional workfiles in the work database
(usually DSNDB07), is also known as workfile caching or data caching.
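As a rough sketch of the data structure (plain Python with invented names, not how DB2 implements it), an in-memory workfile can be modeled as an array sorted on the join column, holding only the join column and the selected columns, and probed with a binary search:

```python
import bisect

class InMemoryWorkfile:
    """Sketch of a star-join in-memory workfile: entries hold only the join
    column plus the selected columns, sorted on the join column."""
    def __init__(self, rows, join_col, selected_cols):
        # rows: list of dicts; keep (join value, selected values) pairs only
        self.rows = sorted(
            (r[join_col], tuple(r[c] for c in selected_cols)) for r in rows
        )
        self.keys = [k for k, _ in self.rows]

    def probe(self, join_value):
        # binary search for the first entry with this join key,
        # then scan forward over duplicates
        i = bisect.bisect_left(self.keys, join_value)
        out = []
        while i < len(self.keys) and self.keys[i] == join_value:
            out.append(self.rows[i][1])
            i += 1
        return out
```

Each probe costs a binary search instead of a sequential scan of a physical workfile, which is where the I/O and repositioning savings come from.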

In-memory Workfile Benefits

[Slide: join diagram. With in-memory workfiles (IMWF), both the outside-in and
inside-out phases use NLJ against IMWF structures around the 300M-row SALES fact
table, instead of MSJ against physical workfiles.]

© Copyright IBM Corporation 2004

Figure 9-76. In-memory Workfile Benefits CG381.0

Notes:
Both outside-in and inside-out processing benefit from the use of in-memory workfiles.
During outside-in processing, repositioning inside the workfiles, when doing the “logical”
cartesian join, can be done using a binary search inside an in-memory workfile, which is
much more efficient than repositioning using a relational (table space) scan. These are the
benefits:
• Significant I/O cost reduction.
• Slight CPU cost reduction. More CPU reduction can be expected when large sorts or
multiple sorts are involved.
During inside-out processing, the optimizer can now choose a nested loop join instead of a
merge scan join, and avoid having to sort the large composite (the result of outside-in
processing). In addition, access to the workfiles, usually representing materialized
snowflakes, will be more efficient as the workfiles are now in-memory structures, and DB2
can use a binary search to access them. These are the benefits:

• The sort of a large composite (result of the outside-in joining phase) table is avoided (no
sort work file space requirements) because the query no longer uses a merge scan join.
• Reduction of parallelism overhead (merge and repartition).
• Greater exploitation of parallelism.

Sparse Index on Workfiles for Star Join


Introduce sparse index on the join key for the materialized
work files
Thus nested loop join is possible. It replaces the merge scan
join and eliminates the sort of the large outer composite table
(fact table)
Sparse index
In-memory virtual index
Probed through an equal-join predicate
Binary search for the target portion of the table
Sequential search within the target portion if it is sparse
The denser the faster - in favor of small work files
More beneficial for large join composite
Backup solution for dimension work file access in star
schema scenario if in-memory workfiles fail

© Copyright IBM Corporation 2004

Figure 9-77. Sparse Index on Workfiles for Star Join CG381.0

Notes:
When a star join query is executed, DB2 first tries to cache the workfile in-memory, as
described in the previous topic. DB2 tries to allocate the required space in the new
in-memory workfile storage pool. If the allocation is successful, the sorted records for the
workfile are cached in memory, and the physical workfile is not created. If the pool is not
created or the allocation fails because no more space is available in the pool, only the sort
key is saved in-memory as a sparse index, and the data records are stored in a physical
workfile.
The sparse index is implemented as a memory structure within DB2’s virtual storage. Each
entry of the index contains the sort key and the 5-byte RID. If the total number of work file
entries is small enough, then all entries can be represented in the index, thus providing a
one-to-one relationship between the index and work file. The index itself only becomes
sparse if the number of index entries cannot be contained within the space allocated. The
maximum size of the structure is 240 KB.

How Does Sparse Index Work?

NLJ
T1 T2 (WF)

t1.c = t2.c
Key RID ... ...
Binary
Segment covering
Search
1 sparse IX entry
... ... ... ...
T2
... ... (WF)
Sorted in t2.c order
Sparse Index

© Copyright IBM Corporation 2004

Figure 9-78. How Does Sparse Index Work? CG381.0

Notes:
The workfile is sorted in the join column sequence (t2.c). While sorting, the (sparse) index
is built, containing the key (join) columns and the RID. The sparse index structure is flat
rather than the B-tree structure used by normal indexes on data tables. The index is probed
through an equi-join predicate (t1.c = t2.c). A binary search of the index is utilized to find
the target row (or segment of rows). In case the index is sparse (not all entries in the
workfile have an index entry), a sequential search of the work file is subsequently initiated
within the target segment to find the corresponding join row.
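The two-step lookup, a binary search over the sparse entries followed by a sequential scan within the covered segment, can be sketched as follows. This is an illustrative Python model, not DB2 code; `max_entries` stands in for the real 240 KB budget, and the workfile is a sorted list of (join key, row) pairs:

```python
import bisect

def build_sparse_index(workfile, max_entries):
    """Keep only every step-th (key, position) pair so the index fits the
    budget; with few enough rows the index is dense (one entry per row)."""
    step = max(1, -(-len(workfile) // max_entries))  # ceiling division
    return [(workfile[i][0], i) for i in range(0, len(workfile), step)]

def probe(workfile, sparse_index, key):
    """Binary-search the sparse index, then scan the covered segment
    sequentially for the equi-join key."""
    keys = [k for k, _ in sparse_index]
    j = bisect.bisect_right(keys, key) - 1     # last entry with key <= probe key
    start = sparse_index[max(j, 0)][1]
    if j > 0 and sparse_index[j][0] == key:
        start = sparse_index[j - 1][1]         # duplicates may cross a boundary
    end = sparse_index[j + 1][1] if j + 1 < len(sparse_index) else len(workfile)
    return [row for k, row in workfile[start:end] if k == key]
```

The denser the index, the shorter the sequential scan within each segment, which matches the "the denser the faster" point on the slide.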
Note that the query will try to use data caching (in-memory workfiles) first, and that the use
of the sparse index is only a “fallback” plan.
Tip: APAR PQ61458 (6/02) provides support for sparse indexes on the snowflake
workfiles that are used in the inside-out join phase for DB2 V7.
V7 cannot use in-memory workfiles at all, or sparse indexes during outside-in
processing.

Other Star Join Enhancements


Snowflake handling - Controlled materialization
V7 - Snowflakes are always materialized
For large or non-filtering snowflakes, snowflake materialization may
dominate the overall query time
V8 - Controlled materialization
Materialization only done when adequate filtering is expected
When no materialization is done, indexes of the underlying tables
making up the snowflake can be used
Better selection of filtering dimensions
Improved cost estimation algorithm that better estimates the filtering
effect of dimensions

© Copyright IBM Corporation 2004

Figure 9-79. Other Star Join Enhancements CG381.0

Notes:
Here we discuss some other enhancements related to the star join configuration.

Snowflake Handling - Controlled Materialization


Prior to DB2 V8, all snowflakes were materialized. This provided the benefit of simplified
access path selection by reducing the overall number of tables joined.
For the inside-out join phase (post fact table), relatively small snowflakes, or snowflakes
that provide adequate filtering are good candidates to be materialized. With the introduction
of in-memory workfiles and/or sparse index on workfiles, the snowflake, which may contain
many tables, is resolved once and fact table rows are joined to a much smaller result set
using an efficient join method that can take advantage of the in-memory or sparse index.
For large or non-filtering snowflakes, the materialization overhead may dominate the
overall query time, and is therefore detrimental to query performance. For in-memory
workfile and sparse index on workfile, the result must be sorted to allow a binary search to
locate the target row. Sorting a large result can be expensive. If the memory is available for

a very large result, the binary search for in-memory workfiles may result in multiple
iterations to find the target row. If fallback occurs to sparse index, then the index may be
too sparse, and therefore, each locate in the workfile may still require a large sequential
scan.
V8 introduces controlled materialization. The filtering of each snowflake is ranked, and only
those snowflakes that provide adequate filtering compared to the base table size will be
materialized.
The choice not to materialize can overcome the sort and workfile allocation overhead, and
rather than requiring an index to be built on the workfile, the indexes on the underlying
snowflake tables can be used for efficient joins after the fact table.
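As a caricature of such a ranking rule (the actual V8 cost model is far more elaborate; the function and its `min_filtering` threshold are invented for illustration only):

```python
def should_materialize(snowflake_card, qualifying_card, min_filtering=0.5):
    """Hypothetical rule: materialize a snowflake only when its local
    predicates filter out at least min_filtering of its rows, so the
    sort and workfile allocation overhead pays off."""
    if snowflake_card == 0:
        return False
    filtered_out = 1.0 - qualifying_card / snowflake_card
    return filtered_out >= min_filtering
```

When the rule says not to materialize, the indexes on the underlying snowflake tables are used instead, as described above.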

Selection of Filtering Dimensions


Besides the star join enhancements already described, DB2 V8 provides an improved cost
estimation algorithm that better estimates the filtering effect of dimensions. This results in a
better table join sequence (which tables are processed during outside-in and which tables
are processed during inside-out processing) and can yield a significant performance
improvement.

Example of a Star Join Access Path

Query# Pln# Corr Table Name        Join Join Acc  Access Name       Sort
            Name                   Mtd  Type Type                   New
11001  1    DP   /BI0/D0SD_C01P    0    S    I    /BI0/D0SD_C01P~0  N
11001  2    DT   /BI0/D0SD_C01T    1    S    T                      Y
11001  3    DU   /BI0/D0SD_C01U    1    S    T                      Y
11001  4    F    /BI0/F0SD_C01     1    S    I    /BI0/F0SD_C01~0   N
11001  5    D5   /BI0/D0SD_C015    1         I    /BI0/D0SD_C015~0  N
11001  6    D3   /BI0/D0SD_C013    1         I    /BI0/D0SD_C013~0  N
11001  7         DSN_DIM_TBLX(02)  1         T                      Y
11001  8    D2   /BI0/D0SD_C012    1         I    /BI0/D0SD_C012~0  N

ACCESS_TYPE 'T' indicates that either a sparse index or data caching is used.
The final decision is taken at runtime and cannot be shown by EXPLAIN.

© Copyright IBM Corporation 2004

Figure 9-80. Example of a Star Join Access Path CG381.0

Notes:
You can determine if the in-memory workfiles or sparse index enhancement is used from
the PLAN_TABLE. The visual shows an example of a pushdown star join access path.
When you see ACCESS_TYPE = ‘T’, this indicates that either data caching or a sparse
index is used for this query. The decision to use data caching or sparse index is made at
execution time, when the work file data is sorted. Therefore the EXPLAIN output in the
PLAN_TABLE cannot show which one of these features is actually used when running the
query.

9.11 Miscellaneous Performance Enhancements


REOPT(ONCE)
Bind option that controls when the Optimizer builds the access path
information for dynamic SQL applications
By default, access path is built at PREPARE
REOPT(ONCE)
Defers access path selection until OPEN
Values of host variables on OPEN are used to build access path
Resulting access path is cached in the global prepared cache
New terminology
REOPT(NONE) equivalent to NOREOPT(VARS)
REOPT(ONCE)
REOPT(ALWAYS) equivalent to REOPT(VARS)

© Copyright IBM Corporation 2004

Figure 9-81. REOPT(ONCE) CG381.0

Notes:
REOPT(ONCE) is a new bind option that tries to combine the benefits of REOPT(VARS)
and dynamic statement caching. For an SQL statement with input host variables, static or
dynamic, the access path chosen by the optimizer during bind time (before the values of
host variables are available) may not always be optimal.
The bind option, REOPT(VARS), solves this problem by (re)preparing the statement at run
time when the input variable values are available, so that the optimizer can re-optimize the
access path using the host variable values. However, for frequently called SQL statements
that take very little time to execute, re-optimization using different input host variable values
at each execution time is expensive, and it may affect the overall performance of
applications.
The idea of REOPT(ONCE) is to re-optimize the access path only once (using the first set
of input variable values) no matter how many times the same statement is executed. The
access path chosen based on the set of input variable values is stored in the dynamic
statement cache and used for all later executions (as with normal dynamic statement

caching). This solution is based on the assumption that the chosen set of host variable
values at run time are better than the default ones chosen by optimizer at bind time.
Three new options, REOPT(ONCE), REOPT(NONE), and REOPT(ALWAYS) can be
specified in the BIND and REBIND commands for plans and packages. REOPT(NONE) is
the default option. NOREOPT(VARS) is a synonym for REOPT(NONE). REOPT(VARS)
can be specified as a synonym for REOPT(ALWAYS). REOPT(ONCE) is valid only in the
new-function mode.
REOPT(ONCE) only applies to dynamic SQL statements and is ignored if you use it with
static SQL statements. DB2 for z/OS caches only dynamic statements. If a dynamic
statement in a plan or package that is bound with REOPT(ONCE) runs when dynamic
statement caching is turned off (DSNZPARM CACHEDYN=NO), the statement runs as if
REOPT(ONCE) were not specified.
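The caching behavior, and the risk that the first set of values is unrepresentative, can be sketched like this (a toy Python model with invented names; `choose_path` stands in for the optimizer):

```python
class DynamicStatementCache:
    """Toy model of REOPT(ONCE): the access path is chosen using the host
    variable values of the FIRST execution, then reused from the cache."""
    def __init__(self, choose_path):
        self.choose_path = choose_path   # stand-in for the optimizer
        self.cache = {}
        self.prepares = 0

    def execute(self, sql, host_vars):
        if sql not in self.cache:        # first execution: values available
            self.prepares += 1
            self.cache[sql] = self.choose_path(sql, host_vars)
        return self.cache[sql]           # later executions reuse the cached path
```

Note that every later execution reuses the path chosen for the first set of values, which is exactly the assumption REOPT(ONCE) makes; REOPT(ALWAYS) would instead re-optimize on every execution.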


DTT ON COMMIT DROP TABLE Option


V7
You must explicitly drop a DTT before COMMIT. Otherwise, the remote
connection cannot become inactive
V8
New ON COMMIT DROP table option on the CREATE DECLARED
GLOBAL TEMPORARY TABLE statement
DTT implicitly dropped at commit if no open held cursor against the DTT
Significant enhancement for DDF connections that can now become
inactive without an explicit drop of the DTT

DECLARE GLOBAL TEMPORARY TABLE ...

ON COMMIT DELETE ROWS

ON COMMIT PRESERVE ROWS


ON COMMIT DROP TABLE

© Copyright IBM Corporation 2004

Figure 9-82. DTT ON COMMIT DROP TABLE Option CG381.0

Notes:
Prior to Version 8, a declared global temporary table (DTT) is treated like a base table with
an open held cursor. So, a remote connection is not eligible to become inactive (and the
thread being returned to the pool) at commit time.
Scrollable cursors use DTTs. Therefore, connections with open scrollable cursors are not
eligible to become inactive at COMMIT time. (Note that when you CLOSE a scrollable
cursor, the DTT gets cleaned up.)
DTTs are created as “pseudo” release at deallocate structures. Any data inserted into the
DTT is removed at commit (unless it is created with ON COMMIT PRESERVE ROWS).
The DTT remains active across a commit. The DTT must be dropped explicitly if the user
wants it to go away at commit. If the thread reuse does not go through a "New User"
process (for example, a DDF inactive connection), then the DTT does not go away until the
thread is terminated and deallocated. The lock on the TEMP table space remains as long
as the DTT exists.
This has caused a number of problems in the past.

For example, consider a Java bean that accesses a stored procedure that creates and
populates a DTT, passing the values of that temporary table back to the Java program as
a result set. Even after the bean issues close() on the statement, the result set, and the
connection, and even with AUTOCOMMIT ON, the locks on the TEMP table spaces are
left in place, although DB2 explicitly closes everything (the cursor, the result set, and the
connection).
Any application that uses connection pooling leaves all temporary tables allocated for as
long as the Web application server is up, and all locks on those tables are preserved for
this entire duration, even past COMMITs.
In DB2 for z/OS V8, you can define a declared global temporary table with a new option ON
COMMIT DROP TABLE, so the application does not have to explicitly delete the DTT
before committing.
This is a significant change for distributed work, which can now be switched from active
to inactive. Temporary tables that have no open HELD cursors are dropped automatically
at COMMIT, allowing the connection to be switched to INACTIVE.
When the DTT is defined using the ON COMMIT DROP TABLE clause, the declared global
temporary table is implicitly dropped at COMMIT if there are no open cursors on the table
that are defined as WITH HOLD.
Note that you will have to change your applications to take advantage of this feature, as a
DTT is defined within the program to specify the ON COMMIT DROP TABLE clause.
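For example, a program might declare its temporary table as follows (the table and column names are illustrative):

```sql
DECLARE GLOBAL TEMPORARY TABLE SESSION.SALES_SUMMARY
  (DEPTNO  CHAR(3)        NOT NULL,
   TOTAL   DECIMAL(11,2))
  ON COMMIT DROP TABLE;
```

At COMMIT, the table is dropped automatically, provided no WITH HOLD cursor is open on it, so the DDF connection can become inactive without an explicit DROP.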


Lock Avoidance Enhancement


Pointer records in DB2:
[Slide: an index entry points to data page X; a pointer row on page X points to the
overflow row on data page Y.]

Lock avoidance enhancement (isolation level Cursor Stability,
CURRENTDATA NO or YES):

                                                   V7    V8
Lock and unlock request for the pointer record     YES   YES
Lock and unlock request for the overflow record    YES   NO

© Copyright IBM Corporation 2004

Figure 9-83. Lock Avoidance Enhancement CG381.0

Notes:
When you update a variable length row on a page, it may be that after the update, the row
no longer fits on its original page. In that case, the new row is stored on a different page. In
order to avoid updating all the indexes that point to that row (remember that an index entry
contains a RID, which contains the page number), a pointer record is created on the
original page. The pointer record then points to the actual row.
If a later update of the row decreases its size, it can be put back into its original (home)
page, again without any updates to the index.
You can have variable length rows when:
• Using variable length fields like VARCHAR and VARGRAPHIC
• Using DB2 data compression
• Altering a table to add a column, when you have not yet run a REORG to materialize
the new column in all the rows
• Using the new V8 online schema enhancements to enlarge a column's data type, when
you have not yet run a REORG to materialize all the rows in the latest format

The good thing about using the overflow pointer is that we avoid updating the index(es).
The disadvantage is that we potentially double the number of data I/Os and getpage
requests, and increase the number of lock and unlock requests.
You can check whether or not overflow records exist by looking at the FARINDREF and
NEARINDREF columns in SYSIBM.SYSTABLEPART. This information is updated by
running RUNSTATS and reports the number of rows that have been relocated far (> 16
pages) or near (< 16 pages) from their “home” page. It is recommended to run a REORG
when (FARINDREF + NEARINDREF) exceeds 5 to 10% of the rows.
To reduce the amount of overflow rows, and hence the number of double I/Os, getpages,
and lock/unlock requests, you can use a higher PCTFREE value (FREEPAGE will not help
in this case).
In V7, there is no lock avoidance on both the pointer record and the overflow record itself.
With DB2 Version 8, only the pointer is locked (Table 9-1).
Table 9-1 Lock avoidance of overflow record (Isolation Cursor Stability,
CURRENTDATA NO or YES)

                                            Version 7   Version 8
Lock and unlock for the pointer record      YES         YES
Lock and unlock for the overflow record     YES         NO


Non-correlated EXISTS Subquery


Prior to V8, ALL qualifying rows of a non-correlated EXISTS
subquery are retrieved and stored in a workfile
Potentially a lot of processing to find all qualifying rows for the
subquery
Potentially a lot of workfile space to store all these rows
DB2 V8 stops evaluating the non-correlated EXISTS subquery
as soon as ONE qualifying row is found
One qualifying row is sufficient to evaluate the predicate as true
Example:
SELECT EMPNO,LASTNAME
FROM DSN8810.EMP
WHERE EXISTS (
SELECT *
FROM DSN8810.PROJ
WHERE PRSTDATE > ’2005-01-01’);


Figure 9-84. Non-correlated EXISTS Subquery CG381.0

Notes:
When you use the keyword EXISTS, DB2 simply checks whether the subquery returns one
or more rows. Returning one or more rows satisfies the condition; returning no rows does
not satisfy the condition.
For example, list all employees, if any project that is represented in the project table has an
estimated start date that is later than 1 January 2005:
SELECT EMPNO,LASTNAME
FROM DSN8810.EMP
WHERE EXISTS (
SELECT *
FROM DSN8810.PROJ
WHERE PRSTDATE > ’2005-01-01’);
Because this example is using a non-correlated subquery, the result of the subquery is
always the same for every row that is examined for the outer SELECT. Therefore, either
every row appears in the result of the outer SELECT, or none appears.


Prior to DB2 Version 8, all qualifying rows are retrieved and stored in a workfile. If many
rows qualified, that could mean a lot of work and workfile space. However, the EXISTS
predicate is not interested in the actual qualifying values from the subselect. It only wants
to know if any rows qualify.
In V8, DB2 will only find the first qualifying row for the non-correlated EXISTS subselect. If
DB2 finds one, the predicate is true and DB2 stops processing the subselect. Depending
on the number of qualifying rows in the subselect and the amount of work it takes to find all
qualifying rows, this enhancement can be a big performance boost.


IN List Processing Enhancements


V7 Enhancements to IN list processing
Predicate pushdown for IN list predicates (V7 APAR PQ73454)
Correlated subquery transformation enhancement (V7 APAR PQ73749)
Activated by INLISTP DSNZPARM (default value 0 in V7 = disabled)

By default enabled in V8 (INLISTP=50)


IN list predicate pushdown into nested table expression (NTE) or materialized
view (MV)
Better filtering inside NTE and MV; fewer resulting rows
Potential index usage on columns when resolving NTE or MV
Correlated subquery transformation enhancement
IN list predicates generated by predicate transitive closure "pulled up" to
parent query block
Filtering can be done at parent level, resulting in fewer invocations of
subquery executions
IN list predicate on parent query block can take advantage of existing
indexes and provide a better access path for parent query block


Figure 9-85. IN List Processing Enhancements CG381.0

Notes:
DB2 Version 7 introduced numerous enhancements to IN-list processing, some of
which are mentioned in the following sections:
• Predicate pushdown for IN list predicates (V7 APAR PQ73454)
• Correlated subquery transformation enhancement (V7 APAR PQ73749)
• Select INLIST improvements (V7 APAR PQ68662)
These enhancements are mentioned here because they are not well known, and because
some of them are activated via a DSNZPARM called INLISTP. The default value in V7
for INLISTP was 0 (zero), which means that the feature is not active. In V8, the default
value is INLISTP=50, which means that the feature is now active by default. (This also
expresses confidence in these new features, as they are now enabled by default.)
Predicate Pushdown for IN List Predicates (V7 APAR PQ73454)
Performance problems have been observed with complex vendor generated queries
involving IN list predicates and materialized views or table expressions. This performance
problem was due to the large workfile that resulted from a materialized view or table


expression. If qualified IN list predicates can be pushed down into the materialized view or
table expression, the materialized result set becomes much smaller. (This enhancement is
activated through the new DSNZPARM INLISTP).
Example 9-2 demonstrates the effect of this change.
Example 9-2. INLIST Predicate Pushdown

SELECT *
FROM (
SELECT * FROM T1
UNION
SELECT * FROM T2
) X(C1,C2)
WHERE X.C1 IN ('AAA','CDC','QAZ');

--The transformed query is equivalent to the following:

SELECT *
FROM (
SELECT * FROM T1 WHERE T1.C1 IN ('AAA','CDC','QAZ')
UNION
SELECT * FROM T2 WHERE T2.C1 IN ('AAA','CDC','QAZ')
) X(C1,C2)
WHERE X.C1 IN ('AAA','CDC','QAZ');
______________________________________________________________________
This transformation can result in an order-of-magnitude performance improvement in
some cases, due to:
• Filtering at the early level to reduce the intermediate workfile size
• Possible exploitation of indexes on T1 and/or T2
Overall query performance should improve after setting the INLISTP parameter.
Correlated Subquery Transformation Enhancement (V7 APAR PQ73749)
Other performance issues have been observed with complex vendor generated queries
involving IN list predicates in correlated subqueries. These IN list predicates are
constructed in such a way that the elements are drawn in from a previously constructed
table, that is, the size of the list for a constructed IN list predicate depends on the size of
such a table.
Allowing a correlated IN list predicate to be generated by the transitive closure process,
and pulling the generated IN list predicate to the parent query block, can improve
performance. The addition of the IN list predicate to the parent query block allows filtering
to occur earlier and results in a reduction in the number of subquery executions. Additional
performance improvements may be achieved if the IN list predicate on the parent query
block allows a better access path as a consequence of indexability of the IN list predicate.
The following conditions must all be met in order for an IN list predicate to be generated
from transitive closure and bubbled up from the subquery to its parent query block.


• Boolean term predicates, COL1 IN (Lit1, Lit2,...) and COL1 = COL2, both appear in the
query.
• This subquery is on the right-hand side of a Boolean term predicate of type EXISTS, IN,
or ANY.
• COL2 is correlated (to the parent query block or some ancestor of the parent query
block).
• The parent query block is an inner join, a single base table, or a view. This feature is
disabled if the parent query block is an outer join.
• The subquery does not contain COUNT or COUNT_BIG.
An IN list predicate of more than one literal satisfying all of the above conditions will be
generated in the parent query block and will not appear in the subquery. Other predicates
that used to participate in transitive closure (including singleton IN list) will still be
generated in the subquery as before; if these predicates can be bubbled up, then they will
appear in both the parent query block and in the subquery.
Let us look at Example 9-3 in order to show the behavior prior to this enhancement.
Example 9-3. IN List Query Before Transformation

SELECT *
FROM A
WHERE (EXISTS (SELECT 1
FROM B,C
WHERE B1 = 5
AND B1 = C1
AND C1 = A.A1
AND B2 IN (10, 20)
AND B2 = A.A2
)
) ;
______________________________________________________________________
The SQL statement in Example 9-3 will generate the following three predicates within the
EXISTS subquery:
C1 = 5
A.A1 = 5
A.A1 = B1
This enhancement, combined with an appropriate setting for the INLISTP DSNZPARM
value, will cause a qualified IN list and equal predicates to be candidates for cross query
block transitive closure process. This process, when conditions allow, will result in an IN list
predicate being deduced by transitive closure and generated in the parent query block
rather than in the subquery.
The predicates generated, at the parent query level, by this process are:
A.A1 = 5
A.A2 IN (10,20)


This is equivalent to the transformation of the query shown in Example 9-4.
Example 9-4. Transformed IN List Query

SELECT *
FROM A
WHERE
A.A1 = 5
AND A.A2 IN (10,20)
AND (EXISTS (SELECT 1
FROM B,C
WHERE B1 = 5
AND B1 = C1
AND C1 = A.A1
AND B2 IN (10, 20)
AND B2 = A.A2
)
) ;
______________________________________________________________________

INLISTP Installation Parameter


It is evident that strategic placement of IN list predicates in an appropriate query block,
whether pushing down or pulling out, is an extremely effective way to achieve better
performance for some queries. These optimizations are focused on popular vendor
generated queries, so such performance improvements are expected to benefit a broad set
of customers. The effectiveness of these optimizations has already been tested at the
IBM Teraplex site, as well as at a customer site.
The reasons for proposing this system parameter INLISTP with respect to IN list predicate
optimization are as follows:
• Allowing customers to tune the INLISTP value to their workload
• Prevention of SQLCODE -101
• Prevention of potential degradation in the access path for some queries
In DB2 V8, the default value for INLISTP is 50. The parameter INLISTP will remain as a
hidden keyword installation parameter.
Notice that for the predicate pushdown optimization, the INLISTP value of 1 will have the
same effect as the value of 0. This is because an IN list predicate of a single element is
transformed into an equal predicate internally, and a qualified equal predicate is already
being pushed down into a materialized view or a table expression. However, for the cross
query block transitive closure optimization, the INLISTP value of 1 would have a different
effect than the value of 0. This is because no predicate has ever been considered for a
cross query block transitive closure. For a positive INLISTP value, equal predicates, in
addition to IN list predicates of more than 1 element, will also be considered as candidates
to be pulled out to the parent query block position.


Select INLIST Improvements (V7 APAR PQ68662)


This enhancement does not change any behavior in DB2 V8. It is mentioned here merely
because it is not well known that this enhancement was introduced in V7.
The problem that is addressed here is poor access path selection on a single-table query
with an IN predicate on index columns. The optimizer struggles between an index scan
(I-scan) and a table space scan (R-scan), and as the number of entries in the IN list
increases, the query tends to select the R-scan. This enhancement improves the index
scan cost model to more accurately reflect
the chance that the needed index or data page may already be cached in the buffer pool
when the last INLIST item is processed.
Using Dynamic Prefetch during IN List Index Access for Non-contiguous Data
This is another IN list processing enhancement introduced in V6 and V7, via APAR
PQ71925.
When index access is used with an IN-list predicate and the list items are scattered in the
index key column’s domain, the index access may not be able to benefit from sequential
prefetch.
This enhancement turns off sequential prefetch at bind time so that runtime can kick off
dynamic prefetch whenever a sequential pattern is detected. When doing an SQL
EXPLAIN, the PLAN_TABLE may contain a new value in the PREFETCH field. 'D' may be
observed in the PREFETCH field, meaning that dynamic prefetch is expected by the DB2
optimizer.
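As an illustration, you could explain a statement with an IN list predicate and then check
the PREFETCH column of the PLAN_TABLE (the query number and IN list values here are
hypothetical):

EXPLAIN PLAN SET QUERYNO = 100 FOR
SELECT EMPNO, LASTNAME
FROM DSN8810.EMP
WHERE EMPNO IN ('000010', '000250', '200340');

SELECT QUERYNO, ACCESSTYPE, PREFETCH
FROM PLAN_TABLE
WHERE QUERYNO = 100;

A value of 'D' in the PREFETCH column indicates that dynamic prefetch is expected by the
optimizer.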


DSC Short Prepare Enhancements
V7 Problem
When a matching statement is found in the cache, it is copied into thread-related
storage before it can be used
If no storage is available, a GETMAIN is performed
To keep the size of the local pool limited, storage contraction is initiated regularly
GETMAIN/FREEMAIN make up large part of cost of short prepare
V8 Solution
Use a number of shared storage pools (assigned via hashing to
minimize contention) to copy statements in
Potentially less storage required (not every thread needs its own pool)
Using "best fit" algorithm to find space in assigned pool
Result: Fewer GETMAIN/FREEMAIN operations, faster short prepare


Figure 9-86. DSC Short Prepare Enhancements CG381.0

Notes:
When global caching is active (DSNZPARM CACHEDYN=YES), DB2 maintains “skeleton”
copies of each prepared statement in the EDM pool. This storage is not the focus of the
short prepare enhancement. However, whenever a thread issues a prepare, this indicates that
thread is going to need its own copy of the prepared statement, in order to execute the
statement.
When a thread finds a previously executed identical statement in the global cache, DB2
acquires storage and makes a copy of the statement for that thread. Prior to DB2 Version 8,
this storage comes from a thread-based storage pool. So each thread that gets anything
from the cache will have one of these pools.
In a system with many long-running threads, such as an SAP system, these pools can get
quite large and fragmented if left uncontrolled. The current design (prior to DB2 V8)
“contracts” the pool quite frequently, almost at every commit. The contraction logic frees
most of the unused space back to the OS, keeping the size to a minimum. This means that
when a thread performs a prepare after a commit, DB2 most probably has to go back to the
OS to GETMAIN a new piece of storage for the pool, so it has somewhere to put the


statement. This effectively means DB2 has to perform a GETMAIN and FREEMAIN pair for
almost every prepare. Measurements showed that this was the dominant cost of the short
prepare path (although the FREEMAIN part really happens at commit).
The new approach aims to virtually eliminate the cost of the OS level GETMAINs and
FREEMAINs by going to a centralized storage approach. With this approach, DB2 will use
a number of storage pools that are owned by the system, not a particular thread. To avoid
latch contention, the new implementation uses a fixed number of pools.
When a new piece of storage is needed, a hash function is used to “randomly” assign a
pool. The hash uses the statement identifier as input. Each of the pools is a “best fit” pool
and it extends its size as needed, in 1 MB chunks. With this approach, we rely on the best
fit logic to keep the pools at a minimum size. With the thread-based storage model, further
reductions in contractions would have led to some storage increase.

Example
Assume we have three threads. Sequentially, that is, at discrete different times, each one
prepares and executes a statement that is 10K in size, then it commits, freeing the storage
back to the pool.
With the thread-based approach, at the end of the sequence, each thread will have 10 KB
allocated for a total of 30 KB. With the centralized approach, the storage would get reused
at each thread’s statement prepare, so there would only ever be 10 KB allocated to the
pool. If the three threads’ statement executions in fact do not occur at different times, there
are still benefits. Assume, for example, that all three executed concurrently. The threads
will have used 30 KB in the centralized pool as well. But assume that after they all commit,
one thread subsequently prepares and executes a statement that is between 10 KB and 30
KB in size. This could be satisfied from the existing centralized pool. But with the previously
used thread-based pool, DB2 would need to GETMAIN a new 10 - 30 KB chunk to its
unique pool.
The centralized approach may use as much storage as the thread-based approach in the
worst case, but most likely the timing will be such that it will use much less. The major
benefit is then the removal of the majority of OS calls to GETMAIN and FREEMAIN
storage, and this benefit occurs regardless of the level of storage usage improvement.
Another benefit occurs in the case of short-running threads. When a thread connects,
prepares, and executes only a few statements, then ends, the centralized approach is
clearly better. In this case the thread does not need to create its storage pool or GETMAIN
any storage. The central pool is highly likely to already contain sufficient free storage with
the normal coming and going of other activity to satisfy this small number of executed
statements.
The more centralized management of the available storage means that we can use the
available storage to the best advantage among all threads. Idle threads will also not be
holding storage that they are not using, leading to more efficient usage of storage
resources, and reduced overhead in a busy system.


New PLAN_TABLE Columns
TABLE_ENCODE CHAR(1)
Indicates the encoding scheme of the table. If the table has a single
CCSID set, then the column will contain 'E' for EBCDIC 'A' for ASCII
or 'U' for Unicode. If the table contains multiple CCSID sets, then the
column will be set to 'M' for multiple CCSID sets.
TABLE_SCCSID FIXED(16)
The SBCS CCSID value of the table. If the TABLE_ENCODE column
is 'M', the value is zero.
TABLE_MCCSID FIXED(16)
The Mixed CCSID value of the table. If the TABLE_ENCODE column
is 'M', the value is zero.
TABLE_DCCSID FIXED(16)
The DBCS CCSID value of the table. If the
TABLE_ENCODE column is 'M', the value is zero.
ROUTINE_ID INTEGER
Used to pinpoint the table function record from SYSIBM.SYSROUTINES.


Figure 9-87. New PLAN_TABLE Columns CG381.0

Notes:
The visual above shows the new columns that have been added to the plan_table.
In addition, the description of two columns has been changed in order to be able to
store MQT related information. Those columns are:
TNAME The name of a table, materialized query table, created or declared
temporary table, materialized view, table expression, or an
intermediate result table for an outer join that is accessed in this step,
blank if METHOD is 3.
TABLE_TYPE M for materialized query table.
If automatic query rewrite occurs and the final query plan comes from a rewritten query, the
PLAN_TABLE shows the name of the matched MQTs and the access path using the MQTs.
The TABLE_TYPE column of the PLAN_TABLE for the MQT rows shows the value ‘M’ for an
MQT. For more information about MQTs, refer to Figure 9-3, "What Is a Materialized Query
Table (MQT)?", on page 9-6.
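For example, to find the explained statements that were rewritten to use an MQT, you could
query the PLAN_TABLE along these lines:

SELECT QUERYNO, QBLOCKNO, CREATOR, TNAME
FROM PLAN_TABLE
WHERE TABLE_TYPE = 'M'
ORDER BY QUERYNO, QBLOCKNO;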


You can use the CREATE TABLE statement shown in Example 9-5 to create the V8
PLAN_TABLE.
Example 9-5. DB2 V8 PLAN_TABLE Definition

CREATE TABLE userid.PLAN_TABLE (


QUERYNO INTEGER NOT NULL,
QBLOCKNO SMALLINT NOT NULL,
APPLNAME CHAR(8) NOT NULL,
PROGNAME VARCHAR(128) NOT NULL,
PLANNO SMALLINT NOT NULL,
METHOD SMALLINT NOT NULL,
CREATOR VARCHAR(128) NOT NULL,
TNAME VARCHAR(128) NOT NULL,
TABNO SMALLINT NOT NULL,
ACCESSTYPE CHAR(2) NOT NULL,
MATCHCOLS SMALLINT NOT NULL,
ACCESSCREATOR VARCHAR(128) NOT NULL,
ACCESSNAME VARCHAR(128) NOT NULL,
INDEXONLY CHAR(1) NOT NULL,
SORTN_UNIQ CHAR(1) NOT NULL,
SORTN_JOIN CHAR(1) NOT NULL,
SORTN_ORDERBY CHAR(1) NOT NULL,
SORTN_GROUPBY CHAR(1) NOT NULL,
SORTC_UNIQ CHAR(1) NOT NULL,
SORTC_JOIN CHAR(1) NOT NULL,
SORTC_ORDERBY CHAR(1) NOT NULL,
SORTC_GROUPBY CHAR(1) NOT NULL,
TSLOCKMODE CHAR(3) NOT NULL,
TIMESTAMP CHAR(16) NOT NULL,
REMARKS VARCHAR(254) NOT NULL,
PREFETCH CHAR(1) NOT NULL WITH DEFAULT,
COLUMN_FN_EVAL CHAR(1) NOT NULL WITH DEFAULT,
MIXOPSEQ SMALLINT NOT NULL WITH DEFAULT,
VERSION VARCHAR(64) NOT NULL WITH DEFAULT,
COLLID VARCHAR(128) NOT NULL WITH DEFAULT,
ACCESS_DEGREE SMALLINT ,
ACCESS_PGROUP_ID SMALLINT ,
JOIN_DEGREE SMALLINT ,
JOIN_PGROUP_ID SMALLINT ,
SORTC_PGROUP_ID SMALLINT ,
SORTN_PGROUP_ID SMALLINT ,
PARALLELISM_MODE CHAR(1) ,
MERGE_JOIN_COLS SMALLINT ,
CORRELATION_NAME VARCHAR(128) ,
PAGE_RANGE CHAR(1) NOT NULL WITH DEFAULT,
JOIN_TYPE CHAR(1) NOT NULL WITH DEFAULT,
GROUP_MEMBER CHAR(8) NOT NULL WITH DEFAULT,
IBM_SERVICE_DATA VARCHAR(254) NOT NULL WITH DEFAULT,
WHEN_OPTIMIZE CHAR(1) NOT NULL WITH DEFAULT,
QBLOCK_TYPE CHAR(6) NOT NULL WITH DEFAULT,
BIND_TIME TIMESTAMP NOT NULL WITH DEFAULT,
OPTHINT VARCHAR(128) NOT NULL WITH DEFAULT,
HINT_USED VARCHAR(128) NOT NULL WITH DEFAULT,
PRIMARY_ACCESSTYPE CHAR(1) NOT NULL WITH DEFAULT,
PARENT_QBLOCKNO SMALLINT NOT NULL WITH DEFAULT,


TABLE_TYPE CHAR(1) ,
TABLE_ENCODE CHAR(1) NOT NULL WITH DEFAULT,
TABLE_SCCSID SMALLINT NOT NULL WITH DEFAULT,
TABLE_MCCSID SMALLINT NOT NULL WITH DEFAULT,
TABLE_DCCSID SMALLINT NOT NULL WITH DEFAULT,
ROUTINE_ID INTEGER NOT NULL WITH DEFAULT)
IN database-name.table-space;
______________________________________________________________________


PLAN_TABLE Access via Aliases


V7:
CREATE ALIAS BART.PLAN_TABLE FOR
ADMF001.PLAN_TABLE;
EXPLAIN PLAN FOR SELECT * FROM DSN8710.EMP

SQLCODE = -219, ERROR: THE REQUIRED EXPLANATION


TABLE BART.PLAN_TABLE DOES NOT EXIST

V8:
CREATE ALIAS BART.PLAN_TABLE FOR
ADMF001.PLAN_TABLE;
EXPLAIN PLAN FOR SELECT * FROM DSN8710.EMP

(UID = BART)

Figure 9-88. PLAN_TABLE Access via Aliases CG381.0

Notes:
Up to V7, it was not possible to access explain tables with different OWNER and
QUALIFIER names. (OWNER refers to the creator of the explain table; QUALIFIER refers
to the user that issues the EXPLAIN command.) A limitation of our EXPLAIN command has
been that only the owner of the explain tables can issue the EXPLAIN command to
populate his/her tables. This has prevented you from being able to consolidate your plan
tables under a single AUTHID.
Starting with V8, you can use the ALIAS mechanism to populate explain tables created under a
different AUTHID. The following external explain tables can now have aliases defined on
them:
• PLAN_TABLE
• DSN_STATEMNT_TABLE
• DSN_FUNCTION_TABLE
As you can see from the visual above, prior to this change, when you used the alias
feature, you got an error message saying that the required explain table does not exist.


The alias name must have the format userid.table_name, where table_name can be
PLAN_TABLE, DSN_STATEMNT_TABLE, or DSN_FUNCTION_TABLE.
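For example, to consolidate explain output under a single AUTHID (DBADM01 and USER1
are hypothetical names), each user creates aliases that point to the central tables:

CREATE ALIAS USER1.PLAN_TABLE
FOR DBADM01.PLAN_TABLE;
CREATE ALIAS USER1.DSN_STATEMNT_TABLE
FOR DBADM01.DSN_STATEMNT_TABLE;
CREATE ALIAS USER1.DSN_FUNCTION_TABLE
FOR DBADM01.DSN_FUNCTION_TABLE;

Any EXPLAIN that USER1 issues then populates DBADM01's tables. (USER1 also needs
appropriate privileges on the underlying tables.)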



9.12 Visual Explain Enhancements


Visual Explain Enhancements


Component of no-charge Management Client Package feature
As in V7
Complete rewrite of V7 product
Rewritten in Java, connecting via DB2 Connect to DB2 for z/OS
Basic capabilities of V7 Visual Explain with better look and feel
Enhanced explain capabilities
Qualified row estimates
Including for individual tables in a join (as if table was the outer table)
Predicate information (Matching, screening, Stage 1, Stage 2)
Limited partition scan details
Parallelism details
Sort estimation
Enhanced reporting
Generate HTML reports
XML output file
Generate information for IBM Service Team


Figure 9-89. Visual Explain Enhancements CG381.0

Notes:
For those of you who know the Visual Explain product that shipped as part of the no-charge
Management Client Package with Version 7, the new Visual Explain that ships with Version
8 has almost nothing in common with the V7 product, other than its name. It is a complete
rewrite. This does not mean that the V7 Visual Explain is not a useful product, but its
Version 8 follow-on just does so much more.
First of all, it is still free, and comes as part of the no-charge Management Client Package
feature with DB2 for z/OS Version 8. It can be used to explain queries against V7 and V8
systems, but you will only get to use the gems of the product with DB2 Version 8.
As mentioned before, the V8 Visual Explain is a complete rewrite of the V7 product. It is
now completely written in Java, and it is designed in such a way that it is easy to make new
features available as plug-ins. It uses DB2 Connect to connect to DB2 for z/OS.


Uempty Overview of the New Features in Visual Explain


First of all, the V8 Visual Explain has all the functions the “old” Visual Explain has. It allows
you to:
• Explain SQL statements that you enter, and display the access path graphically.
• Explain SQL statements from the DB2 catalog, and display the access path graphically.
• Generate and print a report on those statements.
• Graphically display your DSNZPARMs (using the DSNWZP stored procedure).
However, it provides much more detail about the access paths than you ever had before.
When using Visual Explain against a DB2 Version 8 system, it not only uses the:
• PLAN_TABLE
• DSN_STATEMNT_TABLE
• DSN_FUNCTION_TABLE
• DB2 catalog statistics
It also uses a number of additional explain tables. Their content is not externalized and can
change without notice, but the information is exploited by Visual Explain (VE) for your
benefit. These tables contain a wealth of information that was not available to you in the
past, and can be of great help when analyzing access path selection problems. They are
automatically created by VE in a database/table space of your choice when enabling the
tool.
The following new information is now available in Visual Explain:
• Single predicate filter factor estimates
• Determine whether a predicate is sargable (stage 1) or not (stage 2)
• Estimated number of rows at different stages during the query execution:
- From a single table perspective (as though the table were the outer table)
- How many rows are estimated after the first n tables have been joined?
• At what point during processing is a predicate applied? Is the predicate, for example:
- A matching index predicate
- An index screening predicate
- A Stage 1 non-indexable predicate
- A Stage 2 non-indexable predicate
• Limited partition scan details; what partitions will be accessed during the limited
partition scan
• Index filter factor estimates:
- The filter factor of matching index predicates
- The total index filtering (matching and screening filter factors combined)
• Parallelism details:
- Key range or page range partitioning


- What each parallelism task does:


• What pages are scanned by each task
• What key ranges are scanned by each task
• Sort key length and sort data length:
This information can be useful for input into formulas for sort pool sizing, and sort
workfile sizing, where before you had to turn on a trace (IFCID 95 and 96) at run-time.
- VE also provides the estimated number of records to be sorted.
In addition to displaying all this information on a screen, Visual Explain also allows you to
generate reports, as html documents. In addition, you can also save your “explain analysis
output” as an XML document for later reuse, or send to others for analysis, like your
colleagues or IBM service people. When saved as an XML document, you can later reload
the file and continue your interactive analysis.
When you suspect that the access path problem you are investigating is an actual DB2
problem (and not just a lack of statistics information that gives the optimizer incomplete
information to determine the best access path), you normally open a problem ticket with
IBM, and provide the necessary information in order for the service team to analyze the
problem. This is often a painful process of gathering, sending, gathering additional
information or resending files. VE provides you with a “Service SQL” option, that gathers all
the required information about your suspect query, and allows you to FTP it immediately to
IBM for investigation, by pressing only a few buttons.
Describing all these enhancements in great detail is beyond the scope of this course.
However, we want to give you some examples of how much more information the new
Visual Explain provides on the following pages. If you want to learn more about Visual
Explain, the tool comes with extensive help and a complete tutorial.

Setting up Visual Explain


As mentioned before, VE comes as part of the DB2 Management Client packages on a
CD-ROM. However, it can also be downloaded from the Web at:
http://www.ibm.com/software/db2zos/dld.html
After you install the VE code, you must do some initial setup:
• You must set up your connection to the DB2 system you want explain queries on.
• You must set up the VE explain tables.

Cataloging a DB2 System


When connecting to your DB2 systems, Visual Explain uses DB2 Connect. (As in V7, a
restricted-use copy of DB2 Connect Personal Edition V8 for Windows is provided with the
Management Client Package).
Instead of having to use the Configuration Assistant tool that comes with DB2 Connect, you
can also set up your connection via Visual Explain. (Make sure the DB2 Connect packages


have been bound previously to that DB2 system and that you are using a TCP/IP
connection to the host, which most people are). The configuration is very simple and
straightforward. All required information can be entered on a single window (see Figure
9-2).

Figure 9-2 Catalog a Database

Enabling Visual Explain


After the connection is established, you have to enable VE. To do so, you must use the
Subsystem —> Enable Visual Explain menu option (Figure 9-3).

Figure 9-3 Enable Visual Explain

You are asked to provide a database name, a table space name, and a table qualifier. During
the enablement, VE creates the necessary explain tables that are used by the tool. If the
database and table space do not exist, VE asks whether you want to create them
before continuing. Based on your input parameters, VE then creates these objects.


As you can see, you have some additional options within the subsystem menu. You can
also use the option to Maintain Visual Explain. This option allows you to delete entries
from the explain tables used by VE. The View External Explain Tables option allows you
to look at rows in the external explain tables, PLAN_TABLE, DSN_STATEMNT_TABLE and
DSN_FUNCTION_TABLE.
This is a good option if you are familiar with the “cryptic” output that the SQL EXPLAIN
statement produces, but for most people, the graphical representation is much easier to
understand. Lastly, you have the Browse Subsystem Parameters option, which shows all
the values of the currently active DSNZPARM module. It uses the DSNWZP stored
procedure to obtain this information (Figure 9-4).

Figure 9-4 Display DSNZPARM

Entering SQL Statements to Analyze


Here you have two options: You can retrieve the SQL statements you want to explain from
the catalog tables SYSSTMT and SYSPACKSTMT (option List Static SQL Statements), or
you can type in your own statements (Tune SQL). Both are available from the Tools menu
option.

List Static SQL Statements


As mentioned before, this option allows you to list static SQL statements that are stored in
the DB2 catalog. You have the option to list all statements (declares, open, fetch, etc.), or
only the explainable SQL statements.


Listing all statements can be useful when analyzing problem SQL statements. An SQL
trace often contains only the statement number of the executing statement that takes a lot
of time, for example an OPEN statement, whereas you need to find the corresponding
DECLARE CURSOR statement to determine the root of the problem.
In addition, remember that the columns in SYSPACKSTMT and SYSSTMT that contain the
SQL statement text are marked FOR BIT DATA, and contain Unicode data, once you are in
DB2 V8 new-function mode. This means that they are not automatically converted to the
CCSID of the application’s ENCODING BIND parameter. However, when using Visual
Explain, VE takes care of this, and you can still read your SQL statement text information.
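Because these columns are FOR BIT DATA, an application that reads them directly receives raw bytes with no automatic CCSID conversion, so a tool has to decode the text itself. The following Python sketch illustrates the idea; the byte string and the choice of UTF-8 are illustrative assumptions, not details taken from VE's implementation.

```python
# Illustrative sketch: FOR BIT DATA columns come back as raw bytes, with no
# automatic conversion to the application's ENCODING CCSID, so the client
# tool must decode them itself. UTF-8 is assumed here for illustration.

raw_stmt = bytes([0x53, 0x45, 0x4C, 0x45, 0x43, 0x54, 0x20, 0x2A])  # raw column bytes

stmt_text = raw_stmt.decode("utf-8")  # the explicit decode the tool performs
print(stmt_text)  # → SELECT *
```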
After retrieving the statements, you can explain them, provided they are explainable
statements, of course.


Tune SQL


Figure 9-90. TUNE SQL CG381.0

Notes:
This option allows you to type in your own statements, or import a saved SQL statement.
You can then explain the statement by clicking the Explain button. You can also execute
the statement, if you would like to see the result.
Note that you can also specify:
• Current degree (for parallelism)
• Current refresh age (for MQTs)
• Current maintained table types (for MQTs)
When you click the Explain button, the tool starts demonstrating its true power.

Access Plan


Figure 9-91. Access Plan CG381.0

Notes:
When VE is done explaining the statement and gathering all related statistics information,
the result is displayed in the Access Plan pane as shown in the visual above.
On the right hand side, you see the graphical representation of the query’s execution plan,
the access plan graph. An access plan graph consists of nodes and lines that connect
those nodes. The nodes represent data sources, operators, queries, and query blocks. The
arrows on the lines indicate the direction of the flow. Different node types have a different
color and/or shape (that are customizable).
If you move your mouse over a node, additional information is displayed in a pop-up
window, as shown in Figure 9-5. Note that the cardinality at that node is shown. This
means that at this stage during the processing (SORTRID), the optimizer expects DB2 to
sort 24930 RIDs. Looking at these cardinality numbers gives you an easy way to validate
whether the number is similar to what you would expect, knowing the data that the query
processes.


(1) QUERY
  (2) QB1                        24929.005
    (3) FETCH                    24929.005
      (4) SORTRID                24930
        (5) IXSCAN               24930
          (6) SXL#PKSKOKEPDSON   1114560   (index)
      (7) LINEITEM               1114560   (table)

Pop-up for node (4): Node Type: Sort Record Id (4), Cardinality: 24930
Figure 9-5 Access Plan Graph

On the left-hand side, you have additional information on the currently selected node. In the
visual above, the top node, called QUERY, has the focus (this is also indicated on the left-hand
side). The information displayed about the query is the type of SQL statement (in our case
a SELECT), the CPU cost (in milliseconds and service units), the statement category (‘A’ in our
case), and the reason (in case of a category ‘B’ statement).

Predicate Processing and Filtering


Figure 9-92. Predicate Processing and Filtering CG381.0

Notes:
When we select the IXSCAN node, the left-hand side brings up a very helpful window
related to the processing that goes on in the IXSCAN node. It is shown in the figure above.
The following information is presented for our query:
Input RIDs This is the number of RIDs that goes into the IXSCAN process. As this
IXSCAN is accessing the first table (index), the number of RIDs is the
same as the number of rows in the table, or keys in the index (as the
index is a unique index).
Index leaf pages This is the number of leaf pages in the index.
When dealing with an index, we have two types of predicates, index matching and index
screening predicates.
Filter factors For the matching predicate:
l_partkey between 12345 and 23456
the estimated filter factor is 0.0671 (the fraction of rows left
after applying this individual predicate).


For the screening predicate:
l_extendedprice > 500
the estimated filter factor is 0.333 (a default).
The Total (combined) Filter Factor (of the matching and
screening index predicates) is 0.0224.
Scanned Leaf Pages As only the index matching predicates determine the number of
leaf pages scanned, this field appears under the matching
predicates section.
Output RIDs The number of RIDs left after applying both matching and
screening predicates.
Matching Columns Indicates the number of columns the matching predicate used
for matching.
Note: As you can see, Visual Explain provides you with information about whether a
predicate is an index matching or index screening predicate (whether a predicate is
stage 1 non-indexable or stage 2 is not shown here, because we are looking at an
IXSCAN node), as well as the individual and combined filter factors. So, from now on,
no more guessing; in V8 you have the facts.
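To show how the combined filter factor in this example is derived, the following Python sketch multiplies the individual filter factors, assuming (as the optimizer does here) that the predicates filter independently; the values are taken from the example above.

```python
# Combined filter factor for the index predicates in the example above.
# Assumes the predicates filter independently, as the optimizer does here.

matching_ff = 0.0671   # l_partkey BETWEEN 12345 AND 23456 (matching predicate)
screening_ff = 1 / 3   # l_extendedprice > 500 (0.333, a default for this predicate)

combined_ff = matching_ff * screening_ff
print(round(combined_ff, 4))            # → 0.0224

# Applying it to the index input cardinality gives the output RID estimate:
input_rids = 1114560                    # rows in LINEITEM / keys in the index
output_rids = input_rids * combined_ff
print(round(output_rids))               # → 24929, matching the 24929.005 estimate in the graph
```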
You can obtain other information by clicking on another node. A table node, for example,
gives you all the information about the table itself, as well as all its related objects, such as
columns, indexes, table space, and partition, as shown in Figure 9-6.

Figure 9-6 Object Tree Information

By exploring the object tree, you can obtain all pieces of information, including statistics
about the objects involved in the query.

Nested Loop Join Details


Figure 9-93. Nested Loop Join Details CG381.0

Notes:
The graph in the figure above represents a nested loop join. A nested loop join construct
consists of one NLJOIN node and two subtrees. The right subtree represents the inner
table and the left subtree represents the outer table. Both tables can be accessed with
either a table space scan, single-index access, or multiple-index access. The left subtree
can also include another join operation (nested loop join, merge scan join, hybrid join, or
star join).
The way to read the diagram is as follows:
• The outer table is accessed via a non-matching index scan (you can see this by clicking
the IXSCAN node with “views all” selected, instead of the default “cost estimation”, in
the node descriptor on the left side). The Nation table is also accessed, since it
appears in the diagram (for index-only access, the node representing the Nation
table would not appear in the graph). The result is that all 25 rows are fetched (one at a
time).
• The inner table is accessed via a matching index scan (on the join predicate). Note that
you interpret the information about the inner table access on a per row of the outer table


basis. In this case, for each row from the outer table, the inner table will use a matching
index scan with data access, and is expected to return 6000 rows.
When you select the NLJOIN node, you see the information on the left-hand side of the
figure. It contains interesting information that we can only obtain using Visual Explain. It
shows the number of rows going into the nested loop join (outer input cardinality) and inner
input cardinality (this is the number of rows that are expected to qualify in the inner table,
as if it were the outer table). This is very valuable information, as it provides an easy way to
spot the “odd man out”: if one of the single-table row estimates is wrong, it often
leads to a poor join sequence and bad performance.
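To make the cardinality arithmetic concrete, here is a minimal Python sketch of how a nested loop join multiplies outer-table rows by per-probe inner rows; the row counts come from the example above, but the model is deliberately simplified (an illustration, not DB2's actual cost formula).

```python
# Simplified nested loop join cardinality sketch (not DB2's real cost model).
# Numbers come from the example: 25 outer rows (Nation), and for each outer
# row a matching index scan on the inner table returns ~6000 rows.

outer_cardinality = 25        # rows from the outer table (Nation)
inner_rows_per_probe = 6000   # rows returned by the inner matching index scan

inner_probes = outer_cardinality                      # one inner probe per outer row
join_output = outer_cardinality * inner_rows_per_probe  # rows flowing out of NLJOIN

print(inner_probes)   # → 25 probes of the inner table
print(join_output)    # → 150000 estimated rows out of the join
```

A wrong outer-cardinality estimate multiplies straight through this arithmetic, which is why the "odd man out" among the single-table estimates so often explains a bad join sequence.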

Sort Information
Previously, estimating the size of a sort usually ended up in a lot of guesswork, unless you
turned on some DB2 performance traces (class 9 in particular) to determine the size of the
sorts that are executed as part of the query.
With V8’s enhanced Visual Explain, the optimizer now tells you what your sorts are going to
look like (Figure 9-7).

Figure 9-7 Sort Node Description

Visual Explain shows:


• The expected number of rows that go into the sort.
• The expected number of rows that come out of the sort. In this case, the number is the
same because this is a simple ORDER BY. However, this does not have to be the case,
for example during a GROUP BY sort.
• The number of pages scanned (from the input table / workfile) during the sort.


• The number of merges indicates how expensive the sort is. When the number of
merges is more than one, DB2 cannot merge all the runs in a single operation. The
result is that more resources will be required to perform the sort, more CPU, I/O, and
sort workfiles.
• The record size is the total size of the sort record. It includes the sort columns (also
called the key columns) and the data columns (non-key columns).
• The key size is the size of all the columns that we sort on, in our case the columns
specified in the ORDER BY.
Note: The sort key size has been increased from 4K to 16K in DB2 Version 8.
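To build intuition for the merge count, here is a generic external-sort sketch in Python; the run capacity and merge width are hypothetical parameters, and this is a textbook model of an external sort rather than DB2's actual sort implementation.

```python
import math

# Generic external-sort sketch (illustrative only, not DB2's sort internals).
# A sort that fits in one in-memory run needs no merge; otherwise initial
# sorted runs are written to workfiles and merged merge_width runs at a time.

def merge_passes(total_rows, rows_per_run, merge_width):
    """Estimate how many merge passes an external sort needs."""
    runs = math.ceil(total_rows / rows_per_run)
    passes = 0
    while runs > 1:
        runs = math.ceil(runs / merge_width)
        passes += 1
    return passes

print(merge_passes(24_930, 50_000, 8))     # → 0: fits in one run, no merge needed
print(merge_passes(1_000_000, 10_000, 8))  # → 3: 100 runs merge to 13, then 2, then 1
```

When more than one pass is needed, every extra pass rereads and rewrites the data, which is why a merge count above one translates into more CPU, I/O, and workfile use.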

Analyzing Parallel Queries with Visual Explain


When analyzing SQL queries that use DB2 query parallelism, Visual Explain also provides
you with more information than you ever had before. Without going into too much detail, we
use a simple parallel query as an example (Figure 9-8).
When looking at a single table access parallelism graph, there are two new node types that
show up:
• Partition (of work) node (initiate a parallel group)
• Merge node (bring parallel streams back together, sometimes to continue with a
different degree (not used in our example))
Both nodes indicate what the degree of parallelism is. For the partition of work node, that is
the degree at which the index and/or table are accessed. For the merge node, the degrees
that were merged (both three in our example).


Figure 9-8 Parallel Query in Visual Explain

When looking at the PARTITION node, you can see the parallelism details (on the left-hand
side of Figure 9-8). They include:
• The parallelism mode: I/O, CPU, or sysplex parallelism
• The partition type (type of parallelism): Page range or key range. Our example uses key
range
• Other parallel task details, such as the number of CPUs, the expected elapsed time for
each parallel task in this parallel group, and the number of parallel tasks
When you expand the tree in the top-left window, you can explore the actual key ranges (or
page ranges, depending on the type of parallelism) as shown in Figure 9-9.


Figure 9-9 Parallel Key Range

For the first parallel task, the key range is from 350 000 to 449 999.
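The way a key range is carved up among parallel tasks can be sketched as follows; the overall range (350 000 to 649 999) and the degree of 3 are hypothetical values chosen so the first task's slice matches the range shown above, and the splitting rule is illustrative rather than DB2's actual algorithm.

```python
# Hypothetical key-range partitioning sketch (illustrative, not DB2's algorithm).
# Split [low, high] into `degree` contiguous, non-overlapping key ranges.

def key_ranges(low, high, degree):
    total = high - low + 1
    size = total // degree
    ranges = []
    for i in range(degree):
        start = low + i * size
        end = high if i == degree - 1 else start + size - 1  # last task absorbs remainder
        ranges.append((start, end))
    return ranges

# Degree 3 over a hypothetical key range 350000..649999:
print(key_ranges(350_000, 649_999, 3))
# → [(350000, 449999), (450000, 549999), (550000, 649999)]
```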
When you click on the fetch node (Figure 9-10), you can also find out which partitions will
be accessed (qualified partition range 2-3).

Figure 9-10 Parallelism Fetch Node


Generating Reports


Figure 9-94. Generating Reports CG381.0

Notes:
Besides printing individual screens, you can also generate a report detailing the explain
and statistics information related to your query. To do so, click the Report tab in the Tune
SQL window. This brings up the generate report selection pane, where you can check the
items you want to include in the report (Figure 9-11).
After selecting the reporting options that you are interested in, press the Generate
Report button. The result is shown in the figure above.


Figure 9-11 Generate Report Options



Unit 10. Data Sharing Enhancements

What This Unit Is About


This unit explains the new functions and enhanced functions for the
DB2 data sharing environment.

What You Should Be Able to Do


After completing this unit, you should be able to:
• Describe the V8 reduction in CF lock propagation
• Explain CF request batching
• Relate the improvements for LPL recovery


List of Topics
CF lock propagation reduction
CF request batching
Improved LPL recovery
Restart light enhancements
Change to IMMEDWRITE bind option
Change to -DISPLAY GROUPBUFFERPOOL command


Figure 10-1. List of Topics CG381.0

Notes:
In this unit, we introduce the following data sharing enhancements in DB2 Version 8:
• CF lock propagation reduction:
This enhancement remaps IX parent L-locks from XES-X to XES-S. Data sharing
locking performance will benefit because this allows IX and IS parent global L-locks to
be granted without invoking global lock contention processing to determine that the new
IX or IS lock is compatible with existing IX or IS locks.
• CF request batching:
Current architecture allows multiple pages to be registered to the coupling facility with a
single command. This enhancement utilizes new functionality in z/OS 1.4 and CF level
12 to enable DB2 to:
- Register and write multiple pages to a group buffer pool.
- Read multiple pages from a group buffer pool for castout processing.


This reduces the amount of traffic to and from the coupling facility for writes to group
buffer pools and for reads for castout processing, thus reducing the data sharing
overhead for most workloads.
• Improved LPL recovery:
Prior to Version 8, you had to manually recover pages that DB2 put into the logical page
list (LPL). DB2 Version 8 automatically attempts to recover LPL pages as they go into
LPL, when it determines the recovery will probably succeed.
Currently LPL recovery requires exclusive access to the table space page set. Version
8 introduces a new serialization mechanism whereby LPL recovery is much less
disruptive.
In addition, instrumentation has improved whereby message DSNB250E is enhanced
to indicate the reason the pages are added to the LPL.
• Restart light enhancements:
DB2 V7 introduced the concept of DB2 restart light which is intended to remove
retained locks with minimal disruption in the event of an MVS system failure. When a
DB2 member is started in restart light mode (LIGHT(YES)), DB2 comes up with a small
storage footprint, executes forward and backward restart log recovery, removes the
retained locks, and then self-terminates without accepting any new work. Restart light is
improved in DB2 Version 8. If indoubt units of recovery (UR) exist at the end of restart
recovery, DB2 will now remain running so that the indoubt URs can be resolved. After
all the indoubt URs have been resolved, the DB2 member that is running in
LIGHT(YES) mode will shut down and can be restarted normally.
• Change the IMMEDWRITE default bind option:
Currently, changed pages in a data sharing environment are written during phase 2 of
commit, unless otherwise specified by the IMMEDWRITE BIND parameter or
IMMEDWRI DSNZPARM parameter. This enhancement will change the default
processing to write changed pages during commit phase 1. DB2 will no longer write
changed pages during phase 2 of commit processing.
• Change to -DISPLAY GROUPBUFFERPOOL output:
Currently, the CF level displayed by the -DISPLAY GROUPBUFFERPOOL command
may be lower than the actual CF level as displayed by a D CF command. The
-DISPLAY GROUPBUFFERPOOL is now enhanced to display both the operational CF
level as before, and also the actual CF level. (The operational CF level indicates the
capabilities of the CF from DB2's perspective. The actual CF level is the microcode
level as displayed by the D CF command.)



10.1 CF Lock Propagation Reduction


CF Lock Propagation Reduction


Today it is not uncommon for parent L-locks to cause XES
contention
V8 remaps parent IX L-locks to XES-S locks
XES can grant IX parent L-locks locally when only IS or IX
L-locks are held on the object
Reduced global contention for table space L-locks
Reduced XES-level contention across members
Improved data sharing performance, especially for OLTP
RELEASE(DEALLOCATE) may not be needed
LOCKPART YES is 'forced'


Figure 10-2. CF Lock Propagation Reduction CG381.0

Notes:
DB2 Version 8 will remap parent IX L-locks from XES-X to XES-S locks. Data sharing
locking performance will benefit because parent IX and IS L-locks are now both mapped to
XES-S locks and are therefore compatible and can now be granted locally by XES. DB2
will no longer need to wait for global lock contention processing to determine that a new
parent IX or IS lock is compatible with existing parent IX or IS locks.
This enhancement will reduce data sharing overhead by reducing global lock contention
processing. It is not uncommon for parent L-locks to cause global contention. On page set
open (an initial open, or an open after a pseudo-close), DB2 normally tries to open the page set
in RW. To do this, DB2 must ask for an X or IX page set L-lock. If any other DB2 member
already has the data set open, global lock contention occurs.
The purpose of this enhancement is to avoid the cost of global contention processing
whenever possible. It will also improve availability due to a reduction in retained locks
following a subsystem failure.
In the next few visuals, we explain this enhancement in more detail.

Data Sharing Locking Review
"P-Lock" (physical lock)
Used to track "inter-DB2 interest"
Used for coherency rather than concurrency
Can only be global
Associated with a DB2 member
"L-Lock" (logical lock)
Another name for "transaction lock"
Controls concurrency for access to objects
Local or global
Associated with programs
Explicit hierarchical locking
Locks organized in a parent/child relationship
Only the most restrictive lock needs to be propagated


Figure 10-3. Data Sharing Locking Review CG381.0

Notes:
Data Sharing Locking
DB2 data sharing uses two types of locks:
• Physical locks (P-locks):
Physical locks are used to do many different things. Next we discuss the two most
commonly used P-locks: page set P-locks and page P-locks. Other types of P-locks
include DBD, castout, GBP structure, index tree, and repeatable read tracking P-locks.
- Page set physical locks:
Page set P-locks are used to track inter-DB2 read-write interest, thereby
determining when a page set has to become GBP-dependent.
When a DB2 member requires access to a page set or partition, a page set P-lock is
taken. This lock is always propagated to the lock table in the coupling facility and is
owned by the member. No matter how many times the resource is accessed through
the member, there will always be only one page set P-lock for that resource for a


particular member. This lock will have different modes depending on the level (read
or write) of interest the member has in the resource.
The first member to acquire a page set P-lock on a resource takes the most
restrictive mode of lock possible, that is, an S page set P-lock for read or an X page
set P-lock for write interest. An X page set P-lock indicates that the member is the
only member with interest (read/write) in the resource. Once another member
becomes interested in the resource, the page set P-lock mode can be negotiated,
that is, it can be made less restrictive if the existing page set P-lock is incompatible
with the new page set P-lock request.
The negotiation always allows the new page set P-lock request to be granted,
except when there is a retained X page set P-lock. A retained P-lock cannot be
negotiated. (Retained locks are locks that must be kept to protect possibly
uncommitted data left by a failed DB2 member.) Page set P-lock negotiation
signifies the start of GBP dependence for the resource.
Although it may seem strange that a lock mode can be negotiated, remember that
page set P-locks do not serialize access to a resource; they are used to track which
members have interest in a resource and for determining when a resource must
become GBP-dependent.
Page set P-locks are released when a page set or partition data set is closed. The
mode of page set P-locks is downgraded from R/W to R/O when the page set or
partition is not updated within an installation-specified time period or a number of
system checkpoints. When page set P-locks are released or downgraded, GBP
dependency is re-evaluated.
- Page physical locks:
Page P-locks are used to ensure the physical consistency of a page across
members of a data sharing group in much the same manner as latches do in a
non-data sharing environment. A page P-lock protects the page while the structure
is being modified. Page P-locks are used when row locking is in effect, or when
changes are being made to GBP-dependent space map pages. Page physical locks
are also used to read and update index pages.
• Logical locks (L-locks)
Logical locks are also referred to as transaction locks. L-locks are used to serialize
access to data to ensure data consistency.
L-locks are owned by a transaction, and the lock duration is controlled by the
transaction. For example, the lock is generally held from the time the application issues
an update until the time it issues a commit. (Exceptions are share locks associated with
cursors defined WITH HOLD, and table space and partition locks acquired by SQL,
associated with plans and packages bound using RELEASE(DEALLOCATE).) The
locks are controlled locally per member by each member’s IRLM.


P-locks and L-locks work independently of each other, although the same processes are
used to manage and maintain both. The lock information for all these locks is stored in the
same places (the IRLM, XES, and the coupling facility).
Explicit Hierarchical Locking
Conceptually all locks taken in a data sharing environment are global locks; that is, they are
effective groupwide, even though all locks do not have to be propagated to the lock
structure in the coupling facility.
DB2 data sharing has introduced the concept of explicit hierarchical locking, to reduce the
number of locks that must be propagated to the coupling facility.
Within IRLM, a hierarchy exists between certain types of L-locks, where a parent L-lock is
the lock on a page set and a child L-lock is the lock held on either the table, data page, or
row within that page set.
By using explicit hierarchical locking, DB2 is able to reduce the number of locks that must
be propagated to the lock structure in the coupling facility. The number of locks propagated
to the lock structure for a page set or partition is determined by the number of DB2
members interested in the page set and whether their interest is read or write. Wherever
possible, locks are granted locally and not propagated to the coupling facility.
If a lock has already been propagated to XES protecting a particular resource for this
member, subsequent lock requests for the same lock do not have to be sent to XES by the
same member for the same resource. They can be serviced locally. In addition, a parent
L-lock is propagated only if it is more restrictive than the current state that XES knows
about for this resource from this member.
Parent L-locks are released either when the transaction commits, or when the thread
terminates, depending on the value you have specified for the RELEASE parameter on the
bind. Child L-locks are propagated to the lock table in the coupling facility only when there
is inter-DB2 read-write interest for the page set.
Child locks (page and row locks) are propagated to XES and the coupling facility based on
inter-DB2 interest on the parent (table space or partition) lock. If all the table space locks
are IS, then no child locks are propagated. However, if there is a parent IX lock on the table
space or partition (which indicates read/write interest), then all the child locks must be
propagated.
For example, assume that transactions A and B are running in the same member:
• If transaction A has a parent IS L-lock, the IS L-lock gets propagated to the lock
structure on the coupling facility. If transaction B has a parent IX L-lock, the IX L-lock
gets propagated to the lock structure on the coupling facility.
• If transaction A has a parent IX L-lock, the IX L-lock gets propagated to the lock
structure on the coupling facility. If transaction B has a parent IS L-lock, the IS L-lock
does not get propagated to the lock structure on the coupling facility; that is, the
resultant state does not change.


• If transaction A has a parent S L-lock, the S L-lock gets propagated to the lock structure
on the coupling facility. If transaction B has a parent IX L-lock, the SIX L-lock gets
propagated to the lock structure on the coupling facility.
• If transaction A has a parent IX L-lock, the IX L-lock gets propagated to the lock
structure on the coupling facility. If transaction B has a parent S L-lock, the SIX L-lock
gets propagated to the lock structure on the coupling facility.
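The four examples above can be condensed into a small Python sketch that derives the resultant state propagated for a member's parent L-lock; the combination table below simply encodes the outcomes listed in the bullets (an illustration, not DB2 internals).

```python
# Illustrative sketch of the "resultant state" a member propagates for a
# parent L-lock, encoding the outcomes from the examples above. The table
# is a simplified rendering of the usual lock-mode lattice, not DB2 code.

COMBINE = {
    frozenset(["IS"]): "IS",
    frozenset(["S"]): "S",
    frozenset(["IX"]): "IX",
    frozenset(["IS", "IX"]): "IX",
    frozenset(["IS", "S"]): "S",
    frozenset(["IX", "S"]): "SIX",
    frozenset(["SIX"]): "SIX",
}

def resultant(held_modes):
    """Resultant parent-lock state for the modes held by one member."""
    return COMBINE[frozenset(held_modes)]

# Transaction A holds IX, transaction B then requests IS:
print(resultant(["IX", "IS"]))  # → IX: B's IS adds nothing, no new propagation
# Transaction A holds IX, transaction B then requests S:
print(resultant(["IX", "S"]))   # → SIX: the stronger combined state is propagated
```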

Lock Contention

[Diagram: DB2A (with its IRLM and XES) on MVS System 1 and DB2B (with its IRLM and
XES) on MVS System 2, both connected to the lock structure in the coupling facility.
TX1 on DB2A requests Lock P1 X and receives "OK"; TX2 on DB2B requests Lock P1 S
and receives "SUSPEND". TX1 updates P1, commits (Force Log, "OK"), and unlocks;
TX2's S lock is then granted and it reads P1.]
When "No contention", global lock granted synchronously for execution of the transaction
No need to "Suspend" the transaction task (measured in microseconds)
"Lock contention" is detected quickly


Figure 10-4. Lock Contention CG381.0

Notes:
This visual presents a logical overview of how each IRLM works together to maintain data
integrity for a page set where both DB2 members have interest in the page set.
Consider transaction TX1 on DB2A, which needs an X lock on page P1. IRLM passes this
lock request to XES and the lock is granted. Now, transaction TX2 on DB2B needs an S lock
on page P1. IRLM passes this lock request through XES to the coupling facility. As
transaction TX1 already has an X lock on page P1, transaction TX2 must be “suspended”.
Transaction TX1 now updates page P1 and commits. The IRLM releases the X lock and
passes an unlock request through XES to the coupling facility. The S lock is now granted
and transaction TX2 can be un-suspended to continue its work.
Now, let us have a closer look at the various reasons why a transaction may be suspended.
Lock information is held in three different components:
• IRLM
• SLM component of XES, which in many publications is simply referred to as XES.
• Lock structure on the coupling facility.


The types of lock granularity supported by each component differ. IRLM contains the most
detailed lock information, whereas XES and the lock table on the coupling facility recognize
only two types of locks — S and X. Each IRLM lock maps to a particular XES lock. IS and S
map to XES-S locks, while U, IX, SIX and X map to XES-X locks.
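This mapping, and why it manufactures contention between compatible IRLM modes, can be sketched in a few lines of Python (a toy model for illustration only, not actual IRLM or XES logic; the tables are transcribed from the text above):

```python
# Pre-V8 mapping of IRLM lock modes onto the only two modes XES knows.
IRLM_TO_XES = {"IS": "S", "S": "S", "U": "X", "IX": "X", "SIX": "X", "X": "X"}

# Pairs of parent L-lock modes that IRLM itself considers compatible.
IRLM_COMPATIBLE = {("IS", "IS"), ("IS", "IX"), ("IX", "IS"),
                   ("IX", "IX"), ("IS", "S"), ("S", "IS"), ("S", "S")}

def xes_compatible(a, b):
    # XES knows only S and X; S-S is the only compatible combination.
    return IRLM_TO_XES[a] == "S" and IRLM_TO_XES[b] == "S"

# IX vs IX: compatible to IRLM, but both collapse to XES-X, so XES sees
# a conflict and must drive the IRLM contention exit ("XES contention").
print(("IX", "IX") in IRLM_COMPATIBLE)  # True
print(xes_compatible("IX", "IX"))       # False
```

The gap between the two results is exactly the "XES contention" case discussed below.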
Lock contention occurs when a task requests a lock for a resource that may already be
held by another task. In data sharing, the other task could be running in the same DB2
subsystem or in another DB2 member. For this discussion, we are only concerned with
global contention, that is, contention across DB2 members.
Global Lock Contention
In data sharing, three types of global contention can occur. These are listed in order of
increasing time needed for their resolution:
• False Contention:
False contention occurs when the hashing algorithm for the lock table provides the
same hash value for two different resources. The different resources then share that
one lock table entry.
False contention can occur only when the lock table entry is managed by a global lock
manager or when the lock request causes global management to be initiated (we have
inter-DB2 R/W interest in the page set). The XES requesting the lock needs to know
the owning resource name to resolve this apparent contention. That information already
resides in the XES that is the global lock manager for the lock table entry. If the global
lock manager is not the requesting XES, communication between XES components is
needed to resolve the false contention.
In our example, false contention would occur if transaction TX2 were to request a lock
for a different resource, say page P2, and the lock request hashed to the same lock
table entry in the coupling facility.
Transaction TX2 must be suspended while the XES that is the global lock manager for
the lock table entry determines that the lock can be granted.
• XES Contention:
The MVS XES component is aware of only two lock modes, share and exclusive. IRLM
locking supports many additional lock modes. When the MVS XES component detects
a contention because of incompatible lock modes for the same resource, that
contention is not necessarily a real contention by IRLM standards. For example, the
IRLM finds the IX-mode to be compatible with the IS-mode. For the MVS XES
component, however, these are not IX-mode and IS-mode, but X-mode and S-mode
which are incompatible. To see if a real contention exists, MVS XES must give control to
the IRLM contention exit associated with the global lock manager. The IRLM contention
exit must determine if the contention is real or not, that is, if the locks are incompatible.
If the contention is not real, it is called “XES contention” and the requested lock can be
granted.


In our example, XES contention would occur if transaction TX1 held an IX lock on the
page set that contains page P1 and transaction TX2 requested an IX lock on the same
page set. Both of these lock requests are passed to XES as X locks. XES sees these
lock requests as incompatible; however, IRLM knows they are compatible.
Transaction TX2 must be suspended while the XES that is the global lock manager for
the lock table entry defers to IRLM to decide whether the lock request can be granted.
• Real Contention:
Real contention is caused by normal IRLM lock incompatibility between two members.
For example, two transactions may try to update the same resource at the same time.
DB2 PE reports real contentions as IRLM contentions.
This is the example we have just explained. Transaction TX2 is requesting a lock that
is not compatible with a lock already held by transaction TX1. Transaction TX2 must be
suspended while XES defers to IRLM, which cannot grant the lock.
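The three contention types above can be summarized in a toy classifier (illustrative Python, not DB2 code; the hash function and table size are invented stand-ins for the real CF lock table, and the mode tables repeat the pre-V8 mapping described earlier):

```python
# Toy classifier for the three kinds of global lock contention.
IRLM_TO_XES = {"IS": "S", "S": "S", "U": "X", "IX": "X", "SIX": "X", "X": "X"}
IRLM_COMPATIBLE = {("IS", "IS"), ("IS", "IX"), ("IX", "IS"),
                   ("IX", "IX"), ("IS", "S"), ("S", "IS"), ("S", "S")}

def entry(resource, entries):
    return sum(resource.encode()) % entries   # stand-in hash function

def classify(held_res, held_mode, req_res, req_mode, entries=16):
    if entry(held_res, entries) != entry(req_res, entries):
        return "no contention"          # different lock table entries
    if IRLM_TO_XES[held_mode] == "S" and IRLM_TO_XES[req_mode] == "S":
        return "no contention"          # compatible even at XES level
    if held_res != req_res:
        return "false contention"       # only the hashed entry collides
    if (held_mode, req_mode) in IRLM_COMPATIBLE:
        return "XES contention"         # IRLM would allow this pair
    return "real contention"            # genuinely incompatible modes

print(classify("P1", "S", "P2", "X", entries=1))  # false contention
print(classify("TS1", "IX", "TS1", "IX"))         # XES contention
print(classify("P1", "X", "P1", "S"))             # real contention
```

The ordering of the checks mirrors the increasing cost of resolution: a hash collision is cheapest to dismiss, an XES-level conflict needs the contention exit, and a real conflict leaves the requester suspended until the lock is released.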
Resolving Contention
Contentions require additional XES and XCF services if the requesting member is not the
global lock manager, that is, the owner of the lock registered in the lock table entry.
Information about locks that IRLM has passed to XES is stored in XES. When contention
occurs (false, XES, or real contention), one of the XESs is assigned as the global lock
manager to resolve the contention. All of the other XESs in the group that hold locks
assigned to that lock table entry pass their lock information to the global lock manager
XES, which can then drive resolution of the contention.
When any contention occurs, execution of the requester’s SQL statement is suspended
until the contention is resolved. If the contention is real, the requester remains suspended
until the incompatible lock is released.
Therefore, any contention can adversely impact performance. The SQL is suspended while
the contention is resolved and extra CPU is consumed resolving the contention.


Page Set L-Lock Contention

(diagram: XES on ZOSA, serving DB2A, and XES on ZOSB, serving DB2B, share the DB2
lock structure in the coupling facility; the lock table entry holds X,DB2A)

1. A program on DB2A requests an IX lock on table space TS123, and XES
   propagates this as an X lock on the lock table entry.
2. A program on DB2B requests an IS lock on the same object, and XES
   propagates this request to the lock table.
3. ZOSB gets a contention response from the coupling facility.

Figure 10-5. Page Set L-Lock Contention CG381.0

Notes:
The flow of a global logical lock with inter-DB2 read-write interest and with a XES
contention is as follows:
1. An application on DB2A issues an SQL UPDATE:
a. DB2A decides a lock is required and passes an IX page set L-lock request to IRLMA.
b. IRLMA registers the lock and passes a request for an X lock to XES on ZOSA.
c. XES on ZOSA sends the X lock to the coupling facility.
2. An application on DB2B issues an SQL SELECT on the same page set:
a. DB2B decides a lock is required and passes an IS page set L-lock request to IRLMB.
b. IRLMB registers the lock and passes a request for an S lock to XES on ZOSB.
c. XES on ZOSB sends the S lock to the coupling facility.
3. The coupling facility finds a contention, in this case, XES contention.


XES contention on page set L-locks is reasonably common in data sharing environments.
For example, when DB2 opens a page set (either as a real data set open or a logical open
after a pseudo-close), DB2 normally tries to open the page set in R/W. To do this, DB2
must ask for an X or IX page set L-lock. If any other DB2 member already has the data set
open, global lock contention occurs. (It is much more common for DB2 to open a page
set in R/W than in R/O.)


XES Contention Before Version 8

• The various IRLM lock levels can map to only one of two XES lock levels:
  - IRLM IS and S locks map to an XES-S lock
  - IRLM U, IX, SIX, and X locks map to an XES-X lock
• Previous example, with two members holding IX locks on the same table space:
  - Both IRLM IX locks map to XES-X locks; hence a lock conflict
  - XES detects contention
  - Global contention processing is invoked by IRLM
  - IRLM determines that IX is really compatible with IX
  - The lock request is granted

Figure 10-6. XES Contention Before Version 8 CG381.0

Notes:
To resolve the contention described on the previous visual:
1. The coupling facility replies to XES on ZOSB with a finding of contention and the global
lock manager identifier.
2. XES on ZOSB asks IRLMB to suspend the SQL statement.
3. XES on ZOSB uses XCF to query the XES global lock manager (XES on ZOSA).
4. The global lock manager finds that the lock is in contention.
5. XES on ZOSA drives the IRLMA contention exit identifying contention.
6. IRLMA determines this is not real contention and tells XES on ZOSA. The contention is
XES contention.
7. XES on ZOSA replies to XES on ZOSB through XCF that no real contention exists.
8. XES on ZOSB replies to IRLMB: no contention.
9. IRLMB replies to DB2B.
10. DB2B resumes the SQL statement and passes the result to the application.


The Version 8 Enhancement

• IRLM will now map parent IX L-locks to XES-S locks
  - Parent IX (global) L-locks are granted locally (since there is no contention)
    when only IS or IX L-locks are held on the object
• How do we ensure that IX remains incompatible with S?
  - Parent S L-locks must now map to XES-X locks
• Additional overhead of global contention processing:
  - Verify that a page set S L-lock is compatible with another page set S
    L-lock (rare)
• The majority of cases are IS-IS, IS-IX, and IX-IX; hence the performance benefits

Figure 10-7. The Version 8 Enhancement CG381.0

Notes:
In DB2 Version 8, parent IX L-locks will be remapped to XES-S locks, rather than XES-X
locks. This will allow the parent global IX L-locks to be granted without having to invoke the
contention exit (it can be granted by local system’s XES) when only IS or IX L-locks are
held on the object.
To ensure that parent IX L-locks remain incompatible with parent S L-locks, S table and
table space locks are remapped to XES-X locks. This means that additional global
contention processing will now be done to verify that a page set S L-lock is compatible with
another page set S L-lock, but this is a relatively rare case (executing read-only SQL
against a page set, and there is only one other member who currently has some read-only
SQL active against the same page set).
The majority of cases are as follows:
• IS-IS: We want to execute some read-only SQL against a page set and there are some
other members who currently have some read-only SQL active against the same page
set.


• IS-IX: We want to execute some update SQL against a page set and there are some
other members who currently have some read-only SQL active against the same page
set.
• IX-IX: We want to execute some update SQL against a page set, and any number of
other members currently have some update SQL active against the same page
set.
Hence, global contention processing will be reduced. Parent lock contention with parent S
L-locks is less frequent than checking for contention with parent IS and IX L-locks.
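The effect of the remapping can be sketched as follows (an illustrative model, not actual IRLM logic; the OLD and NEW tables are transcribed from the text above):

```python
# Old versus new mapping of parent L-lock modes to XES lock modes.
OLD = {"IS": "S", "S": "S", "IX": "X", "X": "X"}   # pre-V8
NEW = {"IS": "S", "S": "X", "IX": "S", "X": "X"}   # V8 protocol level 2

def needs_global_processing(mapping, a, b):
    # XES can grant locally only when both locks map to XES-S.
    return "X" in (mapping[a], mapping[b])

for pair in [("IS", "IS"), ("IS", "IX"), ("IX", "IX"),
             ("S", "IX"), ("S", "S")]:
    print(pair, "old:", needs_global_processing(OLD, *pair),
          "new:", needs_global_processing(NEW, *pair))
# The frequent IS-IX and IX-IX pairs drop out of global contention
# processing; the rare S-S pair now pays that cost instead, and S-IX
# still conflicts under both mappings, so IX stays incompatible with S.
```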


V8 Child L-Lock Propagation

• Child L-lock propagation is no longer based on the cached state of the
  parent L-lock
  - Now based on the cached (held) state of the page set P-lock
  - If the page set P-lock is negotiated from X to SIX or IX, then child
    L-locks are propagated
  - Reduced volatility
  - If the P-lock is not held at the time of the child L-lock request, the
    child lock will be propagated
• Parent L-locks no longer need to be held in retained state after a DB2 failure
  - For example, a page set IX L-lock is no longer held as a retained X-lock
  - Important availability benefit in data sharing

Figure 10-8. V8 Child L-Lock Propagation CG381.0

Notes:
Another impact of this change is that child L-locks are no longer propagated based on the
parent L-lock. Instead, child L-locks are propagated based on the held state of the page set
P-lock. If the page set P-lock is negotiated from X to SIX or IX, then child L-locks will be
propagated.
It may be that some child L-locks are acquired before the page set P-lock is obtained. In
this case child L-locks will automatically be propagated. This situation occurs because DB2
always acquires locks before accessing the data. In this case, DB2 acquires the page set
L-lock before opening the page set to read the data. It can also happen during DB2 restart.
An implication of this change is that child L-locks may be propagated for longer than they
are needed; however, this should not be a concern.
where there is no inter-system read/write interest until the page set becomes
non-GBP-dependent, that is, before the page set P-lock reverts to X. During this time, child
L-locks will be propagated unnecessarily.
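The propagation rule just described can be summarized in a toy decision function (hypothetical names; the real decision is internal to DB2 and IRLM):

```python
# Toy summary of the V8 rule: child L-lock propagation is driven by the
# held state of the page set P-lock, not by the parent L-lock.
def propagate_child_llocks(pageset_plock_state):
    """pageset_plock_state: None if no P-lock is held yet at the time of
    the child lock request, otherwise 'X' (this member alone has R/W
    interest) or 'SIX'/'IX' (negotiated down because another member also
    has interest in the page set)."""
    if pageset_plock_state is None:
        return True                     # acquired before open, or restart
    return pageset_plock_state in ("SIX", "IX")

print(propagate_child_llocks("X"))      # False: no inter-DB2 R/W interest
print(propagate_child_llocks("SIX"))    # True: P-lock was negotiated
print(propagate_child_llocks(None))     # True: propagate to be safe
```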
Another consequence of this enhancement is that, since child L-lock propagation is no
longer dependent upon the parent L-lock, parent L-locks will no longer be held in retained


state following a system failure. This means, for example, that a page set IX L-lock will no
longer be held as a retained X-lock after a system failure. This can provide an important
availability benefit in a data sharing environment. Because there is no longer a retained
X-lock on the page set, most of the data in the page set remains available to applications
running on other members. Only the pages (assuming page locking is used) with a retained
X-lock will be unavailable.


LOCKPART Considerations

• It is now important that L-locks and P-locks are maintained at the same
  level of granularity, because P-locks now determine child L-lock propagation
• LOCKPART NO table spaces can no longer get a lock only on the last partition
  - Changed to behave like LOCKPART YES
  - Applies to data sharing and non-data-sharing
  - The DDL restriction on LOCKPART YES with LOCKSIZE TABLESPACE remains for now
• We only lock the partitions as we need them
  - You may see additional locks acquired on individual partitions even though
    LOCKPART NO is specified

Figure 10-9. LOCKPART Considerations CG381.0

Notes:
It is now important that L-locks and P-locks are maintained at the same level of granularity.
Page set P-locks now determine when child L-locks must be propagated to XES.
For partitioned table spaces defined with LOCKPART NO, we currently lock only the last
partition to indicate that we have a lock on the whole table space. There are no page set
P-locks held on each of the partition page sets. So, when should we propagate the child
L-locks for the various partitions that are being used? (We cannot tell by looking at the
page set parent L-locks that we need to propagate the child locks, and we cannot
determine how each partition page set is being used by looking at the page set P-locks that
are held.)
To overcome this problem, LOCKPART NO table spaces will now obtain locks at the part
level. LOCKPART NO will behave the same as LOCKPART YES.
In addition, LOCKPART YES is not compatible with LOCKSIZE TABLESPACE. However, if
LOCKPART NO and LOCKSIZE TABLESPACE are specified, then we will lock every
partition, just as every partition is locked today when LOCKPART YES is used with
ACQUIRE(ALLOCATE). With this change, you may see additional locks being acquired on
individual partitions even though LOCKPART NO is specified.
This change applies to both data sharing and non-data sharing environments.


Benefits of Less CF Lock Propagation

• Faster lock processing for IX and IS parent L-locks
  - IX-IX, IX-IS, IS-IS
• Child locks propagated based on the page set P-lock
  - Less volatility
• Less need for RELEASE(DEALLOCATE)
  - Used in the past to reduce CF lock propagation
• Avoids the cost of global contention whenever possible
  - Reduced XES contention
• Improved availability by reducing retained locks following a subsystem failure

Figure 10-10. Benefits of Less CF Lock Propagation CG381.0

Notes:
Data sharing locking performance will benefit because this allows IX and IS parent L-locks
to be granted locally without invoking global lock contention processing to determine that
the new IX or IS lock is compatible with existing IX or IS locks.
In data sharing, the recommendation for RELEASE(DEALLOCATE) (and thread reuse) to
reduce XES messaging for page set L-locks is no longer required. This is good news
because using RELEASE(DEALLOCATE):
• Can cause increased EDM pool consumption, because plans and packages stay
allocated longer in the EDM pool.
• May also cause availability concerns due to parent L-locks being held for longer. This
can potentially prevent DDL from running, or cause applications using the LOCK TABLE
statement and some utilities to fail.
However, as in previous versions of DB2, to avoid locking overhead, you should use
ISOLATION UR, or try to limit the table space locks to IS on all data sharing members to
avoid child lock propagation.


Additionally, the ability to grant IX and IS locks locally implies less thrashing on changing
inter-system interest levels for parent locks, requiring less IRLM SRB time and less XCF
messaging. When DB2 decides it must propagate its locks to the coupling facility for a
given page set, DB2 must collect and propagate all the locks it currently owns for that page
set to the coupling facility. This can cause some overhead, particularly when a page set is
not used often enough for lock propagation to occur all the time. Page set P-locks are long
duration locks and tend to be more static than L-locks, so the chances are higher that lock
propagation will continue for longer.


Fallback/Co-existence/Enablement

• The new locking protocol cannot co-exist with the old
• Enabling the feature:
  - Migrate to new-function mode (NFM)
  - Restart the first member after a successful quiesce of all members in the group
• Disabling the feature:
  - Return to CM (a full restore of the environment is the only option)
  - Restart the first member

Figure 10-11. Fallback/Co-existence/Enablement CG381.0

Notes:
Since the new locking protocol cannot co-exist with the old, the new protocol will only take
effect after the first group-wide shutdown when the data sharing group is in new-function
mode (NFM). No other changes are required to take advantage of this enhancement.
If you recover the catalog and directory to a point-in-time prior to the point where
new-function mode was enabled, a group-wide shutdown is required. On the next restart,
whether it be on Version 7 or Version 8, the new locking protocol will be disabled.
Note: You have to be in new-function mode to be able to benefit from this new way of
mapping IRLM lock states to XES lock states. The new mapping takes effect after the
restart of the first member, after a successful quiesce of all members in the DB2 data
sharing group. So, to enable this feature, a group-wide outage is required.
You can use the -DIS GROUP command to check whether the new locking protocol is used
(mapping IX IRLM L-locks to an S XES lock), as shown in Example 10-1. Protocol level(2)
indicates that the new protocol is active.


Example 10-1. -DIS GROUP Output

DSN7100I -DT21 DSN7GCMD
*** BEGIN DISPLAY OF GROUP(DSNT2   ) GROUP LEVEL(810) MODE(N)
    PROTOCOL LEVEL(2) GROUP ATTACH NAME(DT2G)
--------------------------------------------------------------------
DB2           DB2      SYSTEM              IRLM
MEMBER    ID  SUBSYS   CMDPREF   STATUS    LVL  NAME      SUBSYS  IRLMPROC
--------  --- -------  --------  --------  ---  --------  ------  --------
DT21        1 DT21     -DT21     ACTIVE    810  STLABB9   IT21    DT21IRLM
DT22        3 DT22     -DT22     FAILED    810  STLABB6   IT22    DT22IRLM
......
______________________________________________________________________


10.2 CF Request Batching


CF Request Batching

• Batching of GBP writes and castout reads
• Objectives:
  - Write/castout multiple pages to/from the CF in a single operation
  - Reduce traffic to and from the CF
  - Improve data sharing performance for most workloads, especially batch
    (workloads that update large numbers of pages for GBP-dependent objects)

Figure 10-12. CF Request Batching CG381.0

Notes:
The current architecture allows multiple pages to be registered to the coupling facility with a
single command. z/OS 1.4 and CF level 12 introduce two new “batch” processes to:
• Write And Register Multiple (WARM) pages of a group buffer pool with a single
command.
• Read multiple pages from a group buffer pool for castout processing with a single CF
read request. The actual command is called Read For Castout Multiple (RFCOM).
This enhancement reduces the data sharing overhead for most workloads. The most
benefit is expected for workloads which update large numbers of pages belonging to
GBP-dependent objects, for example, batch workloads.


Inter-DB2 Buffer Pool Coherency

(diagram: TX1 on DB2A takes an S lock on P1, reads P1 into its buffer pool (1),
uses it, commits, and releases its locks; TX2 on DB2B takes an X lock on P1,
reads P1 (2), changes it to P1' in its buffer pool (3), and at commit writes P1'
to the group buffer pool (4), which cross-invalidates (XI) DB2A's copy (5); TX3
on DB2A then takes an S lock on P1, detects that its copy is invalid, refreshes
P1' from the GBP (6), and uses P1')

1) Updater tells   2) Reader detects   3) Reader refreshes

Figure 10-13. Inter-DB2 Buffer Pool Coherency CG381.0

Notes:
Applications can access data from any DB2 subsystem in the data sharing group. Many
subsystems can potentially read and write the same data. DB2 uses special data sharing
locking and caching mechanisms to ensure data consistency. This visual provides a brief
overview of how shared data is updated and how DB2 protects the consistency of that
data.
Suppose that an application issues an UPDATE statement from DB2A and that the data
does not reside in the member’s buffer pool or in the group buffer pool. In this instance,
DB2A must retrieve the data from disk and get the appropriate locks to prevent another
DB2 from updating the same record at the same time.
Because no other DB2 subsystem shares the table at this time, DB2 does not need to use
data sharing integrity mechanisms to process for DB2A’s update.
Next, suppose another application, running on DB2B, needs to update that same data
page. Now inter-DB2 interest exists (both DB2A and DB2B are using this page set). After
DB2B updates the data, it moves a copy of the data page into the group buffer pool


(both primary and secondary) in the coupling facility, and the data page is invalidated in
DB2A’s buffer pool. Cross-invalidation occurs from the group buffer pool.
Now, when DB2A needs to read the data, the data page in its own buffer pool is invalid.
Therefore, it reads the latest copy from the (primary) group buffer pool.
If the group buffer pool is allocated in a coupling facility with CFLEVEL= 0 or 1, then DB2
registers one page at a time in the group buffer pool.
When the group buffer pool is allocated in a coupling facility with CFLEVEL= 2 or higher,
DB2 can register a list of pages that are being prefetched with one request to the coupling
facility. This can be used for sequential prefetch (including sequential detection) and list
prefetch.
DB2 does not include on the list any valid pages that are found in the local virtual buffer
pool or hiperpool.
For those pages that are cached as “changed” in the group buffer pool, or those that are
locked for castout, DB2 still retrieves the changed page from the group buffer pool one at a
time. For large, sequential queries, there most likely won’t be any changed pages in the
group buffer pool.
For pages that are cached as “clean” in the group buffer pool, DB2 can get the pages from
the group buffer pool (one page at a time), or can include the pages in the DASD read I/O
request, depending on which is most efficient.


CASTOUT Processing

When will CASTOUT occur?

• CLASST exceeded
• GBPOOLT exceeded
• GBP checkpoint
• No more inter-DB2 interest in the page set
• GBP being rebuilt, but the alternate GBP is not big enough to contain the pages
• GBP dependency goes away

(diagram: during castout, pages move from the group buffer pool through a
member's castout buffer to the shared data on disk; DB2A and DB2B each keep
local buffer pools)

Figure 10-14. CASTOUT Processing CG381.0

Notes:
Periodically, DB2 must write changed pages from the group buffer pool to disk. This
process is called castout.
There is no physical connection between the group buffer pool and DASD, so the castout
process involves reading the pages from the group buffer pool into a group member's
private buffer (not part of the member's buffer pool storage) and writing the page from the
private buffer to DASD.
Castout is triggered when:
• A GBP checkpoint is taken.
• The GBP castout threshold is reached.
• The class castout threshold is reached.
• GBP dependency is removed for a page set.
Within a group buffer pool, there are a number of castout classes; the number of classes is
an internal value set by DB2. Data sets (DB2 page sets or partitions) using the group buffer
pool are mapped to a specific castout class. DB2 will preferably have only one data set


assigned to a particular castout class, although it is possible to have more than one data
set mapped into the same castout class, depending on how many data sets are using the
group buffer pool concurrently.
Castout classes are used to limit the number of changed pages a data set can have in the
group buffer pool at any one time, thereby limiting the amount of I/O to the data set at
castout time. (Large amounts of I/O could cause DASD contention.) This limitation is
achieved through the use of the castout class threshold.
The default value of the castout class threshold parameter is 5 (a new default in V8; the
V7 default was 10), which means that castout is initiated for a particular class when 5% of
the group buffer pool contains changed pages for that class (or, if only one data set is
assigned to that class, changed pages for that data set). The castout class threshold
applies to all castout classes. You can change the castout class threshold by using the
ALTER GROUPBUFFERPOOL command.
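As a quick worked example of the threshold arithmetic (the GBP size and page counts below are made-up numbers):

```python
# Class castout threshold (CLASST) arithmetic: with the V8 default of 5,
# castout for a class starts once its changed pages exceed 5% of the GBP.
def class_castout_triggered(changed_pages, gbp_pages, classt_pct=5):
    return changed_pages > gbp_pages * classt_pct / 100

gbp_pages = 20000                                  # hypothetical GBP size
print(class_castout_triggered(900, gbp_pages))     # False: 4.5% of GBP
print(class_castout_triggered(1100, gbp_pages))    # True: 5.5% of GBP
print(class_castout_triggered(1100, gbp_pages, classt_pct=10))  # False
```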
Data sets have a group buffer pool castout owner assigned to them. The group buffer pool
castout owner is the first member to express write interest in the data set. After castout
ownership is assigned, subsequent updating DB2 subsystems become backup owners.
One of the backup owners becomes the castout owner when the original castout owner no
longer has read-write interest in the page set or partition. At castout time, the castout owner
is responsible for enforcing the actual castout process for all changed pages for the data
set.


CF Request Batching

• GBP writes and CASTOUT processing
• What are the exploitation prerequisites?
  - New commands in z/OS 1.4
  - CF control code shipped with CFLEVEL 12
• z/OS 1.4 commands:
  - WARM (Write And Register Multiple): registers and writes multiple pages to a GBP
  - RFCOM (Read For Castout Multiple): reads multiple pages from a GBP for
    CASTOUT processing

Figure 10-15. CF Request Batching CG381.0

Notes:
The current architecture allows multiple pages to be registered to the coupling facility with a
single command.
z/OS 1.4 and CF level 12 introduce two new “batch” processes to:
• Write And Register Multiple (WARM) pages of a group buffer pool with a single
command.
• Read multiple pages from a group buffer pool for castout processing with a single CF
read request. The actual command is called Read For Castout Multiple (RFCOM).
When available, Version 8 data sharing will use these new CF commands to reduce the
amount of traffic to and from the coupling facility for writes to group buffer pools and reads
for castout processing, thus reducing the data sharing overhead for most workloads.
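The traffic reduction is easy to see with a little arithmetic (the batch size here is a made-up illustration, not the actual WARM/RFCOM limit):

```python
# Batching turns one CF command per page into one command per batch.
import math

def cf_commands(pages, pages_per_command):
    return math.ceil(pages / pages_per_command)

pages = 128                                # hypothetical castout set size
print(cf_commands(pages, 1))               # 128 single-page commands
print(cf_commands(pages, 8))               # 16 batched commands
```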


Benefits and Management


Reduced traffic to and from CF
Reducing data sharing overhead for most workloads
Workloads updating large numbers of pages for
GBP-dependent objects - BATCH
Currently uses the batched commands if >1 page
Will change based on performance measurements
Statistics and accounting records updated for measurement


Figure 10-16. Benefits and Management CG381.0

Notes:
CF request batching allows Version 8 to reduce the amount of traffic to and from the
coupling facility for both writes to group buffer pools and reads from the group buffer pool
for castout processing, thus reducing the data sharing overhead for most workloads.
The most benefit is expected for workloads that update large numbers of pages belonging
to GBP-dependent objects, for example, batch workloads.
CF request batching also benefits DB2 performance in other ways. DB2 commit processing
performance is improved, where any remaining changed pages must be synchronously
written to the group buffer pool during commit processing. For GBPCACHE(ALL) page sets,
DB2 is able to more efficiently write prefetched pages into the group buffer pool as it reads
them from DASD.
DB2 currently uses the CF batching commands to read and write pages to and from the
group buffer pool if more than one page needs to be read or written. However, this behavior
may change after further performance implications are known.


DB2 instrumentation (statistics IFCID 2 and accounting IFCIDs 3 and 148) records are enhanced
to reflect the usage of these new commands. The DB2 PM and PE accounting and
statistics reports are enhanced to externalize the new counters that indicate the number of
WARM and RFCOM requests.



10.3 Improved LPL Recovery


Improved LPL Recovery


Today you have to manually initiate recovery of LPL pages
Issue the -START DATABASE command
In V8, recovery is simpler and faster
DB2 initiates automatic recovery of LPL pages
New LPL recovery locking
Enhanced LPL messages


Figure 10-17. Improved LPL Recovery CG381.0

Notes:
Prior to Version 8, you have to recover pages that DB2 put into the logical page list (LPL)
manually. DB2 Version 8 automatically attempts to recover LPL pages as they go into LPL,
when it determines that the recovery will probably succeed. (DB2 does not attempt
automatic LPL recovery for pages put into LPL by disk I/O errors.)
Currently (V7 and before), LPL recovery requires exclusive access to the table space page
set. Version 8 introduces a new serialization mechanism whereby LPL recovery is much
less disruptive.
In addition, instrumentation has improved whereby message DSNB250E is enhanced to
indicate the reason the pages are added to the LPL.


LPL Recovery Today
Use DISPLAY DATABASE command to establish what page sets
have pages on the LPL
DB2 can automatically recover pages when group buffer pools
are defined with AUTOREC(YES), the default
Only for GRECP, not LPL
Issue the command START DATABASE
Run the RECOVER utility on the object
Run the LOAD utility with the REPLACE option on the object

None of the above works if retained locks are held on the object.
You must restart any failed DB2 that is holding
those locks.


Figure 10-18. LPL Recovery Today CG381.0

Notes:
The logical page list (LPL) contains a list of pages in logical error that could not be read or
written for "must-complete" operations such as commit or a restart.
DB2 can put pages into logical error status and place them in LPL for a number of reasons:
• Transient disk read and write problems that can be fixed without redefining new disk
tracks or volumes (data sharing and non-data sharing)
• A problem with the coupling facility (CF)
• Channel failure to the CF
• Channel failure to DASD
• Locks being held by a failed subsystem, preventing access to the desired page
DB2 customers are demanding higher and higher levels of availability. However, once a
page is entered into the LPL, that page is inaccessible until it is recovered.


The LPL is kept in the DBET and therefore in the SCA in data sharing environments. The
information is therefore accessible to all members. Applications requiring access to data in
LPL will receive the usual "resource unavailable" SQLCODE.
The -DISPLAY DATABASE command can be used to find out what page sets have pages
in LPL which must be recovered:
-DB1G DIS DB(DSNDB01) SPACENAM(*) LIMIT(*) LPL ONLY
If LPL entries exist, you need to manually issue the START DATABASE command with the
SPACENAM option, to initiate LPL recovery, for example:
-DB1G STA DB(DSNDB01) SPACENAM(*) ACCESS(RW)
DB2 will then read the DB2 log and apply any changes to the page set. The -START
DATABASE command drains the entire page set or partition, therefore making the entire
page set or partition unavailable for the duration of the LPL recovery process, even if only
one page is in the LPL for that page set or partition.
The RECOVER and LOAD utilities can also be used to recover LPL pages. If the START
DATABASE command fails to successfully recover the LPL pages, you are forced to
recover the whole page set using the RECOVER utility.
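As an illustrative sketch (the database and table space names here are assumptions, not from the course material), a RECOVER utility control statement for such a page set might look like:

```
//* Hypothetical utility job step; DSNDB04.TSEXAMP is an illustrative name
RECOVER TABLESPACE DSNDB04.TSEXAMP
```

The utility recovers the entire page set, which also clears any LPL entries for it.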


Automatic LPL Recovery
Recovery is simplified and faster
Requires less manual intervention
DB2 initiates automatic recovery of LPL pages
At point when pages are added to the LPL
Automatic recovery not initiated for ALL LPL situations
Enhanced messages:
As pages are added to LPL, DSNB250E shows the reason
During LPL recovery (automatic or not), DSNI042I, DSNI043I, and
DSNI044I with log record ranges being applied
(Automatic) LPL recovery has started, DSNI006I
Automatic LPL recovery suppressed, DSNB357I
(Automatic) LPL recovery progress message(s), DSNI022I
(Automatic) LPL recovery successful, DSNI021I
(Automatic) LPL recovery failed, DSNI005I
Recover as per Version 7


Figure 10-19. Automatic LPL Recovery CG381.0

Notes:
DB2 Version 8 automatically attempts to recover pages at the time they are added to the
LPL. When pages are added to the LPL, DB2 issues message DSNB250E to indicate the
LPL page range, the reason for adding the pages, and the names of the database and the
page set or partition, and the reason for adding the page to the LPL.
Automatic LPL recovery is not initiated by DB2 in the following situations:
• DASD I/O error
• During DB2 restart or end_restart time
• GBP structure failure
• GBP 100% loss of connectivity
DB2 issues the DSNI006I message to indicate the start of automatic LPL recovery,
otherwise DB2 issues the message DSNB357I to indicate the reason why the automatic
LPL recovery processor is suppressed.
If the automatic LPL recovery runs successfully, LPL pages are deleted from the LPL and
DB2 issues the message DSNI021I to indicate the completion.


If automatic LPL recovery does not run successfully, the pages are kept in the LPL and
DB2 issues message DSNI005I to indicate the failure. You must then recover the pages in
the LPL manually, as in Version 7.
To recover the LPL pages manually, first check the reason type and take action based on
any console messages that describe system conditions. Then perform one of the following
actions to recover pages from the LPL:
• Issue the START DATABASE command with the SPACENAM option.
• Run the RECOVER utility or LOAD utility with REPLACE option.
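A hedged example of the manual sequence (the member, database, and space names are illustrative):

```
-DB1G DISPLAY DATABASE(DSNDB04) SPACENAM(TSEXAMP) LPL ONLY
-DB1G START DATABASE(DSNDB04) SPACENAM(TSEXAMP) ACCESS(RW)
```

If the START DATABASE command cannot clear the LPL entries, fall back to the RECOVER utility or the LOAD utility with the REPLACE option.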


Less Disruptive LPL Recovery
What happens today?
Issuing the -START DATABASE command, drains the entire page set
or partition
Drain forces requesting users to wait for completion of recovery
Very disruptive especially where only a few pages exist in the LPL
and/or DB2 must go far back in the logs to apply all the changes
New support
Less disruptive in the automatically initiated LPL recovery and -START
DATABASE command processing
Avoids the DRAIN operation


Figure 10-20. Less Disruptive LPL Recovery CG381.0

Notes:
Today, when you issue the -START DATABASE command to recover LPL pages, the
command must drain the entire page set or partition. The "drain" means that the command
must wait until all current users of the page set or partition reach their next commit point.
All users requesting new access to the page set or partition are also suspended and must
wait until the recovery completes (or until the user times out). Therefore, the drain
operation can be very disruptive to other work that is running in the system, especially in
the case where only one or a few pages are in LPL.
In Version 8, the -START DATABASE command, and also the automatic LPL recovery
function we have just talked about, now help to avoid the drain operation so that the
recovery of the LPL pages can be done much less disruptively.


New LPL Recovery Locking


Acquire a WRITE CLAIM instead of DRAIN ALL
Other pages can still be accessed during LPL recovery
Serializes with utility functions like RECOVER or LOAD, which can
also resolve LPL recovery
Serialize with DDL
A conditional X mode LPL recovery lock with COMMIT duration
Acquired by the automatic LPL recovery process
Released when LPL processing finishes
Manual -START DATABASE method also acquires the conditional
X mode lock
Serializes with the automatic LPL recovery processor
Serializes with other -START DATABASE commands


Figure 10-21. New LPL Recovery Locking CG381.0

Notes:
The locking and serialization schemes in the -START DATABASE command have changed
when doing the LPL recovery. DB2 V8 makes a WRITE CLAIM on the page set or partition.
In prior versions of DB2, the -START DATABASE command acquires a DRAIN ALL lock on
the page set or partition when doing the LPL recovery. By acquiring a WRITE CLAIM
instead of a DRAIN ALL, the “good” pages can still be accessed by SQL while the -START
DATABASE is recovering the LPL pages.
This new locking strategy is implemented for both automatic LPL recovery and LPL
recovery resulting from a -START DATABASE command.
The WRITE CLAIM serializes with the utility functions like RECOVER or LOAD, which can
also recover objects from an LPL pending status. The WRITE CLAIM also lets automatic
LPL recovery and the -START DATABASE command serialize with the -STOP DATABASE
command and the DROP TABLESPACE statement which currently acquire DRAIN ALL
lock.
A new "LPL recovery" lock type is also introduced to enforce that only one LPL recovery
process is running at a time for a given page set or partition. The conditional X mode LPL


recovery lock must be acquired by the LPL recovery process and released when it finishes
the job. If an LPL recovery process is already in progress when a subsequent one is
initiated (either automatically or manually), then the second recovery process is not
scheduled. It is blocked by the conditional LPL recovery lock already held. When the first
LPL recovery process completes, it will check for more work that is outstanding on the
same page set or partition before it terminates.


LPL Serviceability Enhancements


New reason-type and trace-id are added into the DSNB250E
message:
DSNB250E csect-name ,............
LPL TRACE ID=traceid, LPL REASON TYPE=rsn
The rsn identifies the reason why the pages are added into LPL
The traceid identifies the spot in the DB2 code that added the
page to LPL
DSNI042I, DSNI043I, and DSNI044I
Log ranges applied during LPL recovery

More help for us:


Determining why and where the page is added into LPL
Determining applied log ranges in case of LPL failure


Figure 10-22. LPL Serviceability Enhancements CG381.0

Notes:
Currently, message DSNB250E is issued whenever a page is added to LPL, but this
message does not provide sufficient information to know exactly why DB2 decided that the
page should be added to LPL. Many times customers report that they have LPL pages, but
it is not apparent why they encountered LPL pages in the first place. Knowing why a page
is added to LPL is the first step to avoiding pages being added to LPL in the future.
DB2 now provides more detailed information as to why a page has been added to the LPL.
A new reason type and a new trace id are added to the message DSNB250E. The new
reason type will explain why the pages are added into LPL. The reason types reported in
message DSNB250E are as follows:
• DASD: DB2 encountered a DASD I/O error when trying to read or write pages on
DASD.
• LOGAPPLY: DB2 cannot apply log records to the pages.
• GBP: DB2 cannot successfully read/externalize the pages from/to the group buffer pool
due to link or structure failure, GBP in rebuild, or GBP was disconnected.


• LOCK: DB2 cannot get the required page latch or page P-lock on the pages.
• CASTOUT: The DB2 Castout processor cannot successfully cast out the pages.
• MASSDEL: DB2 encountered an internal error in the mass delete processor during
phase 2 of commit processing.
In addition, during LPL recovery (automatic or not), DB2 will produce additional messages
indicating the log ranges that are being applied during LPL recovery. The following three
new messages can be seen:
• DSNI042I: This message displays the header page RBA that is used to determine the
LPL or GRECP recovery range for the specified page set. It is displayed once per LPL
recovery, per page set, per data sharing group.
• DSNI043I: This message displays the broad LRSN or RBA range, merged from all
members of the data sharing group. The range is used to determine the LPL or GRECP
recovery range for the specified page set. It is displayed once per LPL recovery, per
member.
• DSNI044I: This message displays the LRSN or RBA range that is used to determine
LPL or GRECP recovery range for the specified page set in the data sharing group
member. This message is displayed once per LPL recovery, per page set, per member.
These messages should allow you to identify more easily which log records have been
applied by LPL recovery, in case LPL recovery fails, and you need to determine the cause
of the failure.



10.4 Restart Light Enhancements


DB2 Restart Light

[Diagram: data sharing members DB1G on image MVSA and DB2G on image MVSB; the failed member DB2G is restarted on MVSA]

Quickly restart a failed DB2 member on another OS/390 image with minimal disruption to release retained locks


Figure 10-23. DB2 Restart Light CG381.0

Notes:
DB2 Version 7 introduced the concept of DB2 “restart light”, which is intended to remove
retained locks with minimal disruption in the event of an MVS system failure. When a DB2
member is started in restart light mode (-START DB2 LIGHT(YES)), DB2 comes up with a
small storage footprint, executes forward and backward restart log recovery, removes the
retained locks, and then self-terminates without accepting any new work.
However, retained locks that pertain to any indoubt units of recovery (URs) will persist, and
the indoubt URs remain in the failed member's log. The data protected by these retained
locks is not available to any other DB2 member until the indoubt URs have been resolved.


Restart Light Enhancements
Now attempts to resolve indoubt URs
Remains up after successful restart
DDF will start if allowed in DSNZPARM
Only indoubt UR commit coordinators allowed to connect
Otherwise RC00F300A2
Can issue -RECOVER INDOUBT to manually resolve the URs
Once all indoubt URs are resolved
DB2 will terminate normally


Figure 10-24. Restart Light Enhancements CG381.0

Notes:
Restart light is improved in DB2 Version 8 to handle indoubt units of recovery.
When DB2 is started with LIGHT(YES) and indoubt URs exist at the end of restart
recovery, DB2 will now remain up and running so that the indoubt URs can be resolved,
either automatically via resynch processing with the commit coordinators or manually via
the -RECOVER INDOUBT operator command. DB2 will also issue a new message,
DSNR052I, to indicate that a LIGHT(YES) DB2 is remaining up and running to resolve
indoubt URs.
If DDF startup is allowed via DSNZPARM, Restart Light will also start DDF to facilitate the
resolution of any distributed indoubt URs. However, no new DDF connections are allowed.
Clients that attempt to connect to a restart light DB2 will be rejected with a return code
indicating that MAXDBAT has been reached. Only resynch requests will be allowed from
DB2 clients.
As with previous versions of DB2, when DB2 is started with LIGHT(YES), it starts with only
a small storage footprint and cannot support SQL requests. For example, the EDM pool
has not been initialized (since it is not needed for log recovery or removing retained locks).


Therefore, a LIGHT(YES) DB2 member that remains up and running to resolve indoubt
URs will not accept any new connection requests, except those that originate from
connection names that have indoubt URs.
If an attempt is made to connect to a LIGHT(YES) DB2 from a connection name that does
not have indoubt URs, then return code 8 with new reason code 00F300A2 is returned with
SQLCODE -923 (similar to the 00F30056 that gets returned for ACCESS MAINT). If
00F300A2 is received from the DSN command processor, then new message DSNE136I is
issued (similar to DSNE132I for ACCESS MAINT).
A connection name with indoubt URs is allowed to connect to the LIGHT(YES) DB2, but it
is not allowed to create a thread. If thread creation is attempted, then return code 8, reason
code 00F300A2 is returned with SQLCODE -923.
Connection requests using the group attach name will not attempt to connect to a DB2
member that is started with LIGHT(YES). Connectors wanting to resynch with a
LIGHT(YES) DB2 member must use that member's subsystem name to connect. (A DB2
member started in light mode will not post startup ECBs that are associated with the group
attach name. It will only post startup ECBs that are associated with that member's
subsystem name.)
For example, the RESYNCHMEMBER(YES) option added in CICS TS 2.2 causes CICS to
force re-connection back to the original DB2 member (using that member's subsystem
name instead of the group attach name) should CICS think that indoubt URs are
outstanding for the last member connected to.
While DB2 remains up and running in light mode, the -DISPLAY THREAD command can
be used to monitor the progress of the indoubt resolution and to display the detailed
information about any indoubt URs that still exist. Also, the -RECOVER INDOUBT
command can be used to manually resolve indoubt URs. However, the following
commands are not allowed (new message DSN9038I):
• DISPLAY, START, STOP DATABASE
• DISPLAY, START, STOP RLIMIT
• SET SYSPARM
Once the final indoubt UR has been resolved, DB2 issues new message DSNR053I to
indicate that there are no remaining incomplete URs, and that the DB2 member will
self-terminate via the normal DB2 shutdown process. Alternatively, you can manually shut
down DB2 running in light mode, with the -STOP DB2 command. If this is done and there
still exist indoubt URs, then existing message DSNR046I is issued to inform you that
incomplete URs still exist.
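A possible operator sequence for a restart light member with indoubt URs is sketched below. The member name DB2G is illustrative, and the exact RECOVER INDOUBT operands (coordinator and correlation ID) are assumptions to be checked against the Command Reference:

```
-DB2G START DB2 LIGHT(YES)
-DB2G DISPLAY THREAD(*) TYPE(INDOUBT)
-DB2G RECOVER INDOUBT ACTION(COMMIT) ID(correlation-id)
```

Once the last indoubt UR is resolved, the member shuts itself down through the normal DB2 shutdown process.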


10.5 Change to IMMEDWRITE Bind Option


Change to IMMEDWRITE Option


Default 'Immediate Write' process changed:
Write changed pages during phase 1 of commit processing
Previously in data sharing default was to write during phase 2
ZPARM IMMEDWRI(PH1) option removed
BIND IMMEDWRITE(PH1) option kept for compatibility
CPU cost of writing changed pages to GBP charged to allied TCB
Previously IMMEDWRITE NO option charged cost to MSTR SRB
Pages were written as part of phase 2 commit processing
Still can use IMMEDWRITE(YES)
For improved concurrency for spawned applications
Can then see the update from the previous transaction
Beware of performance impact


Figure 10-25. Change to IMMEDWRITE Option CG381.0

Notes:
Consider the situation where one transaction, updates DB2 data using INSERT, UPDATE,
or DELETE, and then, before completing (phase 2 of) commit, spawns a second
transaction that is dependent on the updates that were made by the first transaction. This
type of relationship is referred to as “ordered dependencies” between
transactions. Consider the following scenario.
We have a two-way data sharing group DB0G with members DB1G and DB2G.
Transaction T1, running on member DB1G, makes an update to a page. Transaction T2,
spawned by T1 and dependent on the updates made by T1, runs on member DB2G. If
transaction T2 is not bound with isolation repeatable read (RR), and the updated page (on
DB1G) has been used previously by DB2G and is still in its local buffer pool, there is a
chance, due to lock avoidance, that T2 uses an old copy of the same page in the virtual
buffer pool of DB2G if T1 still has not committed the update.
Here are some possible work-arounds for this problem:
• Execute the two transactions on the same member.
• Bind transaction T2 with ISOLATION(RR).


Uempty • Make T1 commit before spawning T2.


DB2 V5 APAR PQ22895 introduced a new bind/rebind option that can be considered when
none of the above actions are desirable. IMMEDWRITE(YES) allows the user to specify
that DB2 should immediately write updated GBP dependent buffers to the Coupling Facility
instead of waiting until commit or rollback.
DB2 V6 APAR PQ25337 delivers the functionality introduced by APAR PQ22895 in DB2 V5
with the addition of a third value for IMMEDWRITE and a new DSNZPARM parameter.
IMMEDWRITE(PH1) allows the user to specify that a given plan or package should write
updated group buffer pool dependent pages to the Coupling Facility at or before Phase 1 of
commit. If the transaction subsequently rolls back, the pages will be updated again during
the rollback process and will be written again to the CF at the end of abort. This option is
only useful if the dependent transaction is spawned during syncpoint processing of the
originating transaction.
In prior versions of DB2, changed pages in a data sharing environment are written during
phase 2 of the commit process, unless otherwise specified by the IMMEDWRITE BIND
parameter, or IMMEDWRI DSNZPARM parameter.
This enhancement changes the default processing to write changed pages during phase 1
of commit processing. The options you can specify for the IMMEDWRITE BIND parameter
remain unchanged. However, whether you specify “NO” or “PH1”, the behavior will be
identical, changed pages are written during phase 1 of commit processing. The “PH1”
option remains for compatibility reasons, but its usage should be discouraged. The
DSNZPARM IMMEDWRI parameter will no longer accept a value of “PH1”. With this
change, pages are written at the latest during phase 1, and never during phase 2, of
commit processing.
The impact of IMMEDWRITE YES remains unchanged. Changed pages are written to the
group buffer pool as soon as the buffer updates are complete (so definitely before
committing). Specifying this option may impact performance.
With IMMEDWRITE NO (or PH1) and YES options, the CPU cost of writing the changed
pages to group buffer pool is charged to the allied TCB. Prior to Version 8, this was true
only for PH1 and YES options. For the NO option, the CPU cost of writing the changed
pages to group buffer pool was charged to MSTR SRB, since the pages were written as
part of phase 2 commit processing under MSTR SRB.
This enhancement provides a more accurate accounting for all DB2 workloads. DB2 is now
able to charge more work back to the user who initiated the work in the first place.
Attention: Customers who use the allied TCB time for end user charge back may see
additional CPU cost with this change.
The IMMEDWRITE enhancements are immediately available when you migrate to DB2
Version 8. You do not have to wait until new-function mode.
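For instance, a plan that spawns dependent transactions on another member could be bound as shown in this sketch (the plan name and package list are illustrative, not from the course material):

```
BIND PLAN(SPAWNPL) PKLIST(COLL1.*) IMMEDWRITE(YES)
```

Because updated GBP-dependent pages are then written to the coupling facility as soon as the buffer updates complete, weigh the concurrency benefit against the extra CF write activity.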



10.6 -DISPLAY GROUPBUFFERPOOL Enhancement


-DIS GBPOOL Enhancement


Current CFLEVEL displayed is the 'operational' level
Typically this is the level requested by DB2 during 'connect'
processing
May be lower than the 'actual' CFLEVEL displayed by the
D CF command
DB2 Version 8 -DIS GBPOOL output will now display both
CFLEVELs:
'Operational'
Indicating the capability of the CF from a DB2 functionality
perspective
If GBP duplexed, CFLEVEL equates to primary GBP
'Actual'
The CFCC level as displayed by the D CF command
If GBP duplexed, CFLEVEL equates to primary GBP


Figure 10-26. -DIS GBPOOL Enhancement CG381.0

Notes:
Currently, the CF level displayed by the -DISPLAY GROUPBUFFERPOOL command may
be lower than the actual CF level as displayed by a D CF command.
This enhancement changes -DISPLAY GROUPBUFFERPOOL command output. Instead
of having a single CFLEVEL field, the command now displays both the OPERATIONAL CF
LEVEL, as before, and the ACTUAL CF LEVEL.
The operational CF level indicates the capabilities of the CF from DB2's perspective. The
actual CF level is the CF control code level as displayed by the D CF command.
When DB2 connects to the coupling facility, it requests a CF “function” level. DB2 Version 7
requests a CF level of 7 and DB2 Version 8 requests a CF level of 12. The CF level


Uempty determines the level of function DB2 is able to use when it interacts with the coupling
facility.
Note: The first DB2 member to connect to the coupling facility determines the CF level
that the data sharing group will use.
Note for mixed release groups:
• If the first member to connect to the coupling facility is Version 7, then the actual CF
level will be 7. Subsequent DB2 members (Version 7 or Version 8) interact with the
coupling facility using CF level 7. Implications here are that DB2 Version 8 cannot use
CF request batching.
• If the first member to connect to the coupling facility is Version 8, then the requested
CF level will be 12. Subsequent DB2 members (Version 7 and Version 8) can interact
with the coupling facility using CF level 12. Implications are that the Version 8
members can use CF request batching. (DB2 Version 7 cannot because CF request
batching is not implemented.)
In addition, a number of other messages in the -DIS GBPOOL output have been tidied up.
For example, some counters related to secondary GBP have been removed (because
page writes to the secondary group buffer pool are always the same as writes to the
primary group buffer pool).
The messages that have changed include: DSNB758I, DSNB762I, DSNB764I, DSNB775I,
DSNB776I, DSNB777I, DSNB779I, DSNB786I, DSNB787I, DSNB789I, and
DSNB799I.
Attention: If you have any automation of programs in place to interrogate the output of
the -DISPLAY GROUPBUFFERPOOL command, we recommend you to review these
facilities, as the output from the command has significantly changed.


-DIS GBPOOL Output


-DIS GBPOOL(GBP0) MDETAIL(INTERVAL) GDETAIL(INTERVAL)
DSNB750I =DBA1 DISPLAY FOR GROUP BUFFER POOL GBP0 FOLLOWS
DSNB755I =DBA1 DB2 GROUP BUFFER POOL STATUS
CONNECTED = YES
CURRENT DIRECTORY TO DATA RATIO = 5
PENDING DIRECTORY TO DATA RATIO = 5
CURRENT GBPCACHE ATTRIBUTE = YES
PENDING GBPCACHE ATTRIBUTE = YES
DSNB756I =DBA1 CLASS CASTOUT THRESHOLD = 10%
GROUP BUFFER POOL CASTOUT THRESHOLD = 50%
GROUP BUFFER POOL CHECKPOINT INTERVAL = 8 MINUTES
RECOVERY STATUS = NORMAL
AUTOMATIC RECOVERY = Y
DSNB757I =DBA1 MVS CFRM POLICY STATUS FOR DSNDB2A_GBP0 = NORMAL
. . . . . . .
DSNB758I =DBA1 ALLOCATED SIZE = 8192 KB
VOLATILITY STATUS = VOLATILE
REBUILD STATUS = NONE
CFNAME = NSD1CF
CFLEVEL - OPERATIONAL = 12
CFLEVEL - ACTUAL = 12
DSNB759I =DBA1 NUMBER OF DIRECTORY ENTRIES = 6628
. . . . . . . .

DISPLAY GBPOOL(GBP0) MDETAIL(INTERVAL) GDETAIL(INTERVAL)

© Copyright IBM Corporation 2004

Figure 10-27. -DIS GBPOOL Output CG381.0

Notes:
Here is a partial output from the -DISPLAY GROUPBUFFERPOOL command, showing the
operational and actual CFLEVEL.


Unit 11. Installation and Migration

What This Unit Is About


Version 8 contains significant changes to the architecture and internals
of DB2. These changes are extensive enough to require a migration to
this new version rather than a simple update. Learn about the major
steps in migrating from DB2 UDB for z/OS and OS/390, Version 7 to
DB2 UDB for z/OS Version 8.

What You Should Be Able to Do


After completing this unit, you should be able to:
• Plan for Version 8
• Discuss the migration process and implications
• Describe the changes to the catalog


List of Topics
Planning for Version 8
Installation
Migration and fallback
DB2 catalog changes
msys for Setup DB2 Customization Center
Samples
DB2 Version 8 packaging


Figure 11-1. List of Topics CG381.0

Notes:
Version 8 of DB2 brings major changes to the installation and migration processes.
In this unit, we describe these changes. We assume that you are already familiar with the
installation and migration procedures used by earlier versions of DB2. Please refer to the
DB2 UDB for z/OS Version 8 Installation Guide, GC18-7418 and the DB2 UDB for z/OS
Version 8 Data Sharing Planning and Administration Guide (SC18-7417) for more details.
We define installation as the process of installing a new DB2 subsystem. In this case there
are no compatibility and regression issues. With a newly installed DB2 Version 8
subsystem, you can immediately take advantage of all the new functions in Version 8.
Migration is the process of converting an existing DB2 Version 7 subsystem, user data, and
catalog data, to Version 8. This process is changed with Version 8 of DB2 in order to
minimize the possible impact of regression and fallback incompatibilities.
The key changes to the installation and migration processes are as follows:
• Valid CCSIDs must be defined for ASCII, EBCDIC, and Unicode.


• You must supply your own tailored DSNHDECP module. You can no longer start DB2
with the DB2-supplied DSNHDECP.
• DSNHMCID is a new data-only module in V8. It is required by DB2 facilities that run in
DB2 allied address spaces, such as attaches and utilities.
• Buffer pools of sizes 4 KB, 8 KB, 16 KB, and 32 KB must be defined.
• Only WLM-established stored procedures can be defined.
• The migration process now consists of three distinct phases:
- CM: compatibility mode: During this phase, which can last as long as deemed
necessary, you should execute all the tests needed to ensure that you are happy with
the new version and will not have to fall back to Version 7 later on. In CM, you are
able to fall back to Version 7 in case of problems.
- ENFM: enabling-new-function mode: During this phase you convert the DB2
subsystem to the format that is ready to support the new function in Version 8, by using the
online REORG utility. No fallback to DB2 Version 7 is allowed once ENFM is
entered.
- NFM: new-function mode: This is the target phase, triggered when you execute a
job confirming that all the previous conversion steps have completed successfully
and update the DB2 subsystem or data sharing group as being in new-function
mode.
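As a hedged illustration, the mode that a subsystem or data sharing group is currently in can be verified with the -DISPLAY GROUP command; in Version 8 its output includes a MODE field showing C for compatibility mode, E for enabling-new-function mode, and N for new-function mode. The output below is heavily abbreviated and illustrative only (it reuses the DBA1 subsystem and DSNDB2A group names from earlier examples, and elides the other fields):

```
-DISPLAY GROUP

DSN7100I =DBA1 DSN7GCMD
*** BEGIN DISPLAY OF GROUP(DSNDB2A) GROUP LEVEL(...) MODE(C)
. . .
```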
These phases are discussed in more detail under the following topics:
• Planning for version 8
• Installation
• Migration and fallback
• DB2 catalog changes
• msys for Setup DB2 Customization Center
• Samples
• DB2 packaging



11.1 Planning for Version 8


Planning for Version 8


Review the available documentation
Preventive service planning for latest maintenance
DB2 Version 8 Installation Guide
DB2 Version 8 Program Directory
Review planning considerations
What has changed since DB2 Version 7?
To migrate - must have the SPE applied - PQ48486
DB2 enforces for both data sharing and non-data sharing
New for non-data sharing
PQ48486 -> PTF UQ81009 (11/03)
See also info APAR II13695


Figure 11-2. Planning for Version 8 CG381.0

Notes:
Attention: Version 8 is the first release of DB2 to fully exploit the new 64-bit hardware
and 64-bit operating system. It therefore comes with some firm hardware and software
prerequisite requirements that did not exist in previous versions of DB2. Planning for this
new version is more important than ever.
In the following visuals, we introduce the major z/OS prerequisites for DB2 Version 8. We
will also introduce the significant DB2 prerequisites and release incompatibilities.
Before you migrate to any new version of DB2, it is very important that you fully understand
what has changed from the previous version. This is no less important for DB2 Version 8.
Please refer to the DB2 UDB for z/OS Version 8 Program Directory for an up-to-date list of
prerequisites for Version 8, and to the DB2 UDB for z/OS Version 8 Installation Guide,
SC18-7418 for a complete list of release incompatibilities.
It is also important to keep up-to-date with current software maintenance levels. This is
even more important with DB2 Version 8. DB2 now enforces that you have the correct
prerequisite maintenance implemented before it will allow you to migrate to Version 8. DB2
now enforces this requirement for both data sharing and non-data sharing environments.


The V8 fallback SPE for V7 is PQ48486. Its PTF, UQ81009, has been available since early
November 2003. You must start DB2 V7 at least once with the fallback SPE applied before
you migrate to Version 8. There is also an informational APAR, II13695, that documents
important migration and fallback topics.


z/OS Prerequisites
Important prerequisites exist for DB2 for z/OS Version 8
zSeries box (z800 or z900 or z990)
z/OS 1.3
z/OS 1.3 requires WLM Goal Mode
z/OS running in 64-bit mode

WARNING!
Disaster Recovery

Some functions will require z/OS 1.4 or later


System level point in time functionality requires z/OS 1.5 plus data
storage control units with Flashcopy support
Multi-level (row level) security requires z/OS 1.5 SecureWay Server
(RACF) or equivalent
Up to 100,000 open data sets requires z/OS 1.5
CF request batching in data sharing required z/OS 1.4 and CF Level 12


Figure 11-3. z/OS Prerequisites CG381.0

Notes:
Version 8 can only run on z/Architecture machines and requires that those machines are
running in 64-bit addressing mode. If an attempt is made to start DB2 Version 8 on a
machine that is not in 64-bit mode, DB2 issues an error message during startup and terminates itself.
DB2 Version 8 also requires z/OS V1R3 or above, or more precisely z/OS V1.3 Base
Services (5694-A01) or z/OS.e (5655-G52) plus APARs OW56073, OW56074, OA03519,
and OA03095, with DFSMS V1.3, Language Environment, and z/OS V1.3 Security Server
(RACF). If an attempt is made to start DB2 Version 8 on an OS/390 or a z/OS R1 or R2
system, then once again DB2 issues an error message during startup and terminates itself.
Attention: Please note that these prerequisites have important implications for disaster
recovery and sysplex cross-system restart scenarios. All systems that DB2 Version 8 runs
on or may run on in an emergency or test scenario must meet these hardware and
software prerequisites. So you really need to plan to have these important prerequisites in
place in all the environments where you may need to start DB2 Version 8, before you
install Version 8 in any environment.
In addition, some new functions in Version 8 have further prerequisites:


• The new DB2 system-level backup and recovery solution requires DFSMS shipped with
z/OS 1.5, plus data storage control units that support FlashCopy.
• The multi-level security functionality requires the z/OS 1.5 Secureway Server (RACF) or
equivalent.
• When you are using z/OS 1.5, you can increase the maximum number
of open data sets (DSMAX) to 100,000. When running on z/OS 1.3 or 1.4, DB2
enforces a limit of 32,767 open data sets. For more information, see “Up to 100 000 Open
Data Sets” on page 1-54.
• The data sharing enhancement where DB2 can batch page requests to the group buffer
pool requires z/OS 1.4 and CF Level 12.
• The lock holder priority increase enhancement requires z/OS 1.4 WLM functionality.
Once again, please refer to the DB2 UDB for z/OS Version 8 Program Directory for an
up-to-date list of these functional prerequisites.
Despite the work done in DB2 Version 8 to remove many of the restrictions that contribute
to memory constraints, and despite the new 64-bit architecture, not all of the restrictions
have been removed, and some memory constraint problems may still occur for very large
systems. DB2 V8 has gone a long way; however, there is still more work to be done.
DB2 Version 8 requires somewhat more memory than Version 7, typically around 10%. This
is largely because many memory structures have changed to support 64-bit addressing.
Although Version 8 introduces many changes designed to provide memory relief, with
many memory structures moving above the bar (for example; buffer pools, RID pool, sort
pool, EDM pool, and compression dictionaries), you are not isolated from some memory
constraints in very large systems as many thread related structures still remain below the
bar. In addition, DB2 is not the only user of the real memory on the machine. It has to be
shared by all other subsystems and applications on the LPAR.
We, therefore, suggest that after you migrate to Version 8, you continue to monitor the
paging rates on your system. This will give an indication of your memory usage and show if
it is over-committed.


Other Prerequisites
Customize the z/OS Conversion Services
Used to convert to and from Unicode for example
COBOL
Migrate to IBM COBOL V2 or V3
Older COBOL compilers cannot be used with DB2 V8
No OS/VS COBOL or VS COBOL II
Can run older COBOL load modules under LE
PL/I
Migrate to Enterprise PL/I for z/OS and OS/390 V3R2 or
PL/I for MVS and VM V1R3
All versions of OS PL/I (including 2.3) and VisualAge
PL/I no longer supported
Obtain latest version of other products
IMS V7 or above
CICS TS 1.3, 2.2 or 2.3


Figure 11-4. Other Prerequisites CG381.0

Notes:
In order to run DB2 Version 8, a number of other prerequisites must also be in place: for
example, the z/OS Unicode Conversion Services must be customized, and the programming
language compilers have to be at a certain level to be able to work with the V8 precompiler.

Unicode Conversion Services


You need to define and customize the z/OS Conversion Services as described in the z/OS
V1R3.0 Support for Unicode Using Conversion Services, SA22-7649-01. DB2 uses this
z/OS service to convert to and from Unicode.
The z/OS Conversion Services must be configured and active before you migrate DB2 to
Version 8 compatibility mode. Even in compatibility mode, when no new function is
enabled, DB2 needs to convert all the SQL to Unicode in order to process it. All SQL
statements are parsed in Unicode. In fact, DB2 Version 8 will not start if there is no
conversion available to and from the EBCDIC and ASCII CCSIDs defined in DSNHDECP
and UTF-8 (1208).


Further information can be found in Appendix A of the DB2 UDB for z/OS Version 8
Installation Guide, SC18-7418, and the informational APARs II13048 and II13049.

COBOL Support
DB2 has a commitment to support currently supported releases of other IBM software.
However, OS/VS COBOL and COBOL II are no longer supported by IBM, and therefore
DB2 Version 8 is also removing support for these products. Only the following COBOL
compilers are supported by the DB2 precompiler; however, older COBOL load libraries can
still run with DB2 and LE:
• Enterprise COBOL for z/OS and OS/390 Version 3 (5655-G53)
• IBM COBOL for OS/390 & VM Version 2 Release 2 (5648-A25)
When using the integrated SQL coprocessor, you should use:
• Enterprise COBOL for z/OS and OS/390 V3.2 or V3.3 (5655-G53) with APAR PQ83744

PL/I
To use PL/I with DB2 Version 8, you should use any of the following products:
• IBM Enterprise PL/I for z/OS and OS/390 V3.2 (5655-H31)
• IBM PL/I for MVS and VM V1.1 (5688-235)
Using the DB2 precompiler services requires the DB2 coprocessor provided with:
• IBM Enterprise PL/I for z/OS and OS/390 V3.2 and APAR PQ84513 or later releases.

C/C++
When coding in C or C++, make sure to use any of the following products:
• C/C ++ (with or without Debug Tool), optional feature of z/OS
• SAA AD/Cycle C/370 Compiler V1.2 (5688-216)
DB2 UDB for z/OS V8 does not yet support the coprocessor or precompiler services with C
or C++. Use of DB2 precompiler services with C requires the DB2 coprocessor provided
with z/OS V1.5 and the DB2 UDB for z/OS and OS/390 V7 libraries. For use of DB2 UDB
for z/OS V8 function, use the precompiler as an alternative.
DB2 also supports programming languages such as Java, FORTRAN, Assembler, and
REXX. To find out what products are required, refer to the IBM software announcement
letter for DB2 Version 8, 204-029 for the US, or equivalent for other countries.


IMS
The following versions of IMS (Information Management System) can work with DB2:
• IMS V9 (5655-J38)
• IMS V8 (5655-C56)
• IMS V7 (5655-B01)

CICS
CICS (Customer Information Control System) is required to be at any of the following
versions to support DB2 V8:
• CICS Transaction Server for z/OS V2.2 or V2.3 (5697-E93)
• CICS Transaction Server for OS/390 V1.3 (5655-147)
Although CICS TS V1.3 is supported, we strongly suggest that you plan to use the latest
versions of these products, to help you to maximize the use of the new functions in DB2.


IBM DB2 Tools
Continue to support all IBM supported releases of DB2
Support DB2 Version 8 at GA
Some tools will need a version/release upgrade
Some tools will need a PTF upgrade
Available through normal service
All tools must be at the right level BEFORE upgrading DB2 to
Version 8 CM
Otherwise, the tool and/or some tool functions may no longer work


Figure 11-5. IBM DB2 Tools CG381.0

Notes:
Version 8 brings many changes to DB2 that impact almost every IBM DB2 tool:
• Unicode catalog tables
• Long names in the catalog
• Online schema evolution
• New log records
• And so on
Many of these enhancements require substantial changes to some IBM DB2 Tools, which
require a new version or release to be shipped, while other tools are less impacted and
support for DB2 Version 8 can be delivered through the normal service stream (via PTF).
Irrespective of the tool, we highly recommend that you plan to implement the correct level
and version of each tool before you migrate from DB2 Version 7 to Version 8. If you have not
installed a version of a tool that supports DB2 Version 8, unpredictable results may occur.
Some tools will no longer work at all, while others may suffer a loss of function.


DB2 Early Code Considerations


Early code MUST be installed very early on in your plans
Version 8 early code works fine with V7 system
Version 7 early code requires:
APAR PQ59805 / PTF UQ67466
IPL is required for early code changes / installation


Figure 11-6. DB2 Early Code Considerations CG381.0

Notes:
As in previous releases and versions, the DB2 V8 early code, which lives in SDSNLINK, is
downward compatible with Version 7. Likewise, if your V7 is at the prerequisite
maintenance level (with APAR PQ59805, PTF UQ67466 applied), your V7 early code is
upward compatible with V8. Therefore, you can run both V7 and V8 systems on the same
LPAR. However, it is probably a good idea to be current on maintenance for your V7
systems, and have the fallback SPE (PQ48486) and its prerequisites applied.
In case you also have V6 systems running on the same LPAR, you should use the V7 early
code, as that is downward compatible with V6 and upward compatible with V8.
Note that activating changes in the early code require an IPL.


Other Recommendations / Information
Ensure a fully tested "Stand Alone Dump" is available
Install the z/OS version of SAD with the "High Virtual Option"
Watch and grow CSA when increasing thread limits
Start by running a DB2 for OS/390 V7 subsystem under a
64-bit operating system
Don't go 64-bit and V8 on the same day
Set low values for everything in the beginning
Buffer pools, EDM, Thread limits
MVS runs out of auxiliary storage slots at 4TB
SDSNLOAD library is now (mandatory) a PDSE


Figure 11-7. Other Recommendations / Information CG381.0

Notes:
In order to minimize the impact of changing both z/OS and DB2, it is appropriate to
experiment with Global Trace and diagnostics in general in a pilot system with minimal
users. A fully tested Stand Alone Dump procedure should be available, with the High
Virtual Option installed.
The growth of threads and the corresponding ECSA usage should be kept under control.
The ECSA previously used by IRLM is now freed up if you were using PC=NO.
Start by running a DB2 UDB for OS/390 Version 7 subsystem under a 64-bit operating
system, and set low values for everything in the beginning; then gradually increase the
values based on resource consumption as you increase the buffer pools, the EDM pool,
and the number of threads.
The DB2 SDSNLOAD data set, which contains most of the DB2 executable code, must
now be allocated as a PDSE data set.


Release Incompatibilities (1 of 3)
TYPE 2 keyword has been deprecated on CREATE INDEX
If specified, it will be ignored
DB2 now requires BP8K0 and BP16K0 buffer pools
Some catalog tables now use these buffer pools
Declared temporary tables need at least one table space with
a page size of 8 KB or greater
Global temporary tables need a 16 KB buffer pool
Change to data types and lengths for some special registers
May need to review your applications
You must now have a customized DSNHDECP
You must now specify valid CCSIDs in DSNHDECP
New data-only load module DSNHMCID


Figure 11-8. Release Incompatibilities (1 of 3) CG381.0

Notes:
Here we discuss the major incompatibilities that you may encounter when migrating to DB2
Version 8.

Type 1 Indexes
Hopefully, now we have seen the end of Type 1 indexes.
Support for Type 1 indexes was planned to be dropped in DB2 Version 6. Before migrating
from Version 5 to Version 6 or Version 7, you were asked to migrate all of your indexes from
Type 1 to Type 2. The migration job DSNTIJTC would abend if it found any unsupported
objects, including Type 1 indexes. Some customers found this too restrictive.
After APAR PQ38035, the migration job DSNTIJTC completes successfully even if it finds
any unsupported objects, including Type 1 indexes. However, when you try to use these
unsupported objects on Version 6 or Version 7, DB2 returns a resource unavailable error
with SQLCODE -904 and reason code 00C900CF. Although these indexes were unusable
in Version 6 and Version 7, you could still DROP them or convert them to Type 2 indexes.


The DB2 Version 8 catalog migration job, DSNTIJTC, will now once again abend if it
encounters any Type 1 indexes. In addition, DB2 ignores the TYPE 2 keyword if it is
specified on the CREATE INDEX or ALTER INDEX statement.

DB2 Now Requires BP8K0 and BP16K0 Buffer Pools


Support for longer names has caused some DB2 catalog table rows to grow beyond the
current catalog page size of 4K. So, DB2 Version 8 moves some catalog table spaces from
BP0 to BP8K0 and BP16K0.

TEMP Database Needs At Least One 8K Table Space


In the database that is defined “AS TEMP” (TYPE=’T’ in SYSDATABASE), you need to
define at least one table space with an 8K page size or more, if you want to use declared
global temporary tables in DB2 Version 8. Otherwise you receive:
DSNT408I SQLCODE = -904, ERROR: UNSUCCESSFUL EXECUTION CAUSED BY AN UNAVAILABLE
RESOURCE. REASON 00E7009A, TYPE OF RESOURCE 200, AND RESOURCE NAME TABLESPACE IN tempdb
This must be available before you move to Version 8 compatibility mode. DB2 needs to
create a copy of some catalog tables when a declared temporary table is created and used
in Version 8 and these tables have page sizes larger than 4K.
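For example, a minimal sketch of such a table space definition follows. The database name TEMPDB01 and table space name TEMP8K1 are hypothetical; use the name of the database that is defined AS TEMP at your installation:

```sql
CREATE TABLESPACE TEMP8K1 IN TEMPDB01
  BUFFERPOOL BP8K0
  SEGSIZE 16;
```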

Changes to Some Special Registers


DB2 Version 8 changes the data type and length of some special registers. These are the
changes:
• CURRENT OPTIMIZATION HINT is now VARCHAR(128)
• CURRENT PACKAGESET is now VARCHAR(128)
• CURRENT SQLID is now VARCHAR(8)
• CURRENT USER is now VARCHAR(8)
• CURRENT PATH is now VARCHAR(2048)
If your application program uses these registers in comparison statements such as a LIKE
predicate, you may need to adjust your application program for the new lengths.
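As an illustrative sketch (the host variable name CURPATH is hypothetical), an application that retrieves CURRENT PATH into a host variable now needs that variable to accommodate VARCHAR(2048):

```sql
-- The receiving host variable must now be declared to hold
-- VARCHAR(2048), not the shorter length used in earlier versions
EXEC SQL SELECT CURRENT PATH
  INTO :CURPATH
  FROM SYSIBM.SYSDUMMY1;
```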

DB2 Start-up and Precompile Require a User-supplied DSNHDECP


This change only impacts you if you have not previously generated your own DSNHDECP
module and have relied on the IBM supplied default. You will now need to maintain your
own DSNHDECP with your own tailored defaults.
DSNHDECP is a data only module that supplies various application programming defaults,
like default date and time formats and default code pages. This module is normally tailored
by the DSNTINST CLIST and generated by the installation job DSNTIJUZ.
DB2 ships an IBM-supplied default version of DSNHDECP in SDSNLOAD, and continues
to do so in DB2 V8 for compatibility with older applications.


However, Version 8 is more reliant than previous versions on DSNHDECP. Therefore, DB2
checks to see whether the DSNHDECP is the default DSNHDECP that ships with DB2, or
whether it loaded a customized version of DSNHDECP. When a default DSNHDECP is
loaded during DB2 start-up, or during the invocation of the DB2 precompiler, start-up, and
precompilation fail. A customized DSNHDECP must exist in a library that is before the
default DSNHDECP in the STEPLIB concatenation or link list.
If you normally link-edit your customized DSNHDECP into SDSNLOAD (and override the
default), you can continue to do so; if not, you will need to link-edit your DSNHDECP into
SDSNEXIT and concatenate that data set ahead of SDSNLOAD.
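For illustration only, a heavily abbreviated sketch of the kind of link-edit step involved follows. The real, complete step is generated in job DSNTIJUZ; the "prefix" data set names and the OBJLIB DD are placeholders:

```
//* Illustrative only: the real link-edit step is generated by DSNTIJUZ.
//LINKDECP EXEC PGM=IEWL,PARM='RENT'
//SYSLMOD  DD DSN=prefix.SDSNEXIT,DISP=SHR
//OBJLIB   DD DSN=prefix.DECP.OBJ,DISP=SHR
//SYSLIN   DD *
  INCLUDE OBJLIB(DSNHDECP)
  NAME DSNHDECP(R)
/*
```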

You Must Now Specify Valid CCSIDs


Recent versions of DB2 permit specifying a so-called undefined CCSID of 0 when creating
the DSNHDECP module. (In DSNHDECP, the single-byte EBCDIC CCSID is specified as
the argument to the SCCSID parameter. The single-byte ASCII CCSID is specified as the
argument to the ASCCSID parameter.)
Over time, with the evolution of new DB2 functions and features, string data has become
increasingly dependent on CCSIDs. CCSIDs can be specified explicitly when binding an
object but in most cases the CCSID is determined from the DSNHDECP module. The
original requirements for setting SCCSID in DSNHDECP were for the use of distributed
data, or the use of mixed and graphic (DBCS) data.
In Version 5, support for optionally storing data in ASCII format was added. With this
support DB2 needed a way to distinguish between the ASCII and EBCDIC data and this
was done through CCSID tagging. Creation of an ASCII table required specification of
ASCII CCSIDs in DSNHDECP in conjunction with EBCDIC CCSIDs.
In Version 6, new object types were added. When a large-object (LOB) data type, distinct
type, user-defined function, or stored procedure that references EBCDIC “string” data
types is created via an SQL CREATE statement, there is a requirement that defined
(non-zero) EBCDIC CCSIDs be provided in DSNHDECP. Furthermore, if ASCII was used
for these object types, then ASCII CCSIDs are also required in DSNHDECP.
In Version 7, support was added for storing data encoded in Unicode. Although
DSNHDECP has fields for Unicode, creation of Unicode objects also requires defined
CCSIDs for EBCDIC.
Any specification of a string data type in the CREATE statement for a LOB or distinct type,
user-defined function, or stored procedure requires an implicit or explicit specification of an
encoding scheme — EBCDIC, ASCII, or (V7 only) UNICODE — via the CCSID clause for
any specification of a string data type.
If no CCSID parameter is specified in the CREATE statement, the encoding scheme is the
value of the DEFAULT ENCODING SCHEME on installation panel DSNTIPF. DB2 then
determines the actual, “numeric” CCSID from the encoding scheme in place (specified
implicitly/explicitly as part of the CREATE statement). If the CCSID (such as in


DSNHDECP) determined for a column/parameter is invalid or undefined (0), then an error
such as SQLCODE -879 or -189 is returned.
In recognition of the increasingly essential role of CCSIDs in storing and manipulating
string-type data, the “undefined” EBCDIC CCSID of 0 is being discontinued in DB2 Version
6 and subsequent versions. Also, the notion of a DB2-provided “default” EBCDIC CCSID is
eliminated because it is not possible or appropriate for DB2 to provide a “correct” default
CCSID. Therefore, use of the DB2-supplied DSNHDECP module in SDSNLOAD is no
longer recommended.
PQ56697 adds provisions to alert you, if you are using an undefined CCSID or the
DB2-supplied DSNHDECP in SDSNLOAD, that these practices are unsupported in DB2
Versions 6 and 7.
PQ71079 adds restrictions to prevent you from creating a DSNHDECP module that
specifies an undefined CCSID for single-byte EBCDIC, that is, SCCSID=0.
In V8, DB2 startup processing checks the CCSIDs that are specified in DSNHDECP. If the
values are not valid, then DB2 issues message DSNT109I with reason code 00E3009B,
and DB2 startup processing terminates. This is to avoid any potential data corruption
issues.
Appendix A of the DB2 UDB for z/OS Version 8 Installation Guide, SC18-7418 has a list of
the valid CCSIDs for ASCII and EBCDIC. Note that there are two tables; one for
MIXED=NO, another one for MIXED=YES.
Attention: The CCSID of character strings at your site is determined by the CCSID that
you specify on installation panel DSNTIPF. If this CCSID is not correct, character
conversion produces incorrect results. The correct CCSID for your installation is
determined by the coded character set that is supported by your site’s I/O devices (for
example, 3270 terminals or terminal emulation programs such as IBM’s PCOMM), local
applications such as IMS and QMF, and remote applications such as CICS Transaction
Server. If you find a mismatch between what your applications use and what DB2 uses,
do not change your DSNHDECP without evaluating all of the implications, and certainly
not without talking to IBM Service personnel before doing so.

New Data-only Load Module DSNHMCID


The new data-only load module DSNHMCID contains EBCDIC CCSIDs (single byte,
double byte, mixed) for offline message conversion. Version 8 utilities and applications
must have access to this module. DSNHMCID is generated by DSNTIJUZ and is
link-edited into both SDSNLOAD and SDSNEXIT.


Release Incompatibilities (2 of 3)
Reduced support for DB2 established stored procedures
Can no longer create DB2 established stored procedures
NO WLM ENVIRONMENT clause is now invalid
Existing stored procedures can still run in a DB2-established
stored procedure address space
Move to WLM as soon as possible

COMPJAVA stored procedures no longer supported


Use JIT
Multiple calls to the same stored procedure at same level
Second call closes previous open cursor in V7
Multiple identical open cursors now allowed in SP


Figure 11-9. Release Incompatibilities (2 of 3) CG381.0

Notes:

Reduced Support for DB2-established Stored Procedures


In Version 8, you can no longer specify the NO WLM ENVIRONMENT option when you
CREATE or ALTER stored procedure definitions. Although existing stored procedures can
still run in a DB2-established stored procedure address space, you should plan to move
your stored procedures to WLM environments as soon as possible. DB2-established stored
procedures will probably no longer be available in future releases of DB2.
In earlier versions of DB2 the supplied stored procedure DSNWZP had to run in a
DB2-established stored procedure address space. DB2 Version 8 defines DSNWZP to run
in a WLM-established stored procedure address space.
So, when you fall back from Version 8 to Version 7, or re-migrate from Version 7 to Version
8, the DSNWZP stored procedure will not work. You will need to manually issue the
appropriate ALTER commands to change the external name for the stored procedure.
We describe this in a little more detail in later visuals.


COMPJAVA Stored Procedures No Longer Supported


DB2 Version 8 no longer supports LANGUAGE COMPJAVA stored procedures, since
VisualAge for Java no longer supports compiled Java link library files, known as High
Performance Java (HPJ).
After migrating to Version 8 compatibility mode, you can no longer define or run
COMPJAVA stored procedures. If you try to execute a stored procedure with LANGUAGE
COMPJAVA, DB2 returns SQLCODE -471 with reason code 00E79000, and message
DSNX900E is written to the system console. You need to convert all LANGUAGE
COMPJAVA stored procedures to LANGUAGE JAVA before migrating to
Version 8 compatibility mode, by following these steps:
1. Use ALTER PROCEDURE to change the LANGUAGE and the WLM ENVIRONMENT
parameters. (The recommendation is that COMPJAVA stored procedures do not run in
the same WLM environment as JAVA stored procedures, for performance reasons. So,
you will probably need to change the WLM ENVIRONMENT parameter as well.) The
EXTERNAL NAME clause must also be specified even if it has not changed, as DB2
needs to verify it.
Use the following example as a model:
ALTER PROCEDURE SYSPROC.JAVADVR
LANGUAGE JAVA
EXTERNAL NAME 'display.display.main'
WLM ENVIRONMENT WLMENVJ;
2. Ensure that the WLM environment is configured and that the required JVM is installed.
3. Ensure that the .class file that is identified in the EXTERNAL NAME clause of the
ALTER PROCEDURE is present in one of the following places:
- In a JAR that was installed to DB2 by an invocation of the INSTALL_JAR stored
procedure.
- In a directory in the CLASSPATH ENVAR of the data set that is named on the
JAVAENV DD statement of the WLM stored procedures address space JCL.
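After the conversion, the EXTERNAL NAME clause must resolve to a Java class on the server. As a hedged illustration only (the class, method, and parameter names below are hypothetical, not taken from the course material), a LANGUAGE JAVA stored procedure entry point is a public static method, with OUT parameters passed as single-element arrays:

```java
// Hypothetical sketch of a class body that an EXTERNAL NAME clause could
// reference after conversion to LANGUAGE JAVA. The EXTERNAL NAME names
// package.class.method; this sketch uses a default-package class.
public class DisplayProc {
    // IN parameters map to plain arguments; OUT and INOUT parameters
    // map to single-element arrays that the procedure fills in.
    public static void run(String input, String[] message) {
        message[0] = "Received: " + input;  // placeholder body
    }
}
```

The signature shown is illustrative; the actual parameter list must match the CREATE/ALTER PROCEDURE parameter definitions.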

Multiple Calls to the Same Stored Procedure at the Same Nesting Level
In previous DB2 versions, if a stored procedure was called twice from the same program
and at the same nesting level, DB2 closed the result set cursors and released storage for
the first instance of the stored procedure before making the second call.
In DB2 Version 8, if the requester and the server are both DB2 Version 8 subsystems in
new-function mode, when the second call is made, both instances of the stored procedure
can run at the same time. DB2 does not close the result sets from the first call or release
storage for the first instance of the stored procedure. This is an “incompatible” change.


Release Incompatibilities (3 of 3)
NO outstanding Version 7 utilities
Cannot restart or terminate any outstanding V7 utilities in V8
And many more:
Job DSNTIJPM (pre-migration checking for potential problems)
Preventing CCSID changes
Auto rebind all packages bound prior to V2.3
Review the migration considerations chapter in the Installation Guide


Figure 11-10. Release Incompatibilities (3 of 3) CG381.0

Notes:

No Outstanding Version 7 Utilities


DB2 Version 8 enforces a restriction that you can restart or terminate a utility only on the
same release on which it was started. So, any utilities outstanding from before Version 8
cannot be restarted or terminated after you have migrated from Version 7 to Version 8
compatibility mode. To ensure that you do not have outstanding utility jobs before you
migrate to Version 8, issue the following command:
-DISPLAY UTILITY(*)
If you find that you have an outstanding Version 7 utility after you have migrated to Version
8 compatibility mode, you must first fall back to Version 7 to restart or terminate the utility.


And Many More


We have only identified the key differences between DB2 Version 8 and earlier releases. In
your planning for DB2 Version 8, we urge you to review the section on “Migration
Considerations” in the DB2 UDB for z/OS Version 8 Installation Guide, SC18-7418.
DB2 Pre-migration Checks - DSNTIJPM
DB2 provides a tailored job which we strongly recommend that you run prior to migrating
your DB2 subsystem from Version 7 to Version 8. Job DSNTIJPM, which is shipped in the
Version 8 SDSNSAMP data set, searches for any release incompatibilities and
unsupported objects, which would prevent a successful migration. The job checks the
following things:
• Existence of Type 1 indexes
• DB2 catalog tables on which DATACAPTURE is enabled
• Partitioned table spaces that use selective partition locking (SPL)
• Partitioned table spaces that have a truncated limit key
• Stored procedures that use LANGUAGE COMPJAVA
• Stored procedures that use the DB2 stored procedures address space
• Use of the DSNWZPR module by DSNWZP
• Existence of the V7 sample database
• Evidence of multiple CCSIDs in the same encoding scheme
• Packages for routines and plans for callers of routines that need to be rebound because
of an incompatible change in the DBINFO control block


In order to provide customers plenty of time to prepare for V8 migration before they actually
buy DB2 V8, a job similar to V8’s DSNTIJPM will be shipped in V7. The new job is called
DSNTIJP8. It is shipped with the PTF for APAR PQ84421.
Attention: As a part of your migration planning for DB2 Version 8, we strongly
recommend that you review your environment and the code pages you have been using.
Any discrepancies may cause problems after you migrate to Version 8 when DB2 parses
in Unicode and starts to perform code page conversions more often. We recommend that
you check the following items in all of your DB2 environments before migration:
•Do your DB2 environment’s CCSIDs match your terminal emulators and local
applications?
•Identify if there are any objects in DB2 with different CCSIDs within the same encoding
scheme. The SQL shipped in the V8 job DSNTIJPM or the V7 equivalent called
DSNTIJP8, may be able to help to a certain extent, as it searches for cases where
more than one CCSID exists. For example, you should have only one table space
with EBCDIC CCSID 37 and 500 in a single system.
If either of these checks raise any issue with your DB2 environments, please contact IBM
for advice on how to resolve these issues before migrating to Version 8. We highly
recommend that you do not change any DB2 CCSID values without first consulting IBM
support.
These problems can be much harder to identify and fix once you have moved to Version
8 and are converting your data to Unicode. They are also better resolved in Version 7
before you start seeing incorrect results as soon as you migrate to Version 8.
We recommend that you perform this code page health check as soon as you begin
planning for Version 8, in order to have enough time for any remedial action, if required.
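The multiple-CCSID check can be pictured with a small sketch of ours (it stands in for, and is not, the actual DSNTIJPM/DSNTIJP8 SQL): collect the CCSIDs in use per encoding scheme and flag any scheme with more than one distinct value.

```java
import java.util.*;

// Illustrative stand-in for the DSNTIJPM/DSNTIJP8 check: flag any encoding
// scheme (EBCDIC, ASCII, ...) that appears with more than one distinct CCSID.
public class CcsidCheck {
    public static Map<String, Set<Integer>> findConflicts(
            Map<String, List<Integer>> ccsidsByScheme) {
        Map<String, Set<Integer>> conflicts = new LinkedHashMap<>();
        for (Map.Entry<String, List<Integer>> entry : ccsidsByScheme.entrySet()) {
            Set<Integer> distinct = new TreeSet<>(entry.getValue());
            if (distinct.size() > 1) {
                // e.g. EBCDIC table spaces found with both CCSID 37 and 500
                conflicts.put(entry.getKey(), distinct);
            }
        }
        return conflicts;
    }
}
```

For example, a system with EBCDIC CCSIDs 37 and 500 in use would be flagged, while a system using only CCSID 819 for ASCII would not.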
Automatic Rebind of Plans Bound Prior to Version 2.3
DB2 Version 8 will autobind plans and packages that were bound prior to DB2 Version 2.3.
Many changes have been made over the last 10 years to DB2 when it binds plans and
packages. There are many old code paths in DB2 that are in place to deal with special
cases and issues with old plans and packages. The change sets a precedent for retiring
old plans and packages (execution and runtime structures) on an ongoing basis as new
releases of DB2 are introduced.
Preventing CCSID Changes
In Version 8, the CCSID information that is specified at installation (on panel DSNTIPF) is
stored in the BSDS. Currently this information is stored in the DSNHDECP module. This
change is introduced to ensure that you do not change your CCSIDs, either accidentally or
by intention (something that is not supported by DB2).
At startup, DB2 checks the BSDS to see if the CCSIDs are recorded in the BSDS. If the
CCSIDs are not recorded, we will place them in the BSDS. If the CCSIDs are recorded in
the BSDS, we will check to make sure they match the values in the DSNHDECP. If the

values do not match, message DSNT108I will be issued and DB2 startup processing will
terminate.
In addition, the Change Log Inventory Utility (DSNJU003) is also changed to delete the
CCSIDs in the BSDS. A new clause, DELETE CCSIDS, deletes the CCSID information in
the BSDS.
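The startup logic described above can be modeled in a few lines. This is a toy model of ours, not DB2 internals: record the CCSIDs on first start, compare on every later start, and refuse to start on a mismatch unless DELETE CCSIDS has cleared the record.

```java
import java.util.Arrays;

// Toy model of the V8 startup check: the BSDS records the DSNHDECP CCSIDs
// on first start; a later mismatch stops startup with message DSNT108I.
public class BsdsCcsidCheck {
    private int[] recorded;  // null = no CCSIDs recorded in the BSDS yet

    public String startup(int[] dsnhdecpCcsids) {
        if (recorded == null) {               // first start: record the values
            recorded = dsnhdecpCcsids.clone();
            return "STARTED";
        }
        if (!Arrays.equals(recorded, dsnhdecpCcsids)) {
            return "DSNT108I";                // mismatch: startup terminates
        }
        return "STARTED";
    }

    // Models DSNJU003 with the new DELETE CCSIDS clause
    public void deleteCcsids() { recorded = null; }
}
```

In this model, a changed DSNHDECP only passes the check again after DELETE CCSIDS clears the recorded values.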
A complete list of things to watch out for can be found in the “Migration considerations”
section of the DB2 UDB for z/OS Version 8 Installation Guide, SC18-7418.


DB2 Universal Driver for SQLJ/JDBC


Both the Universal Driver and JDBC/SQLJ Driver for OS/390
are shipped with DB2 Version 8
Many enhancements in the Universal Driver
Base for all future Driver development
Strongly encouraged to migrate to the Universal Driver
Some differences exist that may impact existing applications
URL syntax
Security design
SQLJ default connection
SQLJ program preparation process
New utilities, for example, db2sqljcustomize instead of db2profc
No more DBRMs
New profile customization with new customized profile layout
Can upgrade existing .ser files (on z/OS) with db2sqljupgrade utility


Figure 11-11. DB2 Universal Driver for SQLJ/JDBC CG381.0

Notes:
Both the DB2 Universal JDBC Driver (new) and the JDBC/SQLJ Driver for OS/390 (now
also called the legacy Driver) are shipped with DB2 Version 8.
The legacy Driver is put into the HFS in the same directory path structure as it is in Version
7:
/usr/lpp/db2/db2810/
The Universal driver, by default, is stored in the directory:
/usr/lpp/db2/db2810/jcc/
Because many of the class names are identical in both drivers, you must change your
CLASSPATH in order to be able to use the DB2 Universal Driver.
The DB2 Universal JDBC Driver differs from the JDBC/SQLJ Driver for OS/390 in many
ways. Listing all the differences between them is an almost impossible job. In this topic, we
briefly introduce some of the differences that can impact your existing applications. A list of
the supported JDBC APIs can be found in the “Comparison of driver support for JDBC
APIs” section of the manual, DB2 Application Programming Guide and Reference for Java,
SC18-7414.

Difference in URL Syntax


The syntax of the URL parameter in the DriverManager.getConnection method is different
for each driver.
Using the JDBC/SQLJ Driver for OS/390, you use:
jdbc:db2os390:location-name
jdbc:db2os390sqlj:location-name
With the Universal JDBC Driver, for Type 2 connectivity, you use:
jdbc:db2:database
However, for downward compatibility, you can still use jdbc:db2os390:location-name and
jdbc:db2os390sqlj:location-name. Our recommendation is to make the change and start
using the new syntax. If you are using the DataSource interface, your programs are not
affected, and you can make the change in the data source definition.
Note that the Universal Driver also supports Type 4 connectivity. In that case the syntax is:
jdbc:db2://server:portnumber/database
Note also that database in the context of the Universal Driver, is the DB2 location name of
the system you want to connect to.
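As a quick sketch, the three URL shapes can be assembled as below. The helper class is ours, not part of either driver; only the resulting URL strings come from the documented syntax, and the host, port, and location names in the usage note are hypothetical.

```java
// Illustrative helper that builds the documented JDBC URL shapes for the
// legacy driver and for Universal Driver Type 2 and Type 4 connectivity.
public class Db2Urls {
    public static String legacy(String locationName) {
        return "jdbc:db2os390:" + locationName;
    }
    public static String universalType2(String database) {
        return "jdbc:db2:" + database;
    }
    public static String universalType4(String server, int port, String database) {
        return "jdbc:db2://" + server + ":" + port + "/" + database;
    }
}
```

For example, universalType4("db2host", 446, "STLEC1") yields jdbc:db2://db2host:446/STLEC1, where db2host and STLEC1 are placeholder names.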

Difference in Error Codes and SQLSTATEs Returned for Driver Errors


The DB2 Universal JDBC Driver does not use existing SQLCODEs or SQLSTATEs when
an error occurs inside the driver itself, as the other drivers do. You can look up the error
codes and SQLSTATEs issued by the Universal Driver in the “Error codes issued by the
DB2 Universal JDBC Driver” and “SQLSTATEs issued by the DB2 Universal JDBC Driver”
sections of DB2 Application Programming Guide and Reference for Java, SC18-7414. The
JDBC/SQLJ driver for z/OS returns SQLSTATE FFFFF when such an error occurs.

Security Mechanisms
The JDBC drivers have different security mechanisms. Therefore it is important to
understand their differences. For information on DB2 Universal JDBC Driver, and
JDBC/SQLJ Driver for OS/390 security mechanisms, see the “Security under the DB2
Universal JDBC Driver” and “Security under the JDBC/SQLJ Driver for OS/390” of DB2
Application Programming Guide and Reference for Java, SC18-7414.

How Connection Properties are Set


With Universal Driver Type 4 connectivity, you set properties for a connection by setting the
properties for the associated DataSource or Connection object.


With Universal Driver Type 2 connectivity, you set properties for a connection in one of
these ways:
• You can set properties only for a connection by setting the properties for the associated
DataSource or Connection object.
• You can set driver-wide properties through an optional run-time properties file.
For the JDBC/SQLJ driver for z/OS driver, you set properties through the JDBC/SQLJ
run-time properties file.

Results Returned from ResultSet.getString for a BIT DATA Column


The DB2 Universal JDBC Driver returns data from a ResultSet.getString call for a CHAR
FOR BIT DATA or VARCHAR FOR BIT DATA column as a lowercase hexadecimal string.
The JDBC/SQLJ Driver for OS/390 returns the data in the encoding scheme of the caller.
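To make "lowercase hexadecimal string" concrete, here is a small sketch of ours (not driver code) producing that representation from raw bytes:

```java
// Produces the lowercase hex-pair form in which the Universal Driver
// returns FOR BIT DATA column values from ResultSet.getString.
public class BitDataHex {
    public static String toLowerHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            // two lowercase hex digits per byte, masked to treat it as unsigned
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }
}
```

So a two-byte column value X'C10F' comes back as the four-character string "c10f".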

Exceptions for PreparedStatement.setXXXStream with Length Mismatch


Another difference between the two drivers is the point in them when an exception is
thrown for PreparedStatement.setXXXStream with a length mismatch. When you use the
PreparedStatement.setBinaryStream, PreparedStatement.setCharacterStream, or
PreparedStatement.setUnicodeStream method, the length parameter value must match
the number of bytes in the input stream. If the number of bytes for these does not match:
• The DB2 Universal JDBC Driver does not throw an exception until the subsequent
PreparedStatement.executeUpdate method executes. Therefore, for the DB2 Universal
JDBC Driver, some data might be sent to the server when the lengths do not match.
That data is truncated or padded by the server. The calling application needs to issue a
rollback request to undo the database updates that include the truncated or padded
data.
• The JDBC/SQLJ 2.0 Driver for OS/390 throws an exception after the
PreparedStatement.setBinaryStream, PreparedStatement.setCharacterStream, or
PreparedStatement.setUnicodeStream method executes.
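The mismatch condition itself is easy to state with a small sketch of ours (the class and method are illustrative, not driver code): the length parameter passed to a setXXXStream method must equal the number of bytes actually in the stream.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrates the condition behind the two behaviors above: the declared
// length must match the bytes readable from the input stream.
public class StreamLengthCheck {
    public static boolean lengthMatches(InputStream in, int declaredLength) {
        int count = 0;
        try {
            while (in.read() != -1) {
                count++;  // consume the stream, counting its bytes
            }
        } catch (IOException e) {
            return false;  // treat an unreadable stream as a mismatch in this sketch
        }
        return count == declaredLength;
    }
}
```

The two drivers differ only in when they surface this mismatch, not in what counts as one.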

Default Mappings for PreparedStatement.setXXXStream


With the DB2 Universal JDBC Driver, when you use the
PreparedStatement.setBinaryStream, PreparedStatement.setCharacterStream, or
PreparedStatement.setUnicodeStream method, and no information about the data type of
the target column is available, the input data is mapped to a BLOB or CLOB data type.
For the JDBC/SQLJ driver for z/OS, the input data is mapped to a VARCHAR FOR BIT
DATA or VARCHAR data type.


How Character Conversion is Done


When character data is transferred between a client and a server, the data must be
converted to a form that the receiver can process:
• For the DB2 Universal JDBC Driver, character data that is sent from the database
server to the client is converted using Java’s built-in character converters. The
conversions that the DB2 Universal JDBC Driver supports are limited to those that are
supported by the underlying JRE implementation. A DB2 Universal JDBC Driver client
sends data to the database server as Unicode.
• For the JDBC/SQLJ driver for z/OS, character conversions can be performed if the
conversions are supported by the DB2 server.

Implicit or Explicit Data Type Conversion for Input Parameters


If you execute a PreparedStatement.setXXX method, and the resulting data type from the
setXXX method does not match the data type of the table column to which the parameter
value is assigned, the driver returns an error unless data type conversion occurs:
• With the DB2 Universal JDBC Driver, conversion to the correct SQL data type occurs
implicitly if the target data type is known and if the deferPrepares connection property is
set to false. In this case, the implicit values override any explicit values in the setXXX
call. If the deferPrepares connection property is set to true, you must use the
PreparedStatement.setObject method to convert the parameter to the correct SQL data
type.
• For the JDBC/SQLJ driver for z/OS, if the data type of a parameter does not match its
default SQL data type, you must use the PreparedStatement.setObject method to
convert the parameter to the correct SQL data type.

Data Returned from ResultSet.getBinaryStream against a Binary Column


With the DB2 Universal JDBC Driver, when you execute ResultSet.getBinaryStream
against a binary column, the returned data is in the form of lowercase, hexadecimal digit
pairs.
With the JDBC/SQLJ driver for z/OS, when you execute ResultSet.getBinaryStream
against a binary column, a string value is returned. The driver uses the Java client’s default
local encoding to construct the string from bytes.

Result of Using getBoolean to Retrieve a Value from a CHAR Column


With the DB2 Universal JDBC Driver, when you execute ResultSet.getBoolean or
CallableStatement.getBoolean to retrieve a Boolean value from a CHAR column, and the
column contains the value “false” or “0”, the value false is returned. If the column contains
any other value, the value “true” is returned.


With the JDBC/SQLJ driver for z/OS, when you execute ResultSet.getBoolean or
CallableStatement.getBoolean to retrieve a Boolean value from a CHAR column, and the
column contains the value “0”, the value “false” is returned. If the column contains any
other value, the value “true” is returned.
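The two interpretations can be placed side by side in a short sketch; these methods are ours and only model the documented mappings, they are not driver APIs.

```java
// Models the documented CHAR-to-boolean mappings of the two drivers.
public class CharToBoolean {
    // Universal Driver: "false" or "0" -> false, anything else -> true
    public static boolean universalDriver(String value) {
        return !("false".equals(value) || "0".equals(value));
    }
    // JDBC/SQLJ driver for z/OS: only "0" -> false
    public static boolean legacyDriver(String value) {
        return !"0".equals(value);
    }
}
```

The only column value the drivers disagree on is the literal string "false", which the legacy driver maps to true.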

Differences for SQLJ


SQLJ support in the DB2 Universal JDBC Driver differs from SQLJ support in the other
DB2 JDBC drivers in the following areas.
Connection Associated with the Default Connection Context
If you are using the DataSource interface to connect to a data source, before you can use a
default connection context, the logical name jdbc/defaultDataSource must be registered
with JNDI. The JDBC/SQLJ Driver for OS/390 creates a connection to the local data source
for the default connection context.
To create a default connection context, the SQLJ runtime now does a JNDI lookup for
jdbc/defaultDataSource. If nothing is registered, a null context exception will be thrown
when the driver attempts to access the context. The recommended solution is to use an
explicit connection context on the SQLJ clause. However, registering a
jdbc/defaultDataSource with JNDI will also suffice.
Production of DBRMs during SQLJ Program Preparation
The SQLJ program preparation process for the DB2 Universal JDBC Driver does not
produce DBRMs. Therefore, with the DB2 Universal JDBC Driver, you can produce DB2
packages only by using the DB2 Universal JDBC Driver utilities.
Difference in Connection Techniques
As mentioned earlier, the connection techniques that are available, and the driver names
and URLs that are used for those connection techniques, vary from driver to driver.
New Layout for Customized Profiles
The new db2sqljcustomize program, which is used to customize your serialized profile and,
by default, also to bind your packages on DB2 for z/OS, generates a serialized profile with
a new format. The new format is not compatible with the old format. To use existing,
installed SQLJ programs that were customized with the legacy Driver, you must first
upgrade them.
To convert serialized profiles that you customized under JDBC/SQLJ Driver for OS/390 to a
format that is compatible with the DB2 Universal JDBC Driver, run the db2sqljupgrade
utility. After you run the db2sqljupgrade utility, you do not need to bind new packages for
the associated SQLJ applications.
Before you can run the db2sqljupgrade utility, your CLASSPATH must contain the full path
names for the db2j2classes.zip file for the JDBC/SQLJ Driver for OS/390, and the
db2jcc.jar and sqlj.zip files for the DB2 Universal JDBC Driver. For example:
db2sqljupgrade -collection new_collection_name MyinputFileName.ser


The upgrade utility saves the existing profile as .ser_old, so you can revert to the old
.ser file in case the upgrade is not successful. It is strongly recommended that you back
up the original files, including but not limited to .ser, .class, .java, and .sqlj files, to another
directory before attempting to upgrade the profile.


11.2 Installation


You can install DB2 Version 8 either as a host-based installation or via the “msys for Setup
Facility”. We explore the msys for Setup DB2 Customization Center in later visuals in
this unit; briefly, it is a workstation-based facility that replaces the DB2 Installer
workstation tool. The remainder of
this discussion will concentrate on the TSO, or host-based, installation and migration
processes.


Install CLIST - Panel DSNTIPA1


DB2 VERSION 8 INSTALL, UPDATE, MIGRATE, AND ENFM - MAIN PANEL
===>

Check parameters and reenter to change:

1 INSTALL TYPE ===> INSTALL Install, Update, or Migrate
                            or ENFM (Enable New Function Mode)
2 DATA SHARING ===> NO Yes or No (blank for Update or ENFM)

Enter the data set and member name for migration only. This is the name used
from a previous Installation/Migration from field 7 below:
3 DATA SET(MEMBER) NAME ===>

Enter name of your input data sets (SDSNLOAD, SDSNMACS, SDSNSAMP, SDSNCLST):
4 PREFIX ===> DSN810
5 SUFFIX ===>

Enter to set or save panel values (by reading or writing the named members):
6 INPUT MEMBER NAME ===> DSNTIDXA Default parameter values
7 OUTPUT MEMBER NAME ===> DSNTID8A Save new values entered on panels

PRESS: ENTER to continue RETURN to exit HELP for more information

DSNTIPA1: Install, update, and migrate DB2 - main panel



Figure 11-12. Install CLIST - Panel DSNTIPA1 CG381.0

Notes:
The process to install a new DB2 Version 8 subsystem is the same as installing a new DB2
Version 7 subsystem. However, there are a few small differences from previous versions of
DB2, which we highlight in the next few visuals.
Migration (from V7) to Version 8 is quite different from previous migrations. The migration
process now consists of three distinct phases:
• CM: Compatibility Mode: During this phase, which can last as long as deemed
necessary, you will make all the tests needed to ensure that you are happy with the new
version and will not have to fallback to Version 7 later on. In CM, you are able to fallback
to Version 7 in case of problems.
• ENFM: Enabling-New-Function Mode: During this phase you will convert the DB2
subsystem to the format ready to support the new function in Version 8, by using the
on-line REORG Utility. No fallback to DB2 Version 7 is allowed once the ENFM is
entered.


• NFM: New-Function Mode: This is the target phase, triggered when you execute a job
confirming that all the previous conversion steps have completed successfully and
update the DB2 subsystem or data sharing group as in new-function mode.

Install CLIST - Panel DSNTIPA1


After completing the SMP/E work to create and populate the DB2 libraries, you are now
ready to invoke the DB2 Installation CLIST to generate the jobs required for installation.
Nothing has changed here for Version 8.
First, make the DB2 ISPF libraries available to TSO. This can be done by concatenating
the DB2 ISPF libraries to your normal allocations. (Refer to the DB2 UDB for z/OS Version
8 Installation Guide, SC18-7418, for more details.) You can now invoke the DB2 Installation
CLIST in either of two ways:
1. To use DB2 Online help:
EXEC 'prefix.SDSNCLST(DSNTINS0)'
2. To bypass the DB2 Online help:
EXEC 'prefix.SDSNCLST(DSNTINST)'
The panel DSNTIPA1 is the first panel displayed. From here, you tell the installation
process what you want to do. In addition, the DB2 Installation CLIST needs a set of default
values and uses them on the subsequent panels.
You will notice a new option on the panel, ENFM, which is highlighted. This is the option
you use after you have successfully migrated to Version 8 and now want to generate the
jobs to enable the new-function mode in DB2. We will talk more about this in the migration
visuals.
To install DB2 for the first time, use the IBM-supplied defaults in member DSNTIDXA for
the INPUT MEMBER NAME. To install DB2 by using parameters from a previous run as
defaults, you must supply the member that contains the output from the previous run. It
was the OUTPUT MEMBER NAME during the last run.
Specify the member name of the output data set in which to save the values that you enter
on the panels. This member is stored in prefix.SDSNSAMP (not the data set created by the
DSNTINST CLIST). To avoid replacing any members of prefix.SDSNSAMP that are
shipped with the product, specify DSNTIDxx as the value of OUTPUT MEMBER NAME,
where xx is any alphanumeric value except XA or VB. Always give a new value in the
OUTPUT MEMBER NAME field for a new panel session. You supply the name from your
current session in the INPUT MEMBER NAME field for your next session. You should not
use the same member name for output as for input.
The outputs from the DB2 Installation CLIST session are:
• A new data set, prefix.NEW.SDSNSAMP, that contains the edited JCL
• A new data set, prefix.NEW.SDSNTEMP, that contains tailored CLISTs for input to job
DSNTIJVC


• A new member in prefix.SDSNSAMP, containing the resulting parameter values

DSNTIPA1 Processing Options
Specify whether you are installing, updating, migrating, or
converting to NFM
INSTALL to install DB2 for the first time
Default for the first run of the CLIST
Creating a new Version 8 DB2 subsystem
In "New Function Mode"

UPDATE to update parameters for an existing DB2 subsystem


MIGRATE to migrate from Version 7 to Version 8 (CM)
Compatibility Mode (CM) - no new function available
ENFM to convert the DB2 catalog from Version 8 (CM) to NFM
You must run the CLIST in MIGRATE mode before you can
choose this option


Figure 11-13. DSNTIPA1 Processing Options CG381.0

Notes:
As you can see from the previous visual, the DB2 Installation CLIST has a new option,
ENFM, to support conversion from DB2 for z/OS, Version 8 compatibility mode (CM) to
DB2 for z/OS, Version 8 new-function mode (NFM). We shall discuss this in more detail
when we talk about the DB2 migration process in later visuals.
However, in summary:
• INSTALL is used to generate the jobs to create a new DB2 subsystem.
• UPDATE is used to update and maintain the DSNZPARM and DSNHDECP parameters.
• MIGRATE is used to generate the jobs used to migrate a DB2 Version 7 subsystem to
Version 8, running in compatibility mode (CM). In this mode no new Version 8 functions
are available.
• Once the CLIST has successfully completed while specifying MIGRATE, it can be
executed specifying ENFM to generate the jobs used to migrate the DB2 subsystem to
enabling-new-function mode (ENFM). It is recommended to run the CLIST in ENFM mode

only after you have successfully migrated to V8 (including all members if data sharing)
and are committed to commencing enabling-new-function mode.

Installation CLIST Changes
Materialized query tables
CURRENT REFRESH AGE
CURRENT MAINT TYPES
Remove hiperpool definitions and VPTYPE settings
No longer supported in V8
DDF panels have new terminology
Use "Inactive DBAT" instead of "Type 1 Inactive Thread"
Use "Inactive Connection" instead of "Type 2 Inactive Thread"
Panel DSNTIP7
VARY DS CONTROL INTERVAL
OPTIMIZE EXTENT SIZING
and many more


Figure 11-14. Installation CLIST Changes CG381.0

Notes:
As you move through the DB2 Installation CLIST panels, you will notice a number of
changes from the previous versions of the panels. There have been some new fields added
to support new Version 8 function, as well as some panels have been removed as they are
no longer required. Please refer to the DB2 UDB for z/OS Version 8 Installation Guide,
SC18-7418, for a description of all of the Installation panels and their contents.
The major changes are as follows:
• The Performance and Optimization panel, DSNTIP8, provides the defaults for two new
special registers which have been created to support Materialized Query Tables:
- CURRENT REFRESH AGE:
Specifies the default value to be used for the CURRENT REFRESH AGE special
register when no value is explicitly set using the SQL statement SET CURRENT
REFRESH AGE. The values can be 0 or ANY. The default of 0 disables query
rewrite using deferred materialized query tables.


- CURRENT MAINT TYPES:


Specifies the default value to be used for the CURRENT MAINTAINED TABLE
TYPES FOR OPTIMIZATION special register when no value is explicitly set using
the SQL statement SET CURRENT MAINTAINED TABLE TYPES FOR
OPTIMIZATION.
Acceptable values are NONE, SYSTEM, USER, ALL. The default (SYSTEM) allows
query rewrite using system-maintained materialized query tables when the
CURRENT REFRESH AGE is set to ANY. Alternatively, specifying USER allows
query rewrite using user-maintained materialized query tables when CURRENT
REFRESH AGE is set to ANY, and specifying ALL means query rewrite using both
system-maintained and user-maintained materialized query tables.
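To make the two registers concrete, here is a sketch of how an application might set them before running queries that DB2 could rewrite to use materialized query tables. The statement forms are the ones named above; everything else (when to issue them, connection context) is left open:

```sql
-- Allow query rewrite using deferred materialized query tables
SET CURRENT REFRESH AGE ANY;

-- Allow rewrite using both system-maintained and user-maintained MQTs
SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION ALL;
```

The panel defaults described above apply only when an application does not issue these SET statements itself.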
• The buffer pool panels have been re-designed to remove hiperpool definitions and
rename virtual buffer pools to buffer pools.
In DB2 Version 8, there is no longer any concept of hiperpools, buffer pools in data
spaces, or virtual buffer pools. They are all just buffer pools.
In addition, the default size for BP0 has been raised from 2000 to 20000 and the default
size for BP32K has been raised from 24 to 250. As BP8K0 and BP16K0 are now
required buffer pools (used by the DB2 catalog), their default value is increased from 0
to 1000, and 500 respectively.
• The Data Set Names panels have been re-designed.
The Data Set Names panels have been re-designed to remove old data sets which are
no longer required. For example, the old COBOL compiler libraries have been removed.
• The Distributed Data Facility panel, DSNTIPR, has renamed MAX TYPE 1 INACTIVE to
MAX INACTIVE DBAT.
DB2 Version 8 uses the term “Inactive DBAT” instead of “Type 1 Inactive Thread” and
uses the term “Inactive Connection” instead of “Type 2 Inactive Thread”. These terms
are much more descriptive of the actual status of the threads and bring the terminology
more in line with DB2 UDB on other platforms.
• The Data Set Size panel, DSNTIP7, has four new fields that change how DB2 manages
VSAM data sets:
- TABLE SPACE ALLOCATION and INDEX SPACE ALLOCATION:
Specify the amount of space in KB for primary and secondary space allocation for
DB2-defined data sets for table spaces and index spaces that are being created
without the USING clause. A value of 0 for a non-LOB table space or index indicates
that DB2 is to use a default value of one cylinder, or ten cylinders for a LOB table
space. These parameters were introduced via APAR PQ53067 in DB2 V6 and V7,
but did not appear on the install panels until V8.


- VARY DS CONTROL INTERVAL:


This field specifies whether DB2-managed data sets created by CREATE
TABLESPACE will have variable VSAM control intervals. If you specify YES, when
DB2 creates a DB2-managed data set for a table space, it will have a VSAM control
interval that corresponds to the buffer pool used for the table space. A value of NO
indicates that DB2-managed data sets are to be created with a fixed control interval
of 4-KB, regardless of the buffer pool size (as in Version 7). Also, if you specify YES
in this field then the following installation jobs will take this into account:
• DSNTIJIN is configured to use variable CI sizes when defining the data sets for
the DB2 catalog and directory (in INSTALL and MIGRATE mode)
• DSNTIJNE is configured to use variable CI sizes for DB2 catalog and directory
data sets when enabling new-function mode.
- OPTIMIZE EXTENT SIZING:
This field specifies whether secondary extent allocations for DB2-managed data
sets are to be sized according to a sliding scale that optimizes the likelihood of
reaching the maximum data set size before secondary extents are exhausted. If you
select NO, the default value, you will manage secondary extent allocations
manually. If you select YES, DB2 will automatically optimize the secondary extent
allocations.
When the sliding scale is used, secondary extent allocations that are allocated
earlier are smaller than those allocated later, until a maximum allocation is reached.
The maximum secondary extent allocation is 127 cylinders for data sets with a
maximum size of 16 GB or less, and 559 cylinders for data sets with a maximum
size of 32 GB or 64 GB. For more information on this enhancement, see “SMART
DB2 extent sizes for DB2 managed objects” on page 2-149.
• MAX OPEN CURSORS on panel DSNTIPX (MAX_NUM_CUR DSNZPARM) allows you
to specify the maximum number of cursors, including allocated cursors, that can be
open at a given DB2 site per thread. The default is 500.
• MAX STORED PROCS (MAX_ST_PROC) allows you to specify a maximum number of
stored procedures per thread. The default is 2000.
The previous two new DSNZPARMs are introduced as safety valves. Now that V8 allows
you to call the same stored procedure multiple times at the same nesting level, and allows
the same SQLJ iterator to be instantiated multiple times in the same program, the number
of open cursors and active stored procedures for a single thread increases.
In case of an application error that causes some sort of loop and continues to open cursors
or keeps calling the same stored procedure over and over, DB2 needs to make sure that
such errors cannot bring down the system. To prevent this from happening,
MAX_NUM_CUR and MAX_ST_PROC are activated and will return an SQLCODE -904 to
the application before things get out of control.


• LONG-RUNNING READER is a new field on panel DSNTIPE:


Specify the number of minutes that a read claim can be held by an agent before DB2
issues a warning message to report it as a long-running reader. If you specify a value of
0, DB2 will not report long-running readers. For more information, see “Detecting Long
Readers” on page 2-146.
• PAD INDEXES BY DEFAULT is a new field on panel DSNTIPE:
This field specifies whether new indexes are padded by default. YES indicates that a
new index will be padded unless the NOT PADDED option is specified on the CREATE
INDEX statement. The default value, NO, indicates that a new index will not be padded
unless the PADDED option is specified on the CREATE INDEX statement.
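As a sketch of the interplay between the subsystem default and the CREATE INDEX keywords (the index and table names here are hypothetical, not from the course material):

```sql
-- With PAD INDEXES BY DEFAULT = NO, this index is created NOT PADDED
CREATE INDEX XEMP1 ON EMP (LASTNAME);

-- The PADDED keyword overrides the subsystem default
CREATE INDEX XEMP2 ON EMP (LASTNAME) PADDED;
```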
• The Tracing panel, DSNTIPN, introduces two new fields to enhance DB2 accounting
reporting:
- DDF/RRSAF ACCUM:
Specify whether DB2 accounting data should be accumulated by the user for DDF
and RRSAF threads. If NO is specified, DB2 writes an accounting record when a
DDF thread is made inactive or when signon occurs for an RRSAF thread. If a value
between 2 and 65535 is specified, DB2 writes an accounting record every n
occurrences of the user on the thread, where n is the number that is specified for this
parameter.
- AGGREGATION FIELDS:
Specify the aggregation fields to be used for DDF and RRSAF accounting rollup.
The choices are to rollup accounting data by:
0. End user ID, transaction name, and workstation name
1. End user ID
2. End user transaction name
3. End user workstation name
4. End user ID and transaction name
5. End user ID and workstation name
6. End user transaction name and workstation name
• The Storage Calculation panel, DSNTIPC, allows you to change some new values:
- EDM STATEMENT CACHE:
Specify the size (in KB) of the statement cache that can be used by the EDM. This is
a new storage pool which is located above the 2-GB bar. The CLIST calculates a
default statement cache size; however, you can change it here.
- EDM DBD CACHE:
Specify the minimum size (in KB) of the DBD cache that can be used by the
environmental descriptor manager (EDM). This is another new storage pool which is
located above the 2-GB bar. The CLIST calculates the DBD cache size however you
can change it here.


- STG AVAILABLE ABOVE 2GB:


This is the default amount of storage available above 2 GB that is available to the
DBM1 address space. This value is used to initialize the MEMLIMIT parameter in
the DSNDBM1 address space.
In addition, the fields EDMPOOL DATA SPACE SIZE and EDMPOOL DATA SPACE
MAX are removed from the panel (as the EDMPOOL no longer resides in a data
space).
• The IRLM PC and MAXCSA parameters are no longer used.
DB2 Version 8 requires IRLM 2.2. IRLM 2.2 ships both 32-bit and 64-bit code (so you
can expect your SDXRRESL to almost double in size). IRLM 2.2 no
longer supports placing locks in ECSA. All IRLM locks are now placed in the IRLM
private address space.
The PC and MAXCSA parameters are no longer used, but you must maintain them for
compatibility reasons. In the IRLMPROC JCL, you must specify the parameters and
values, but their values are not used. The amount of available storage for IRLM private
control blocks, including locks, is now determined by the operating system and
site-specific IPL parameters.
IRLM reserves approximately 10% of the available private storage for must-complete
lock requests. You can now use the MLMT parameter on the IRLM startup procedure to
control the maximum amount of private storage available to IRLM above the 2 GB bar
for storing locks (active and retained locks). MLMT is used to
set the z/OS MEMLIMIT value for the IRLM address space.
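A sketch of the corresponding IRLM startup procedure parameters follows. The values shown are examples only, and the rest of the procedure is omitted; as the notes state, PC and MAXCSA must still be coded even though their values are ignored:

```
//IRLMPROC PROC ...,
//         PC=YES,       coded for compatibility only; value is ignored
//         MAXCSA=0,     coded for compatibility only; value is ignored
//         MLMT=2G       sets the z/OS MEMLIMIT for IRLM locks above the 2 GB bar
```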
• “TEMPORARY UNIT NAME” on the DSNTIPA2 panel
As before, this field is used to specify the device type or unit name for allocating
temporary data sets. It is the direct access or disk unit name that is used for the
precompiler, compiler, assembler, sort, linkage editor, and utility work files in the tailored
jobs and CLISTs. However in V8, DB2 utilities that dynamically allocate temporary data
sets also use this parameter. Therefore, this field is now also a DSNZPARM named
VOLTDEVT.


Changed Defaults
ALTER BUFFERPOOL
Deferred Write Threshold (DWQT) decreased from 50% to 30%
Vertical Deferred Write Threshold (VDWQT) decreased from 10%
to 5%
ALTER GROUPBUFFERPOOL
Class Castout Threshold (CLASST) decreased from 10% to 5%
Group Buffer Pool Castout Threshold (GBPOOLT) decreased from
50% to 30%
Installation CLIST Changes
Cache Dynamic SQL now enabled by default
Fast Log Apply enabled (100 MB) by default
Checkpoint Frequency increased from 50000 to 500000 log records
Archive Log Blocksize reduced from 28672 to 24576


Figure 11-15. Changed Defaults CG381.0

Notes:
As a result of work done by the DB2 Performance team, a number of DB2 Installation
CLIST default settings for new systems change in Version 8. This visual shows the major
changes.
The initial defaults for the ALTER BUFFERPOOL command change:
• DWQT: Specifies the buffer pools deferred write threshold as a default percentage of
the total virtual buffer pool size. The initial default is decreased from 50% to 30%.
• VDWQT: Specifies the virtual deferred write threshold for the virtual buffer pool. This
parameter accepts two arguments. The first argument is a percentage of the total virtual
buffer pool size. The default is decreased from 10% to 5%.
The initial defaults for the ALTER GROUPBUFFERPOOL command change:
• CLASST: Specifies the threshold at which class castout is started. It is expressed as a
percentage of the number of data entries. The default is decreased from 10% to 5%.


• GBPOOLT: Specifies the threshold at which the data in the group buffer pool is cast out
to DASD. It is expressed as a percentage of the number of data entries in the group
buffer pool. The default is decreased from 50% to 30%.
• GBPCHKPT: Specifies the time interval in minutes between group buffer pool
checkpoints. The default is lowered from 8 minutes to 4 minutes.
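These defaults apply to newly installed systems; on an existing subsystem you could align the thresholds with the new values using commands along these lines (the buffer pool and group buffer pool names are examples):

```
-ALTER BUFFERPOOL(BP1) DWQT(30) VDWQT(5,0)
-ALTER GROUPBUFFERPOOL(GBP1) CLASST(5) GBPOOLT(30) GBPCHKPT(4)
```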
In addition, a number of Installation CLIST defaults change:
• The dynamic SQL cache is now enabled by default.
• Fast log apply is now enabled by default.
• The DB2 checkpoint frequency is increased from 50,000 to 500,000 log records written.
• The default block size of the archive log data set is reduced from 28672 to 24576.
The block size must be compatible with the device type you use for the archive log data
sets. If the archive log is written to tape, using the largest possible block size improves
the speed of reading the archive logs, while specifying a lower block size for DASD
devices reduces the amount of DASD required. A block size of 28672 is the best block
size to use for TAPE devices while 24576 is the best block size to use for 3390 type
devices.
• The default for “Describe for static” (DESCSTAT) is changed from NO to YES.
As stored procedures have become increasingly prevalent, the number of installations
that turn on this DSNZPARM has correspondingly grown. In addition, all users of the
new Universal JDBC driver and new CLI drivers (on DB2 for Linux, UNIX, and
Windows) will be dependent on this column information for retrieving metadata from the
catalog correctly. Note that the native ODBC driver on DB2 for z/OS does not use the
common metadata stored procedures, so does not have the same dependency.
Important: If you are migrating, and DESCSTAT=NO in V7, make sure to change it to
YES, when migrating to V8. It can save you a lot of headaches when starting to use the
new CLI driver on the DB2 distributed platforms, or the new Universal JDBC driver later
on.
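DESCSTAT is set in the DSNTIJUZ job as a parameter of the DSN6SPRM macro; a fragment might look like the following (a sketch only, with all other parameters elided):

```
DSN6SPRM ...,DESCSTAT=YES,...
```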
The table below details all of the changes that are made to the DB2 installation CLIST
default values.
Table 11-1. DB2 Installation CLIST Default Changes

Panel Id   Panel Parameter            Current Default    Version 8 Default
DSNTIP7    User LOB value storage     2048 (KB)          10240 (KB)
DSNTIPE    Max users                  70                 200
DSNTIPE    Max remote active          64                 200
DSNTIPE    Max remote connected       64                 10000
DSNTIPE    Max TSO connect            40                 50
DSNTIPE    Max batch connect          20                 50
DSNTIPF    Describe for static        NO                 YES
DSNTIPN    SMF statistics             YES (1,3,4,5)      YES (1,3,4,5,6)
DSNTIP8    Cache dynamic SQL          NO                 YES
DSNTIPP    Plan auth cache            1024 bytes         3072 bytes
DSNTIPP    Package auth cache         32 KB              100 KB
DSNTIPP    Routine auth cache         32 KB              100 KB
DSNTIPL    Log apply storage          0                  100 (MB)
DSNTIPL    Checkpoint frequency       50 000 records     500 000 records
DSNTIPA    Block size (archive log)   28672              24576
DSNTIPR    DDF threads                ACTIVE             INACTIVE
DSNTIPR    Idle thread time-out       0                  120
DSNTIPR    Extended security          NO                 YES
DSNTIP5    TCP/IP KEEPALIVE           ENABLE             120 (seconds)
DSNTIPC    Max open data sets         3000               10000
DSNTIPC    EDMPOOL storage size       7312 (KB)          32768 (KB)
DSNTIPC    EDM statement cache        n/a                102400 (KB)
DSNTIPC    EDM DBD cache              n/a                102400 (KB)


Installation Considerations
Same as prior releases
MUST run job DSNTIJTC
A new installation step (NEW!)
All new functions are immediately available
No need to enable new functions (see later)
The Version 8 IVP jobs are created by the installation CLIST
as part of the Installation customization process
Migration is different (see later)


Figure 11-16. Installation Considerations CG381.0

Notes:
The process to install a new DB2 Version 8 subsystem is the same as installing a new DB2
Version 7 subsystem. However, there is one extra job that must be run.
The DB2 Version 8 Installation CLIST generates the job DSNTIJTC, which must also be
run. The job DSNTIJTC executes the CATMAINT utility. In previous versions, you run the
CATMAINT utility only during migration, to update the DB2 catalog for the new version.
With DB2 V8, job DSNTIJTC must be run for new installations, to update the DB2 catalog
for the table spaces which are not moving to Unicode, with the default EBCDIC encoding
schemes from DSNHDECP. The CATMAINT UPDATE utility should be very quick.
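Conceptually, DSNTIJTC simply invokes the CATMAINT utility with an UPDATE statement. A sketch of the utility step follows; JCL details such as the procedure name, SYSTEM value, and UID vary by site and are assumptions here:

```
//CATMAINT EXEC DSNUPROC,SYSTEM=DSN,UID='CATMAINT'
//SYSIN    DD *
  CATMAINT UPDATE
/*
```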
There is another little difference from a V7 installation. It is related to the sequence in which
you execute the installation jobs. In V7, the DB2 installation guide directed you to run job
DSNTIJID (initialization of the DB2 system data sets, including catalog and directory) prior
to DSNTIJUZ (creating the system parameters). In V8, you need to run DSNTIJUZ before
you can successfully run DSNTIJID, because DSNTIJID executes the DSNJU003 (change
log inventory) utility which is dependent on the DSNHMCID module created by DSNTIJUZ.


The procedure to migrate a DB2 subsystem to Version 8 has changed from previous
releases (more on this later). After the DB2 catalog has been migrated to Version 8, no new
DB2 Version 8 functions are available to you until you first “enable” the DB2 subsystem for
the new functions. This restriction does not apply when you install a brand new DB2
subsystem. All the new Version 8 functions are immediately available to you.
When you use the DB2 Installation CLIST to install a new DB2 subsystem, the DB2 Version
8 IVP jobs are also generated. You can run these jobs to verify the new DB2 Version 8
subsystem. When you migrate, the IVP jobs are only generated when using the ENFM
migration option. In order to be able to do some basic testing in V8 compatibility mode, you
should install the V7 samples (through phase 3) before migrating.


11.3 Migration and Fallback


Migration/Fallback

[Timeline: V5 → V6 → V7 → V8. The V5 → V6 migration path was cut on June 30, 2002,
when V6 was withdrawn from marketing. V5 end of service: 31 Dec 2002. V6 end of
service: 30 Jun 2005.]

Figure 11-17. Migration/Fallback CG381.0

Notes:
As you can see by this visual, you can only migrate to DB2 Version 8 from DB2 Version 7.
The “skip” migration from DB2 Version 5 to DB2 Version 7 was a one-off migration
strategy, to allow customers to move forward quickly and take advantage of the rich new
function provided in later releases of DB2 after getting over the Y2K hurdle. This strategy
will not be continued in future migrations.
You must first migrate your DB2 subsystems to Version 8 and move to new-function
mode before you can migrate to the next version of DB2 after Version 8.
Please also note that migrating from DB2 Version 5 to DB2 Version 6 is no longer
supported. This is because DB2 Version 6 was withdrawn from marketing in June 2002.
Also note that DB2 Version 6 will be going out of service on 30 June 2005; again another
reason to move to Version 7 and beyond.


Typical Migration Process Today
Process built up over years of DB2 migration experience
1. Test install and migration/fallback on test system
2. Rollout across strategic development environments
Verify compatibility of old functions on new release
3. Finally migrate production to the new release
4. Begin to use new functions
When satisfied with new release

Use no new function during migration timeframe


Figure 11-18. Typical Migration Process Today CG381.0

Notes:
Here is the typical migration process we have today. It is built up over many successful
DB2 migrations; from DB2 Version 1.1 through until Version 7:
1. Test the install and migration/fallback processes on a software testing environment.
This is to make sure the migration and fallback processes work, that you are able to
successfully migrate your DB2 environment to the new version, that you have tested
fallback, and that you are familiar and comfortable with what is involved in migration
and fallback.
2. Roll out the new version into development environments.
This provides more extensive testing of the new version and is done to ensure that your
current applications execute correctly with the new software.
3. Migrate the new release into production.
Once you are satisfied that your existing applications function correctly under the new
version of DB2, you are ready to migrate your production environment to the new
version of DB2.


4. Begin to exploit new features.


We recommend that you begin to exploit the new features, beginning in the
development DB2 environments, only once you are satisfied with the new level of
software and have successfully migrated all of your production DB2 environments.
This migration process is designed to avoid objects becoming unavailable and “frozen”
should you have to fallback to the previous release.


Possible Fallback Issues
No control over use of new functions when verifying
compatibility of old functions on new release
Multiple user groups eager to test / validate DB2 software
New function dabbling
Use of "new functions" sometimes precludes fallback
Firefight through the problems
Use of "new function" usually not discovered until fallback
A time for panic!

Fallback requires application of fallback SPE


Sometimes overlooked during migration
Continuous migration and fallback scenario
Data sharing coexistence


Figure 11-19. Possible Fallback Issues CG381.0

Notes:
While this typical migration strategy has served well in the past, it is not without its faults.
We have no control over when new features can be used and by whom. As soon as a DB2
environment is migrated to the new version, anyone can potentially begin to explore the
new function. Some features may require extra DB2 objects or extra security to be defined.
The use of some of these new features can of course be controlled, but other features cannot be
restricted.
Many developers are keen to dabble in the new features offered by the new release and
sometimes these new features find their way into new application releases before you
would like them to be. This brings about its own set of problems.
Application developers may be putting pressure on you to implement the new version of
DB2 into production sooner than you would like, as the new application release is now
dependent on the new DB2 release.
After the production DB2 environments have been migrated to the new version, new
features can find their way into production without you knowing. This may complicate any
fallback that you may have to perform, because after fallback, some applications (or parts


of applications) may no longer work. In more extreme cases, you may not be able to
fall back to the previous version because you cannot afford to lose the application.
All supported fallback scenarios require the fallback SPE to be applied to the previous
version of DB2. However, sometimes the fallback SPE is overlooked! This may place your
fallback plans in jeopardy, and there is no worse time to find out than during a fallback.
In addition, DB2 Version 8 has so many enhancements over Version 7 that the Version 7
fallback SPE needed to let Version 7 code run against a Version 8 catalog would be huge
to develop. The one-step migration strategy used in past releases has therefore become
very expensive and error prone (maintaining the fallback SPE).


Why a "New" Migration Process
Significant enhancements to the product - Largest release ever!
Support for long names
Unicode catalog tables

More control over the migration process


In line with system availability / maintenance window
Timing the introduction and use of the new functions

A tighter, more robust migration process


Fewer migration and fallback errors - Forced SPE Installation


Figure 11-20. Why a “New” Migration Process CG381.0

Notes:
Migrating to Version 8 is by far the largest migration of any release of DB2. For example,
the DB2 catalog will be converted from EBCDIC to Unicode and many object names will be
increased to 128 characters long. During the migration process you are essentially
replacing the whole DB2 catalog with a new one. Almost nothing remains unchanged.
DB2 Development has designed the new migration process to be a number of distinct
phases, to break the whole migration task into a number of smaller steps which you can
better manage. Now, you do not have to plan to perform all of the migration tasks at one
time, thereby increasing availability and allowing you to better plan your DB2 upgrade
activities. The migration process to Version 8 also becomes less complicated and therefore
less prone to failure because the one big task is broken down into a number of smaller less
complicated tasks.
The introduction of “phases” has some benefits for you as well, as it addresses the
problems we have described on the previous visual.
It gives you control over when you want to release all the new function in DB2. Application
developers can no longer implement the new features offered by DB2 into their


applications in development and subsequently move them to production, without you


having to first ‘enable’ those DB2 environments for the new features. You can now control
what environments are enabled for the new functions and when. A suggested strategy is to
not enable any DB2 environment for the new function until all of your DB2 environments
have been migrated to Version 8, they are stable, you are happy with them, and you no
longer need to fall back to Version 7. Once you have decided you are not going to fall back
to Version 7, you can first enable the development DB2 environment(s) for the new
function, then the production DB2 environment(s) for the new function.
Planning for DB2 migration and fallback should now be much easier.
DB2 now also checks to see if the fallback SPE (PQ48486) has been applied to your DB2
Version 7 before it allows you to migrate to Version 8. You must apply the fallback SPE and
restart your V7 with the fallback SPE at least once. This restriction is now enforced for both
data sharing and non-data sharing. Previously the restriction was only enforced for data
sharing environments. This enhancement should also reduce the number of fallback errors.
In case the fallback SPE has not been applied when you want to migrate to V8, the following
message appears (Example 11-1) and DB2 will terminate.
Example 11-1. Starting V8 Without the Fallback SPE Applied

DSNR045E -D38C DSNRRPRC DB2 SUBSYSTEM IS STARTING AND IT WAS NOT STARTED IN A
PREVIOUS RELEASE WITH THE FALLBACK SPE APPLIED.
FALLBACK SPE APAR: PQ48486
NEW RELEASE LEVEL: 0000D301
KNOWN LEVEL(S): 0000D202000000000000000000000000000
______________________________________________________________________
In addition, the fallback SPE is smaller, easier to manage, and less error prone.


Overview of V8 Migration Process

[Diagram: V7 → V8 Compatibility Mode → V8 Enabling-New-Function Mode → V8
New-Function Mode.
- V7 to compatibility mode: job DSNTIJTC (CATMAINT UPDATE)
- Compatibility mode to enabling-new-function mode: job DSNTIJNE (CATENFM START)
- Enabling-new-function mode to new-function mode: job DSNTIJNF (CATENFM COMPLETE)
- V7 catalog: EBCDIC, "short names", padded indexes
- During ENFM: a mixed situation of EBCDIC/Unicode, short/long names, (not) padded
  indexes
- V8 new-function mode catalog: Unicode, long names, NOT PADDED indexes on the
  catalog/directory
- Data sharing coexistence of V7 code and V8 code is limited to compatibility mode]


Figure 11-21. Overview of V8 Migration Process CG381.0

Notes:
DB2 Version 8 moves through three distinct phases during migration from Version 7 to
Version 8:
• Version 8 Compatibility Mode (CM):
This is only a transitional phase where no new Version 8 external functions are enabled
(for example, no new long names). Compatibility mode allows you to test and ensure
that the new Version 8 code is functioning correctly and your existing applications are
working correctly under the new code.
• Enabling-New-Function Mode (ENFM):
This is the phase where the catalog and directory are converted to a Version 8
new-function mode catalog. Once again, most of the new V8 functions are not
available.
• New-Function Mode (NFM):
DB2 enters new-function mode after all the migration tasks are complete. All the new
function in DB2 Version 8 is now available.
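To verify which mode a subsystem is currently in, you can issue the DISPLAY GROUP command; in Version 8 it reports the catalog level and mode. The exact output shape is indicative only and may differ by service level:

```
-DISPLAY GROUP DETAIL
```

The MODE field in the resulting DSN7100I message shows an indicator for compatibility, enabling-new-function, or new-function mode.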


Migration now consists of two distinct migration processes to advance DB2 from one phase
to the next:
1. New release migration processing:
This migration consists of “normal” release migration processing as we have had in
previous releases. It uses the CATMAINT utility to convert the DB2 catalog and directly
to Version 8 format. The completion of this migration phase places the DB2 subsystem
into Version 8 compatibility mode.
2. Enabling-new-function mode processing:
This process consists of additional Version 8 conversion processing. Before you
consider moving to new-function mode, you must first ensure that you are happy with
the Version 8 code that is running, that it is in production, and it is stable.
During the enabling-new-function mode phase, the DB2 catalog and directory are
converted to Unicode. This phase can be scheduled over a period of time. After the
completion of this phase, you can move the DB2 into new-function mode. This is the
end of the DB2 Version 8 migration process and all new DB2 Version 8 function is now
available.
Although you can return from “new-function mode” to “enabling-new-function mode” (the
transition phase from “compatibility mode” to “new-function mode”), there is no path to
fallback to DB2 Version 7, once you are either in “enabling-new-function mode” or
“new-function mode”. More information on this will be given later.
This new migration process merely enforces and formalizes what many people already do
today when they plan their migration to a new version of DB2.
We explain the migration phases in more detail in the next few visuals.

Overview of Fallback

[Figure: migration and fallback paths. V7 --(CATMAINT)--> V8 Compatibility Mode
--(DSNTIJNE)--> V8 Enabling-New-Function Mode --(DSNTIJNF)--> V8 New-Function Mode.
The V7 catalog is EBCDIC, with "short names" and padded indexes; the V8 catalog is
Unicode, with long names and NOT PADDED indexes. Fallback to V7 (with the fallback
SPE) and data sharing coexistence apply only while in V8 compatibility mode. From
new-function mode you can return to ENFM (job DSNTIJEN); going back further requires
a restore, so a backup is taken at the start of ENFM for PIT recovery.]
© Copyright IBM Corporation 2004

Figure 11-22. Overview of Fallback CG381.0

Notes:
What do we mean by fallback?
“Returning to a stable DB2 code base after successfully migrating catalog and
directory to Version 8 compatibility mode”.

Fallback from V8 Compatibility Mode to V7


The fallback procedure is the same as for previous versions:
1. Stop Version 8.
2. Restore the Version 7 libraries.
3. Start DB2 under Version 7.
4. Rebind DSNTIAD and SPUFI if required.
5. Run the Version 7 IVP.


However, before a successful fallback can occur, the fallback SPE must have been applied
to your DB2 Version 7 system, and DB2 must have been started at least once under
Version 7 with the SPE on before you attempt to start it on the fallback release. This should
be OK, as DB2 now checks that the fallback SPE has been applied before starting
CATMAINT processing during the migration to compatibility mode. (See the discussion on
“Catalog and DB2 Code Level” in this unit.)
Important: Fallback to Version 7 is ONLY permitted from Version 8 running in
compatibility mode. The structure of the Version 8 catalog is used in Version 7 after
fallback. Once DB2 has entered enabling-new-function mode, or new-function mode,
fallback to Version 7 is not supported.
Falling back to Version 7 from Version 8 compatibility mode does not undo any changes
made to the catalog and directory during a migration to Version 8. The migrated catalog is
used after fallback. Some objects in this catalog that have been affected by new function in
this release might become frozen objects after fallback. Frozen objects are unavailable,
and they are marked with a release dependency indicator. The release dependency
indicator is recorded in the DB2 catalog against each object using the IBMREQD column.
DB2 Version 8 records a release dependency indicator of ‘L’.
Frozen objects should be rare in Version 7 after a successful fallback from Version 8. This
is because much of the new function is not available in Version 8 until new-function mode.
Remember that DB2 does not support fallback from Version 8 new-function mode to
Version 7.
Frozen objects can include plans, packages, views, indexes, tables, table spaces, and
routines. In Version 8, the following objects can become frozen after fallback to Version 7:
• Plans, packages, or views that use any new syntax, objects, or bind options.
• DBRMs that are produced by a precompile in Version 8 with a value of YES for the
NEWFUN option. These DBRMs are in Unicode.
• User-defined functions created in Version 8 with the PARAMETER CCSID option.
• User-defined SQL procedures and functions created in Version 8 with the PARAMETER
CCSID option.
After fallback, all plans or packages that are bound in Version 8 are automatically rebound
on their first execution in Version 7. If you try to use plans or packages that are frozen, the
automatic rebind in Version 7 that takes place the first time that you try to run the plan in
Version 7 fails.
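The release dependency indicator can be queried directly. As a sketch (the columns selected are illustrative; only the IBMREQD = 'L' test comes from this unit), the following SQL lists the plans and packages bound on Version 8 that would be subject to automatic rebind after fallback:

```sql
-- Plans bound on Version 8 carry release dependency indicator 'L'
SELECT NAME, CREATOR
  FROM SYSIBM.SYSPLAN
 WHERE IBMREQD = 'L';

-- Likewise for packages
SELECT COLLID, NAME, VERSION
  FROM SYSIBM.SYSPACKAGE
 WHERE IBMREQD = 'L';
```

Running these before a planned fallback shows how many automatic rebinds to expect on Version 7.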

Returning from V8 New-Function to V8 Enabling-New-Function Mode


Once you are in new-function mode, you can only return to enabling-new-function mode by
running the job DSNTIJNE and re-assembling DSNHDECP with NEWFUN=NO. No
catalog updates that were made in new-function mode will be undone. No objects including
plans and packages are frozen. Plans and packages are not subject to automatic rebinds
and they will continue to work, even if they use any new function or reference any new

function dependent objects. However, after returning to enabling-new-function mode, you
cannot use the new functions any more. For example, you can still use tables that are
defined using table based partitioning; however, you cannot create any new tables using
table based partitioning.
Important: We therefore expect that few installations will use this option. It merely prevents
any new Version 8 function from being used; it does not prevent existing objects that
depend on new Version 8 function from being used.
There is no support for returning from enabling-new-function mode to compatibility mode. If
you need to do this, then the only way is by performing a point-in-time restore of the whole
DB2 environment.
When you are executing under Version 8 compatibility mode, there are no more hiperpools
or buffer pools in data spaces. On fallback to Version 7, DB2 remembers the buffer pool
allocations used in Version 7 and reinstates those values.


Compatibility Mode
• No new release function can be used that may preclude fallback or coexistence
• Some new function is available, such as online REORG of the entire DB2 catalog;
  64-bit benefits are also available now
• Migration to V8 compatibility mode requires normal CATMAINT
  - Time will most likely be determined by the time it takes to scan SYSDBASE
• DSNTIJTC - CATMAINT is a single step
  - Single commit scope - all or nothing
  - Authorization check
  - Ensures catalog is at correct level
  - DDL processing
  - Additional processing and tailoring
  - Directory header page and BSDS / SCA updates
• No need for additional steps to look for unsupported objects


Figure 11-23. Compatibility Mode CG381.0

Notes:
The DSNTIJTC job invokes the CATMAINT UPDATE utility in order to migrate the catalog
and directory to the current release, in this case Version 8. This is the same as all prior
releases of DB2. The end result of this migration process is a current Version 8 catalog.
A successful migration to DB2 Version 8 places the DB2 subsystem into Version 8
compatibility mode. In compatibility mode, no new Version 8 function is available for use
that may compromise a successful fallback to Version 7, or coexistence in a data sharing
environment. To be able to use new Version 8 function, a DB2 subsystem or data sharing
group must first convert their catalog to a new-function mode catalog via the
enabling-new-function mode process (this is described later).
While in compatibility mode, you can already take advantage of some of the new functions
in DB2 V8. For example, as DB2 V8 is always running in 64 bit mode, irrespective of
whether you are in compatibility mode, enabling-new-function mode, or new-function
mode, you can immediately take advantage of the larger buffer pools (provided that you
have enough real storage) and memory relief due to the new 64 bit architecture.

You may choose to rebind your applications while in compatibility mode, to get the benefits
of some of the Version 8 optimizer enhancements (and hopefully better access paths). This
does not pose a problem if you need to fall back to Version 7, as an automatic rebind is
forced for all your Version 8 bound plans and packages anyway. However, while it can be
said that there is no new function available in compatibility mode that would preclude a
successful fallback, it cannot be said that all new function in that category is
available in compatibility mode. So, while rebinding in compatibility mode might be a good
thing, rebinding in new-function mode is a much better strategy.
Attention: In Version 8, DB2 now enforces the restriction that there must be no
outstanding utilities started from prior releases when running the CATMAINT UPDATE
utility on a non-data sharing system; otherwise message DSNU790I is issued. The
reason is that all utilities must be terminated in the same release in which they were started.
The CATMAINT utility does not enforce this restriction for data sharing environments
because it is valid to have Version 7 members active with utilities while CATMAINT is being
run on a Version 8 member to upgrade the catalog.
Migration to DB2 Version 8 is only supported from Version 7. Before attempting to migrate
to Version 8, make sure that Version 7 is at the proper service level. Refer to Figure 11-2,
"Planning for Version 8", on page 11-6 for a discussion of what the proper service level is
for both data sharing and non-data sharing systems.
The end result of the migration from Version 7 to Version 8 is a new, Version 8 catalog.
However, the catalog is still in EBCDIC and is not equipped for using long names at this
point. The processing needed for each catalog migration changes from release to release.
The migration process continues to change and evolve as we continually try to improve the
CATMAINT utility update process. Continuous availability is one of the main driving forces
of these process improvements.
The catalog migration (from V7 to V8 compatibility mode) in Version 8 is a single-step job,
with an all-or-nothing commit scope. There is no longer a need for an extra step to migrate
stored procedure definitions from SYSPROCEDURES to SYSROUTINES, as was performed
in previous versions. In addition, the CATMAINT utility is now designed to fail if it finds any
unsupported functions, so there is no need for an extra step to report on them. The job
DSNTIJPM is provided to report on any unsupported functions; you can run it prior to
running the CATMAINT utility.
The CATMAINT UPDATE utility performs the following tasks:
• It adds entries in the catalog and directory for new catalog objects:
Briefly, two new table spaces are created (SYSALTER and SYSEBCDC) and three new
tables are created (SYSIBM.IPLIST, SYSIBM.SYSSEQUENCEAUTH, and
SYSIBM.SYSOBDS). In addition, new columns are added to existing tables and other
columns have new values. Some new indexes are defined and some new constraints
are created. Refer to “DB2 Catalog Changes” in this unit for a little more detail on what


will change, or the DB2 SQL Reference, SC18-7426 for a more complete description of
the DB2 catalog changes.
When changing the tables in the catalog, DB2 also regenerates the views that may exist
on these tables. When that process fails, these views are marked with view
regeneration errors. To determine which views were marked with view regeneration
errors during migration, issue the following query:
SELECT CREATOR,NAME
FROM SYSIBM.SYSTABLES
WHERE TYPE = ’V’
AND STATUS = ’R’
AND TABLESTATUS = ’V’
If any views have view regeneration errors, you can issue the following statement,
where view is the name of the view with regeneration errors:
ALTER VIEW view REGENERATE
• The CATMAINT UPDATE utility makes additional updates to the migrated catalog and
directory, to indicate the new “level” of the catalog and directory.
• This utility looks for any unsupported objects, and data in the catalog. Before going
ahead with the conversion to a Unicode catalog (during enabling-new-function mode
processing), DB2 wants to make sure that this conversion has a maximum chance of
success. Therefore, during CATMAINT processing, DB2 will check for unsupported
characters in the catalog, by scanning a number of DB2 catalog table spaces. Of those
catalog table spaces, SYSDBASE is usually the largest. Therefore, you can expect the
elapsed time of the CATMAINT utility run to be close to the time it takes to scan
SYSDBASE.
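For reference, the utility control statement behind all of this processing is a single statement; a sketch of the SYSIN that job DSNTIJTC passes to the utility (the surrounding JCL is installation-specific and omitted here):

```
CATMAINT UPDATE
```

The single statement matches the single-step, all-or-nothing commit scope described above.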
Version 8 CATMAINT processing is terminated if any Type 1 indexes are found in the DB2
catalog. All CATMAINT processing will be rolled back if any Type 1 indexes are found. It
should be noted that CATMAINT processing is terminated when the first Type 1 index is
discovered. There could be others. The queries provided in the DSNTESQ and DSNTIJPM
jobs should be used to identify all the unsupported objects in a catalog.

Compatibility Mode Considerations
• Unicode parser
  - DB2 now parses all SQL in Unicode
  - SQL written to IFCIDs can be in Unicode (UIFCIDS DSNZPARM)
  - Message DSNH330I can now occur in some applications, which must be
    converted to Unicode
• String constants may now exceed the maximum length
  - DB2 must convert string constants to Unicode before using them
  - Some characters have a longer representation in Unicode,
    for example: “¬”
  - Message DSNH102I or SQLCODE -102
• Buffer pools may require more memory


Figure 11-24. Compatibility Mode Considerations CG381.0

Notes:
Here we discuss some considerations regarding compatibility mode.
Unicode Parser
• DB2 V8 always parses SQL statements in Unicode UTF-8 (CCSID 1208). Therefore, if
an application program uses a different CCSID (such as an EBCDIC CCSID), DB2
converts the application program to Unicode for processing by the precompiler. This
conversion occurs for the entire application program (to allow the precompiler to check
the host language definitions), not just for SQL statements.
If certain errors occur in host language statements (not just in SQL statements), the
message DSNH330I can appear. Examples of such errors are an invalid code point, a
mismatch between shift-in and shift-out, and absence of half of a DBCS character.
These may have existed already anywhere in the program source code, even in
comment lines, but were not detected until now, when the entire source code must be
converted to Unicode.


Programs that execute dynamic SQL are no different. DB2 must convert the program to
Unicode to parse the code to discover PREPARE, DESCRIBE, EXECUTE, OPEN,
FETCH, and CLOSE statements, etc. During the processing of the PREPARE
statements, DB2 must convert the contents of the host variables containing the SQL
statements to Unicode before they can be processed.
• Various contexts of string constants have various maximum lengths. To check the
length of a string constant, DB2 uses the Unicode UTF-8 representation of the string
constant, even if the eventual destination of the constant (for example, a column of a
table) does not use UTF-8. This length might differ from the length that you entered, if
you entered the string constant in a CCSID other than UTF-8.
For example, the character “¬” requires one byte in EBCDIC, but two bytes in UTF-8.
Therefore, in some contexts, a string that was valid in Version 7 might be flagged as too
long in Version 8, and you will receive the message DSNH102I (in a precompilation) or
SQLCODE -102 (otherwise).
This incompatibility can occur only if both of the following conditions exist:
a. The string contains one or more characters whose UTF-8 representations require
more bytes than their original representations did.
b. This expansion causes the string to grow beyond the maximum allowed length.
There are two groups of contexts in which this incompatibility arises. In the first group of
contexts, such a string is flagged as too long in any mode of DB2 Version 8, because
the maximum permitted lengths did not grow in Version 8:
- In ALTER INDEX or CREATE INDEX, VALUES (constant)
- In ALTER TABLE or CREATE TABLE, FIELDPROC program-name (constant)
- In ALTER TABLE or CREATE TABLE, CHECK (check-condition)
- In ALTER TABLE, CREATE TABLE, and DECLARE GLOBAL TEMPORARY
TABLE, under DEFAULT, in a constant
- In COMMENT ON, IS string
- In LABEL ON, IS string
- SET CURRENT LOCALE LC_CTYPE
- SET CURRENT OPTIMIZATION HINT
- SET CURRENT SQLID
- In SIGNAL SQLSTATE, diagnostic-string-constant.
In the second group of contexts, such a string is flagged as too long in compatibility
mode and enabling-new-function mode, but the string is not flagged as too long in
new-function mode, because new-function mode allows longer strings:
- CASE expression
- CALL procedure-name (expression)
- Expression in a predicate
- Expression in a WHERE clause
- Expression in a HAVING clause
- Expression in a SELECT clause in a subselect

- Expression specified in a built-in function
- Join expression

• While all trace records remain in EBCDIC, you have a DSNZPARM option to write all
SQL statements into IFCID records in Unicode UTF-8, not in EBCDIC. The DSNZPARM
parameter is UIFCIDS. When using the default value (NO), DB2 will continue to write
the SQL statements into IFCID records as EBCDIC.
Only a subset of the character fields (identified in the IFCID record definition by a “%U”
in the comment area to the right of the field declaration in the DSNDQWxx copy files)
are encoded in Unicode. The remaining fields maintain the same encoding of previous
releases.
Buffer Pool Considerations
When you are executing under Version 8 compatibility mode, there are no more hiperpools
or buffer pools in data spaces. On migration to Version 8 compatibility mode, DB2 allocates
buffer pools that are equal in size to the equivalent virtual buffer pools plus any equivalent
hiperpool size. (Version 8 also imposes a restriction that no one buffer pool can be larger
than 1 TB, nor can the sum of all buffer pools exceed 1 TB.) On fallback to Version 7, DB2
remembers the buffer pool allocations used in Version 7 and reinstates those values.
Important: If you currently use hiperpools and/or buffer pools in data spaces, we
recommend that you review your buffer pool allocations before migration to Version 8
compatibility mode. (Refer to the discussion above regarding how buffer pools sizes are
migrated to Version 8.)
This recommendation is to check that you will have enough real memory to back the
memory that DB2 Version 8 will use for buffer pools. If you do not perform this “sanity
check” before migrating to Version 8, you may see an unexpected leap in the amount of
memory DB2 Version 8 demands. This may adversely impact performance.
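A sketch of this sanity check on Version 7 (the command shown is the standard display command; totaling its output is left to you):

```
-DISPLAY BUFFERPOOL(ACTIVE) DETAIL
```

For each active pool, add the reported virtual pool size and hiperpool size; the total across all pools approximates the real storage that the Version 8 buffer pools will demand.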


Coexistence in Data Sharing
• Coexistence only between V7 and V8 CM
  - Must have fallback SPE (PQ48486) on all V7 members
  - Enforced for both data sharing and non-data sharing
  - Information in the BSDS / SCA is checked at DB2 startup time to
    ensure that all group members have the SPE on
  - New starting member's code level checked to ensure coexistence
    with the catalog
• NO coexistence between ENFM and CM
  - Once ENFM begins, all active members of the group are considered to be
    in ENFM status
  - Cannot start a V7 subsystem
  - Could start a V8 member; it would join in ENFM status
  - ENFM (and NFM) modes are group-wide events


Figure 11-25. Coexistence in Data Sharing CG381.0

Notes:
DB2 data sharing in coexistence mode has its own complexities. (Please refer to the
manual, DB2 Data Sharing: Planning and Administration, SC18-7417 for more details.)
Coexistence of multiple releases of DB2 is particularly of interest in a data sharing
environment. The objective is that a data sharing group can be continuously available for
any planned reconfiguration, including release migrations. So, if you have a data sharing
group consisting of all Version 7 DB2 members, you can migrate your group to Version 8 by
“rolling-in” the release migration one member at a time (similar to the way you would roll in
a PTF for an APAR fix), thus keeping at least a subset of the DB2 members available at all
times. During the “rolling-in” process, you will have a period of time where there are Version
7 and Version 8 members coexisting within the same data sharing group.
DB2 will support the coexistence of only two releases at a time. DB2 does not support the
coexistence with a Version 7 DB2 member if any Version 8 DB2 member is in new-function
mode. Coexistence is only supported in compatibility mode. During this period of
coexistence, any new function available in Version 8 CM is not available to the downlevel
members.


All DB2 members MUST be in Version 8 compatibility mode before you move to
enabling-new-function mode.
During a migration to Version 8, the other group members may be active. CATMAINT
processing will take whatever locks are necessary for the processing it needs to do. The
other active group members may experience delays and/or time-outs if they try to access
the catalog objects that are being updated or locked by migration processing. We therefore
recommend that migration to compatibility mode be scheduled during a period of low
activity, or preferably, only have one DB2 subsystem active for the migration. Once the
migration to Version 8 completes, the other group members can be brought up to Version 8
at any time.
Catalog and DB2 Code Level
DB2 Version 8 more strictly enforces the concept of “Catalog and DB2 Code Level”. The
level of DB2 code (PTF level) must match the level that is recorded in the DB2 catalog.
This check ensures that you are starting DB2 with the right SDSNLOAD library, to avoid
any catalog corruption caused by executing the wrong code base against the DB2 catalog.
Non-data sharing: At DB2 startup time, the code level of the starting DB2 is checked
against the code level required by the current DB2 catalog. If the starting DB2 has a code
level mismatch with the catalog then the message DSNX208E is issued and DB2 will not
start. A code level mismatch indicates that the starting DB2 is at a level of code that is
down level from what it needs to be for the current catalog.
If the catalog has been migrated to Version 8, then the starting DB2 must be at Version 8 or
Version 7 with the fallback SPE (PQ48486) on.
Before attempting to migrate to Version 8, you must start DB2 at least once as Version 7
with the fallback SPE on. If this is not done, the Version 8 migration is terminated. This is a
change from the way the fallback SPE was handled for non-data sharing in previous
releases.
Data sharing: At DB2 startup time, the code level of the starting DB2 is checked against
the code level required by the current DB2 catalog and against the code level of the other
DB2s that are active. If the starting DB2 has a code level mismatch with the catalog or any
of the other DB2s that are running then either the DSNX208E or DSNX209E message will
be issued and DB2 will not start.
A code level mismatch indicates that the starting DB2 is at a level of code that is down level
from what it needs to be for the current catalog, or that one or more of the already running
DB2s are down-level from where they need to be. Before attempting to migrate to Version
8, all started DB2 subsystems must have maintenance through the Version 8 fallback SPE
on before any attempt is made to migrate any member to Version 8. If the fallback SPE is
not on all active group members, then DB2 Version 8 will not start and you will not be able
to attempt the Version 8 migration. One of the messages, DSNX208E or DSNX209E, will
be issued in these cases.
Quiesced DB2 members are not a concern because it may be valid to have a quiesced
member that does not have the right code level. The quiesced member may be a DB2


member that is no longer used or is rarely started. The quiesced member will fail to start
anyway if it is not at the right code level.
Call Attachment and TSO Attachment Coexistence
While you are in a coexistence environment, you can attach to either release of DB2 with
your existing TSO logon procedures or with JCL. After you migrate all members of the
group to the latest level of DB2, Version 8, update those procedures and jobs to point to the
latest level of DB2 load libraries.
Avoiding Automatic Rebinds
When planning for migration, new functions introduced in the new release are not available
to members of the group who have not yet migrated.
Plans and packages that have the new release dependency indicator set (column
IBMREQD = “L” in the tables SYSIBM.SYSPLAN and SYSIBM.SYSPACKAGE), cannot be
executed on members who have not been migrated. An automatic rebind must first occur
on a DB2 member running the old release. When the plan/package is re-executed on the
newly migrated member again, another automatic rebind must occur. This can lead to a
“thrashing” situation, where plans and packages are continually being rebound on Version
7 and Version 8 as they are being executed.
A number of strategies exist to avoid this thrashing. For example, do not allow packages
and plans bound on Version 8 to execute on members that have not yet been migrated, or
do not allow plans or packages to be bound on Version 8 until all members are migrated.
This serves two purposes. First, if those Version 8 bound plans or packages are using new
functions, you can avoid the application errors that occur if the plan or package tries to
execute an SQL statement that is not allowed in the release from which you are migrating
(Version 7). Second, it avoids the automatic rebind that occurs when any plan or package
that is bound on Version 8 is run on the previous release. It also avoids the automatic
rebind that occurs when a Version 8 bound plan or package that was automatically rebound
on the previous release is later run on Version 8.
If it is not possible to enforce on which member a plan or package runs, consider how you
want to handle binds and automatic rebinds while two releases are coexisting. One
approach is to disallow all binds and disable all automatic rebinds on the Version 8
subsystem. The other approach is to disable only those automatic rebinds that occur on
Version 8 in Step 3 of the following scenario:
1. A plan or package is bound on Version 8.
2. The plan or package is run on a non-Version 8 member (automatic rebind occurs on the
non-Version 8 member).
3. The plan or package is run on Version 8 (automatic rebind occurs on Version 8).
Because DB2 perceives this scenario as a fallback and remigration scenario, the autobind
that occurs in Step 3 is called a remigration rebind. By disallowing the automatic rebind in
Step 3, you are avoiding the thrashing that can occur by having the plan or package rebind
every time it runs on a member of a different level.


Disallowing all binds: You can specify NO on the AUTO BIND field of installation panel
DSNTIPO for all Version 8 members (ABIND parameter in DSNZPARM). This disables all
automatic rebinds on the Version 8 member for any reason. You need to also use the
resource limit facility to disallow BIND operations. Do this by inserting rows in the resource
limit specification table (RLST) to set RLFFUNC to “1” and RLFBIND to “N”. This ensures
that nobody binds plans or packages on Version 8.
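As a sketch of the RLST row described above: assuming the active RLST is SYSADM.DSNRLST01 (the qualifier and the 01 suffix are installation choices, not from the text), a catch-all row that disallows binds could look like this. Only RLFFUNC = '1' and RLFBIND = 'N' are prescribed here; all other columns are left to their defaults.

```sql
-- Catch-all RLST row: function '1' (bind limits), binds not allowed
INSERT INTO SYSADM.DSNRLST01 (RLFFUNC, RLFBIND)
VALUES ('1', 'N');
```

If the resource limit facility is not already active with this table, it is typically started with the -START RLIMIT command.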
Disallowing only the automatic remigration rebind: To avoid the automatic remigration
rebind, specify COEXIST for the AUTO BIND field on installation panel DSNTIPO of the
Version 8 members. This means that automatic rebind occurs on Version 8 only in the
following circumstances:
• The plan or package is marked invalid.
• You migrate to a future release, bind a plan or package on that release, and then run
the plan or package on Version 8.
Recommendations for BIND
If the DSN TSO command is at Version 8 and the DB2 member that is named in the DSN
command is at Version 7, using certain bind options causes a BIND or REBIND
subcommand to be rejected. (The list of options differs, depending on the release from
which you are migrating.). If you are migrating from Version 7, the ENCODING option on
BIND and REBIND PLAN or PACKAGE will cause the BIND and REBIND subcommand to
fail.
To avoid problems, make sure the DB2 subsystem named in the DSN subcommand
matches the load libraries that are used for the DSN command.
Recommendations for Utilities
Until all members of the data sharing group are running at the new release, avoid using any
of the new utility functions available in Version 8 compatibility mode. However, as long as
you use utility options that are supported in Version 7, utilities can attach to a member at
either a Version 7 or Version 8 subsystem.
Recommendation for Group Restart
If a group restart is necessary while the data sharing group is running with mixed releases,
issue the START command only for Version 8 members. Do not start the Version 7
members until the Version 8 members have completed forward log recovery. If a Version 7
member performs the group restart for a Version 8 member, Version 7 adds pages to the
logical page list during the peer-forward recovery phase when it tries to apply redo log
records against a release-dependent object.
Recommendation for SPUFI
When you migrate the first member of the data sharing group to Version 8, you run
DSNTIJSG which rebinds SPUFI in Version 8. Binding SPUFI in Version 8 causes SPUFI
to be unavailable to the Version 7 members. If you attempt to run an SQL statement in a
data sharing member that is yet to be migrated to Version 8, expect messages that indicate
an unavailable resource.


An alternative is to modify job DSNTIJSG and defer the SPUFI BIND until all members
have successfully been migrated to Version 8.

Coexistence with DDF
• Currently NO restriction on DRDA processing
• Local package enforced for "section 0" statements
• Other versions of DB2 can converse with DB2 Version 8 as today
• Important: Of course, the conversation will reflect the capabilities of
  the level of DRDA that the other subsystem can support


Figure 11-26. Coexistence with DDF CG381.0

Notes:
DB2 Version 8 communicates in a distributed data environment with DB2 UDB for z/OS
Version 6 and later, using either DB2 private protocol access or DRDA access. However,
the distributed functions introduced in Version 8 can be used only when using DRDA
access.
Other DRDA partners at DRDA Version 3 can also take advantage of the functions that are
introduced in this release of DB2.

Package Required for “Section 0” Statements
The PTF for APAR PQ59207 (V7) made some changes to the packages you need to have
at the local site. When you use one of the so-called “Section 0 SQL statements” and you
do not have a package or DBRM bound into the local plan, this PTF requires you to provide
one for these statements. The affected “Section 0 SQL statements” are: CONNECT,
COMMIT, ROLLBACK, DESCRIBE TABLE, RELEASE, SET CONNECTION,
SET :HV = CURRENT SERVER, and VALUES CURRENT SERVER INTO :HV.
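If an application issues only such statements through DBRMs bound directly into a plan, one way to satisfy the new rule is to bind a package for the same DBRM at the local site and add it to the plan's package list. A hedged sketch; the collection, program, and plan names (MYCOLL, MYPROG, MYPLAN) are hypothetical:

```
BIND PACKAGE(MYCOLL) MEMBER(MYPROG) ACTION(REPLACE)
BIND PLAN(MYPLAN) PKLIST(MYCOLL.*) ACTION(REPLACE)
```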


The SET CURRENT PACKAGESET, SET :HV = CURRENT PACKAGESET, and VALUES
CURRENT PACKAGESET INTO :HV statements are not affected. By default, DB2
enforces this new rule and requires you to have a package at the local site. In Version 7,
specifying YES for the PKGLDTOL DSNZPARM (the default is NO) allowed you to keep
the same behavior as before the introduction of this PTF. In the DB2 V8 base, this
restriction is always enforced, and the PKGLDTOL DSNZPARM has been removed.
Also note that two-phase commit is no longer supported when you use DB2 Connect
Version 8 to access a DB2 UDB for z/OS (any supported version) using SNA. When
coming from DB2 Connect V8, in order to be able to use two-phase commit, you must use
TCP/IP.

Enabling-New-Function Mode
The process by which the catalog is converted from V8
Compatibility Mode to V8 New Function Mode
This is NOT optional. Everyone must run the Enabling New Function
Mode before migrating to a later release.
During ENFM processing:
Change types and lengths of existing catalog columns
Converts the catalog data from EBCDIC to Unicode
Using online REORG
Change buffer pools for some catalog table spaces
Change data capture (CDC) is disabled on every catalog table that
has it enabled
Must manually re-enable CDC
ALTER TABLE SYSIBM.SYSTABLES DATA CAPTURE CHANGES
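The CDC bullet above implies a manual inventory step. A small catalog query, run before entering ENFM, records which catalog tables currently have change data capture enabled so that each one can be re-enabled afterwards; a sketch:

```sql
-- Run before ENFM: list catalog tables with CDC enabled, so each
-- can be re-enabled later with ALTER TABLE ... DATA CAPTURE CHANGES.
SELECT CREATOR, NAME
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR = 'SYSIBM'
   AND DATACAPTURE = 'Y';
```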


Figure 11-27. Enabling-New-Function Mode CG381.0

Notes:
Enabling-new-function mode (ENFM) converts a Version 8 compatibility mode (CM)
catalog to a Version 8 new-function mode (NFM) catalog.
Conversion to a DB2 Version 8 new-function mode catalog is only allowed after a
successful migration has been completed to DB2 Version 8 compatibility mode. (DB2
checks the level of catalog and DB2 code before it will successfully enter
enabling-new-function mode.)
The following tasks are performed in the enabling-new-function mode:
1. The catalog is flagged as being in enabling-new-function mode.
2. The types and/or lengths of existing catalog columns are changed.
Many columns in many tables change data type and/or length to support long names.
Typically, character identifier columns change to VARCHAR columns three times the
size (for example, CHAR(8) columns are converted to VARCHAR(24)), and other
columns change to VARCHAR(128).


When these columns are converted to VARCHAR, they are converted with their
maximum previous length. For example, a CHAR(8) column containing “A” (followed by
7 blanks) is converted to VARCHAR(24) with a length of 8, still containing “A” followed
by 7 blanks. DB2 does not know how many of these trailing spaces are valid. However,
when these objects are created without trailing spaces under Version 8, DB2 stores
them with their true length.
3. Columns that contain data that should not be converted are marked FOR BIT DATA.
For example, this is the case with the CONTOKEN and STMT columns in
SYSPACKSTMT, and the TEXT column in SYSSTMT.
Tip: Remember that in V8, all statements are parsed in Unicode and are normally stored
in the catalog in Unicode (once you are in NFM). With the TEXT and STMT columns
marked “FOR BIT DATA”, they do not get translated to EBCDIC when you are using
SPUFI, for example. To be able to read the statement text stored in the DB2 catalog,
you can use Visual Explain. The tool has been enhanced (as have all regular DB2
Tools) to support Version 8.
4. The catalog is converted to Unicode.
The following catalog and directory table spaces are converted to Unicode via special
Online Reorg processing:
- Directory (DSNDB01):
• SPT01
- Catalog (DSNDB06):
• SYSDBASE
• SYSDBAUT
• SYSDDF
• SYSGPAUT
• SYSGROUP
• SYSGRTNS
• SYSHIST
• SYSJAVA
• SYSOBJ
• SYSPKAGE
• SYSPLAN
• SYSSEQ
• SYSSEQ2
• SYSSTATS
• SYSSTR
• SYSUSER
• SYSVIEWS
The following catalog and directory table spaces are not converted to Unicode and
remain as EBCDIC. Many of these tables already contain binary data so there is no
need to convert them to Unicode. They also contain many logical names that must

interface with external MVS names (for example, data set names), which remain
EBCDIC.
- Directory:
• DBD01
• SCT02
• SYSLGRNX
• SYSUTILX
- Catalog:
• SYSCOPY
• SYSEBCDC
During the online REORG process, the system-defined indexes on the catalog are
converted to NOT PADDED.
Please note that despite having Unicode and the z/OS Conversion Services active, DB2
still uses the table SYSIBM.SYSSTRINGS for character conversion support.
For character conversion, DB2 performs these steps:
a. First it looks in SYSIBM.SYSSTRINGS for the combination of source and target
CCSIDs it needs for conversion.
b. Then it turns to the z/OS Conversion Services.
If nothing is found, then an error is returned.
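The lookup order above can be checked for a given conversion with a query against SYSSTRINGS; the CCSID pair shown (37, an EBCDIC CCSID, to 1208, UTF-8) is only an example:

```sql
-- If no row is returned for the source/target CCSID pair,
-- DB2 turns to the z/OS Conversion Services instead.
SELECT INCCSID, OUTCCSID, TRANSTYPE
  FROM SYSIBM.SYSSTRINGS
 WHERE INCCSID = 37
   AND OUTCCSID = 1208;
```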
5. Buffer pools for some catalog table spaces are changed.
Increasing the length of some of the catalog table columns causes many catalog table
rows to exceed the current BP0 4K page size maximum. In these cases, the table
spaces that contain these tables are moved to an appropriately sized buffer pool. DB2
creates a BP8K0 buffer pool and a BP16K0 buffer pool during migration from Version 7
to Version 8 compatibility mode, if they don’t already exist. Once in Version 8
compatibility mode, you cannot use the -ALTER BUFFERPOOL command to delete
these buffer pools because they must exist before you use the REORG utility to convert
the catalog tables to Unicode, in enabling-new-function mode.
Table 11-2 shows the table spaces that move to new buffer pools.
Table 11-2 New Catalog Table Space Buffer Pools
Table space name    Buffer pool    Page size
SPT01               BP8K0          8 K
SYSDBASE            BP8K0          8 K
SYSGRTNS            BP8K0          8 K
SYSHIST             BP8K0          8 K
SYSOBJ              BP8K0          8 K
SYSSTR              BP8K0          8 K
SYSSTATS            BP16K0         16 K
SYSVIEWS            BP8K0          8 K

If you are running a data sharing system, it is your responsibility to define these new
group buffer pools before you move into enabling-new-function mode.
In addition, if you specified “YES” in field 6 “VARY DS CONTROL INTERVAL” on install
panel DSNTIP7 (during MIGRATE mode), the installation CLIST configures DSNTIJNE
to use variable CI sizes for DB2 catalog and directory data sets that have a page size
greater than 4 K.
6. Old catalog tables are dropped.
The SYSIBM.SYSLINKS and SYSIBM.SYSPROCEDURES catalog tables are dropped
from the catalog, as they are no longer used.
The contents of SYSIBM.SYSPROCEDURE was moved to SYSIBM.SYSROUTINES
when DB2 was migrated from Version 5 to Version 6 or 7. The table
SYSIBM.SYSLINKS is no longer used in DB2 Version 8.
7. Catalog tables are moved.
The SYSIBM.SYSDUMMY1 catalog table is being moved from the SYSSTR catalog
table space to the new Version 8 SYSEBCDC catalog table space (which remains
encoded in EBCDIC). This table space is created during migration by the CATMAINT
utility.
SYSIBM.SYSDUMMY1 is probably already entrenched in many applications today. If it
were converted to Unicode, all SQL statements accessing SYSIBM.SYSDUMMY1
would become multiple CCSID statements. Results of multiple CCSID SQL statements
can be different from single CCSID statements; for example, the ordering of result sets,
or the use of range predicates, can affect the result set. This could cause unnecessary
complications for existing applications.
During enabling-new-function mode, views on the catalog tables in the table space being
processed are regenerated. However, plans and packages dependent on those catalog
tables are invalidated. DB2 attempts to automatically rebind these invalidated plans
and/or packages the next time they are needed.
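Before starting ENFM, you can list the plans and packages that depend on catalog tables, and are therefore candidates for invalidation, with catalog queries along these lines (a sketch; adjust the predicates to your needs):

```sql
-- Plans that depend on tables owned by SYSIBM (the catalog)
SELECT DISTINCT DNAME
  FROM SYSIBM.SYSPLANDEP
 WHERE BCREATOR = 'SYSIBM'
   AND BTYPE = 'T';

-- Packages that depend on tables owned by SYSIBM
SELECT DISTINCT DCOLLID, DNAME
  FROM SYSIBM.SYSPACKDEP
 WHERE BQUALIFIER = 'SYSIBM'
   AND BTYPE = 'T';
```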
Once a table space is converted to Unicode, there is no way to go back for that table
space. All converting table spaces must be converted to Unicode before you can change
from enabling-new-function mode to new-function mode.
This enabling-new-function mode processing phase can last for quite some time as the
table space changes may be staged in over a number of “conversion” windows. However,
we do not recommend staying in this mode for a long period of time.

ENFM - When Should I Start?
When you are sure you will not want to fallback to compatibility
mode
Your applications are stable in CM
Version 8 is stable in CM
Enabling process CAN span maintenance windows
You have the flexibility to plan the schedule
Can be stopped after the REORG of any table space
Can be restarted without modification
Skips already processed table spaces
Resumes processing at first table space not successfully
converted
Recommendation is NOT to stay in ENFM phase any longer
than necessary


Figure 11-28. ENFM - When Should I Start? CG381.0

Notes:
We recommend that you plan to enable new-function mode in DB2 Version 8 only after you
have had time to verify that your applications are stable running with the Version 8 code in
compatibility mode and that the DB2 Version 8 code itself is stable in compatibility mode.
Allow yourself adequate time to complete this testing. Typically, you may choose to run in
Version 8 compatibility mode for a month or two before you plan to move into ENFM.
Once you have decided to enter ENFM, there are a few things for you to consider when
you schedule the work.
• The enabling process can span a number of windows. It all does not have to be
performed at the same time. The enabling job which performs the conversion can be
stopped and restarted at any time (more details will be provided later). It will skip table
spaces it has already processed and resume processing from where it left off.
• Although DB2 can process other work while the conversion process takes place, we
recommend that you schedule the conversion work at a quiet time. DB2 uses the
ONLINE REORG Utility with SHRLEVEL REFERENCE to transform each table space.


This means that DB2 needs exclusive access to the catalog table spaces for only a
very short period of time. However, applications may still be impacted by resource
unavailable conditions while the conversion process is active.
• While the ENFM conversion process can take as long as you need, we recommend that
you plan to complete ENFM sooner rather than later. Do not plan to stay in ENFM any
longer than necessary. The sooner you move to new-function mode, the sooner you will
be able to use the new function provided by DB2 Version 8.

A Checklist Before ENFM
Run online REORGs (in V8 CM) against the catalog to check:
Timings
Elapsed times
Plan staged execution around outage windows
Review catalog data set sizings
Eliminate space failures during ENFM
Consider increasing the size of the catalog table spaces and
index spaces to accommodate longer names
Review space for any user defined catalog indexes
For declared temporary tables
Ensure at least one table space with page size of 8K or greater
For data sharing
Ensure GBP8K0, GBP16K0, GBP32K is defined


Figure 11-29. A Checklist Before ENFM CG381.0

Notes:
This visual highlights a few activities you should consider performing before you begin the
enabling-new-function mode processing. Although they are not mandatory, we highly
recommend that you perform them:
• Run online REORG against the DB2 Version 8 catalog in compatibility mode.
This will provide you with some timings which should help you plan and schedule the
ENFM jobs around available windows of low activity. It will help to verify the integrity of
your DB2 catalog tables prior to conversion and also provide an opportunity to resize
any table space, if you need to, before conversion.
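A hedged sketch of the kind of utility statement involved; the table space and copy option shown are illustrative, and catalog REORG has its own restrictions, so check the Utility Guide for your maintenance level first:

```
REORG TABLESPACE DSNDB06.SYSHIST
  SHRLEVEL REFERENCE
  COPYDDN(SYSCOPY)
```

Timing a run like this against each of the 18 table spaces to be converted gives a rough upper bound for each ENFM conversion window.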
• Review the size of your existing DB2 catalog data sets.
During the ENFM, many catalog tables will increase in size to accommodate new
column definitions for long names. We strongly advise you to have adequate space
available for the expected growth in the catalog tables before you begin to convert
them. This will help avoid conversion failures due to space problems.


The size of these catalog and directory table spaces will change as a result of the
Unicode conversion. Many columns are increasing in length to support long names.
Unicode is a variable-length encoding scheme. Anywhere from 1 to 4 bytes are required
to store a character (for example, the character “¬” requires 2 bytes in UTF-8).
During lab measurements on real customer catalogs, catalog tables grew by no more
than 10%, but more typically between 1% and 5%. In fact, some catalog table spaces
have decreased in size (merely due to the fact that they were reorganized, and dead
space was reclaimed).
The size of any user-defined index on the DB2 catalog is likely to grow substantially
during the conversion process. This is because many column lengths increase to 128
bytes, to support long names. Unlike the system-defined DB2 catalog indexes, which
are converted from PADDED to NOT PADDED during the conversion process, any
user-defined indexes on the catalog will remain PADDED, and therefore require more
space.
Once you are in new-function mode, you can ALTER these user-defined indexes to
NOT PADDED and rebuild them to reclaim the space. Alternatively, you may decide to
DROP these user-defined indexes before you proceed with the ENFM process.
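The NOT PADDED conversion described above can be sketched as follows; the index name MYID.MYCATIX is hypothetical, and this only works once you are in new-function mode:

```sql
-- Convert a user-defined catalog index to NOT PADDED (NFM only),
-- then rebuild it to reclaim the space used for padding.
ALTER INDEX MYID.MYCATIX NOT PADDED;
```

followed by a REBUILD INDEX (MYID.MYCATIX) utility step.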
• Ensure that there is at least one table space with page size of 8K or greater in the
TEMP database.
DB2 Version 8 now requires at least one table space to be available in the TEMP
database which has a page size of 8K or greater. This is to support declared temporary
tables.
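If the TEMP database does not yet contain such a table space, one can be added beforehand; a hedged sketch in which the database name TEMPDB and table space name TEMP8K01 are hypothetical (table spaces in the TEMP database must be segmented):

```sql
-- BP8K0 supplies the 8K page size that V8 requires for
-- declared temporary tables.
CREATE TABLESPACE TEMP8K01
  IN TEMPDB
  BUFFERPOOL BP8K0
  SEGSIZE 16;
```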
• Ensure that GBP8K0, GBP16K0, and GBP32K are defined.
For data sharing, you need to define the group buffer pools GBP8K0, GBP16K0,
and GBP32K. The DB2 catalog now uses BP0, BP8K0, and BP16K0 for DB2 catalog
objects.
When using the installation CLIST to generate the ENFM jobs, the panels will force you
to put in a value for a virtual 8K and 16K buffer pool.

ENFM in Data Sharing
Cannot start a V7 member once ENFM begins
Regardless of whether or not ENFM process is running
Now in ENFM phase
Cannot begin ENFM process if any active V7 subsystems
in group
ENFM is group wide
Once ENFM begins, the entire group is in ENFM
Only done once for the data sharing group


Figure 11-30. ENFM in Data Sharing CG381.0

Notes:
The important point to note here is that enabling-new-function mode is a data sharing
group wide activity. You perform it only once for the whole data sharing group.
Enabling-new-function mode cannot begin until all data sharing members are at a Version
8 level of code, running in compatibility mode.
Once you have entered enabling-new-function mode, you cannot start a DB2 Version 7
member.


No "Going Back" to V8 CM
Why return to Version 8 compatibility mode?
Possibly only -- to then fallback to Version 7
Restore entire system from backup copy is ONLY option
Important message - "Do not short change compatibility mode"
Make sure you have been through all the major cycles
(1 month to 3 months)
At least in testing, before moving on through ENFM to NFM
It is very difficult to go back

Attention: You cannot fallback from Version 8 NFM.
Do NOT convert to NFM until you are certain that you will not need to fallback.

Figure 11-31. No “Going Back” to V8 CM CG381.0

Notes:
Fallback is defined as going back to a stable code base. By returning to compatibility
mode, we are not changing the code base: we are running with the same Version 8 code
we are executing in enabling-new-function mode. So, fallback to compatibility mode is
rather meaningless. However, there may be some reasons why you may want to “return”
to compatibility mode.
Remember that once the first step of job DSNTIJNE completes successfully, your DB2
subsystem is deemed to be in enabling-new-function mode and there is no returning to
Version 8 compatibility mode. This is true for both data sharing and non-data sharing
environments.
For this reason we recommend that you spend an appropriate amount of time in
compatibility mode before moving to the next phase. Make sure you have performed
adequate testing, the environment is stable and you are comfortable with the code before
you plan to move to the next phase.

The restriction that you cannot go back to compatibility mode should not cause any
problems, because the need to return from ENFM to CM should be very rare. The only
reason to return may be to subsequently fall back to Version 7.
If you absolutely need to return to Version 8 compatibility mode, the only option
available to you is to perform a point-in-time recovery of both the catalog and directory to a
point before ENFM was entered. This implies a point-in-time recovery of the entire DB2
subsystem or data sharing group! You can use whatever means you have available to
perform this PIT recovery, online via DB2 utilities or offline via pack restores. However,
remember that all updates that were done while in ENFM or NFM will be lost!
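A hedged sketch of the DB2-utility flavor of such a PIT recovery; the table space shown is just one of the many catalog and directory spaces that must all be recovered, and X'...' is a placeholder, not a real log point:

```
RECOVER TABLESPACE DSNDB06.SYSDBASE
  TORBA X'...'
```

The documented catalog/directory recovery order must be followed, and in data sharing TOLOGPOINT would be used instead of TORBA.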


Enabling New Function Mode
Use DB2 installation CLIST to generate conversion jobs
Conversion done via the DSNTIJNE job
First step enters enabling-new-function mode
Several steps for each table space
Uses CATENFM utility (not CATMAINT)
Conversion complete signaled by job DSNTIJNF
To enter new-function-mode
Reassemble DSNHDECP
Job DSNTIJNG


Figure 11-32. Enabling New Function Mode CG381.0

Notes:
To begin the conversion process into enabling-new-function mode, you first need to invoke
the DB2 installation CLIST with the new ENFM option. The CLIST generates a number of
new jobs which you need to run.
Job DSNTIJNE initiates and completes all of the necessary ENFM processing. The first
step in this job executes the new CATENFM utility to bring DB2 from compatibility mode
into enabling-new-function mode. Subsequent steps convert the DB2 catalog tables.
When this process is complete, the subsystem is ready to formally enable NFM via the
DSNTIJNF job. This job once again invokes the CATENFM utility to bring DB2 into
new-function mode.
The data-only module DSNHDECP must then be reassembled with the parameter
NEWFUN=YES. This is required to tell the DB2 precompiler that DB2 is now in
new-function mode and that it should accept the new SQL syntax. DB2 does not have to
be cycled to bring this change into effect. A sample job DSNTIJNG is provided which can
be used, or you can choose to assemble DSNHDECP using the existing job DSNTIJUZ.

Panel DSNTIPA1 - ENFM

DB2 VERSION 8 INSTALL, UPDATE, MIGRATE, AND ENFM - MAIN PANEL


===>

Check parameters and reenter to change:

1 INSTALL TYPE ===> ENFM Install, Update, or Migrate


or ENFM (Enable New Function Mode)
2 DATA SHARING ===> Yes or No (blank for Update or ENFM)

Enter the data set and member name for migration only. This is the name used
from a previous Installation/Migration from field 7 below:
3 DATA SET(MEMBER) NAME ===>

Enter name of your input data sets (SDSNLOAD, SDSNMACS, SDSNSAMP, SDSNCLST):
4 PREFIX ===> DSN810
5 SUFFIX ===>

Enter to set or save panel values (by reading or writing the named members):
6 INPUT MEMBER NAME ===> DSNTID8A Default parameter values
7 OUTPUT MEMBER NAME ===> DSNTID8N Save new values entered on panels

PRESS: ENTER to continue RETURN to exit HELP for more information

DSNTIPA1: Install, update, and migrate DB2 - main panel


Figure 11-33. Panel DSNTIPA1 - ENFM CG381.0

Notes:
To begin preparation for enabling-new-function mode, you invoke the DB2 installation
CLIST in the same way as you would for a new DB2 install, migrating a DB2 subsystem to
a new version or enabling data sharing. The primary panel DSNTIPA1 is first displayed.
You specify ENFM as the primary option on this panel.
Leave the DATA SHARING and DATA SET NAME(MEMBER) fields blank. This is true for
both data sharing and non-data sharing.
The INPUT MEMBER NAME specified for conversion should always be the same as the
OUTPUT MEMBER NAME that you specified during migration to compatibility mode.
In a data sharing environment, the INPUT MEMBER NAME specified for conversion must
be the same as the OUTPUT MEMBER NAME used when migrating the first member of
the data sharing group to Version 8.
The DB2 installation CLIST displays a smaller set of panels for the “ENFM” primary
option than for the “install” and “migrate” primary options. In fact, there are only a few panels
in common. We will walk through some of the major panels in the next few visuals.


Briefly, the DSNTINST CLIST performs the following tasks:
• Displays the 18 catalog and directory table spaces which are transformed during ENFM
on panel DSNTIP00, together with an estimate for the space allocations for the shadow
data sets.
• Customizes a number of new installation jobs, together with the DB2 Version 8 sample
jobs.

Panel DSNTIPT - Data Set Names
ENFM DB2 - DATA SET NAMES PANEL 1
===>

Data sets allocated by the installation CLIST for edited output:


1 TEMP CLIST LIBRARY ===> DSN810.NEW.SDSNTEMP
* 2 SAMPLE LIBRARY ===> DSN810.NEW.SDSNSAMP
Data sets allocated by the installation jobs:
3 CLIST LIBRARY ===> DSN810.NEW.SDSNCLST
4 APPLICATION DBRM ===> DSN810.DBRMLIB.DATA
5 APPLICATION LOAD ===> DSN810.RUNLIB.LOAD
6 DECLARATION LIBRARY ===> DSN810.SRCLIB.DATA
Data sets allocated by SMP/E and other methods:
7 LINK LIST LIBRARY ===> DSN810.SDSNLINK
8 LOAD LIBRARY ===> DSN810.SDSNLOAD
9 MACRO LIBRARY ===> DSN810.SDSNMACS
10 LOAD DISTRIBUTION ===> DSN810.ADSNLOAD
11 EXIT LIBRARY ===> DSN810.SDSNEXIT
12 DBRM LIBRARY ===> DSN810.SDSNDBRM
13 IRLM LOAD LIBRARY ===> DSN810.SDXRRESL
14 IVP DATA LIBRARY ===> DSN810.SDSNIVPD
15 INCLUDE LIBRARY ===> DSN810.SDSNC.H

PRESS: ENTER to continue RETURN to exit HELP for more information

DSNTIPT: Data set names panel 1


Figure 11-34. Panel DSNTIPT - Data Set Names CG381.0

Notes:
Panel DSNTIPT is displayed after the primary panel DSNTIPA1.
The SAMPLE LIBRARY field (option 2) is the only updateable field on this panel. Here is
where you specify a destination data set for the ENFM customized jobs.
You can choose to use the same data set name as you used during migration. In this case,
the data set is not deleted or re-allocated. The CLIST merely compresses the data set,
then updates the data set with the new jobs required for enabling-new-function processing,
together with the Version 8 IVP suite of jobs. Alternatively, you can specify a new data set.
In this case, the CLIST creates the new data set, then generates the jobs into that data set.


Panel DSNTIP00 - Shadow Data Sets

ENABLE NEW FUNCTION MODE FOR DB2 - SHADOW DATA SET ALLOCATION
===>
OBJECT DASD DEVICE VOL/SERIAL PRIMARY RECS SECONDARY RECS
1 SPT01 ==> 3390 ==> SBOXCC ==> 636 ==> 636
2 SYSDBASE ==> 3390 ==> SBOXCC ==> 7049 ==> 7049
3 SYSDBAUT ==> 3390 ==> SBOXCC ==> 478 ==> 478
4 SYSDDF ==> 3390 ==> SBOXCC ==> 144 ==> 144
5 SYSGPAUT ==> 3390 ==> SBOXCC ==> 3060 ==> 3060
6 SYSGROUP ==> 3390 ==> SBOXCC ==> 48 ==> 48
7 SYSGRTNS ==> 3390 ==> SBOXCC ==> 144 ==> 144
8 SYSHIST ==> 3390 ==> SBOXCC ==> 144 ==> 144
9 SYSJAVA ==> 3390 ==> SBOXCC ==> 144 ==> 144
10 SYSOBJ ==> 3390 ==> SBOXCC ==> 616 ==> 616
11 SYSPKAGE ==> 3390 ==> SBOXCC ==> 9673 ==> 9673
12 SYSPLAN ==> 3390 ==> SBOXCC ==> 13373 ==> 13373
13 SYSSEQ ==> 3390 ==> SBOXCC ==> 144 ==> 144
14 SYSSEQ2 ==> 3390 ==> SBOXCC ==> 144 ==> 144
15 SYSSTATS ==> 3390 ==> SBOXCC ==> 53355 ==> 53355
16 SYSSTR ==> 3390 ==> SBOXCC ==> 661 ==> 661
17 SYSUSER ==> 3390 ==> SBOXCC ==> 1675 ==> 1675
18 SYSVIEWS ==> 3390 ==> SBOXCC ==> 7093 ==> 7093
19 INDEXES ==> 3390 ==> SBOXCC Catalog and directory index shadows
PRESS: ENTER to continue RETURN to exit HELP for more information

DSNTIP00: Enable new function mode for DB2 - shadow data set allocations


Figure 11-35. Panel DSNTIP00 - Shadow Data Sets CG381.0

Notes:
Panel DSNTIP00 is displayed after panel DSNTIPT.
The shadow data sets are used by the online REORG utility, to convert the table spaces to
the ENFM format. Since the table spaces of the DB2 catalog are user-defined table
spaces, these data sets must be defined before the REORG utility is run.
The conversion job DSNTIJNE will allocate these shadow data sets based on the
parameters provided by this panel. The values are based on the fields PERMANENT UNIT
NAME and VOLUME SERIAL from panel DSNTIPA2, and the fields PRIMARY RECS and
SECONDARY RECS calculated by the DSNTCALC REXX, based on information entered
during migration to compatibility mode. The PRIMARY RECS and SECONDARY RECS
fields show the number of records (VSAM CIs), that will be allocated. These values, along
with the device type and volser(s) can be overridden in this panel.
When you run DSNTIJNE, the shadow data sets replace the current data sets for the table
and index spaces being converted.

We recommend that you review these data set allocations and space estimates carefully
before moving on.


Panel DSNTIP01 - Image Copies

ENABLE NEW FUNCTION MODE FOR DB2 - IMAGE COPY DATA SET ALLOCATIONS

===>

Enter characteristics for ENFM image copy data set allocation

1 COPY DATA SET NAME PREFIX ===> DB8AU.IMAGCOPY


2 COPY DATA SET DEVICE TYPE ===> SYSDA

PRESS: ENTER to continue RETURN to exit HELP for more information

DSNTIP01: Enable New Function Mode for DB2 - image copy data set
allocations

Figure 11-36. Panel DSNTIP01 - Image Copies CG381.0

Notes:
Panel DSNTIP01 is displayed after panel DSNTIP00.
When you reorganize a DB2 table space using either SHRLEVEL REFERENCE or
CHANGE, it is mandatory that you take an image copy as part of the reorganization. This is
the same as when you use the REORG utility in enabling-new-function mode to convert the
catalog. An inline image copy must be taken.
This panel allows you to enter the image copy data set name prefix that you want to use for
the inline image copies, as well as the output device you would like for the data sets.
If TAPE, 3480, or 3490 is specified for DEVICE TYPE, then no SPACE parameter is
included in the generated job for each image copy data set. Otherwise, the same space
settings are used as specified for the table space’s shadow data set list.
Even though tape devices are supported, the stacking of image copy data sets on the
same tape is not supported.

Panel DSNTIP02 - Storage

DSNTIP02 ENABLE NEW FUNCTION MODE FOR DB2 - STORAGE REQUIREMENTS


===>
1 DSNT488I VOLUME SBOXCC WILL REQUIRE AT LEAST 182192 4K BLOCKS

PRESS: ENTER to continue RETURN to exit HELP for more information

DSNTIP02: Enable New Function Mode for DB2 - storage requirements


Figure 11-37. Panel DSNTIP02 - Storage CG381.0

Notes:
Panel DSNTIP02 is displayed after panel DSNTIP01. It displays a summary of the storage
requirements based on the data entered on the previous panels.
To accept the space requirements that are displayed for each volume, press the ENTER
key. Alternatively, you can press PF3 to return to previous panels and review your values.
Pressing the ENTER key also signals the CLIST to generate the jobs required for the
enabling-new-function mode. The following jobs are generated:
• DSNTIJNE: Enabling-new-function mode processing:
This new job uses the online REORG utility to convert the catalog and directory table
spaces to the new Unicode format. It can be stopped with job DSNTIJNH and it will
restart from the next table space to be converted.
• DSNTIJNH: Halt DSNTIJNE:
This new job stops the execution of DSNTIJNE at the end of the active group.

© Copyright IBM Corp. 2004 Unit 11. Installation and Migration 11-93
Course materials may not be reproduced in whole or in part without the prior
written permission of IBM.
Student Notebook

• DSNTIJNF: Turn new-function mode on:


This new job flags DB2 as in new-function mode.
• DSNTIJNG: Update DSNHDECP for new-function mode:
This new job updates DSNHDECP with NEWFUN=YES in SDSNEXIT.
• DSNTIJEN: Return to enabling-new-function mode status:
This new job returns from new-function mode to enabling-new-function mode status.
• DSNTIJNR: Convert the DSNRLST table for long name support:
This new job ALTERs the columns of the DSNRLST table to support long names.
• DSNTIJMC: Needed if you use the ODBC/JDBC metadata methods:
This new job switches the stored procedures used by these methods from compatibility
mode to new-function mode.
• Version 8 IVP: The new suite of Version 8 IVP jobs:
This is the new suite of Version 8 IVP jobs.
We shall now take a closer look at these jobs in the next few visuals.

Job DSNTIJNE
Consists of the following steps:
CATENFM START
Processing for 18 table spaces that are being converted
Several steps for each table space to be processed
Online REORG, SHRLEVEL(REFERENCE) of each table space
Term Utility
Can be stopped after any table space
By job DSNTIJNH
Can be restarted without modification
Skips already processed table spaces

© Copyright IBM Corporation 2004

Figure 11-38. Job DSNTIJNE CG381.0

Notes:
Job DSNTIJNE is new for DB2 Version 8. It is generated by the DB2 installation CLIST
when you specify ENFM on the primary panel.
The job can only be executed by a user with install SYSADM authority. It has three main
phases:
1. The first step executes the CATENFM utility (CATENFM START) to move the DB2
subsystem or data sharing group into enabling-new-function mode. CATENFM is a new
utility in DB2 Version 8.
2. Then, 18 catalog and directory table spaces are converted to Unicode via the online
REORG Utility using SHRLEVEL(REFERENCE). Each table space is processed in turn
and a number of steps are required to process each table space (details provided later).
3. Any outstanding utilities are terminated. Actually, the first step of the DSNTIJNE job
also terminates all outstanding utilities related to ENFM processing (DSNENFM.* utility
IDs) before starting.


Job DSNTIJNE can be stopped at any time using the job DSNTIJNH. More on this in later
visuals. Job DSNTIJNE can be restarted afterwards, without any modification to the JCL
and it will skip any already processed table spaces.
What Happens if You Have a Space Problem during the ENFM Process?
If you have a space problem, the following actions are taken:
• All the succeeding steps are skipped.
• A -TERM UTIL command is issued to make the table space available.
• You can then change the space parameters and re-submit the job.
• The next job execution will skip any already processed table spaces and resume
processing at the first table space that has not been successfully converted.

DSNTIJNE - CATENFM START
Only done once
Makes sure no V7 subsystems are up in a data sharing group
Makes sure catalog is at correct level
Makes sure user is authorized to convert
Marks ENFM mode in BSDS / SCA and directory header page
SYSLINKS and SYSPROCEDURES catalog tables are dropped
SYSDUMMY1 table moved from SYSSTR table space to SYSEBCDC
table space which remains EBCDIC

© Copyright IBM Corporation 2004

Figure 11-39. DSNTIJNE - CATENFM START CG381.0

Notes:
CATENFM is a new utility in DB2 Version 8. It is used to control and manage the
conversion processes into new-function mode.
CATENFM START is the first “real” step in job DSNTIJNE. (The actual first step terminates
outstanding utilities related to ENFM processing from previous runs.) It first performs a
number of consistency checks:
• The utility first checks if you are authorized to run the utility. Only install SYSADMs can
execute the CATENFM Utility.
• The utility checks the DB2 code level to ensure DB2 is running in compatibility mode.
• If data sharing, the utility makes sure there are no Version 7 members active in the data
sharing group.
If everything is OK, the CATENFM utility will then update the DB2 BSDS/SCA and
directory header page, to indicate that DB2 has moved from compatibility mode into
enabling-new-function mode. Once in ENFM, you cannot start any DB2 Version 7 system
either in data sharing or non-data sharing.


The CATENFM utility also performs these tasks:


• It DROPs the old catalog tables SYSIBM.SYSLINKS and SYSIBM.SYSPROCEDURES,
which are no longer used.
• It moves the SYSIBM.SYSDUMMY1 table from DSNDB06.SYSSTR to
DSNDB06.SYSEBCDC, which is a new catalog table space created during the
migration of DB2 from Version 7 to Version 8. SYSEBCDC remains an EBCDIC table
space.

DSNTIJNE - Conversion Steps
Each table space has the following steps where 'nn' indicates
the table space order number:
ENFM0nn0 CHECK THE NFM STATUS OF <tsp name>
ENFM0nn1 CLEAN UP ANY WORK FILES FOR CONVERTING <tsp name>
ENFM0nn3 ALLOCATE SHADOW DATA SETS FOR CONVERTING <tsp name>
ENFM0nn7 CONVERT <tsp name> TO NFM FORMAT
ENFM0nn9 DELETE WORK FILES USED TO CONVERT <tsp name>

Conversion process CANNOT happen if NO image copy taken of table space
If tailoring the DSNTIJNE job, be careful NOT to change the table space processing order

© Copyright IBM Corporation 2004

Figure 11-40. DSNTIJNE - Conversion Steps CG381.0

Notes:
Job DSNTIJNE converts 17 catalog and one directory table space from EBCDIC to
Unicode.
The table spaces DSNDB01.DBD01, DSNDB01.SCT02, DSNDB01.SYSLGRNX,
DSNDB01.SYSUTILX, and DSNDB06.SYSCOPY remain in EBCDIC. Table space
DSNDB06.SYSEBCDC is a new table space in Version 8 which also remains in EBCDIC.
Conversion cannot happen unless the table space to be converted has been successfully
image copied. This is a normal requirement on the REORG utility when reorganizing
catalog table spaces.
Job DSNTIJNE performs five steps to convert each table space:
1. The CATENFM Utility is first used to check if the table space has already successfully
been converted.
2. The next step cleans up any work files left after a previous execution.
3. Next, shadow data sets are allocated for the REORG utility.


4. Here is where the conversion is performed.


5. Finally, the work files are cleaned up.
You need to update the job to include the IDCAMS allocation for any user-defined indexes
on the catalog table spaces. You only need to include these statements for user-managed
data sets. However, we recommend that you review the PRIQTY and SECQTY parameters
in the catalog for any DB2-managed indexes, as they will probably need to be increased.
The order in which the table spaces are converted is very important. If you are tailoring job
DSNTIJNE be very careful not to change the order of the table spaces. RI relationships
exist between catalog tables that need to be maintained during the conversion process.
Changing the order may cause some table spaces to fall into CHECK PENDING and
therefore impact availability.
Table 11-3 Table Space Conversion Order
1. SYSVIEWS
2. SYSDBASE
3. SYSDBAUT
4. SYSDDF
5. SYSGPAUT
6. SYSGROUP
7. SYSGRTNS
8. SYSHIST
9. SYSJAVA (1)
10. SYSOBJ
11. SYSPKAGE
12. SYSPLAN
13. SYSSEQ
14. SYSSEQ2
15. SYSSTATS
16. SYSSTR
17. SYSUSER
18. SPT01
(1) This includes the LOB table spaces associated with the LOB columns related to the tables in this
table space.

If the job fails in any of the five steps required to convert a table space, you do not need to
do anything to restore availability to the catalog table space. The last step of the job will
terminate any outstanding conversion REORG utility, restoring availability to the catalog
table space. All you need to do is fix the error and re-submit the job again when you are
ready. Typically the job fails because of space problems with the IDCAMS
DELETE/DEFINE statements.
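The skip-completed, stop-at-a-boundary behavior described above can be sketched in Python. This is purely illustrative logic, not a DB2 interface: the table space names come from Table 11-3, while run_conversion, convert, and halt_requested are hypothetical stand-ins for the generated job steps and the DSNTIJNH halt signal.

```python
# Fixed processing order from Table 11-3; changing it risks CHECK PENDING states.
CONVERSION_ORDER = [
    "SYSVIEWS", "SYSDBASE", "SYSDBAUT", "SYSDDF", "SYSGPAUT", "SYSGROUP",
    "SYSGRTNS", "SYSHIST", "SYSJAVA", "SYSOBJ", "SYSPKAGE", "SYSPLAN",
    "SYSSEQ", "SYSSEQ2", "SYSSTATS", "SYSSTR", "SYSUSER", "SPT01",
]

def run_conversion(converted, halt_requested, convert):
    """Process table spaces in order, skipping completed ones.

    converted      - set of table spaces already done (survives restarts)
    halt_requested - callable; True means stop at the next boundary (DSNTIJNH role)
    convert        - callable standing in for the five ENFM0nn* steps
    """
    processed = []
    for ts in CONVERSION_ORDER:
        if ts in converted:
            continue                 # a restart skips already processed table spaces
        if halt_requested():
            break                    # halt only between table spaces, never mid-REORG
        convert(ts)
        converted.add(ts)
        processed.append(ts)
    return processed
```

For example, a run that is halted after converting two table spaces can simply be resubmitted later; it resumes at the first table space that has not yet been converted.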

DSNTIJNE - ENFM0nn0 Steps
ENFM0nn0 CHECK THE NFM STATUS OF <tsp name>

Can only be done if all the previous table spaces were successfully converted
Execute SQL to add long name support to columns in tables
Change data type and / or maximum length of these columns
System-defined indexes with keys with VARCHAR columns (includes new ENFM generated indexes):
Changed to NOT PADDED indexes
Placed in advisory REORG-pending state (AREO*) if not already from other process

© Copyright IBM Corporation 2004

Figure 11-41. DSNTIJNE - ENFM0nn0 Steps CG381.0

Notes:
For DSNTIJNE, these are the steps:
ENFM0nn0 CHECK THE NFM STATUS OF <tsp name>
This is the first step of each JCL block in DSNTIJNE, which will convert an individual
catalog and directory table space to the DB2 Version 8 new-function mode format. This
step will only execute if all previous table spaces were successfully converted.
This step uses the new Version 8 CATENFM utility to:
• Check the status and availability of the table space to be converted.
• Update the catalog to add long names support to the columns in the catalog tables. This
involves changing the data type and/or column lengths for these catalog columns.
• All system-defined indexes for tables in this table space which contain VARCHAR
columns, including the new VARCHAR columns, will be changed to Advisory Reorg
Pending (AREO*) status, in preparation for being converted from PADDED to NOT
PADDED during the REORG step. While the indexes are in this pending state, access
to the catalog is not impacted and you can still execute DDL. Any user-defined indexes


on the catalog tables are not affected and they remain PADDED indexes after the
conversion is complete.
Advisory Reorg Pending (AREO*) is a new database exception status introduced in DB2
Version 8. It indicates that the table space, index, or partition identified should be
reorganized for optimal performance. Access to the data is not restricted.

DSNTIJNE - ENFM0nn7 Steps
Only attempted if:
Previous table spaces have been successfully processed
Long name DDL for this table space completed successfully
Converted to Unicode using online REORG (SHRLEVEL
REFERENCE)
No special locking considerations
'FOR BIT DATA' columns NOT converted to Unicode
If this breaks, the old catalog data is still there, so no outage
Once a table space is Unicode, there is NO going back - Only RESTORE
option
Buffer pools / page sizes changed during the process
Need to allocate BP8K0 and BP16K0!!

© Copyright IBM Corporation 2004

Figure 11-42. DSNTIJNE - ENFM0nn7 Steps CG381.0

Notes:
For DSNTIJNE, these are the steps:
ENFM0nn7 CONVERT <tsp name> TO NFM FORMAT
This step uses the online REORG utility with SHRLEVEL(REFERENCE) to convert the
individual catalog and directory table space to Unicode. This step will only execute if all
previous table spaces were successfully converted and the CATENFM utility has
successfully updated the catalog with the new long name definitions for table columns in
this table space.
There is no special locking that is used by the REORG utility. So, even though online
REORG is being used, there still remains the possibility that other DB2 work may be
impacted by delays and “resource unavailable” conditions while this step is running. We
therefore suggest you schedule these conversions when there is little or no activity on the
system.
As online REORG Utility is loading the data into a shadow copy of the table space and
indexes, there should be no significant outage in the event of a failure. However, once the


table space has been successfully converted to Unicode, there is no easy fallback. The
only way is to restore the entire DB2 subsystem to a prior point-in-time.
Job DSNTIJNE can be used to convert user-defined indexes on the catalog. However, the
AMS DELETE/DEFINE statements have to be manually added to the job. The job also
does not alter the user-defined indexes to NOT PADDED.
Attention: If any user-defined indexes on the DB2 catalog include columns that increase
in size to support long names, which is likely to be the case, these indexes may grow
dramatically in size (because user-defined indexes are not changed to NOT PADDED by
the ENFM processing).
We therefore recommend that you review the space used by these indexes carefully and
substantially increase their space allocation before you run job DSNTIJNE, or drop the
indexes before ENFM processing and recreate them (as NOT PADDED) afterwards.
Once you are in new-function mode, these user-defined indexes can be altered from
PADDED to NOT PADDED, which will place them in rebuild pending. They can then be
rebuilt to get them back to a reasonable size.
Alternatively, you may decide to drop any user-defined indexes on the catalog tables
before job DSNTIJNE is run, then re-evaluate if you need to recreate them again after you
are in new-function mode.
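To get a feel for why padded indexes over enlarged VARCHAR columns grow so much, here is some rough, illustrative arithmetic in Python. All the numbers are assumptions chosen for the example, not DB2 measurements; the only DB2-specific detail assumed is the 2-byte length field carried by each NOT PADDED varying-length key.

```python
# Illustrative sizing only: the column length, average key length, and key
# count below are made-up example values, not measurements from any system.
max_len = 128        # key column declared as VARCHAR(128) after long-name support
avg_len = 20         # assumed average actual key length in bytes
n_keys = 1_000_000   # assumed number of index entries

padded_bytes = n_keys * max_len              # PADDED: every key stored at max length
not_padded_bytes = n_keys * (avg_len + 2)    # NOT PADDED: actual length + 2-byte field
savings = padded_bytes - not_padded_bytes

print(f"PADDED:     {padded_bytes:>12,} bytes")
print(f"NOT PADDED: {not_padded_bytes:>12,} bytes")
print(f"Savings:    {savings:>12,} bytes")
```

Under these assumptions the padded index is almost six times larger, which is why altering such indexes to NOT PADDED and rebuilding them in new-function mode can shrink them substantially.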
For those table spaces that require larger than 4K buffers after conversion to Unicode, DB2
must load the data into the shadow tables using these new buffer pools. This is why BP8K0
and BP16K0 are allocated during the migration process by DB2. You may need to adjust
the size.
Important: The REORG utility unloads the index data in PADDED format. So make sure
the SYSUT1, SORTOUT and SORTWORK DD cards have plenty of space in job
DSNTIJNE.

DSNTIJNE - ENFM0nn7 Switch
Switch phase processing -- has a special interface for
CATENFM processing:
Update catalog and directory:
New page size values
New encoding scheme and CCSID information
View regeneration
Table check constraint regeneration
Plan and package invalidation
Dynamic cached statement invalidation

© Copyright IBM Corporation 2004

Figure 11-43. DSNTIJNE - ENFM0nn7 Switch CG381.0

Notes:
During conversion of the catalog and directory table spaces to NFM, online REORG needs
to perform some “special” processing during the “switch” phase:
• The DB2 catalog and directory needs to be updated to reflect the table space is now
Unicode. The DB2 catalog and directory also needs to be updated to reflect the new
buffer pool assignments for those table spaces which will no longer be using BP0.
• All the views on the catalog tables that have been converted are re-generated.
If a dependent view cannot be successfully regenerated, it is marked in the catalog to
indicate the error (the STATUS column in SYSIBM.SYSTABLES has the value “R” and
the TABLESTATUS column is “V”). Please note that this behavior is different from
situations in which dependent views are regenerated as a result of an ALTER TABLE
statement, when data types have changed and the dependent views cannot
successfully be regenerated. In that case, the ALTER TABLE statement fails. (So
“flagging” a view during view re-generations normally only occurs during the migration
process.)


• All the table check constraints on the catalog tables that have been converted are also
re-generated.
• All the plans and packages that are dependent upon the catalog tables that have been
converted are marked as invalid. They will be rebound on the next allocation.
• All the statements in the dynamic statement cache that are dependent on the catalog
tables that have been converted are also marked as invalid.

Job DSNTIJNH
Used to HALT the ENFM process at the completion of the step that is currently executing

Submit DSNTIJNH while DSNTIJNE is running

DSNTIJNE will stop after it completes converting the current table space

Use DSNTIJNH instead of -TERM UTILITY

© Copyright IBM Corporation 2004

Figure 11-44. Job DSNTIJNH CG381.0

Notes:
Job DSNTIJNH is new for DB2 Version 8. It is generated by the DB2 installation CLIST
when you specify ENFM on the primary panel. The job is used to halt the ENFM process at
the completion of the step that is currently executing. It can only be executed by a user with
install SYSADM authority.
You submit job DSNTIJNH while job DSNTIJNE is currently executing. Job DSNTIJNH
uses the new CATENFM utility (CATENFM HALTENFM) to signal job DSNTIJNE to
terminate when it completes all the steps associated with converting the current table
space to the new format. Job DSNTIJNE will then terminate and not move on to the next
table space. Job DSNTIJNE can be resubmitted at a later time and it will move on to
convert the next table space.
In this way, you can terminate the ENFM conversion process if it is taking too long and you
have exhausted your batch window. The ENFM conversion process can easily be resumed
during another window.
We strongly recommend that you use job DSNTIJNH to stop an active DSNTIJNE job,
rather than using any other process such as the -TERM UTILITY command.


All you need to do to restart job DSNTIJNE, after you have used job DSNTIJNH to halt its
processing, is to simply re-submit the DSNTIJNE job. There is nothing else you need to do.
Job DSNTIJNE will continue from where it left off, and will start converting the next table
space in sequence.

Where Am I?

** BEGIN DISPLAY OF GROUP(........) GROUP LEVEL(...) MODE(E)


GROUP ATTACH NAME(....)
-----------------------------------------------------------------------
DB2 DB2 SYSTEM IRLM
MEMBER ID SUBSYS CMDPREF STATUS LVL NAME SUBSYS IRLMPROC
-------- -- ------ ------- ------ ----------- ------ -----------
........ 0 V81A = ACTIVE 810 ZS13PE PR21 PRLMPR21
-----------------------------------------------------------------------
TABLE ENABLED
SPACE NEW FUNCTION
-------- --- --------
SYSVIEWS YES
SYSDBASE YES
SYSDBAUT YES
SYSDDF YES
.......
SPT01 NO
-----------------------------------------------------------------------
*** END DISPLAY OF GROUP(........)

Use the -DISPLAY GROUP DETAIL command

© Copyright IBM Corporation 2004

Figure 11-45. Where Am I? CG381.0

Notes:
The output from the DISPLAY GROUP DETAIL command is enhanced to show what mode
the DB2 subsystem is in (top right of the output).
• MODE(C): DB2 is in Version 8 compatibility mode.
• MODE(E): DB2 is in enabling-new-function mode.
• MODE(N): DB2 is in new-function mode.
In addition, the DISPLAY GROUP DETAIL command also lists the conversion state of each
catalog and directory table space while DB2 is in enabling-new-function mode.
Please note that the DISPLAY GROUP command can be used on non-data sharing
systems, as well as data sharing systems. The sample on this visual is a non-data sharing
system. Data sharing specific fields are not filled in.


New-Function Mode
Job DSNTIJNF

Ensures all 18 table spaces have been converted to Unicode and all the long name DDL updates were made plus DB2 defined catalog indexes not padded

Marks NFM mode in BSDS / SCA and directory header page

Your DB2 system is now in NFM!



© Copyright IBM Corporation 2004

Figure 11-46. New-Function Mode CG381.0

Notes:
Job DSNTIJNF is new for DB2 Version 8. It is generated by the DB2 installation CLIST
when you specify ENFM on the primary panel. Because job DSNTIJNF must be explicitly
submitted to move into new-function mode, you are forced to make a conscious decision
about when you are ready to enter new-function mode.
This can be useful when you want to move a number of DB2 subsystems into new-function
mode at the same time, for example, all systems that are linked via DDF, and you want to
use some of the new distributed functions in Version 8.
Job DSNTIJNF is a single-step job, which invokes the new CATENFM utility (CATENFM
COMPLETE). It can only be executed by a user with install SYSADM authority.
This job must be run after a successful completion of job DSNTIJNE to formally bring the
DB2 subsystem or data sharing group into Version 8 new-function mode. DB2 will only be
brought into new-function mode if all of the enabling-new-function mode processing has
completed successfully.
Before flagging DB2 as in new-function mode, the CATENFM Utility will:

• Check the catalog for long names support.


• Check that all the catalog indexes are available and defined as NOT PADDED.
• Check that all of the 18 table spaces have successfully been converted to Unicode.
After successful completion of job DSNTIJNF, this is what you can expect:
• You can now exploit the new DB2 Version 8 functionality.
• The 18 DB2 catalog and directory table spaces are now in Unicode.
• There is NO fallback or coexistence with DB2 Version 7.
• There is NO going back to Version 8 compatibility mode.
• YES — you can return to enabling-new-function mode. This is done by the new job
DSNTIJEN, which is described in Figure 11-56, "Returning to ENFM", on page 11-130.


NFM Considerations
Schema evolution
Varying length index keys may no longer be padded
When using index based partitioning (as in V7)
Trying to create an index with the PARTITIONED keyword will now convert the table to table-based partitioning

Multiple CCSIDs per SQL statement
SQL that accesses the DB2 catalog will be evaluated and sorted in Unicode
The SQL results and order may differ from CM
REBIND plans/packages and update DBDs

© Copyright IBM Corporation 2004

Figure 11-47. NFM Considerations CG381.0

Notes:
Here are some considerations regarding new-function mode.
Schema Evolution
After enabling-new-function mode, the following items may behave slightly differently than
when DB2 was running in compatibility mode:
• Varying length index keys may no longer be padded.
If the PADDED/NOT PADDED keywords are not specified on a CREATE INDEX
statement, the default padding type used will depend on whether the Version 8
subsystem was migrated from Version 7 or if it was a new Version 8 install.
For a new Version 8 Install, the default is NOT PADDED. DB2 always generates
PADDED indexes while it is in compatibility mode or enabling-new-function mode.
When DB2 is in new-function mode, it generates either PADDED or NOT PADDED
indexes, depending on the PADIX system parameter in DSNZPARM.
The PADDED/NOT PADDED keywords only apply to indexes on variable length
column(s). Indexes with all fixed length column(s) are defined as PADDED by default.

• When you try to create an index with the new PARTITIONED keyword in NFM, the table
will be converted from index-based to table-based partitioning. For example:
- When using index-based partitioning, this will happen:
• Create the table space specifying NUMPARTS.
• Create the table without the PARTITIONING KEY keyword.
• Create the partitioning index with the VALUES keyword. Trying to create an
index with the PARTITIONED keyword will convert the table to table-based
partitioning.
- When using table-based partitioning, this will happen:
• Create the table space specifying NUMPARTS.
• Create the table with the PARTITIONING KEY keyword.
• Trying to create an index with the VALUES keyword will fail.
• Any partitioned indexes are created using the PARTITIONED keyword.
Multiple CCSIDs per SQL Statement
As each DB2 catalog table space is converted to Unicode, SQL that references tables in
these converted table spaces may be impacted.
SQL that returns, compares, or orders by a DB2 catalog column that has been converted to
Unicode will be evaluated in Unicode instead of EBCDIC. Depending on the specific SQL
statement the result set and result set sequence may differ from compatibility mode when
the DB2 catalog was EBCDIC. In addition, any SQL statement which joins a DB2 catalog
table that has been converted to Unicode and any EBCDIC table (including a DB2 catalog
table that has not yet been converted to Unicode) might also return different result sets.
For example, if an EBCDIC column is compared with a DB2 Unicode catalog column, the
comparison will be done in Unicode. Since EBCDIC collating sequence is different from
Unicode, the result set may be different than it was during compatibility mode when the
DB2 catalog was EBCDIC.
Consider the following query:
SELECT NAME FROM SYSIBM.SYSTABLES
WHERE NAME < 'T0' AND NAME > 'TA';
In compatibility mode and DB2 Version 7, the predicate uses the EBCDIC collating
sequence and returns table names that begin with 'T' and are followed by any letter. In
Enabling new-function mode, after DB2 has converted the table space that contains
SYSTABLES from EBCDIC to Unicode, both range predicates will be evaluated in Unicode.
Since the UTF-8 collating sequence differs from EBCDIC, the query won't return any
rows. (In EBCDIC, numbers are collated after letters. UTF-8 is collated like ASCII, where
numbers are collated before letters.)
This incompatibility potentially impacts only SQL with range predicates (for example, >, >=,
<, <=, BETWEEN) and SQL which contains ORDER BY clauses. Equals type predicates are
not affected.
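The collating difference can be demonstrated outside DB2. The following Python sketch evaluates the range predicate from the query above under EBCDIC byte order and under code-point (UTF-8-like) order. CP037 is assumed here as a representative EBCDIC code page, and the candidate names are invented for the example.

```python
def ebcdic(s: str) -> bytes:
    # CP037 is one common EBCDIC code page, used here purely for illustration.
    return s.encode("cp037")

candidates = ["TB", "TZ", "T9"]

# The range predicate from the query: NAME > 'TA' AND NAME < 'T0'
ebcdic_hits = [n for n in candidates if ebcdic("TA") < ebcdic(n) < ebcdic("T0")]
unicode_hits = [n for n in candidates if "TA" < n < "T0"]  # code-point order, like UTF-8

print("EBCDIC: ", ebcdic_hits)   # letters collate before digits, so 'TB' and 'TZ' qualify
print("Unicode:", unicode_hits)  # digits collate before letters, so nothing can qualify
```

In EBCDIC 'TA' < 'TB' < 'T0', so names beginning with 'T' plus a letter fall inside the range; in Unicode 'T0' < 'TA', so the range is empty.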


Unicode Parser
Various types of identifiers have various maximum lengths. Much of DB2's own processing
of SQL statements uses Unicode UTF-8. So, the length of the Unicode representation of an
identifier might differ from the length that you entered, if you entered the identifier in a
CCSID other than Unicode. For example, the character “¬” requires one byte in EBCDIC
but two bytes in UTF-8.
Therefore, an identifier that was valid in Version 7 might be flagged as too long in Version 8
new-function mode, and you will receive message DSNH107I (in a precompilation) or
SQLCODE -107 (otherwise). This incompatibility can occur only if both of the following
situations exist:
1. The identifier contains one or more characters whose Unicode representations require
more bytes than their EBCDIC representations did.
2. This expansion causes the identifier to grow beyond the maximum allowed length.
The length of the EBCDIC representation of a column-name in an object that was created
in Version 8 (before new-function mode) or before Version 8 must have been 18 bytes or
less. Such a pre-existing object is allowed to exist in Version 8 new-function mode, even if
the Unicode representation exceeds 30 bytes.
SQL statements can reference a column-name whose Unicode representation exceeds 30
bytes if they do not create new objects containing that column-name. However, if you drop
the object that contains that column-name, you cannot recreate the object containing that
column-name (in Version 8 new-function mode) if the Unicode representation exceeds 30
bytes.
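The expansion can be illustrated in Python. CP037 is again assumed as a representative EBCDIC code page, and the identifier is a hypothetical example chosen because it contains the NOT sign mentioned above.

```python
ident = "A¬B"  # hypothetical identifier containing the NOT sign (U+00AC)

ebcdic_len = len(ident.encode("cp037"))  # each of the three characters is 1 byte in EBCDIC
utf8_len = len(ident.encode("utf-8"))    # the NOT sign expands to 2 bytes in UTF-8

print(ebcdic_len, utf8_len)  # 3 4
```

The same identifier costs one extra byte in UTF-8; enough such characters can push an identifier that was valid in Version 7 past the Version 8 length limit.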
Run RUNSTATS to Re-create Catalog Statistics
Enabling-new-function mode invalidates catalog statistics. DB2 uses default statistics when
it calculates access paths. Run RUNSTATS to gather the statistics again.
REBIND Plans/Packages and Update DBDs
DB2 Version 8 in new-function mode uses a different format for its DBDs, packages, and
plans. So, before DB2 can use a DBD, plan, or package from an earlier release of DB2, it
must first be expanded into the new Version 8 format. This is an overhead you can easily do
away with.
This is also true for DB2 Version 8 running in compatibility mode and enabling-new-function
mode. DB2 must first expand the DBDs, plans and packages before it can use them. DB2
must also convert the DBDs, plans and packages to the old format before it can store them
in the catalog. This is an extra overhead that exists while running in compatibility mode and
enabling-new-function mode.

Attention: After you have entered new-function mode, we recommend that you plan to
rebind all of your plans and packages. DB2 will then store the plans and packages in the
DB2 catalog in the new format. DB2 will no longer need to expand the plans/packages
each time it needs to use them.
We also recommend that you plan to make some small change to every database. This
will also force DB2 to rebuild and store all the DBDs using the Version 8 format into the
directory. DB2 will no longer need to expand the DBDs each time it needs to use them.


NFM and the Precompiler


Precompiler option NEWFUN defaults to NO - will require changing
NEWFUN(YES)
Accepts new V8 syntax
Unicode DBRM and Unicode bound statements in catalog
NEWFUN(NO)
Rejects new V8 syntax
EBCDIC DBRM and bound statements in catalog

Run DSNTIJNG or DSNTIJUZ to assemble Version 8 DSNHDECP


with new parameter
NEWFUN(YES)

© Copyright IBM Corporation 2004

Figure 11-48. NFM and the Precompiler CG381.0

Notes:
The data only module DSNHDECP provides, among other things, the default parameter
settings for the DB2 precompiler. Version 8 brings a new parameter for DSNHDECP. The
NEWFUN parameter tells the precompiler it can accept new Version 8 functionality during
program preparation.
• NEWFUN=YES
Allows programs to use the new functions provided by DB2 Version 8. In addition, it will
result in DBRMs being generated in Unicode and the statements are stored in the DB2
catalog in Unicode.
Unicode DBRMs will not be legible under a standard ISPF browser unless you use
some tool to convert the Unicode to EBCDIC. Tools such as SPUFI and QMF are likewise
unable to display the SQL statements directly from the DB2 catalog correctly. You can use
a tool like the DB2 Path Checker, DB2 Administration Tool, or Visual
Explain, which will convert the SQL statements from Unicode to EBCDIC.


Columns in the DB2 catalog that are defined as character are converted from Unicode
to EBCDIC, before they are displayed. However, a number of columns in the DB2
catalog, including column CONTOKEN in SYSIBM.SYSPLAN and
SYSIBM.SYSPACKAGE and column TEXT in SYSIBM.SYSSTMT and STMT in
SYSIBM.SYSPACKSTMT, are now defined as FOR BIT DATA. These columns are not
converted before they are displayed.
• NEWFUN=NO
This is the default when you migrate using the installation CLIST. It prevents programs
from using the new Version 8 functionality, by instructing the precompiler to reject
Version 8 SQL. DBRMs will be generated in EBCDIC, and SQL will be stored in the DB2
catalog as EBCDIC, the same as prior versions of DB2.
Irrespective of whether NEWFUN is set to YES or NO, the Version 8 precompiler will parse
SQL in Unicode.
Job DSNTIJNG is new for DB2 Version 8. It is generated by the DB2 installation CLIST
when you specify ENFM on the primary panel. This job can be used to generate a new
version of DSNHDECP into SDSNEXIT. Alternatively you can change DSNHDECP in job
DSNTIJUZ to re-assemble a new version of DSNHDECP into SDSNEXIT.
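As a sketch only (the actual invocation is generated by the installation CLIST, and the other
keywords must be left as generated for your site), the change amounts to adding the new
keyword to the DSNHDECM macro invocation that assembles DSNHDECP:

         DSNHDECM NEWFUN=YES,                                           X
*              ... other DSNHDECM keywords as generated by the CLIST ...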
Once you are in new-function mode, all the plans and packages you bind will be marked
with the Version 8 dependency indicator, even though they may not contain any new
Version 8 syntax. Similarly, any DBDs that are written to DSNDB01 will be marked as
Version 8 dependant. This is because plans, packages and DBDs have a different format in
Version 8 than prior releases of DB2. This will not cause any problems should you choose
to return to enabling-new-function mode, since these objects are not frozen.
You can use the precompiler with NEWFUN=YES while in compatibility mode or in
enabling-new-function mode, to test the stability of your applications with the Version 8
precompiler. The resulting DBRMs will be in Unicode. You can then bind the DBRMs and
run the programs. However, the bind will fail if the DBRMs contain any new-function SQL
that is not supported in compatibility mode.
You can also execute the precompiler with NEWFUN=NO while running in new-function
mode. In this case any Version 8 syntax will be rejected. This could be useful when your
production system is still in V7 or V8 compatibility mode, and you want to make sure that
the applications you create in your development system that is in V8 NFM, will be able to
run on your production system that is still at V7 or V8 CM.
Finally, when you migrate from Version 7, the default for NEWFUN is NO, while any new
install will use a default of YES.


Application Programming
Value of NEWFUN                                             NO                 YES     YES
Are V8 functions used?                                      NO                 NO      YES
Is DBRM a V8 new object and is DBRM V8 dependent?           NO                 YES     YES
Can V7 or earlier bind?                                     YES                NO      NO
Can V8 bind in CM/ENFM?                                     YES                YES     NO (Note 1)
Can V8 bind in NFM?                                         YES                YES     YES
Can run in V7?                                              YES                n/a     n/a
Can run in V7 after fallback from V8 CM?                    YES (auto rebind)  frozen  n/a
Can run in V8 CM?                                           YES                YES     NO (Note 1)
Can run in V8 CM after fallback from V8 ENFM/NFM? (Note 2)  n/a                n/a     n/a
Can run in V8 ENFM?                                         YES                YES     NO (Note 1)
Can run in V8 ENFM after returning from NFM?                YES                YES     YES
Can run in V8 NFM?                                          YES                YES     YES

© Copyright IBM Corporation 2004

Figure 11-49. Application Programming CG381.0

Notes:
For any release of DB2, if a DBRM was built from a source program that uses syntax that
was introduced in that release, the precompiler marks the DBRM as dependent on that
release, and a BIND on an earlier release of DB2 will fail.
However, with DB2 Version 8 and the new format of DBRMs, this behavior is not as clear-cut. An
application program that does not use new syntax does not appear to use any Version 8
new features or to produce a Version 8 new object, but it actually does produce a Version 8
new object (if the precompilation uses NEWFUN YES).
This visual explains the relationship between characteristics of precompilation of
application programs and the ability to bind and run in various DB2 releases and modes:
• The first column lists certain questions about this behavior.
• The second column answers the questions for a DBRM that is produced from a
precompilation that uses the NEWFUN NO option for SQL processing. This option
prevents the use of Version 8 new functions.


• The third column applies to a DBRM that is produced from a precompilation that uses
the NEWFUN YES option, where the application program does not use any Version 8
new functions.
• The fourth column applies to a DBRM that is produced from a precompilation that uses
the NEWFUN YES option, where the application program uses Version 8 new functions.
Why is this table being used?
• With NEWFUN YES, the SQL statements in the DBRM use Unicode, so the DBRM is a
Version 8 new object, even if the application program does not use any Version 8 new
functions. Therefore, the DBRM is Version 8 dependent, and Version 7 and earlier
releases of DB2 cannot bind the DBRM. Version 8 can bind it, even before new-function
mode. With NEWFUN NO, the SQL statements use EBCDIC, the DBRM is not a new
object, and Version 7 and earlier releases can bind it.
• If the application program uses Version 8 new functions, DB2 Version 8 can bind the
DBRM only in new-function mode. If the program does not use any new function, DB2
Version 8 can bind the DBRM even before new-function mode.
The current release marker (DBRMMRIC) in the header of a DBRM is marked according to
the release of the precompiler, regardless of the value of NEWFUN. When the DBRM is
bound, this value is stored in the column RELBOUND of catalog tables SYSIBM.SYSPLAN
or SYSIBM.SYSPACKAGE. This triggers autobind when the plan/package is scheduled to
execute on a lower DB2 version.
In a Version 8 precompilation, the DBRM dependency marker (DBRMPDRM) in the header
of a DBRM is marked for Version 8 if the value of NEWFUN is YES, otherwise it is not
marked for Version 8. This value is stored in column IBMREQD of catalog tables
SYSIBM.SYSPLAN or SYSIBM.SYSPACKAGE and is used to determine if the plan or
package is to be frozen on fallback.
Table notes:
Here are some notes on the table presented in this visual:
• Note 1: In general, plans and packages can be bound and executed, provided that they
do not use any new function. However, there are a few exceptions. For example, these
plans and packages can use Multi CCSID SQL while in ENFM.
• Note 2: This row is marked not applicable (N/A) because DB2 does not support
returning from ENFM/NFM to CM. However, you can return DB2 to CM by performing a
point-in-time recovery of the whole DB2 subsystem to a time when DB2 was in CM. In
this case, DB2 will behave exactly the same as if it was first in CM.


RLST Migration
Resource Limit Facility
The DB2 'governor' for dynamic SQL

Convert Resource Limit Facility for Long Names


DSNRLST
After you are in new function mode
Sample job DSNTIJNR
RLF must be stopped

© Copyright IBM Corporation 2004

Figure 11-50. RLST Migration CG381.0

Notes:
The Resource Limit Facility is the “governor” facility used by DB2 to control DB2 resources
used by dynamic SQL. It can also be used to govern binding, and to disable different
flavors of parallelism. It relies on the table DSNRLSTxx.
As many of the columns have changed in the DB2 catalog to support long names, the
equivalent columns in DSNRLST need to also change.
DB2 Version 8 provides a sample job DSNTIJNR with the required ALTER statements to
convert the DSNRLST table to support long names. This job needs to be run after DB2 is in
new-function mode and while the RLF is stopped.
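Purely as an illustration of the kind of change DSNTIJNR makes (the authoritative
statements are in the sample job itself; the owner qualifier, table suffix, and exact column
list vary by site), the conversion uses ALTER statements along these lines:

ALTER TABLE authid.DSNRLST01
      ALTER COLUMN AUTHID SET DATA TYPE VARCHAR(128);
ALTER TABLE authid.DSNRLST01
      ALTER COLUMN RLFCOLLN SET DATA TYPE VARCHAR(128);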


BSDS Migration
BSDS conversion is optional in Version 8
Increasing the maximum number of active log data sets and
archive log volumes requires conversion of the BSDS
Avoid the B37 ABEND
Run the new utility program DSNJCNVB to convert the BSDS(s)
to larger size
Conversion job must be run in New Function Mode
Running DSNJCNVB in Compatibility mode or Enabling New Function
mode
Results in program termination with message DSNJ439I and
return code 777

DSNTIJIN RECORDS value increased

© Copyright IBM Corporation 2004

Figure 11-51. BSDS Migration CG381.0

Notes:
Some customers are finding that the current maximum of 1,000 archive log volumes (per
log copy) recorded in the BSDS is no longer sufficient to remain recoverable without having
to take frequent image copies. In addition, customers have requested the ability to have
more than the current maximum of 31 active log data sets per log copy. Active log read is
much faster than archive log read, and there would be no queuing for archive log tape
volumes during recovery or backout.
DB2 Version 8 increases the maximum number of archive log volumes recorded in the
BSDS from 1,000 volumes per log copy to 10,000 volumes. Also, it increases the maximum
number of active log data sets from 31 pairs of log data sets to 93 pairs of log data sets.
Increasing these limits requires a conversion of the BSDS data sets, to contain more
formatted records. DB2 Version 8 provides a new utility, DSNJCNVB, to convert the BSDS
data sets to the larger size.
The conversion can only be done while DB2 is in new-function mode (NFM). This is to
minimize the fallback and data sharing coexistence impact. Running DSNJCNVB in


compatibility mode or enabling-new-function mode results in program termination with
message DSNJ439I and return code 777.
The BSDS conversion is optional. However, these enhancements are not available until the
BSDS is converted to a new format.
For new DB2 Version 8 systems, DB2 provides a larger BSDS definition (space allocation)
during installation. However, you must still manually convert the BSDS to the new format,
by running the conversion utility, if you want to take advantage of these larger limits. The
converted BSDS can be used to store more active and archive log data sets in the BSDS.
For more information, see also Figure 1-48, "More Active Log Data Sets", on page 1-79
and Figure 1-49, "Increased Maximum Archive Log Data Sets", on page 1-81.


DSNJCNVB Utility
DB2 must be stopped in order to run the conversion utility, then
1. Rename your existing BSDS copy 1 data set, as a backup
2. Allocate a larger BSDS data set (see Installation job DSNTIJIN),
using the original BSDS name
3. Use IDCAMS REPRO to copy the original data set to the new data set
4. Repeat for copy 2, if dual BSDS data sets
5. Run DSNJCNVB
DSNJCNVB utility is invoked the same way as DSNJU003 and
DSNJU004

//DSNTLOG EXEC PGM=DSNJCNVB
//STEPLIB DD DISP=SHR,DSN=DSN810.SDSNLOAD
//SYSUT1 DD DISP=OLD,DSN=DB7OU.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB7OU.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*

© Copyright IBM Corporation 2004

Figure 11-52. DSNJCNVB Utility CG381.0

Notes:
Prior to running the BSDS conversion utility, you should perform the following tasks:
1. Rename your existing BSDS copy 1 data set. You should retain the original copy of the
BSDS in order to restore it in case of a failure during conversion.
2. Allocate a larger BSDS data set using the VSAM DEFINE statements in installation job
DSNTIJIN, using the original BSDS name.
3. Use IDCAMS REPRO to copy the original data set to the new, larger data set.
4. Repeat for copy 2 if you are using dual BSDS data sets.
5. Run the DSNJCNVB utility.
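Step 3 could be coded like this JCL sketch (the data set names are illustrative; substitute
your own BSDS and backup names):

//REPRO    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INDATASET(DB7OU.BSDS01.SAVE) -
        OUTDATASET(DB7OU.BSDS01)
/*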
DSNJCNVB, the BSDS conversion utility, is invoked the same way as the DSNJU003
and DSNJU004 utilities:


//DSNTLOG EXEC PGM=DSNJCNVB
//STEPLIB DD DISP=SHR,DSN=DSN810.SDSNLOAD
//SYSUT1 DD DISP=OLD,DSN=DB7OU.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB7OU.BSDS02
//SYSPRINT DD SYSOUT=*
Once the BSDSs have successfully been converted to the new format, you can
re-assemble the DSNZPARM module to take advantage of the higher limits. However, if
DB2 finds a value greater than 1,000 and determines that conversion has not occurred, it
issues the warning message DSNJ155I at startup, resets the maximum number of archive
log volumes to 1,000 (MAXARCH in DSNZPARM), and continues to start.
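As an illustrative fragment (keyword placement within DSNTIJUZ is site-generated), raising
the archive limit after conversion would be done in the DSN6LOGP macro invocation when
re-assembling DSNZPARM:

         DSN6LOGP MAXARCH=10000,                                        X
*              ... other DSN6LOGP keywords as generated ...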


Has BSDS Conversion Run?

DSNJCNVB CONVERSION PROGRAM HAS RUN DDNAME=SYSUT1


LOG MAP OF BSDS DATA SET COPY 2, DSN=DB7OU.BSDS01
LTIME INDICATES LOCAL TIME, ALL OTHER TIMES ARE GMT.
DATA SHARING MODE IS OFF
SYSTEM TIMESTAMP - DATE=2002.252 LTIME=22:15:43.72
UTILITY TIMESTAMP - DATE=2002.249 LTIME=18:23:22.80
VSAM CATALOG NAME=DB7OU
HIGHEST RBA WRITTEN 00000428E1F4 0000.000 00:00:00.0
HIGHEST RBA OFFLOADED 0000041F3FFF
RBA WHEN CONVERTED TO V4 000000000000
.........

Execute the DSNJU004 Utility

© Copyright IBM Corporation 2004

Figure 11-53. Has BSDS Conversion Run? CG381.0

Notes:
To determine if your BSDS data sets have been converted to support the larger number of
active and archive log data sets, you can execute the print log map utility (DSNJU004).
This will report if the DSNJCNVB utility has run, as shown in the visual above.
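A minimal print log map job might look like the following sketch (library and BSDS names
are illustrative):

//PRTMAP   EXEC PGM=DSNJU004
//STEPLIB  DD DISP=SHR,DSN=DSN810.SDSNLOAD
//SYSUT1   DD DISP=SHR,DSN=DB7OU.BSDS01
//SYSPRINT DD SYSOUT=*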


DSNWZP Considerations
A stored procedure to view DB2 subsystem parameter settings
V7 defaulted to DB2-established SP
V8 only WLM-established SP

On fallback to Version 7:
DSNWZP will not work!
Need to alter the definition
ALTER PROCEDURE SYSPROC.DSNWZP EXTERNAL NAME DSNWZPR;

© Copyright IBM Corporation 2004

Figure 11-54. DSNWZP Considerations CG381.0

Notes:
The stored procedure DSNWZP is used to view the DB2 subsystem parameter settings. It
is used by a number of facilities including the DB2 Control Center and Visual Explain.
In prior releases of DB2, DSNWZP executed as a DB2 established stored procedure.
In Version 8, DSNWZP is defined to run in a WLM established stored procedure address
space. The migration job DSNTIJSG will redefine DSNWZP as a WLM established stored
procedure, and changes the definition to use the external module DSNWZP.
In Version 7, the DSNWZP stored procedure can execute, as either a DB2 established
stored procedure using the external name of DSNWZP, or as a WLM established stored
procedure using the name DSNWZPR. The default under Version 7 was to run DSNWZP
as a DB2 established stored procedure.
So, even if you were already running the DSNWZP stored procedure in a WLM-managed
environment in V7, when you fall back from Version 8 to Version 7, or re-migrate from
Version 7 to Version 8, the DSNWZP stored procedure will fail to work. You will need


to issue the appropriate ALTER commands to change the external name for the stored
procedure.
When falling back to V7, issue the following SQL ALTER statement:
ALTER PROCEDURE SYSPROC.DSNWZP EXTERNAL NAME DSNWZPR;
When re-migrating to V8, issue:
ALTER PROCEDURE SYSPROC.DSNWZP EXTERNAL NAME DSNWZP;

Important: You must set NUMTCB=1 in your WLM environment for the DSNWZP stored
procedure.


Running the IVP Jobs


Important - You cannot run the Version 8 IVP jobs until DB2 is
running in Version 8 new-function mode (NFM)
So, when migrating from Version 7 to Version 8 CM
Use the Version 7 IVP jobs to
Verify the migration
Ensures that the old jobs work with Version 8 in NFM

After entering new-function mode


Run the Version 8 IVP
The Version 8 IVP jobs are created by the installation CLIST
as part of the ENFM option

© Copyright IBM Corporation 2004

Figure 11-55. Running the IVP Jobs CG381.0

Notes:
The DB2 Installation Verification Procedure (IVP) jobs are designed to test that DB2 is
functioning correctly after a new subsystem has been installed, or an existing subsystem
has been migrated to a new version. The IVP jobs are designed to test both existing
functionality as well as new functionality in the new release.
How can we use the Version 8 IVP suite to test a successful migration from Version 7 to
Version 8 compatibility mode?
We cannot! DB2 will not allow any new function to be executed while running in
compatibility mode.
We, therefore, recommend that you use the DB2 Version 7 IVP suite to test a successful
migration from Version 7 to Version 8 compatibility mode.
Once DB2 has successfully moved from compatibility mode, through
enabling-new-function mode and into new-function mode, you should then use the Version
8 IVP suite to test the migration. This is why the Version 8 IVP suite of jobs are only


generated by the DB2 installation CLIST when the ENFM option is specified, and not
during MIGRATE.


Returning to ENFM
You can only return from NFM to ENFM
Job DSNTIJEN
Does not undo ENFM changes
Prevents users from exploiting V8 new functions
No Version 8 dependent objects frozen
Dependency indicator "L"

Steps
Job DSNTIJEN
Reassemble DSNHDECP
Change for NEWFUN=NO

To return again to new-function mode


Re-run job DSNTIJNF and reassemble DSNHDECP with NEWFUN=YES

© Copyright IBM Corporation 2004

Figure 11-56. Returning to ENFM CG381.0

Notes:
Although there is no supported way to return from new-function mode to compatibility mode
or fallback to Version 7, there is a path to return from new-function mode to
enabling-new-function mode.
The new installation job DSNTIJEN invokes the new CATENFM utility (CATENFM
ENFMON) to mark the DB2 subsystem or data sharing group in enabling-new-function
mode. No ENFM processing will be undone. You will also need to re-assemble the data
only module, DSNHDECP, with NEWFUN=NO to turn off new-function mode processing in
the DB2 precompiler.
This is the only means by which NFM can be turned off so that a subsystem can prevent
new Version 8 function from continuing to be used (create new objects/applications that
use new functions). All Version 8 dependent objects will continue to be available, as they
will not be frozen.
For example, table-controlled partitioned table spaces will continue to be available; however,
you cannot create any new ones. All the plans and packages bound in Version 8 NFM are also


available and they do not need to be rebound before use. Version 8 format DBDs, plans
and packages are also available.
Re-migration to new-function mode is achieved by re-running the installation job
DSNTIJNF and re-assembling DSNHDECP with NEWFUN=YES.
This job has merely been provided to complete the logical set of migration jobs. We do
not expect this path to be used very often.


Migration and the Future


No skip release migration
Can only get to V8 from V7
Will need to be in NFM before migrating to future releases
Multiple mode concept will be used in future releases
There will not be a long ENFM conversion process
Ability to prevent new function from being used until shop is ready
No accidental use of new function that would make fallback
difficult / impossible

© Copyright IBM Corporation 2004

Figure 11-57. Migration and the Future CG381.0

Notes:
It is always difficult to look into the crystal ball. However, the new migration strategy
introduced in DB2 Version 8 will probably survive well into the future.
You will need to be running DB2 Version 8 new-function mode before you will be allowed to
migrate to any future release of DB2. We therefore recommend that you plan to move to
NFM as soon as you are ready and your business allows, thereby satisfying this key
prerequisite for any future migration of DB2.
There will be no more “skip” releases, as we saw from Version 5 to Version 7. That was a
one-off strategy to move customers forward through and beyond the turn-of-the-century
“bug” known as Y2K.
We can see the multiple migration strategy continuing in future releases of DB2; however,
the enabling-new-function phase will probably not be as long. This strategy solves a
number of problems. For example:


• Customers are able to restrict the use of new functions until the new version is stable,
their applications are stable running under the new version, and they are now ready to
exploit the new functions.
• Restricting the use of new functions makes the process of migration and fallback
simpler and much less prone to error.



11.4 DB2 Catalog Changes


DB2 Catalog Changes

DB2 catalog continues to grow with every DB2 release

DB2 Version  Table Spaces  Tables  Indexes  Columns  Table Check Constraints
V1           11            25      27       269      N/A
V3           11            43      44       584      N/A
V4           11            46      54       628      0
V5           12            54      62       731      46
V6           15            65      93       987      59
V7           20            82      119      1206     105
V8           22            83      132      1265     105

© Copyright IBM Corporation 2004

Figure 11-58. DB2 Catalog Changes CG381.0

Notes:
This visual shows how the DB2 catalog continues to grow with every release of DB2. In
addition to the new catalog objects required to support the new function in DB2 (tables,
columns etc.), Version 8 introduces a number of other significant changes to the catalog:
• Adding a number of columns, indexes, and new tables to the DB2 catalog.
• Changing the definition of many existing columns to support long names.
• Conversion of the character columns in the catalog from EBCDIC to Unicode.
In the next few visuals we introduce the major changes that are made to the DB2
catalog during migration to compatibility mode, and in enabling-new-function mode.
Please refer to the DB2 Release Planning Guide, SC18-7425, and DB2 SQL
Reference, SC18-7426, for a complete list of changes to the DB2 catalog.


Migration - New Table Spaces
SYSEBCDC
EBCDIC table space
Not used until NFM
SYSIBM.SYSDUMMY1 moved from SYSSTR to SYSEBCDC

SYSALTER
New catalog table SYSIBM.SYSOBDS
To store version 0 information of ALTERed tables

© Copyright IBM Corporation 2004

Figure 11-59. Migration - New Table Spaces CG381.0

Notes:
SYSEBCDC is a new table space, defined in DSNDB06. Its encoding scheme is EBCDIC.
This table space is used to store the catalog table SYSIBM.SYSDUMMY1, which needs to
remain in EBCDIC. Its current (V7) table space, SYSSTR, will be converted to Unicode
during enabling-new-function mode.
SYSIBM.SYSDUMMY1 is a “dummy” table which is used by many applications as a
“default” or “dummy” table to use in SQL statements. For example:
SELECT CURRENT TIMESTAMP
FROM SYSIBM.SYSDUMMY1
Therefore SYSIBM.SYSDUMMY1 can be thought of more as an application table, rather
than a catalog table, and it may be imbedded into many applications today. So, if it were to
be converted to Unicode, all SQL statements accessing SYSIBM.SYSDUMMY1 would
become multiple CCSID set statements. Results of multiple CCSID set SQL statements
can be different from single CCSID statements; for example, the ordering of result sets, or
the use of range predicates can affect the result set. This could cause unnecessary
complications for existing applications.


Please note that SYSIBM.SYSDUMMY1 will not be moved until enabling-new-function
mode. The table space SYSEBCDC will remain empty until that time. It is interesting to
note that the table space will contain only one table, which has one row with one column
defined as CHAR(1). We therefore recommend that you do not commit too much space to
this new table space.
SYSALTER is another new table space in DSNDB06. It is used to host the new catalog
table SYSIBM.SYSOBDS, and stores “version 0” information after a table has been altered.


Migration - New Tables
SYSIBM.IPLIST
Created in DSNDB06.SYSDDF
Allows multiple IP addresses to be specified for a given LOCATION
SYSIBM.SYSSEQUENCEAUTH
Created in DSNDB06.SYSSEQ2
Records the privileges that are held by users over sequences
SYSIBM.SYSOBDS
Created in DSNDB06.SYSALTER
Contains the initial version of a table space or an index space that is
still needed for recovery

© Copyright IBM Corporation 2004

Figure 11-60. Migration - New Tables CG381.0

Notes:
Version 8 introduces three new tables into the DB2 catalog:
• SYSIBM.IPLIST is a new table that is defined in the existing SYSDDF catalog table
space. It allows multiple IP addresses to be specified for a given LOCATION. The same
value for the IPADDR column cannot appear in both the SYSIBM.IPNAMES table and
the SYSIBM.IPLIST table.
The SYSIBM.IPLIST table is summarized in Table 11-4. For a more detailed explanation
of this table and how it is used, refer to Figure 6-6, "TCP/IP Member Routing", on page
6-14.


Table 11-4 SYSIBM.IPLIST

LINKNAME VARCHAR(24) NOT NULL
This column is associated with the value specified in the LINKNAME column in the
SYSIBM.LOCATIONS table and the SYSIBM.IPNAMES table. The values of the other
columns in the SYSIBM.IPNAMES table apply to the server identified by the LINKNAME
column in this row.

IPADDR VARCHAR(254) NOT NULL
This column contains the IP address or domain name of a remote TCP/IP host of the
server. If WLM Domain Name Server workload balancing is used, this column must
contain the member-specific domain name. If Dynamic VIPA workload balancing is used,
this column must contain the member-specific Dynamic VIPA address.
The IPADDR column must be specified as follows: if IPADDR contains a left-justified
character string containing four numeric values delimited by decimal points, DB2 assumes
the value is an IP address in dotted decimal format. For example, '123.456.78.91' would be
interpreted as a dotted decimal IP address. All other values are resolved through a TCP/IP
gethostbyname socket call. TCP/IP domain names are not case sensitive.

IBMREQD CHAR(1) NOT NULL WITH DEFAULT 'N'
A value of Y indicates that the row came from the basic machine-readable material (MRM)
tape.

The index listed in Table 11-5 is defined for SYSIBM.IPLIST.

Table 11-5 SYSIBM.IPLIST Indexes

Index Name   Index Description   Index Columns
DSNDUX01     Unique, Clustering  LINKNAME
• SYSIBM.SYSSEQUENCEAUTH is a new catalog table defined in the existing table
space DSNDB06.SYSSEQ2. It records the privileges that are held by users over
sequences. These are listed in Table 11-6.
Table 11-6 SYSIBM.SYSSEQUENCEAUTH
GRANTOR VARCHAR(128) NOT NULL
Authorization ID of the user who granted the privileges.

GRANTEE VARCHAR(128) NOT NULL
Authorization ID of the user or group that holds the privileges, or the name of an
application plan or package that uses the privileges. PUBLIC for a grant to PUBLIC.

SCHEMA VARCHAR(128) NOT NULL
Schema of the sequence.

NAME VARCHAR(128) NOT NULL
Name of the sequence.

GRANTEETYPE CHAR(1) NOT NULL
Type of grantee:
blank  An authorization ID.
P      An application plan or package. The grantee is a package if COLLID is not blank.
R      Internal use only.

AUTHHOWGOT CHAR(1) NOT NULL
Authorization level of the user from whom the privileges were received. This authorization
level is not necessarily the highest authorization level of the grantor.
L      SYSCTRL
S      SYSADM
blank  Not applicable.

ALTERAUTH CHAR(1) NOT NULL
Indicates whether the grantee holds the ALTER privilege on the sequence:
blank  Privilege is not held.
G      Privilege is held with the GRANT option.
Y      Privilege is held without the GRANT option.

USEAUTH CHAR(1) NOT NULL
Indicates whether the grantee holds the USAGE privilege on the sequence:
blank  Privilege is not held.
G      Privilege is held with the GRANT option.
Y      Privilege is held without the GRANT option.

COLLID VARCHAR(128) NOT NULL
If the GRANTEE is a package, its collection name. Otherwise, a string of length zero.

CONTOKEN CHAR(8) NOT NULL FOR BIT DATA
If the GRANTEE is a package, the consistency token of the DBRM from which the
package was derived. Otherwise, blank.

GRANTEDTS TIMESTAMP NOT NULL
Time when the GRANT statement was executed.

IBMREQD CHAR(1) NOT NULL
A value of Y indicates that the row came from the basic machine-readable material
(MRM) tape.


The indexes listed in Table 11-7 are defined for SYSIBM.SYSSEQUENCEAUTH.

Table 11-7 SYSIBM.SYSSEQUENCEAUTH Indexes

Index Name   Index Description        Index Columns

DSNWCX01     Non-Unique, Clustering   SCHEMA, NAME
DSNWCX02     Non-Unique               GRANTOR, SCHEMA, NAME
DSNWCX03     Non-Unique               GRANTEE, SCHEMA, NAME

• SYSIBM.SYSOBDS (shown in Table 11-8) is a new catalog table that resides in the
new table space DSNDB06.SYSALTER. The table contains copies of old versions of
DBDs that may be needed for point-in-time recoveries. It contains one row for each
table space or index that can be recovered to an image copy that was made before the
first version was generated.
Table 11-8 SYSIBM.SYSOBDS

Column Name   Data Type        Description

CREATOR       VARCHAR(128)     Authorization ID under which the table space
              NOT NULL         or index was ALTERed.

NAME          VARCHAR(128)     Name of the object ALTERed.
              NOT NULL

DBID          SMALLINT         Identifier of the database to which the
              NOT NULL         ALTERed object belongs.

PSID          SMALLINT         Identifier of the table space or index space
              NOT NULL         descriptor.

OBID          SMALLINT         Identifier of the table or index fan set
              NOT NULL         descriptor.

OBDTYPE       CHAR(1)          Type of object (OBDREC or OBDFS).
              NOT NULL

VERSION       SMALLINT         Version of the original object when ALTERed.
              NOT NULL

CREATETS      TIMESTAMP        Timestamp when the first new version was
              NOT NULL         created.

DBD           VARCHAR(30000)   OBDREC or OBDFS image.
              NOT NULL

IBMREQD       CHAR(1)          A value of Y indicates that the row came from
              NOT NULL         the basic machine-readable material (MRM)
                               tape.


The indexes listed in Table 11-9 are defined for the table SYSIBM.SYSOBDS.

Table 11-9 SYSIBM.SYSOBDS Indexes

Index Name   Index Description    Index Columns

DSNDOB01     Unique, Clustering   CREATOR, NAME
DSNDOB02     Non-Unique           DBID, PSID


Migration - Other Changes


Column changes
Existing binary columns will be ALTERed to be FOR BIT DATA
columns to ensure they are handled properly during ENFM
processing
Some existing columns converted to FOR BIT DATA
Approx 60 columns added to existing tables
Approx 15 changes to values of existing columns
Approx 45 column definitions have been changed
Approx 10 changes to RI and table check constraints
Column length changes to support long names are only done
during ENFM processing

© Copyright IBM Corporation 2004

Figure 11-61. Migration - Other Changes CG381.0

Notes:
A number of columns within the DB2 catalog are used to store binary data rather than
character data. However, these columns are not defined as FOR BIT DATA in V7.
During catalog migration these columns are altered to be defined as FOR BIT DATA. This
is to ensure they are handled correctly (and not translated) when the catalog is converted
to Unicode during enabling-new-function mode. In
addition, a few other columns which store character data are converted to FOR BIT DATA.
This visual summarizes the number of other changes that are made to existing catalog
tables. Once again, please refer to Appendix A of the DB2 SQL Reference, SC18-7426 for
a complete list of new and changed catalog tables.
During the migration to DB2 Version 8 compatibility mode, no column lengths are being
changed to support long names. This occurs during enabling-new-function mode.

CM - Changed Indexes
Table Name        Index Name   Index Description   Index Columns

SYSCOLAUTH        DSNACX01     Non-unique          CREATOR, TNAME, COLNAME
SYSFOREIGNKEYS    DSNDRH01     Non-unique          CREATOR, TBNAME, RELNAME
SYSINDEXES        DSNDXX04     Non-unique          INDEXTYPE
SYSRELS           DSNDLX02     Non-unique          CREATOR, TBNAME
SYSSEQUENCESDEP   DSNSRX02     Non-unique          BSCHEMA, BNAME, DTYPE
SYSTABLEPART      DSNDPX03     Non-unique          DBNAME, TSNAME, LOGICAL_PART
SYSTABAUTH        DSNATX04     Non-unique          TCREATOR, TTNAME
SYSTABLES         DSNDTX03     Non-unique          TBCREATOR, TBNAME
SYSVIEWDEP        DSNGGX04     Non-unique          BCREATOR, BNAME, BTYPE, DTYPE

New indexes on existing catalog tables

© Copyright IBM Corporation 2004

Figure 11-62. CM - Changed Indexes CG381.0

Notes:
This visual outlines the changes that are made to the catalog indexes on catalog tables that
exist in V7. You might want to consider reviewing the indexes you have created on the
catalog, if any. Some of these indexes may be no longer required.


ENFM - Catalog Changes


17 catalog and 1 directory table space converted to Unicode
2 DROPed tables
SYSLINKS
No index data sets to be deleted
SYSPROCEDURES
VSAM data sets for Index DSNKCX01 can be deleted after ENFM

SYSIBM.SYSDUMMY1
Moved from DSNDB06.SYSSTR to DSNDB06.SYSEBCDC
Many columns changed to VARCHAR
To support long names
System-defined catalog indexes changed to NOT PADDED
7 catalog and 1 directory table space moved from BP0

© Copyright IBM Corporation 2004

Figure 11-63. ENFM - Catalog Changes CG381.0

Notes:
This visual summarizes the changes made to the DB2 catalog during
enabling-new-function mode processing.
• 18 table spaces are converted from EBCDIC to Unicode. The others remain as
EBCDIC. This process is defined at length in the discussion on enabling-new-function
mode earlier in this unit. These table spaces are:
- SPT01 (directory)
- SYSDBASE
- SYSDBAUT
- SYSDDF
- SYSGPAUT
- SYSGROUP
- SYSGRTNS
- SYSHIST
- SYSJAVA
- SYSOBJ

- SYSPKAGE
- SYSPLAN
- SYSSEQ
- SYSSEQ2
- SYSSTATS
- SYSSTR
- SYSUSER
- SYSVIEWS
Please note that after the table spaces have been converted to Unicode, columns
defined as FOR BIT DATA will not automatically be converted to EBCDIC for display in
tools such as SPUFI in ISPF. For example, the SQL TEXT column in
SYSIBM.SYSPACKSTMT will be displayed in Unicode rather than EBCDIC.
Consequently they will be much harder to read and understand.
Please check your DB2 tools and facilities carefully. You may not be able to execute the
same simple queries against the catalog tables to get the information you are used to.
Important: We therefore recommend that you check the tools you regularly use against
the DB2 catalog to see how they will handle catalog columns defined as FOR BIT DATA
and the new long varchar columns.
You may need to develop new ways to access the data you require. For example, Visual
Explain will convert the TEXT column in SYSIBM.SYSPACKSTMT back to something
you can read.
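The display problem can be reproduced outside DB2. In this Python sketch, cp037 stands in for the subsystem's actual EBCDIC CCSID; it shows why UTF-8 bytes look like gibberish to a tool that assumes EBCDIC, and how decoding with the right codec recovers the text:

```python
stmt = "SELECT * FROM SYSIBM.SYSTABLES"

ebcdic_bytes = stmt.encode("cp037")  # how an EBCDIC catalog stores the text
utf8_bytes = stmt.encode("utf-8")    # how the Unicode catalog stores it

print(ebcdic_bytes[:6].hex())  # e2c5d3c5c3e3 -> 'SELECT' in EBCDIC
print(utf8_bytes[:6].hex())    # 53454c454354 -> 'SELECT' in UTF-8

# A tool that decodes the UTF-8 bytes as if they were EBCDIC prints gibberish:
print(utf8_bytes.decode("cp037"))
# Decoding with the correct codec recovers the statement:
print(utf8_bytes.decode("utf-8") == stmt)  # True
```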
The following catalog and directory table spaces are not converted to Unicode and
remain as EBCDIC. Many of these tables already contain binary data, so there is no
need to convert them to Unicode. They also contain many logical names which must
interface with external MVS names (for example, data set names), which remain
EBCDIC.
- DBD01 (directory)
- SCT02 (directory)
- SYSLGRNX (directory)
- SYSUTIL (directory)
- SYSCOPY
• Two tables are dropped:
a. SYSIBM.SYSLINKS
This table is no longer used in Version 8. Because the table resides in the
DSNDB06.SYSDBASE table space and has no indexes defined, there are no VSAM
data sets to be cleaned up.
b. SYSIBM.SYSPROCEDURES
This table is no longer used in V7, since the DB2 stored procedure definitions have
been moved to SYSIBM.SYSROUTINES when DB2 was migrated from Version 5 to
Version 6 or Version 7. As the table resides in the DSNDB06.SYSPKAGE table

space, there is no need to delete the underlying table space VSAM data set.
However, the VSAM data set for its only index, DSNKCX01, needs to be deleted
after you are in new-function mode.
• The table SYSIBM.SYSDUMMY1 is moved from the table space DSNDB06.SYSSTR to
its own table space, DSNDB06.SYSEBCDC.
• Columns in almost every catalog and directory table have changed column type and
length to support long names. Many columns change from CHAR to VARCHAR at
three times the size (for example, CHAR(8) becomes VARCHAR(24)), and other
columns will grow to VARCHAR(128). The list of actual columns that will change is
extensive. Once again, please refer to Appendix A of the DB2 SQL Reference,
SC18-7426.
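The factor of three reflects UTF-8 sizing: a character that occupies one byte in an EBCDIC single-byte code page can need up to three bytes in UTF-8, which is why CHAR(8) grows to VARCHAR(24). A quick Python check:

```python
# Bytes per character in UTF-8 for characters that each occupy a
# single byte in common EBCDIC single-byte code pages:
for ch in ("A", "é", "€"):
    print(ch, len(ch.encode("utf-8")))
# A 1
# é 2
# € 3   (worst case: 3 bytes per character, so CHAR(8) -> VARCHAR(24))
```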
• When you migrate DB2 from Version 7 to Version 8 compatibility mode, the default
behavior of indexes will be PADDED, as in previous versions of DB2. However, during
enabling-new-function mode, all the DB2-defined catalog indexes will be changed to
NOT PADDED.
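A simplified model of the two key formats (hypothetical, not DB2's actual on-disk key encoding) shows why NOT PADDED matters for VARCHAR keys: a padded key is blank-filled to the maximum length, so the value's true length is lost, while a not-padded key keeps the length and can therefore be returned directly from the index:

```python
MAX_LEN = 128  # assumed maximum length of the VARCHAR key column

def padded_key(value: str) -> str:
    # PADDED: blank-fill the value to the full key length; the actual
    # length of the VARCHAR value is no longer recoverable from the key.
    return value.ljust(MAX_LEN)

def not_padded_key(value: str) -> tuple:
    # NOT PADDED: the key carries the actual length with the value,
    # allowing index-only access for VARCHAR columns.
    return (len(value), value)

# Two distinct values that collapse to the same padded key:
print(padded_key("EMP") == padded_key("EMP "))          # True
print(not_padded_key("EMP") == not_padded_key("EMP "))  # False
```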
• Increasing the lengths of some of the catalog table columns to support long names
causes some catalog table rows to exceed the current 4K page size maximum. In these
cases, the table spaces that contain these tables are moved to an appropriately sized
buffer pool during enabling-new-function mode processing.
A total of 7 catalog table spaces and 1 directory table space will be moved from BP0 to
either BP8K0 or BP16K0.
Table 11-10 New Catalog Table Space Buffer Pool Assignments

Table Space Name   Buffer Pool   Page Size

SPT01              BP8K0         8K
SYSDBASE           BP8K0         8K
SYSGRTNS           BP8K0         8K
SYSHIST            BP8K0         8K
SYSOBJ             BP8K0         8K
SYSSTR             BP8K0         8K
SYSSTATS           BP16K0        16K
SYSVIEWS           BP8K0         8K

Note that the catalog and directory table spaces always use a BPxxK0 buffer pool. If these
buffer pools are currently used by other DB2 objects, you may want to reassign those
objects to other buffer pools and give the catalog objects their own dedicated buffer pool
(as you normally do with BP0).

11.5 msys for Setup - DB2 Customization Center


msys for DB2 Customization Center

[Diagram: the evolution from the traditional installation method, through OS/390
Web-based assistants, to the z/OS Managed System Infrastructure for Setup
(msys for Setup).]

© Copyright IBM Corporation 2004

Figure 11-64. msys for DB2 Customization Center CG381.0

Notes:
You can install your DB2 Version 8 subsystem either as a host based installation using the
DB2 installation CLIST, or via the “msys for Setup” facility. The msys for Setup DB2
Customization Center is a workstation-based facility which replaces the DB2 Installer
workstation tool. In fact, the msys for Setup DB2 Customization Center does a lot more for you
than the DB2 Installer it replaces, as you will see in the next few visuals:
• Overview and benefits
• Hardware and software prerequisites
• The msys workplace
• How is msys for Setup used to customize a product?

In the Beginning
For a number of years, OS/390 has offered Web-based
assistants - "WIZARDS"
Convenient way of going through planning and ordering tasks
Defining configuration parameters for some OS/390 components
Why did we introduce them?
Configuring OS/390 products and components is a manual and
document driven process
Program directories
Installation / Configuration Guides
Setting hundreds of parameters by hand
Detailed descriptions in one manual and usage
information in another

© Copyright IBM Corporation 2004

Figure 11-65. In the Beginning CG381.0

Notes:
The work associated with installing and maintaining software products has become
increasingly complex and costly over time. The drive to reduce costs has correspondingly
increased. This has brought many customers today to question the way they
install and manage software:
• It is too complex.
• It takes up too much time.
• It requires skills I either do not have available or would prefer to use for other tasks.
To address these growing concerns, IBM invested in an initiative to make software more
self managing. Such systems can reduce downtime, operating costs and administrative
requirements. The goals of this initiative are for the systems to become:
• Self-configuring: System designed to define itself “on the fly”.
• Self-healing: System capable of autonomic problem determination and resolution.
• Self-protecting: System designed to protect itself from unauthorized access anywhere.

• Self-optimizing: System designed to automatically manage resources in order to meet
enterprise needs in the most efficient fashion.
Web-based Wizards are the first steps in this strategy. They offer these advantages:
• Reduced configuration steps
• No need for the user to understand the syntax in detail
• Reduction in choices and dependencies; do not overwhelm the user
• Automatic generation of jobs and PARMLIB statements
Web-based interactive dialogs for configuration guide the user through a set of high level
questions, and use defaults and best practices where possible to reduce decision making.
Inputs are checked immediately for syntactical and semantic correctness. The right level of
help is available without having to trawl through manuals. The wizards deliver job skeletons
that apply the defined values to the system.
However, wizards did not address issues like these:
• Implementation is still to be done by the customer.
• Configuration data is not stored in a central place.
• No discovery of the current system configuration is possible.

What Is msys for Setup?
z/OS Managed System Infrastructure for Setup (msys for Setup)
It is a base element of z/OS - NO extra charge
Part of a major ease-of-use initiative within IBM
Strategic solution for product installation, configuration, and function
enablement
Reduce the complexity associated with:
Installation
Function enablement
Setup and configuration
Customization
Currently, other z/OS products enabled for msys include:
TCP/IP, UNIX System Services, RACF, AMS, SMS, ISPF, LE, BCP,
Parallel Sysplex and DB2

© Copyright IBM Corporation 2004

Figure 11-66. What Is msys for Setup? CG381.0

Notes:
Managed System Infrastructure for Setup (msys for Setup) is a z/OS initiative to simplify
the customization and installation of all z/OS products. msys for Setup is a base element of
z/OS, which currently supports TCP/IP, UNIX System Services, RACF, AMS, SMS, BCP,
Parallel Sysplex, ISPF, LE, and now also DB2.
msys for Setup participates in the IBM autonomic computing initiative and is an approach to
reduce the complexity associated with function enablement, setup, and configuration. msys
for Setup builds on Web-based wizard technologies for installing and maintaining software.
msys for Setup has been developed to automate all setup processes that do not require
decisions by a system programmer, and assists by deriving values when decisions by a
human operator are required. msys for Setup uses wizard-like configuration dialogs that
guide the user through a set of high-level questions. These configuration dialogs are part of
the graphical user interface called the msys for Setup workplace. The configuration dialogs
use defaults and best practices values wherever this is possible to cut down on the number
of decisions that you have to make. Because the customization process is now handled by

a program instead of being a manual process, input is immediately checked for syntactical
and semantic correctness.
A second component of msys for Setup is the z/OS management directory, which is msys
for Setup’s central repository for configuration data of msys for setup-enabled systems. It is
based on the Lightweight Directory Access Protocol (LDAP) directory support that is
available as part of z/OS and on the Common Information Model (CIM) data schema. The
management directory provides a single interface to all management-related system data.
More information about msys for Setup is available from the Web site:
http://www.ibm.com/servers/eserver/zseries/msys/moreset.html

msys for Setup - Benefits
In the past you needed to:
Manually update a number of system resources
Worry about multiple different interfaces and their syntax
Decide how to set each value
Check if the configuration is valid
Read several feet of documentation

With msys:
Updates to the system are done automatically
The system takes care of the syntax and interfaces
The system does the calculations
Only a small number of values are shown when a decision is made
The validity of a configuration is checked by the system
Online help is just a mouse click away

© Copyright IBM Corporation 2004

Figure 11-67. msys for Setup - Benefits CG381.0

Notes:
msys for Setup is the IBM strategy for installing and maintaining z/OS products. It builds
upon the Wizard technology, employing the same easy “interview style” for defining
configuration values. As with Wizards, msys for Setup does not ask you to specify
hundreds of parameters. It uses defaults and best practices values and derives low-level
answers from high-level questions, and it downloads generated skeletons to the host for
execution. msys for Setup caters to both experienced and novice users:
• It provides a consistent look-and-feel for all z/OS products.
• Plug-ins use wizards that reduce the amount of information that the user has to enter to
get up and running.
• Context-sensitive and menu-driven help is available at the user's fingertips.
• It eliminates JCL install jobs and automates many install steps that are typically
performed manually.
In addition, msys for Setup:
• Allows a user to preview changes that will be made to the system.

• Logs updates made to a system and allows users to browse these updates easily.
• Can retrieve the current parameter settings of a product during refresh (for example,
parameter settings for a DB2 subsystem that has to be migrated).
• Allows customization changes to be saved intermediately, so that the user does not
have to customize in one sitting.
• Provides cloning support.
• Provides automatic access to the configuration data of other products, reducing the
amount of information that is needed from the user (for example, LE library names).

msys for Setup - Overview
Product specific customization dialogs store configuration data
in the z/OS Management Directory
Will become the central repository for all configuration data
Update requests are also stored
Describe HOW the configuration is to be applied to system
Set a field in a PARMLIB member
Create a new userid with a set of defined attributes

After the configuration parameters have been specified, msys for


Setup can automatically update the system configuration directly
Details the changes before they are made
Relieves the user of the intricacies of z/OS configuration interfaces
Parmlib members

© Copyright IBM Corporation 2004

Figure 11-68. msys for Setup - Overview CG381.0

Notes:
Managed System Infrastructure for Setup (msys for Setup) establishes a central repository
for product configuration data. It also provides a single interface to the repository. It
automates all processes that do not require decisions by the system administrator and
defines defaults to minimize the situations in which decisions are necessary.
z/OS products that provide an msys for Setup plug-in can be managed using msys for
Setup. Parameter values for a particular product that is enabled for use with msys for Setup
can be specified using the graphical user interface of the product plug-in. These values are
stored in the msys for Setup management directory and eventually used by the host code
of the product plug-in to customize and install the product.
For information about using msys for Setup with DB2 UDB for z/OS and other products in
your z/OS environment, see Managed System Infrastructure for Setup User’s Guide,
SC33-7985.


msys for Setup - Framework


The msys framework consists of three components:
msys host program
Resides and runs on a z/OS system and manages all
installation/customization tasks
msys management directory
Uses an LDAP Server
Stores configuration data for all msys-enabled products
msys workplace
Runs on the workstation and provides the user with a Windows
Explorer-style GUI to manage z/OS products

© Copyright IBM Corporation 2004

Figure 11-69. msys for Setup - Framework CG381.0

Notes:
The msys for Setup framework consists of three components:
• The msys for Setup Workplace. It runs on a Windows workstation and provides a
graphical user interface similar to the Windows Explorer. The msys for Setup workplace
is used to manage z/OS products. You simply interact with the workplace. It can be
downloaded to the workstation from any z/OS host.
• The msys for Setup Host Program. The host program usually resides on the z/OS
system. It manages all installation and customization tasks.
• The msys for Setup Management Directory. This is based on the Lightweight
Directory Access Protocol (LDAP) directory support. This stores the configuration data
for all systems on which msys for Setup is enabled. It resides on the z/OS system.

Workstation Requirements
Software requirements:
Operating system - either Windows NT 4.0 with fixpack 4 (or higher),
Windows 2000 or Windows XP
Communication:
TCP/IP must be configured and active with connectivity to the host
Administrator rights:
Must logon as Administrator to perform the installation
Hardware requirements:
Processor Pentium or equivalent
Speed 266 MHz or faster
Memory 128 MB, 192 MB is recommended.
Available disk capacity 100 MB or more
Screen resolution 800x600 (1024x768 recommended)

© Copyright IBM Corporation 2004

Figure 11-70. Workstation Requirements CG381.0

Notes:
The msys for Setup Workplace can be installed onto any machine with either Windows
NT 4.0 with FixPak 4 (or higher) applied, Windows 2000, or Windows XP. TCP/IP must be
configured and running, with connectivity to the host.
When installing the workplace code, the user ID used for performing the installation must
have administrator rights on the workstation.


Driving System Requirements


Operating system:
OS/390 R8 or higher / z/OS R1 or higher
z/OS LDAP Server
Non-OS/390 software:
Developer kit for OS/390, Java 2 Technology Edition (Java 1.3.0) or
Java for OS/390 (Java 1.1.8)
(Out of support)

FTP connection from msys workplace

© Copyright IBM Corporation 2004

Figure 11-71. Driving System Requirements CG381.0

Notes:
The msys for Setup Host Program requires z/OS release 1 or higher, or OS/390 release 2.8
or higher, with TCP/IP connectivity to the msys for Setup workstation. So when you are
ready to go to DB2 for z/OS V8, the z/OS version you are on should not pose a problem, as
DB2 V8 requires z/OS 1.3 or above.
The msys for Setup Host Program is a Java-based application which runs in UNIX System
Services. It, therefore, requires the Java Development Kit for z/OS to be installed and
available.
msys for Setup uses the z/OS LDAP Server as its management directory, or repository.
This, in turn, requires DB2 for z/OS to store the data.

How Does It Work?

[Diagram: the workplace code on the workstation communicates with the msys host
code on the target z/OS system, which stores configuration data in the LDAP-based
management directory.]

© Copyright IBM Corporation 2004

Figure 11-72. How Does It Work? CG381.0

Notes:
A z/OS product must provide plug-in code to the msys for Setup framework in order to be
managed through msys for Setup.
A user interacts with the workstation component of msys for Setup, to enter parameter
values for a specific product through the product's plug-in GUI.
These values are used by the product's host plug-in code to customize and install the
product.
msys for Setup stores these values in the msys for Setup Management Directory.


msys for Setup and DB2

[Diagram: msys for Setup installs and migrates subsystems such as IMS, CICS,
WebSphere, and DB2 (strategic open access) on the z/OS enterprise server.]

© Copyright IBM Corporation 2004

Figure 11-73. msys for Setup and DB2 CG381.0

Notes:
DB2 Version 8 ships a plug-in for msys for Setup, known as the DB2 Customization Center.
It is an XML document in the new data set hlq.SDSNXML. In order to be able to run the
DB2 Customization Center in the msys for Setup framework, PTF OA4581 must be
installed.
After you have prepared your z/OS subsystem and workstation for use with msys for Setup,
you can add the DB2 Customization Center to the msys for Setup workplace. After doing
this, you are ready to use the DB2 Customization Center.
During refresh, the DB2 Customization Center retrieves current DB2 and z/OS settings.
These values are stored in the msys for Setup management directory. Then, you provide
information to the DB2 Customization Center that is used to set up your DB2 subsystem.
During customization, you review and, if necessary, modify the values of DB2 system
parameters.
Then, during update, the DB2 Customization Center applies the changes that you made to
the DB2 subsystem.

We introduce these processes in the next few visuals.


msys Workplace

© Copyright IBM Corporation 2004

Figure 11-74. msys Workplace CG381.0

Notes:
This visual provides an example of the msys for Setup Workplace.
From here, you can use the navigation tree on the left to install a new product set into msys
for Setup and use msys for Setup to install and customize the product in z/OS.
For more information on using the msys for Setup workplace and the various functions you
need to perform, see Managed System Infrastructure for Setup User’s Guide, SC33-7985.

How Is msys for Setup Used?
A user must perform the following steps for any z/OS product
enabled for msys
1. Install Product Set - Registers the product with msys
2. Refresh Management Directory - Retrieves current product
settings and stores them in the management directory
3. Customize - Allows the user to set product specific settings
4. Update - Performs actual changes on z/OS resources based on
customization

© Copyright IBM Corporation 2004

Figure 11-75. How Is msys for Setup Used? CG381.0

Notes:
You must perform the following steps for any z/OS product enabled for msys for Setup:
1. Install product set:
You must first register a product, such as DB2, with msys for Setup, which will ask for
the name of a product definition file. This is a simple XML file that provides msys for
Setup with the names and locations of the entry points into the product plug-in code.
DB2 Version 8 ships its product definition file in a new SMP/E managed data set called:
hlq.SDSNXML(DSNMXML)
2. Refresh Management Directory:
Refresh retrieves the current parameter settings for the product that is being
customized. It also retrieves information from other product plug-ins that are also
needed by the product that is being customized.


The “retrieve” process attempts to minimize the amount of information required from
the user. For example, for a DB2 migration, the retrieve process can collect the current
subsystem parameters, and the LE library names can be obtained from the LE plug-in.
3. Customize:
The “customize” process presents the user with a series of customization dialogs to
enter product specific settings. Wizards ask the user a series of high level questions
that are used to set various customization parameters. Typical defaults are used for
'advanced' parameters. Wizard summary parameter panels, or “Property sheets”, allow
the user to modify parameter settings explicitly.
4. Update:
The “update” process performs the actual customization of the product on the z/OS host
using the parameter settings entered by the user in the previous step.
During this step, the msys for Setup framework runs a batch job on the z/OS host which
initiates the msys for Setup (Java) host code that customizes the product. For DB2, this
step performs the same tasks as the DB2 install JCL jobs.
msys for Setup also has an update log that lists all of the steps that have been
performed on the z/OS host to customize and install a product. The update log can also list
user actions that may have to be performed outside msys for Setup.

Install Product Set


Figure 11-76. Install Product Set CG381.0

Notes:
Before you can use the DB2 Customization Center to install DB2, you must add the DB2
Customization Center to msys for Setup. This tells msys for Setup that the DB2 libraries
have been loaded on the z/OS system via SMP/E.
DB2 UDB for z/OS includes an XML document that provides msys for Setup with
necessary information. You can use the “Add a product set” wizard from the msys for Setup
workplace to add the DB2 product set to the workplace. When using this wizard, specify
that you already have an up-to-date XML document for DB2. Enter the data set and
member name of the XML document in the following format:
prefix.SDSNXML(DSNMXML)


Refresh Management Directory


Figure 11-77. Refresh Management Directory CG381.0

Notes:
After you have added the DB2 Customization Center to the msys for Setup framework, you
must perform a refresh. During this step, the msys for Setup workplace retrieves any
configuration information about your DB2 subsystem that exists on the z/OS host.
The configuration information is stored in the msys for Setup management directory. If you
are migrating DB2, customizing an existing DB2 subsystem, or enabling DB2 for data
sharing, the refresh step obtains current parameter information from your DB2 subsystem
on the z/OS host.
You enter information such as the DB2 subsystem name, command prefix, and target
library prefix. If you have previously used the DB2 installation CLIST to customize a DB2
subsystem, you can clone this DB2 by specifying the name of the output member
generated by the CLIST. After you have provided this information, you will be able to
perform the refresh.

Customizing DB2


Figure 11-78. Customizing DB2 CG381.0

Notes:
After you have refreshed the msys for Setup workplace, you can customize DB2. In this
step, the DB2 Customization Center contains several wizards that ask you a series of
questions. These questions are used to set values for various DB2 parameters. At the end
of each wizard, you will be shown a list of DB2 parameters and their values. You can
browse this list and modify any parameter value if necessary.


Performance Configuration Advisor


Figure 11-79. Performance Configuration Advisor CG381.0

Notes:
One of the wizards that are part of the DB2 Customization Center is the “Performance
Configuration Advisor”. It is an optional task in the Customize step that will recommend
settings for common performance parameters based on your answers to a few simple
questions. It dynamically updates the recommended settings as you adjust your answers to
the questions.
The DB2 for z/OS Performance Configuration Advisor (PCA) recommends a set of DB2
system parameter (DSNZPARM) values and buffer pool sizes. It is intended to provide
good DB2 system level performance for the DB2 system installed through the DB2
Customization Center.
The following information is gathered from the user:
• Type of workload (transaction, decision support, or mixed)
• Amount of central storage available to this DB2 subsystem on the LPAR
• Number of local concurrent applications and remote connected users.


Based on the above data, PCA recommends a set of system parameter values and buffer
pool sizes. As the user, you have the option to modify the suggested values.
After you have completed the wizards to customize DB2, an “Update tasks” window
appears. This window shows a list of all the tasks that need to be performed on the z/OS
host before DB2 can be used with these new values. These update tasks will vary
depending on whether you have customized DB2 for installation, migration, or data
sharing. Some of the update tasks can be performed by msys for Setup, but some will need
to be performed by you or an authorized user outside of the msys for Setup framework. A
few tasks may be performed by either msys for Setup or an authorized user.
You can choose to complete these tasks over time, but you cannot use DB2 until all of
these tasks have been completed.
If you have already used the DB2 Customization Center to customize a DB2 subsystem,
you can copy the parameter values to another DB2 subsystem that is using msys for
Setup.


Update


Figure 11-80. Update CG381.0

Notes:
After you have chosen the parameter values you want to customize, you must update the
DB2 subsystem with the values that you have chosen. The DB2 Customization Center
performs only those tasks that you specified in the “Update tasks” window in the previous
step.
Unlike the DB2 installation CLIST, the DB2 Customization Center does not generate JCL
jobs. The tasks executed during the update are equivalent to those that are performed by
the DB2 installation JCL jobs. If you want to use JCL jobs to configure DB2 on the host, you
can use the DB2 Customization Center to generate an output member that can be used as
input to the DB2 installation CLIST.
The msys for Setup framework may provide the option to enable the batch job that initiates
the “update” step to run independently of the msys for Setup workplace, in a future release.


11.6 Samples


DB2 IVP Sample Programs


Many samples have changes to demonstrate V8 new function
DSNTEP2 enhancements
DSNTIAD and DSNTIAUL enhancements
Online schema changes
New sample jobs
Miscellaneous other changes


Figure 11-81. DB2 IVP Sample Programs CG381.0

Notes:
DB2 continues to enhance the IVP samples that are shipped, to demonstrate new function
in DB2, to provide more features and enhance the usability of the samples themselves. In
the next few visuals we will review the major changes to the samples that are shipped with
DB2 Version 8:
• DSNTEP2 enhancements
• DSNTIAUL enhancements
• Online schema changes
• New sample jobs
• Other miscellaneous changes

DSNTEP2 Enhancements - New DSNTEP4
Changed to incorporate new DB2 functionality
Support for GET DIAGNOSTICS
Can now handle SQL statements larger than 32K
Handles longer than 18 character table or column names
Can now modify the MAXERRORS value
Previously DSNTEP2 tolerated only 10 errors and then stopped
Modifiable at run time by including the --#SET MAXERRORS
functional comment in the SQL input statements
Performance enhancements
Current performance impact when processing large result sets
SYSPRINT blocking size was very small
Speeds up the rate at which DSNTEP2 produces output
results
New DSNTEP4
Equivalent to DSNTEP2 but uses multi-row fetch


Figure 11-82. DSNTEP2 Enhancements - New DSNTEP4 CG381.0

Notes:
DSNTEP2 is a PL/I program shipped with DB2 to demonstrate the support for dynamic
SQL. DSNTEP2 has been enhanced for:
• GET DIAGNOSTICS
DSNTEP2 now uses GET DIAGNOSTICS to retrieve error information.
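As a hedged sketch of this mechanism (the host variable names are hypothetical, not those used inside DSNTEP2), a program can retrieve condition detail like this:

```sql
-- Illustrative only: host variable names are hypothetical
GET DIAGNOSTICS CONDITION 1
    :MSGTEXT = MESSAGE_TEXT,
    :DB2RC   = DB2_RETURNED_SQLCODE;
```

MESSAGE_TEXT and DB2_RETURNED_SQLCODE are standard condition items of the GET DIAGNOSTICS statement.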
• Large SQL statement:
The sample program DSNTEP2 can now handle SQL statements larger than 32k in
size.
• Greater than 18 character table/column names:
The sample program DSNTEP2 has been modified to handle the longer table and
column names.
• New MAXERRORS value:
A new MAXERRORS parameter has been added in DSNTEP2. It allows you to
dynamically set the number of errors that DSNTEP2 will tolerate. In previous versions of


DB2, DSNTEP2 stopped processing after it encountered 10 SQL errors. The
MAXERRORS value can be modified at run time by coding the following functional
comment in the SQL statements:
--#SET MAXERRORS
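For example, the following SYSIN input raises the error tolerance before any statements run (the value 50 is an illustrative choice):

```sql
//SYSIN DD *
--#SET MAXERRORS 50
SELECT * FROM SYSIBM.SYSDUMMY1;
```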
• SYSPRINT blocking in DSNTEP2:
A change to SYSPRINT blocking in DSNTEP2 speeds up the rate at which
DSNTEP2 produces its output. The block size was very small before, impacting
performance when processing large result sets. DSNTEP2 can now also use the
system default or a user-assigned JCL block size.
DB2 V8 also ships a new flavor of DSNTEP2, called DSNTEP4. Its functionality is identical
to DSNTEP2, but it allows you to use multi-row fetch when retrieving data from the result
set of a query.
• Multi-row fetch:
The DSNTEP4 sample program uses multi-row FETCH. You can specify a new
parameter MULT_FETCH. This option is valid only for DSNTEP4. Use MULT_FETCH
to specify the number of rows that are to be fetched at one time from the result table.
The default fetch amount for DSNTEP4 is 100 rows, but you can specify from 1 to
32676 rows. It can be coded as a functional comment statement as follows:
//SYSIN DD *
--#SET MULT_FETCH 250
SELECT * FROM DSN8810.EMP;

DSNTIAD, DSNTIAUL
DSNTIAD is a sample dynamic SQL program written in
Assembler
Does NOT allow SQL statements larger than 32K

DSNTIAUL is a sample assembler table unload program


Enhanced to use multi-row FETCH
Modified to handle SQL statements up to 2 MB in size


Figure 11-83. DSNTIAD, DSNTIAUL CG381.0

Notes:
Here we discuss two popular programs that can be used to execute DDL (DSNTIAD) or
unload data from DB2 tables (DSNTIAUL).
DSNTIAD
DSNTIAD is a sample dynamic SQL program written in assembler. It has NOT been
enhanced to support SQL statements greater than 32 KB, because DSNTIAD is used
as part of the DB2 migration process to Version 8 compatibility mode, where new function
is not supported.
DSNTIAUL
DSNTIAUL is a sample assembler table unload program. It has been enhanced to:
• Handle SQL statements up to 2 MB in size.
• Use multi-row FETCH.
You can specify an additional invocation parameter called “number of rows per fetch”.
It indicates the number of rows per fetch that DSNTIAUL retrieves. You can specify a


number from 1 to 32767. If you do not specify this number, DSNTIAUL retrieves 100 rows
per fetch. This parameter can be specified together with the SQL parameter, as shown in
Example 11-2.
Example 11-2. Invoking DSNTIAUL with Multi-row Fetch

//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB81) PARMS('SQL,250') -
LIB('DSN810.RUNLIB.LOAD')
...
//SYSIN DD *
LOCK TABLE DSN8810.PROJ IN SHARE MODE;
SELECT * FROM DSN8810.PROJ;
In this case, 250 rows are retrieved in a single fetch operation.

Online Schema Change
Four new steps in DSNTEJ1
The first step changes the partitioning key on partition 4 of table space
DSN8S81E
The second step adds a fifth partition to the DSN8S81E table space
for the EMP table
The third step reorganizes table space DSN8D81A.DSN8S81E
The fourth step extends the length of a fixed char column in the
PARTS table

Running the DSNTEJ1 job with DSN8S81E table space started
demonstrates that these steps can be done without stopping the
table space
An online schema change


Figure 11-84. Online Schema Change CG381.0

Notes:
The sample job DSNTEJ1 has been enhanced to demonstrate the use of online schema
changes. Four new steps have been added to the job:
• The first step changes the partitioning key on partition 4 of table space DSN8S81E.
• The second step adds a fifth partition to the DSN8S81E table space for the EMP table.
• The third step reorganizes table space DSN8D81A.DSN8S81E.
• The fourth step extends the length of a fixed char column in the PARTS table. It also
converts a small integer field to a decimal type field in the EEMP table.
Running the DSNTEJ1 job with DSN8S81E table space started demonstrates that these
steps can be done without stopping the table space.
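The kinds of statements involved can be sketched as follows. These are not the actual DSNTEJ1 job steps: the limit key value, column name, and new length are illustrative only.

```sql
-- Add a fifth partition to the EMP table's table space (limit key is illustrative)
ALTER TABLE DSN8810.EMP
  ADD PARTITION ENDING AT ('999999');

-- Extend the length of a fixed CHAR column (column name and length are illustrative)
ALTER TABLE DSN8810.PARTS
  ALTER COLUMN ITEMNUM SET DATA TYPE CHAR(10);
```

Because these are ALTER statements rather than drop-and-recreate operations, the table space can remain started while they run.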


New Sample Jobs (1 of 2)


Job DSNTEJ3M uses MQTs containing materialized data derived
from existing sample tables
Creates and populates base and materialized query tables
Issues EXPLAIN statements that demonstrate the use of MQT by the
optimizer for queries against the base tables
Job DSNTEJ6R uses the z/OS Conversion Services
Prepares and executes program DSN8ED8
A C language sample caller of the utilities Unicode parser (DSNUTILU)
stored procedure
DSNUTILU is the same as DSNUTILS, but accepts Unicode utility
statements
Since DB2 IVPs are provided in EBCDIC, DSN8ED8 uses z/OS
Conversion Services to:
Convert arguments for DSNUTILU statements from EBCDIC to Unicode
Convert DSNUTILU results from Unicode to EBCDIC


Figure 11-85. New Sample Jobs (1 of 2) CG381.0

Notes:
The following sample jobs are provided for your convenience.
Materialized Query Tables (MQTs)
A new sample job is delivered with DB2 Version 8, to demonstrate the use of Materialized
Query Tables.
The job DSNTEJ3M creates and populates base and materialized query tables. It also
issues EXPLAIN statements that demonstrate the use of MQTs by the optimizer for queries
against the base tables.
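As a hedged sketch (the table and column names are illustrative, not necessarily the objects DSNTEJ3M creates), an MQT over the sample EMP table and the statement that populates it might look like this:

```sql
-- Illustrative MQT; DSNTEJ3M's actual objects may differ
CREATE TABLE DEPT_SALARY (WORKDEPT, TOTAL_SALARY) AS
  (SELECT WORKDEPT, SUM(SALARY)
     FROM DSN8810.EMP
    GROUP BY WORKDEPT)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  MAINTAINED BY SYSTEM
  ENABLE QUERY OPTIMIZATION;

-- Populate the MQT so the optimizer can consider it for query rewrite
REFRESH TABLE DEPT_SALARY;
```

With ENABLE QUERY OPTIMIZATION in effect, EXPLAIN output for a qualifying query against DSN8810.EMP can show the MQT being used instead of the base table.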
Utilities Unicode Parser:
DSNUTILU is a new stored procedure shipped with Version 8. DSNUTILU is a Unicode
version of DSNUTILS. It allows DB2 utilities to be executed as stored procedures; however,
it accepts its input as Unicode.
The new job DSNTEJ6R prepares and executes program DSN8ED8, a C language sample
caller of the utilities Unicode parser (DSNUTILU) stored procedure. Since DB2’s IVPs are
provided in EBCDIC, DSN8ED8 uses z/OS Conversion Services to:


• Convert arguments for DSNUTILU statements from EBCDIC to Unicode


• Convert DSNUTILU results from Unicode to EBCDIC

Attention: When you run this job, you must take care when working with the mixed
character set in the macro CUNHC. If you change anything here, the job fails.
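In outline, a caller such as DSN8ED8 ends up issuing a CALL of this shape (the utility ID and utility statement shown are illustrative; the utility statement argument must be supplied in Unicode):

```sql
-- Illustrative only: parameter values are hypothetical
CALL SYSPROC.DSNUTILU(
     'SAMPUTIL',               -- utility ID
     'NO',                     -- restart parameter
     'DIAGNOSE DISPLAY MEPL',  -- utility statement, passed as Unicode
     :RETCODE);                -- output: utility return code
```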


New Sample Jobs (2 of 2)


Three new COBOL samples demonstrate the usage of LOBs
Job DSNTEJ76 prepares and runs the programs DSN8CLPL and
DSN8CLTC
DSN8CLPL is a COBOL version of DSN8DLPL
Uses a LOB locator data type to populate BLOB columns greater than
32 KB in length
DSN8CLTC is a COBOL version of DSN8DLTC
Fetches the data back from DSN8CLPL and verifies that this was the
same as the source data
Job DSNTEJ77 prepares and runs the program DSN8CLRV
DSN8CLRV is a COBOL version of DSN8DLRV
Demonstrates LOB locator functions, parsing a CLOB column to pull
out resume data to be displayed on an ISPF panel
Job DSNTEJ78 prepares and runs the program DSN8CLPV
DSN8CLPV is a COBOL version of DSN8DLPV
Extracts the BLOB data that contains a photo image and displays it
using GDDM


Figure 11-86. New Sample Jobs (2 of 2) CG381.0

Notes:
DB2 V8 ships a number of new and enhanced sample jobs.
LOBs
Version 8 provides four new sample programs which demonstrate the usage of LOBs with
COBOL. These sample programs are prepared and executed using three new sample jobs:
• Job DSNTEJ76 prepares and runs the programs DSN8CLPL and DSN8CLTC.
DSN8CLPL is a COBOL version of DSN8DLPL (that is written in C). It uses a LOB
locator data type to populate BLOB columns greater than 32 KB in length. DSN8CLTC
is a COBOL version of DSN8DLTC (written in C). It fetches the data back from
DSN8CLPL and verifies that this was the same as the source data.
• Job DSNTEJ77 prepares and runs the program DSN8CLRV. DSN8CLRV is a COBOL
version of DSN8DLRV (that is written in C). It demonstrates LOB locator functions,
parsing a CLOB column to pull out resume data to be displayed on an ISPF panel.


• Job DSNTEJ78 prepares and runs the program DSN8CLPV. DSN8CLPV is a COBOL
version of DSN8DLPV (that is written in C). It extracts the BLOB data that contains a
photo image and displays it using GDDM.


Miscellaneous Other Changes


Changes made to all C and C++ IVP jobs
All C and C++ language IVP jobs now specify the new precompiler
option CCSID(1047) because the IVP C and C++ source code is
CCSID 1047
Java stored procedure
New sample program MRSPcli.java and MRSPsrv.java
This stored procedure sample returns multiple result sets back to
the caller
WLM stored procedures
All sample stored procedures are converted to WLM-established
stored procedures
Ensure all WLM-established stored procedures have the same
environment attributes
Run in same WLM address space


Figure 11-87. Miscellaneous Other Changes CG381.0

Notes:
Here we discuss several changes made regarding IVP jobs and stored procedures.
Changes Made to All C and C++ IVP Jobs:
All C and C++ language IVP jobs now specify the new precompiler option CCSID(1047)
because the IVP C and C++ source code is CCSID 1047.
New Sample Java Stored Procedures:
New sample stored procedures, MRSPcli.java and MRSPsrv.java, demonstrate how to
return multiple result sets back to the caller. The current plan is to ship them as part of the
JDBC FMID after DB2 V8 becomes generally available.
WLM Stored Procedures:
All the DB2-established stored procedure samples have been converted to
WLM-established stored procedures. Most of the sample stored procedures in Version 7
were DB2-established, which is no longer supported in Version 8.


In addition, all the sample stored procedures have the same environment attributes and
can run in the same WLM application environment and WLM established stored procedure
address space.


11.7 DB2 Version 8 Packaging


DB2 Version 8 Packaging


Figure 11-88. DB2 Version 8 Packaging CG381.0

Notes:
DB2 Version 8 incorporates several features which include tools for data warehouse
management, Internet data connectivity, database management and tuning, installation,
and capacity planning.
These features and tools work directly with DB2 applications to help you use the full
potential of your DB2 system. When ordering the DB2 base product, you can select the
free and chargeable features to be included in the package.
You must check the product announcement and the program directories for current and
correct information on the contents of DB2 Version 8 package.

Base Engine and Free Features

[Figure: DB2 UDB for z/OS Version 8 packaging. The DB2 Base box contains the Base Engine, the DB2 Extenders, and msys for Setup for DB2. The Optional Free Features box contains the Management Clients Package, Net.Data, and z/OS Application Connectivity to DB2 for z/OS. A legend distinguishes base function from separately orderable function.]

Figure 11-89. Base Engine and Free Features CG381.0

Notes:
In this topic, we list the DB2 Version 8 base features and optional no-charge features.
DB2 Version 8 Base
The DB2 Version 8 Base Engine is program Number 5625-DB2 and currently consists of
the following:
• DB2 object code
• Externalized parameter macros
• JCL procedures
• TSO CLISTs
• Link-edit control statements, JCLIN
• Install verification
- Sample program source statements (sample problem)
- Sample database data


- Sample JCL
• DB2 Directory/catalog database
• ISPF components (Installation panels, messages, skeleton library, command table)
• DBRM library
• Online help reader and associated books (it can be used instead of BookManager
READ/MVS)
• IRLM Version 2.2
• Call Level Interface feature (which includes JDBC and SQLJ)
• DB2I ISPF panels (in the ordered language)
• REXX language support:
REXX language and REXX language stored procedure support are now shipped as a
part of the DB2 Version 8 base code. (It is no longer a separate FMID, as was the case
with V7). As before, the DB2 installation job DSNTIJRX binds the REXX language
support to DB2 and makes it available for use.
• Utilities:
With DB2 Version 7, most utilities were grouped in three separate independent products:
Operational Utilities, Diagnostic and Recovery Utilities, and the Utilities Suite (which
includes all the utilities of the other two products).
In DB2 Version 8, the Operational utilities and the Diagnostic and Recovery utilities are
no longer offered. The Utilities Suite remains and contains all of the IBM DB2 Utilities.
All the utilities are shipped deactivated with the Base Engine. A valid product license
must be obtained to activate the utilities function. However, all utilities are always
available for execution on the DB2 catalog and directory and the DB2 IVP objects.
• DB2 Extenders:
- Audio, Image and Video Extenders
Audio, Image, and Video Extenders have been stabilized at the V7 level and are
provided in DB2 for z/OS, V8, to ensure continuity for current customers. Audio,
Image, and Video Extenders do not support the DB2 for z/OS data sharing function
in a parallel sysplex environment. For the most current information, visit this site:
http://www.ibm.com/software/data/db2/extenders/aiv/aiv390
- XML Extender
- Text Extender
DB2 Text Extender has been stabilized at the V7 level and is provided in DB2 for
z/OS, V8, to ensure continuity for current customers. DB2 Text Extender does not
support the DB2 for z/OS data sharing function in a parallel sysplex environment.
For the most current information, visit:

11-190 DB2 UDB for z/OS V8 Transition © Copyright IBM Corp. 2004
Course materials may not be reproduced in whole or in part without the prior
written permission of IBM.
V3.1
Student Notebook

http://www.ibm.com/software/data/db2/extenders/text/te390
The Extenders discussed so far come with the base product (you do not have to order
them separate), and they are free of charge.
- Net Search Extender:
Although this is a DB2 Extender product, it does not come as part of the “base”
extenders. It is a separately orderable charge feature.
Net Search Extender V7 is compatible and delivered with DB2 for z/OS, V8. Net
Search Extender does not support the DB2 for z/OS data sharing function in a
parallel sysplex environment. For the most current information, visit:
http://www.ibm.com/software/data/db2/extenders/netsearch/
• msys for Setup DB2 Customization Center:
msys for Setup DB2 Customization Center is enabled for the IBM Managed System
Infrastructure (msys). The msys for Setup DB2 Customization Center provides
installation customization for DB2 z/OS. DB2 Customization Center is available as a
plug-in for msys for Setup and therefore requires msys for Setup, which is included with
z/OS Version 1.3.
In addition, msys for Setup has three components:
- msys for Setup workplace, which runs on the workstation
- msys for Setup host code, which runs on a driving system
- msys for Setup management directory, which uses an LDAP server
msys for Setup DB2 Customization Center replaces the DB2 Installer, which was
shipped with DB2 Version 7.
Optional No-charge Features
The optional no-charge features are the same as with DB2 Version 7.
• DB2 Management Clients Package:
We explore this package in more detail in the next visual.
• Net.Data:
Net.Data, a no-charge feature of DB2 Version 8, takes advantage of the z/OS
capabilities as a premier platform for electronic commerce and Internet technology.
Net.Data is a full-featured and easy to learn scripting language allowing you to create
powerful Web applications. Net.Data can access data from the most prevalent
databases in the industry: DB2, Oracle, DRDA-enabled data sources, ODBC data
sources, as well as flat file and Web registry data. Net.Data Web applications provide
continuous application availability, scalability, security, and high performance.
Net.Data is functionally stabilized at the level at which it was shipped with DB2
Version 7.


• z/OS Application Connectivity to DB2 for z/OS:


This is a no-charge, optional feature of DB2 Universal Database Server for z/OS V8.
This feature consists of a component known as the DB2 Universal Database Driver for
z/OS, Java Edition, a pure Java, type 4 JDBC driver designed to deliver high
performance and scalable remote connectivity for Java-based enterprise applications
on z/OS to a remote DB2 for z/OS database server. The driver:
- Supports JDBC 2.0 and 3.0 specification and JDK V1.4 to deliver the maximum
flexibility and performance required for enterprise applications
- Delivers robust connectivity to the latest DB2 for z/OS and WebSphere Application
Server for z/OS
- Provides support for distributed transaction support (2-phase commit)
- Allows custom Java applications that do not require an application server to run in a
remote partition and connect to DB2 z/OS.
This feature is ideal for z/OS customers who require the ultimate scalable and reliable
DB2 connectivity solution anchored on a WebSphere Application Server framework.
DB2 Universal Driver, Java Edition is an integral part of an e-business solution stack
that can help achieve the highest level of Web application availability by leveraging the
most durable OLTP platform.

DB2 Management Clients Package

[Figure: components of the DB2 Management Clients Package, including the Database Administration Server, Visual Explain, DB2 Estimator, DB2 Connect Personal Edition, the z/OS Enablement kit, Control Center, Development Center, Replication Center, and Command Center. A legend distinguishes function from separately orderable function.]

Figure 11-90. DB2 Management Clients Package CG381.0

Notes:
The DB2 Management Clients Package is enhanced in Version 8. It is a collection of
workstation-based client tools that you can use to work with and manage your DB2 for
z/OS environments.
The DB2 Management Clients Package is a separately orderable no-charge feature of DB2
Version 8 (program number 5625-DB2 and feature number is 6001) and it currently
consists of the following:
• DB2 Administration Tools (including Control Center, Replication Center, Command
Center and other tools that support DB2 for z/OS).
- Database Administration Server (DAS):
DAS provides a general mechanism for running z/OS level functions to support the
IBM Universal Database GUI Tools such as Control Center, Command Center and
Replication Center. DAS provides the following functions:


• Building and creating JCL jobs (Control Center Version 8 supports creating and
storing JCL jobs for most functions including executing DB2 utilities or cloning a
subsystem).
• Reading and writing data sets (supports PS, PDS, PDSE data sets with
RECFM=FB).
• Querying operating system catalog information.
• Executing shell scripts in z/OS UNIX.
• Issuing MVS system commands through an extended console.
DAS provides these functions as an element of the DB2 Management Clients
Package. DAS is required by, and supports administrative tasks for, the DB2 UDB
Control Center and Replication Center.
It can be installed in the form of an SMP/E installable package (with FMID
HDAS810). The DB2 Administration Server for z/OS can be ordered as a part of the
DB2 Management Clients Package (with the 390 Enablement feature), or on its own
(without the 390 Enablement feature). This is to allow you to use the DB2
Administration Server for z/OS with your existing DB2 Version 7 systems (where the
Center Support is already installed).
- z/OS Enablement:
IBM DB2 Control Center provides support to help you manage DB2 databases on
an array of operating systems in your work place. A set of stored procedures, a
user-defined function and a set of batch programs must be installed at each DB2
UDB for z/OS subsystem that you want to work with using Control Center and other
tools including Replication Center and Information Catalog Center.
The z/OS Enablement provides these stored procedures, user-defined functions
and batch programs in the form of an SMP/E installable package (with FMID
JDB881D).
• DB2 for z/OS Visual Explain:
Visual Explain is a workstation-based feature of DB2 for z/OS that displays:
- An easy-to-understand graph of the access paths of SQL statements.
- Catalog statistics for referenced objects from the access path graph.
- A list of explainable statements from plans and packages, optionally filtered by costs
or access path criteria.
The graphical representation of the access path allows you to instantly distinguish
operations such as a sort, parallel access or the use of one or more indexes. You can
view suggestions from the graph that describe how you might improve the performance
of your SQL statement.


Visual Explain lets you filter the list of explainable SQL statements by access path
characteristics. For example, you can choose to display only statements that contain a
sort or that have an estimated cost greater than 500 milliseconds.
The report feature of Visual Explain allows you to generate an HTML report containing
the access path description, statistics, SQL text, and cost of the currently explained
SQL statement.
You can also EXPLAIN SQL statements dynamically and immediately, and graph their
access paths. You can enter the statement directly, have Visual Explain read it from a
file, or extract it from a bound plan or package.
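Under the covers, the Visual Explain graph is built from ordinary EXPLAIN output, which you can also inspect directly with SQL. A minimal sketch, assuming a PLAN_TABLE already exists under your authorization ID and using a hypothetical EMP table:

```sql
-- Record the access path for one statement in PLAN_TABLE;
-- QUERYNO is an arbitrary tag used to find the rows afterwards.
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT LASTNAME, SALARY
    FROM EMP
   WHERE WORKDEPT = 'D11'
   ORDER BY SALARY DESC;

-- Inspect the result: join method, access type ('I' = index access,
-- 'R' = table space scan), the index chosen, and any sort for ORDER BY.
SELECT QBLOCKNO, PLANNO, METHOD, TNAME,
       ACCESSTYPE, ACCESSNAME, SORTC_ORDERBY
  FROM PLAN_TABLE
 WHERE QUERYNO = 100
 ORDER BY QBLOCKNO, PLANNO;
```

Visual Explain renders rows like these as a graph; its cost and access path filters correspond to predicates over the same columns.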
Also available through Visual Explain is the capability to browse the real-time settings
of DSNZPARMs (subsystem parameters) and DSNHDECP. This function requires an
active WLM-established address space where the DSNWZP stored procedure is
installed.
DB2 Visual Explain requires Windows NT 4.0, Windows 2000, or Windows XP; DB2
Connect Version 7 or higher; and one of the following for communications: TCP/IP,
Communications Server 5.0, or the SNA Version 3 integrated SNA support in DB2
Universal Database.
The latest version of Visual Explain is available on the Web site:
http://www.ibm.com/software/db2os39_/db2ve/
• DB2 Estimator:
IBM DB2 Estimator for Windows works with DB2 data to estimate application feasibility,
to model application cost and performance, and to estimate required CPU and I/O
capacity. DB2 Estimator is available to download from the DB2 for z/OS Web page.
For more information on DB2 Estimator for Windows, see:
http://www.ibm.com/software/data/db2/os390/estimate/
• DB2 Connect Personal Edition Kit:
DB2 Connect provides connectivity to the mainframe and midrange databases from
Windows, Linux, and UNIX-based platforms. You can connect to DB2 databases on
AS/400, VSE, VM, MVS, and OS/390. You can also connect to non-IBM databases that
comply with the Distributed Relational Database Architecture (DRDA). DB2 Connect
Personal Edition is designed for a two-tier environment, where each client connects
directly to the host. DB2 Connect Personal Edition does not accept inbound client
requests for data.
The DB2 Administration Tools and DB2 Development Center are delivered with all
editions of DB2 Universal Database and DB2 Connect products. A restricted-use copy
of DB2 Connect Personal Edition Version 8.1 (5724-B56) for Windows is provided in the
DB2 Management Clients Package feature of DB2 for z/OS, Version 8 to satisfy this
functional dependency.


Optional Chargeable Features

[Figure: the optional chargeable features — the Utilities Suite, the Query Management Facility (QMF), and the Net Search Extender. QMF is offered as Enterprise Edition, Distributed Edition, and Classic Edition, built from QMF for TSO/CICS, QMF for Windows, QMF for WebSphere, QMF Visionary Studio, and the QMF High Performance Option (HPO); orderable functions are highlighted.]

Figure 11-91. Optional Chargeable Features CG381.0

Notes:
A number of optional chargeable features are available with DB2 Version 8:
• The DB2 Utilities Suite
• Query Management Facility editions
• The DB2 Net Search Extender
The DB2 Utilities Suite
With DB2 V7, the DB2 Utilities were separated from the base product and offered as
separate products licensed under the IBM Program License Agreement (IPLA). The DB2
Utilities were grouped into three categories:
• DB2 Operational Utilities, which included Copy, Load, Rebuild, Recover, Reorg,
Runstats, Stospace, and Unload.
• DB2 Diagnostic and Recovery Utilities, which included Check Data, Check Index,
Check LOB, Copy, CopyToCopy, Mergecopy, Modify Recovery, Modify Statistics,
Rebuild, and Recover.


• DB2 Utilities Suite, which combined the functions of both the DB2 Operational Utilities
and the DB2 Diagnostic and Recovery Utilities in the most cost-effective option.
DB2 Version 8 offers all of the utilities in one package. The only DB2 utility product is the
DB2 Utilities Suite (product number 5655-K61, FMIDs JDB881K and JDB881M). The DB2
Operational Utilities and DB2 Diagnostic and Recovery Utilities offerings are no longer
available.
The DB2 Utilities are:
• BACKUP SYSTEM
• CHECK DATA
• CHECK INDEX
• CHECK LOB
• COPY
• COPYTOCOPY
• EXEC SQL
• LOAD
• MERGECOPY
• MODIFY RECOVERY
• MODIFY STATISTICS
• REBUILD INDEX
• RECOVER
• REORG INDEX
• REORG TABLESPACE
• RESTORE SYSTEM
• RUNSTATS
• STOSPACE
• UNLOAD
All DB2 utilities operate on catalog, directory, and sample objects, without requiring any
additional products.
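Regardless of how they are packaged, the utilities are driven by the same control statements. A hedged sketch of a full image copy of a Version 8 sample table space — the table space name is from the DB2 sample database, and the output data set name pattern is purely illustrative:

```
TEMPLATE COPYDS DSN 'DB2.IMAGCOPY.&DB..&TS.' UNIT SYSDA
COPY TABLESPACE DSN8D81A.DSN8S81E
     COPYDDN(COPYDS) FULL YES SHRLEVEL REFERENCE
```

The TEMPLATE statement (available since V7) lets the utility allocate the copy data set dynamically, so no DD statement is needed for the output.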
Query Management Facility
The Query Management Facility (QMF) is the tightly integrated, powerful, and reliable tool
for query and reporting within IBM’s DB2 family. QMF for OS/390 is also a separately
orderable, priced feature of DB2 Version 8.
The DB2 QMF On Demand feature has been greatly enhanced with V8. It includes:
• Support for DB2 UDB V8, including DB2 Cube Views, long names, Unicode, and
enhancements to SQL
• Drag-and-drop building of OLAP analytics, SQL queries, pivot tables, and other
business analysis and reports
• Visual data “appliances”, such as executive dashboards, that offer unique, visually rich,
interactive functionality and interfaces specific to virtually any type of information task


• Database explorer for easily browsing and identifying database assets and other
objects they may reference
• QMF for WebSphere, allowing ordinary Web browsers to become “zero maintenance”
thin clients for visual on demand access to enterprise DB2 business data
With this release, DB2 QMF is offered in several simplified editions, making it easier to
apply its on demand information strategy to single or multiple database and end-user
platforms. QMF V8 consists of the following editions:
• DB2 QMF Enterprise Edition
DB2 QMF Enterprise Edition provides the entire DB2 QMF family of technologies,
enabling enterprise-wide business information across end-user and database
platforms. DB2 QMF Enterprise Edition consists of these components:
- DB2 QMF for TSO/CICS
- DB2 QMF High Performance Option (HPO)
- DB2 QMF for Windows
- DB2 QMF for WebSphere
- DB2 QMF Visionary Studio
Other editions of DB2 QMF offer subsets of QMF Enterprise Edition, as follows.
• DB2 QMF Distributed Edition
DB2 QMF Distributed Edition provides components to support end users functioning
entirely from Web or Windows clients to access enterprise databases. This edition
consists of:
- DB2 QMF for Windows
- DB2 QMF for WebSphere
- DB2 QMF Visionary Studio
• DB2 QMF Classic Edition
DB2 QMF Classic Edition supports end users functioning entirely from traditional
mainframe terminals and emulators (including IBM Host On Demand) to access DB2
UDB databases. This edition consists of:
- DB2 QMF for TSO/CICS
Note that if you need the functions provided by HPO, you must purchase the DB2
QMF Enterprise Edition product.
Net Search Extender
DB2 Net Search Extender contains a DB2 stored procedure that adds the power of fast
full-text retrieval to Net.Data, Java, or DB2 CLI applications. It offers application
programmers a variety of search functions, such as fuzzy search, stemming, Boolean
operators, and section search.


Bibliography
DB2 UDB for z/OS Version 8 Manuals:
GC18-7418 DB2 Installation Guide
GC18-7422 DB2 Messages and Codes
GC18-7428 DB2 What's New?
GI10-8566 DB2 Program Directory
LY37-3201 DB2 Diagnosis Guide and Reference
LY37-3202 DB2 Diagnostic Quick Reference Card
SC18-7413 DB2 Administration Guide
SC18-7414 DB2 Application Programming Guide and Reference for Java
SC18-7415 DB2 Application Programming and SQL Guide
SC18-7416 DB2 Command Reference
SC18-7417 DB2 Data Sharing: Planning and Administration
SC18-7419 An Introduction to DB2 Universal Database for z/OS
SC18-7423 DB2 ODBC Guide and Reference
SC18-7424 DB2 Reference for Remote DRDA Requesters and Servers
SC18-7425 DB2 Release Planning Guide
SC18-7426 DB2 SQL Reference
SC18-7427 DB2 Utility Guide and Reference
SC18-7429 DB2 Image, Audio, and Video Extenders Administration and
Programming
SC18-7430 DB2 Text Extender Administration and Programming
SC18-7431 DB2 XML Extender for z/OS Administration and Programming
SC18-7433 DB2 UDB for z/OS RACF Access Control Module Guide
SX26-3853 DB2 Reference Summary

Other Publications:
These publications are also relevant as further information sources:
SA22-7521 z/OS ICSF Administrator's Guide
SC33-7985 Managed System Infrastructure for Setup User’s Guide
GA22-7509 z/OS Planning for Multilevel Security


IBM Press Books:


ISBN 0-13-100772-6 DB2 SQL Procedural Language for Linux, UNIX, and Windows

IBM Redbooks:
For information on ordering these publications, see “How to Get IBM Redbooks” on
page X-3. Note that some of the documents referenced here may be available in softcopy
only:
SG24-2213 DB2 for OS/390 Version 5 Performance Topics
SG24-5351 DB2 for OS/390 Version 6 Performance Topics
SG24-6289 DB2 for z/OS and OS/390 Version 7 Utilities Suite
SG24-6435 DB2 for z/OS and OS/390: Ready for Java
SG24-7083 DB2 for z/OS Stored Procedures: Through the CALL and
Beyond

Web URLs:
http://www.ibm.net
IBM’s Internet Connection Web site
http://www.isi.edu/rfc-editor/
Internet “Requests for Comments” Editor
http://www.ibm.com/support
IBM Support and Downloads
http://www.ibm.com/services
IBM Global Services

These Web sites and URLs are also relevant as further information sources:
http://www7b.software.ibm.com/dmdd/library/techarticle/0207alazzawe/0207alazzawe.html
DB2 Development Center — The Next-Generation AD Tooling for DB2
http://www7b.software.ibm.com/dmdd/library/techarticle/alazzawe/0108alazzawe.html
DB2 Integrated SQL Debugger IBM DB2 Stored Procedure Builder V7.2


How to Get IBM Redbooks


You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft
publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs,
at this Web site:
http://www.ibm.com/redbooks


Index

Symbols
*-property 4-96
, (Comma) 8-19
. (Period) 8-19
_x003A_ 4-80
_x0058_ 4-81
_x005F_ 4-81
_x0078_ 4-81
“ (Quotation mark) 8-19

Numerics
00C900AE 2-124
00C900CF 11-16
00D31033 6-33
00D31203 6-15
00E7009A 11-17
00E79000 11-21
00E80058 1-46
00E8005A 1-46
00F30056 10-52
00F300A2 10-52
1 TB
  Buffer pool maximum 1-38
1,000 archive log data sets 1-81
100,000 open data sets 1-54
128 TB table space 1-69
16 exabytes 1-38
2 MB SQL statements 1-90
2M query blocks 6-33
32K page writes 2-132
32K query block 6-33
32K records 8-29
4096 partitions
  DB2 data set names 1-72
64-bit
  Addressability 1-30
  Capable IRLM 1-55
  IPCS support 1-54
  Real memory support 1-26
  Virtual storage 1-32
64-bit VSTOR
  General expectations 1-35
7-bit ASCII 5-29, 5-55
93 active log data sets 1-79

A
ABIND 11-71
Abstract JDBC machine 4-10
Access lists 4-91
Accountability 4-89
ACCUMACC 6-25
ACCUMUID 6-25
ACEE 7-39
Active versions 2-87
ACTUAL CF LEVEL 10-58
ADD PARTITION BY RANGE 2-21
Adding limit keys 2-21
Adding partitions
  Point-in-time recovery 2-105
Advisory reorg pending 2-70, 2-126, 11-101
Affinity routing 2-14, 2-40
ALIAS keyword 6-11
ALIAS port number 6-19
ALIAS resource group 6-21
Allocate conversation timeout 6-33
ALTER BUFFERPOOL 1-44, 11-44, 11-77
ALTER COLUMN
  SET DATA TYPE 2-87
ALTER GROUPBUFFERPOOL 10-32, 11-44
ALTER INDEX
  CLUSTER 2-120
  NOT CLUSTER 2-120
  NOT PADDED 2-118
  PADDED 2-118
  RBDP 2-119
ALTER INDEX ADD COLUMN
  Syntax 2-82
ALTER SEQUENCE 3-121, 3-122
ALTER TABLE
  ADD MATERIALIZED QUERY 9-15
  ALTER COLUMN 3-109
  ALTER MATERIALIZED QUERY 9-16
  DROP MATERIALIZED QUERY 9-15
  RESET 2-107
  ROTATE PARTITION FIRST TO LAST 2-106
ALTER TABLE ADD COLUMN 2-77
ALTER TABLE ADD PARTITION 2-103, 2-104
ALTER TABLE ALTER COLUMN
  Syntax 2-68
ALTER TABLE ALTER PART 2-113
ALTER TABLE ROTATE PARTITION
  DISPLAY output 2-112
  Example 2-108
  Logical partition 2-109
  Point-in-time recovery 2-109
ALTER VIEW REGENERATE 2-75, 11-64
ALTUSER 4-94
AMODE(64) 1-60
APAR
  II13048 11-11
  II13049 11-11
  II13695 11-7


  OA03095 11-8
  OA03519 11-8
  OQ04069 5-21
  OW56073 11-8
  OW56074 11-8
  PQ22895 10-55
  PQ25337 10-55
  PQ25914 1-20
  PQ28813 1-85
  PQ30652 3-106
  PQ31326 1-85
  PQ36933 1-20
  PQ48126 1-79
  PQ48486 11-7
  PQ53067 2-149, 11-40
  PQ54042 9-70, 9-71, 9-72
  PQ56323 7-15
  PQ56697 5-59
  PQ57516 1-85
  PQ58787 7-53
  PQ59207 11-73
  PQ59805 11-14
  PQ61458 9-139
  PQ68662 9-152
  PQ71079 5-59
  PQ71925 9-156
  PQ72337 8-29
  PQ73454 9-152
  PQ73749 9-152
  PQ80841 4-12
  PQ83744 11-11
  PQ84421 11-24
APP_ENCODING_CCSID 5-110
APPC 6-21
APPENSCH 5-11
Application encoding scheme 5-55
Application Development Client 4-9, 7-32
Application maintenance 2-56
ARCHLVL 1-17
AREO* 2-70, 2-73, 2-83, 2-92, 2-126, 8-36, 8-40, 8-54, 11-101
Array fetch 3-76
Array input 3-76, 9-108
AS 3-125
AS IDENTITY 3-106
AS SECURITY LABEL 4-102, 4-104
ASCCSID 5-59, 11-18
ASENSITIVE 3-16
Assign 4-100
AST 9-6
Asynchronous write engines 1-39
ATOMIC 9-107
Audio extender 11-190
Audit record 4-104
AUTH SIGNON 6-34
autocommit 4-28
Auto-generated keys 4-35
Automatic materialized query table 9-4
Automatic query rewrite 9-6
Automatic rebinds 11-70
Automatic remigration rebind 11-71
Automatic summary tables 9-6
Auxiliary index 2-84
Auxiliary license jar 4-18
Availability
  Application maintenance 2-56
  Code maintenance 2-56
  Data maintenance 2-57
  Schema maintenance 2-57

B
BACKUP SYSTEM 2-128, 8-2, 8-6
  DATA ONLY 2-132, 2-134
  FULL 2-131, 2-134
Big endian 4-81, 5-51
Bill of materials 3-93
Bimodal migration accommodation offering 1-17
BLOB 11-182, 11-183
BMP 5-35
BookManager READ/MVS 11-190
Boolean term predicate 9-154
BOTH 9-94
BP16K0 buffer pool 11-17
BP8K0 buffer pool 11-17
BSAFE service 6-28
BSAM record 8-12
BSDS 1-41, 1-81, 2-133, 6-10, 6-20, 11-97
Buffer pool
  Control blocks 1-38
  Long-term page fixing 1-44
  Monitoring 1-58
Build subtask 8-31
BUILD2 2-16, 2-43, 8-73
  Elimination of 2-39
Built-in functions
  CHARACTER_LENGTH 3-208
  DECRYPT_BIN 3-203
  DECRYPT_CHAR 3-203
  ENCRYPT 3-203
  ENCRYPT_TDES 3-203
  GENERATE_UNIQUE 3-205
  GETHINT 3-203
  GETVARIABLE 3-206
  Obtaining session variable information 3-206
  POSITION 3-208
  SUBSTRING 3-208
Built-in security labels 4-94
Byte addressable 1-24


C
C support 11-11
C++ support 11-11
CACHE 3-111, 3-128, 3-151, 3-155
CACHEDYN 1-51, 2-142, 2-143, 9-145
Caching sequence numbers 3-151
CAE 4-9
Call attach facility 7-38
CARDF 9-28
CARDINALITY 9-76, 9-77, 9-79, 9-113
CARDINALITY MULTIPLIER 9-77, 9-79
Cardinality values 9-91
Cartesian product 9-124
CASTOUT 1-44
Castout owner 10-32
CATEGORY 4-93
CATENFM 11-86, 11-98, 11-103
CATENFM ENFMON 11-130
CATENFM HALTENFM 11-107
CATENFM START 11-95
CATMAINT 11-58, 11-64, 11-78
CATMAINT UPDATE 11-47, 11-62, 11-63
CCA 6-30
CCSID 5-6, 8-16
  1200 5-55
  1208 5-55
  367 5-55
  UNICODE 5-54
CCSID set 5-7
CDB 6-6
CF 2-136
CF level 12 10-28
Change Log Inventory utility 6-10, 6-11, 6-19, 11-25
Changing partition boundaries 8-37
Character conversion 5-18
Character conversion methods 5-20
CHARACTER_LENGTH 3-208, 5-81
CHARDEL 8-14
CHECK DATA 8-62
CHECK INDEX 8-63
CHECK LOB 8-80
  Sort pipe 8-80
CHECKPAGE 8-52
CHGDC 2-142
CHKP 8-61
CI boundary 2-136
CI size 2-129
  Longer than 4KB 2-145
CICS 10-52, 11-12
CIM 11-154
CISIZE 2-145
Claim and drain processing 2-13
Class castout threshold 10-31
Classes used by the RACF access control module 4-99
CLASSPATH 4-18, 7-10
CLASST 11-44
Client Application Enabler 4-9
CLOB 1-90, 4-60
Cloudscape 4-18
CLUSTER 2-11, 2-50, 2-51, 2-120
Clustering index 2-28, 2-50
  Non-partitioned secondary index as 2-53
CMTSTAT 6-24, 6-33
COBOL samples 11-182
COBOL support 11-11
Code maintenance 2-56
Code page 5-6
Code point 5-6
Coded Character Set Identifier 5-6
CODEUNITS16 3-208, 5-81
CODEUNITS32 3-208, 5-81
COEXIST 11-71
COLCARDF 2-70
COLDEL 8-14, 8-19, 8-22
COLGROUP 9-92, 9-93
Collating sequence 5-7
Column level encryption 3-203
Command Center 11-193
COMMENT ON 3-159
COMMENT ON SEQUENCE 3-123
Common code 4-11
Common Cryptographic Architecture 6-30
Common Information Model 11-154
Common table expressions
  Instead of nested table expressions 3-92
  Materialization 3-92
  Recursive SQL 3-92, 3-93
  WITH 3-91
Communications database 6-6
Comparing null values 3-196
Compatibility mode 11-3, 11-34, 11-57
COMPJAVA 7-7, 7-9
COMPJAVA stored procedures 11-21
Concurrent copy 2-145
CONDBAT 1-40
Condition handler 7-21
Conditional restart
  SYSPITR record 2-136
Conditional trigger 9-84
CONNECT 7-30
Connection properties 4-22, 11-27
Constraints
  Multilevel security 4-115
CONTEXT SIGNON 6-34
Context switch 9-81
CONTINUE AFTER FAILURE 7-5
CONTOKEN 11-76
Control Center 11-193
Control interval larger than 4KB 2-145
Conversion


  DRDA 5-18
  SYSSTRINGS 5-20
  z/OS conversion services 5-20
Coprocessor 11-11
COPY 8-2, 8-63
  CHECKPAGE 8-51
  SYSTEMPAGES 8-50
Copy pool 2-129
Copy pool backups 2-129
COPYPOOL 2-138
CORBA 5-26
Correlated subquery transformation 9-153
COUNT 9-94
Coupling Facility 10-9, 10-55, 10-59
CPU cost charging 10-55
CRCR 2-136
CREATE INDEX
  Invalidate DSC 2-124
CREATE SEQUENCE 3-122
CREATE TABLE
  AS SECURITY LABEL 4-102
CREATEIN 3-129
CRESTART keyword 2-137
CRM package 2-150
Cross-invalidation 10-30
CSA 1-57
CSV 8-11, 8-21
CT 1-52
CTHREAD 1-40, 1-58
CURRENT PACKAGE PATH 7-45
CURRENT APPLICATION ENCODING SCHEME 5-11
CURRENT CLIENT_ACCTNG 3-212
CURRENT CLIENT_APPLNAME 3-212
CURRENT CLIENT_USERID 3-212
CURRENT CLIENT_WRKSTNNAME 3-212
CURRENT MAINT TYPES 11-40
CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION 9-24
CURRENT PACKAGE PATH 3-212
  Combining with CURRENT PACKAGESET 7-48
  Stored routines 7-49
CURRENT PACKAGESET 7-45
CURRENT PRECISION 2-74
CURRENT REFRESH AGE 9-24, 11-39
CURRENT SCHEMA 3-212, 7-43
CURRENT SQLID 7-42
CURRENT_VERSION 2-86, 2-88, 2-90, 8-56
CURRENTAPPENSCH 7-55
CURRENTCOPYONLY 8-80
CURRENTDATA NO 9-109
Customer Information Control System 11-12
Customized profiles 11-30
CYCLE 3-111, 3-127, 3-157

D
DAC 4-90
DAS 11-193
Data caching 9-135
Data correlation 9-88
Data corruption 5-19
DATA INITIALLY DEFERRED 9-12
Data maintenance 2-57
Data Manager 9-60
Data set creation 2-132
Data sharing
  Sequence cache 3-155
Data sharing locking 10-7
Data skew 9-88
Data source property 4-19
data space 1-28
Data space buffer pool 1-28
DATA_SHARING_GROUP_NAME 3-210
Database Administration Server 11-193
Database connection properties 4-24
  clientAccountingInformation 4-26
  clientApplicationInformation 4-26
  clientUser 4-26
  clientWorkstation 4-26
  cliSchema 4-25
  currentPackageSet 4-25
  currentSchema 4-25
  currentSQLID 4-25
  databaseName 4-24
  deferPrepares 4-25
  driverType 4-24
  fullyMaterializeLobData 4-24
  gssCredential 4-25
  kerberosServerPrincipal 4-25
  logWriter 4-24
  password 4-24
  portNumber 4-24
  readOnly 4-25
  resultSetHoldability 4-25
  retrieveMessagesFromServerOnGetMessage 4-25
  securityMechanism 4-25
  serverName 4-24
  traceFile 4-24
  traceFileAppend 4-24
  traceLevel 4-24
  user 4-24
Database name 6-7
DATACAPTURE 11-23
Data-only system backup 2-132
Data-partitioned secondary index 2-17, 2-36
  Creating 2-37
Data-partitioned secondary indexes 1-74
Datasource interface 4-19
DATE 2-78


DB2 Administration Tools 11-193
DB2 authorization 4-98
DB2 checkpoint frequency 11-45
DB2 CLI 3-26
DB2 Connect 3-76, 6-18, 6-33, 9-168, 11-74
  License 4-18
  RQRIOBLK 3-77
DB2 Connect Personal Edition 4-9
DB2 Connect Personal Edition Kit 11-195
DB2 Customization Center 11-167
DB2 Development Center 7-32
DB2 Estimator 11-195
DB2 Extenders 11-190
DB2 Management Clients Package 11-191
DB2 object hierarchy 4-99
DB2 tools 11-13
DB2 Tracker 2-139
  System recover pending mode 2-139
DB2 UDB Universal Driver for SQLJ and JDBC 4-6
DB2 Universal driver 11-26
DB2 Utilities Suite 11-196
DB2_GENERATED_ROWID_FOR_LOBS 3-214
DB2_RETURNED_SQLCODE 3-86
DB2_ROW_NUMBER 3-86
DB2-established stored procedures
  Deprecation 7-8
db2profc 4-48
db2sqlbind 4-49
db2sqljcustomize 4-49
db2sqljupgrade 4-15, 11-30
DB2SystemMonitor 4-34
DB2SystemMonitor class 4-32
DB2SystemMonitor prerequisites 4-34
DBALIAS 6-8
Dbalias 6-7
DBCLOB 1-90
DBCS 5-24
DBD 1-52
DBD01 2-132, 2-134, 2-138
DBET 2-126, 10-40
DBINFO 11-23
DBM1 virtual storage constraint 1-23
DBRM 5-78
DBRM dependency marker 11-119
DBRMMRIC 11-119
DBRMPDRM 11-119
DCLGEN 3-43, 3-44
DCREATOR 3-159
DDF 10-51
DDF ALIAS 6-11
DDF NOALIAS 6-12
DECLARE GLOBAL TEMPORARY TABLE 4-115, 9-11
DECLARE VARIABLE 5-11, 5-75
Declared global temporary table 9-146
Declared temporary table 3-12
DECPT 8-14, 8-19, 8-22
Default block size 11-45
Default connection context 11-30
DEFER YES 2-124
DEFER YES index 2-123
Deferred indexes 2-123
DEFINITION ONLY 9-11
Delayed index availability 2-73
DELETE CCSIDS 11-25
DELIMITED 8-14
  UNLOAD 8-13
Delimited data 8-2
Delimited file
  CCSID 8-15
Delimited input
  CCSID 8-16
DELIMITED LOAD
  Considerations 8-15
DELIMITED UNLOAD
  Considerations 8-15
Delimiter
  CHARDEL 8-15
  COLDEL 8-14
  DECPT 8-15
DES 6-28, 6-30
DESCSTAT 11-45
DFSMS 11-9
DFSMSdss
  RESTORE 8-80
DFSMShsm 2-129, 2-132, 2-133, 2-138, 8-6
DFSORT 8-30, 8-32, 8-85, 9-92
Diagnostic and Recovery utilities 11-190
Diffie-Hellman 6-28
DIS GROUP 10-25
DISABLE QUERY OPTIMIZATION 9-13, 9-43
Disaster recovery 1-37
DISCARD 8-43
  REORG SHRLEVEL CHANGE 8-43
Discretionary access control 4-90
Disjoint security labels 4-96
DISPLAY DATABASE
  OVERVIEW 1-76
DISPLAY FUNCTION SPECIFIC 7-5
DISPLAY GROUP DETAIL 11-109
-DISPLAY LOCATION 6-36
DISPLAY PROCEDURE 7-5
DISPLAY THREAD 4-31
DISPLAY UNI,STORAGE 5-21
DIST address space 1-61
DM 9-60
DNAME 3-159
Dominance 4-96
Dominate 4-105, 4-111
Double Byte Character Set 5-24
DPSI 2-36, 2-37, 8-62


  Affinity routing 2-40
  BUILD2 2-39
  CHECK INDEX 8-63
  Clustering index 2-52
  COPY 8-63
  Data sharing 2-40
  Displaying 2-49
  Dynamic scrollable cursor support 3-13
  INDEXTYPE 2-25
  Inline statistics 8-67
  LISTDEF 8-63
  LOAD 8-64
  LOAD PART contention 2-40
  LOAD with page set REBUILD PENDING 8-66
  LOAD with RECOVER PENDING 8-66
  Partition pruning 2-41
  Partition-level operations 2-40
  Query parallelism 2-41
  REBUILD INDEX 8-70
  REBUILD PENDING 8-71
  RECOVER 8-68
  REORG TABLESPACE 8-72
  REPAIR 8-69
  REPORT 8-69
  RUNSTATS 8-69
  TEMPLATE 8-63
  Work data sets 8-73
  Work file size 8-70
DRAIN ALL 10-44
DRDA 4-8, 4-53, 5-18, 6-21, 6-33
DRDA blocking 3-36
DRDA V3 3-76
DRDA Version 3 4-9, 11-73
DriverManager
  Interface 4-19
DROP SEQUENCE 3-123
DSC 2-124
DSMAX 1-54, 2-45, 11-9
DSN1COMP 8-76
DSN1COPY 2-80, 8-55, 8-57, 8-78
  CHECK option 8-78
  OBIDXLAT 8-57
  Versioning pages 8-78
DSN1PRNT 8-76
  ASCII option 8-78
  EBCDIC option 8-77
  Unicode option 8-78
DSN6FAC 2-142
DSN6GRP 2-142
DSN6SPRM 2-142
DSN6SYSP 2-142
DSNADM 4-99
DSNB250E 2-148, 10-38, 10-41, 10-46
DSNB357I 2-149, 10-41
DSNB508I 1-43
DSNB536I 1-40, 1-42
DSNB538I 1-42
DSNB539I 1-44, 1-47
DSNB610I 1-40, 1-42
DSNDB06.SYSDDF 6-6
DSNDB06.SYSEBCDC 5-61, 5-70
DSNDB06.SYSSTR 5-70
DSNDCACH 3-199
DSNDSVA 3-211
DSNDSVS 3-211
DSNE132I 10-52
DSNE136I 10-52
DSNH102I 5-67, 11-66
DSNH107I 11-114
DSNH526I 5-59
DSNH527I 5-59
DSNHDECP 5-59, 11-3, 11-18
  ASCCSID 5-59
  ENSCHEME 5-54
  NEWFUN 11-60
  SSID 7-39
  UGCCSID 5-53
  USCCSID 5-53
DSNHMCID 11-47
DSNHMCID data-only load module 11-19
DSNI005I 2-149, 10-42
DSNI006I 10-41
DSNI021I 2-149, 10-41
DSNI042I 10-47
DSNI043I 10-47
DSNI044I 10-47
DSNJ016E 2-145
DSNJ017E 2-145
DSNJ031I 2-146
DSNJ155I 11-124
DSNJ439I 11-122
DSNJCNVB 1-79, 11-121
DSNJU003 2-137, 6-11, 6-19, 11-25, 11-47, 11-123
DSNJU004 6-12, 11-123
DSNL044I 6-20
DSNL512I 6-20
DSNL515I 6-20
DSNMXML 11-165
DSNPIT00 11-88
DSNR031I 2-147
DSNR046I 10-52
DSNR047I 2-145
DSNR048I 2-146
DSNRLI 7-12, 7-39
DSNT108I 5-60
DSNT397I 1-74
DSNT408I 2-87
DSNT526I 5-59
DSNT527I 5-59
DSNTEJ1 11-179
DSNTEJ3M 11-180
DSNTEJ6R 11-180


DSNTEJ76 11-182
DSNTEJ77 11-182
DSNTEJ78 11-183
DSNTEP4
  MULT_FETCH 11-176
DSNTESQ 11-64
DSNTIDXA 11-35
DSNTIJEN 11-94
DSNTIJID 11-47
DSNTIJIN 1-81
DSNTIJMC 11-94
DSNTIJNE 11-60, 11-78, 11-84, 11-93
  ENFM0nn0 11-101
  ENFM0nn7 11-103
DSNTIJNF 11-86, 11-94, 11-110
DSNTIJNG 11-86, 11-94, 11-117
DSNTIJNH 11-93
DSNTIJNR 11-94, 11-120
DSNTIJP8 11-24
DSNTIJPM 11-23, 11-24, 11-64
DSNTIJSG 11-72
DSNTIJTC 11-17, 11-47, 11-62
DSNTIJUZ 11-47
DSNTINST CLIST 11-88
DSNTIP4 2-142, 9-49
DSNTIP5 2-142
DSNTIP7 2-145, 11-78
DSNTIPA1 11-35
DSNTIPA2 11-43
DSNTIPC 2-142, 11-42
DSNTIPD 1-52
DSNTIPE 2-142, 9-51, 11-42
DSNTIPF 5-53, 5-78, 11-24
DSNTIPI 2-142
DSNTIPN 6-25, 11-42
DSNTIPO 2-142, 11-71
DSNTIPP 2-142
DSNTIPR 2-142
DSNU1127I 8-43
DSNU1129I 8-36
DSNU1602I 2-133
DSNU271I 8-42
DSNU291I 8-29
DSNU790I 11-63
DSNUTILB 5-73
DSNUTILS 11-180
DSNUTILS stored procedure 8-28
DSNUTILU 5-73, 8-83, 11-180
DSNWZP 11-20, 11-126
DSNWZPR 11-23, 11-126
DSNX@XAC 4-100
DSNX208E 11-69
DSNX209E 11-69
DSNX900E 11-21
DSNY011 1-46
DSNY012 1-46
DSNZ015 2-143
DSNZPARM 10-55
  ABIND 11-71
  ACCUMACC 6-25
  ACCUMID 6-25
  APPENSCH 5-11
  CACHEDYN 1-52, 2-142, 9-145
  CHGDC 2-142
  CMTSTAT 6-24
  CTHREAD 1-58
  DESCSTAT 11-45
  DSMAX 1-54, 2-45
  DSVCI 2-145
  EDMDBDC 1-52
  EDMPOOL 1-52
  EDMSTMTC 1-51
  EDPROP 2-142
  EXTRAREQ 2-142
  EXTRASRV 2-142
  IDTHTOIN 2-142
  IMMEDWRI 2-142, 10-55
  INLISTP 9-152
  LRDRTHLD 2-146
  MAINTYPE 9-24
  MAX_NUM_CUR 11-41
  MAX_ST_PROC 7-6, 11-41
  MAXARCH 11-124
  MAXCSA 1-55
  MAXKEEPD 2-142
  MAXTYPE1 2-142
  MGEXTSZ 2-149
  MXTBJOIN 1-85
  NPGTHRSH 9-114
  NUMLKTS 1-55
  NUMLKUS 1-55
  OPTOPSE 9-102
  PADIX 9-51
  PARTKEYU 2-142
  PKGLDTOL 11-74
  POOLINAC 2-142
  REFSHAGE 9-24
  RELCURHL 3-20
  RESYNC 2-142
  RETVLCFK 9-49
  SJMXPOOL 9-134
  SPRMMXT 1-85
  SRTPOOL 2-142
  STORMXAB 7-4
  SYSADM 2-142
  SYSADM2 2-142
  SYSOPR1 2-142
  SYSOPR2 2-142
  TABLES_JOINED_THRESHOLD 1-88
  TCPALVER 2-142
  TCPKPALV 2-142
  UIFCIDS 11-67


  URCHKTH 2-146
  URLGWTH 2-146
  UTLRSTRT 8-29
  VOLDEVT 11-43
  VOLTDEVT 8-80
  XLKUPDLT 2-142
DSSIZE 1-65, 1-67, 2-150
DSTATS 9-89
DSVCI 2-145
DTT 3-12, 3-23, 9-146
DTYPE 3-159
DWQT 11-44
Dynamic allocations 1-54
Dynamic prefetch during IN list index access 9-156
Dynamic scrollable cursors 9-108
  DECLARE CURSOR syntax 3-14
  Optimistic concurrency 3-22
  Parallelism 3-25
  Read-only cursors 3-23
  WITH ROWSET POSITIONING 3-26
Dynamic statement cache 1-51, 2-115, 2-124
Dynamic VIPA 6-16, 6-18
DYNAMICRULES 7-43

E
ECSA 1-57, 11-15, 11-43
Edit procedures 4-115
EDITPROC 2-78
EDM pool 10-51
EDMDBDC 1-52
EDMDSPAC 1-51
EDMPOOL 1-52, 11-43
EDMSTMTC 1-51
EDPROP 2-142
ENABLE QUERY OPTIMIZATION 9-13
Enabling-new-function mode 11-3, 11-34, 11-57
  Data sharing 11-83
  Solving space problem 11-96
ENCODING 5-11, 9-171
Encoding scheme 5-7, 5-55
Encryption functions 3-203
Endianess 5-51
ENDING AT 2-20
ENDRBA 2-137
ENFM 11-75
ENFORCED 8-61
ENSCHEME 5-54, 5-68
Equi-join predicates 9-122
Equivalence 4-96, 4-107
ERP package 2-124, 2-150
ESAME 1-27
ESS FlashCopy 2-130
ESTOR 1-18
EXECUTE IMMEDIATE
  Using CLOB/BLOB 1-91
EXISTS 9-151
Expanded storage 1-18
Expanded storage only hiperspace 1-23
EXPLAIN 9-82, 9-104, 9-156
  QUERYNO 3-199
  STMTCACHE STMTID 3-198
  STMTCACHE STMTTOKEN 3-198
Explicit clustering 2-51
Explicit clustering index 8-30
Explicit connection 7-38
Explicit hierarchical locking 10-9
Extending character data types 2-60
Extending numeric data types 2-60
External access control 4-98
EXTRAREQ 2-142
EZB.NETACCESS profile 4-118

F
FACILITY class 4-97
Fact table 9-121
FAIL column 7-5
Fallback SPE 11-54, 11-56, 11-69
False contention 10-12
FARINDREF 9-149
fast column processing 2-70
Fast log apply 2-138, 11-45
Fast replication services 2-132
FETCH CURRENT 3-18, 3-19
FETCH RELATIVE 3-18
FETCH ROWSET 3-49
FETCH SENSITIVE 3-12
Field procedures 4-115
FIELDPROC 2-78
Filter factor estimates 9-167
Filter factor estimation 9-88
FINAL TABLE 3-121, 3-181
FLA 2-138
FlashCopy 2-130
FMID
  HDDA210 4-52
  JDB8812 4-54
FOR BIT DATA 4-62, 5-83, 9-171, 11-76, 11-144
FOR MIXED DATA 5-55
FOR n ROWS 3-39, 3-40, 9-110
FOR ROW n OF ROWSET 3-59, 3-61
FOR SBCS DATA 5-55
FOR UPDATE KEEP UPDATE LOCKS 3-201
FORMAT DELIMITED 8-11, 8-13
  LOAD 8-13
FREEMAIN 9-158
Frequency value distributions 9-90
FREQVAL 9-93
Frozen objects 11-52, 11-119
FRRECOV 2-136
Full system backup 2-131


Fully escaped mapping 4-80 GX 5-93

G H
Gaps 3-153 Handler 7-21, 7-23
GBP castout threshold 10-31 Hardware compression 1-49
GBP checkpoint 10-31 HDDA210 4-52
GBPCHKPT 11-45 HEADER OBID 8-16
GBP-dependent objects 10-28 Hexadecimal constant 5-97
GBPOOLT 11-45 HFS 11-26
GDSNCL 4-99 HIDDEN 3-214
GDSNDB 4-99 Hierarchical security level 4-93
GDSNJR 4-99 High Performance Java compiler 7-8
GDSNPK 4-99 HIGH2KEY 2-70
GDSNPN 4-99 Hiperpool 1-23, 1-24, 1-25
GDSNSC 4-99 Hole 3-46
GDSNSG 4-99 Host variable array 3-41, 3-63, 3-67
GDSNSM 4-99 Host-variable 2-42
GDSNSP 4-99 HPJ 7-8
GDSNSQ 4-99 HPSIZE 1-41
GDSNTB 4-99 Hybrid join 9-118
GDSNTS 4-99
GDSNUF 4-99
GENERATED ALWAYS 3-106, 3-107 I
GENERATED BY DEFAULT 2-84, 3-106, 3-107 I/O latency 1-22
GET DIAGNOSTICS 3-54, 3-68, 3-71, 3-80, 7-16, IBM autonomic computing initiative 11-153
7-20, 11-175 IBM subsystem 1-32
Combined-information area 3-81 IBMREQD 11-60, 11-70, 11-119
Condition-information area 3-81 ICF catalog 2-130
Multi-row FETCH 3-85 ICSF 3-205, 6-29
Multi-row INSERT 3-86 ICTYPE 2-90, 2-92
Multi-row insert 3-80 IDBACK 1-40
ROW_COUNT 3-81 IDCAMS 2-104
SQLCA 3-80 IDCAMS REPRO 11-123
Statement-information area 3-81 Identity columns
GET DIAGNOSTICS CONDITION 3-86 ALTER TABLE ALTER COLUMN 3-109
getApplicationTimeMillis() 4-33 GENERATED ALWAYS 3-107
getCoreDriverTimeMicros() 4-33 GENERATED BY DEFAULT 3-107
GETMAIN 9-158 Reloading a table 3-107
getNetworkIOTimeMicros() 4-33 RESTART WITH 3-114
getServerTimeMicros() 4-33 IDENTITY_VAL_LOCAL 3-121, 4-35
GETVARIABLE 3-210 IDFORE 1-40
getVendorCode() 4-29 IDTHTOIN 2-142
Global lock contention 10-12 IFCID
Global lock contention reduction 10-6 0063 1-95
Global temporary tables 0140 1-95
Multilevel security 4-115 0141 1-95
GBPCACHE(ALL) 10-34
Granularity 4-88 0145 1-95
GRAPHIC 5-55 0168 1-95
Graphic string constants 5-93 0316 1-95
GRECP 2-136, 10-47 0317 1-96
GROUP BY expression 3-192 0350 1-96
Group restart 11-71 140 4-106, 4-110, 4-111, 4-112
GDSNBP 4-99
GDSNUT 4-99


196 3-198 INSTALL_JAR 7-9


22 9-82 Installation Verification Procedure 11-128
3 10-35 INSTS_PER_INVOC 9-82
313 2-146 Integrated Cryptographic Service Facility 3-205,
337 2-147, 3-198 6-28
IFCIDs 1-59 Inter-DB2 read-write interest 10-7
II13048 11-11 International Organization for Standardization 5-34
II13049 11-11 Interpreted Java 7-9
II13695 11-7 IOFACTOR 2-86
Image extender 11-190 IOS_PER_INVOC 9-82
Immediate index availability 2-73 IPCS 1-54
IMMEDWRI 2-142, 10-55 IRLM 10-9
IMMEDWRITE 10-55 IRLM 2.2 1-60
IMMEDWRITE(PH1) 10-55 IRR.WRITEDOWN.BYUSER profile 4-97
IMMEDWRITE(YES) 10-55 IS NULL 3-196
Implicit clustering 2-51 iSeries 4-18, 7-32
Implicit clustering index 8-30 ISO 5-34
Implicit connection 7-38 ISO/IEC 10646 5-26
IMS 11-12 ISO/IEC 9075 Part 10 4-40
In-abort UR 2-136, 2-146 ISO/IEC 9075 Part 13 4-40
Inactive connection 9-147 ISOLATION (CS) 3-19
Inactive connection 6-24, 6-34 ISOLATION UR 10-23
Inactive DBAT 6-32 ISOLATION(RR) 10-54
In-commit 2-136 ITERATE 7-26
INCREMENT BY 3-109, 3-126 IVP 11-128
Incremental image copies IVP enhancements
SYSTEMPAGES 2-89 All sample use WLM stored procedures 11-184
Index avoidance 2-121 COBOL LOB samples 11-182
RECOVER pending 2-122 DSNTEJ1 11-179
Index filter factor estimates 9-167 DSNTEJ3M 11-180
Index key feedback 9-127 DSNTEJ6R 11-180
Index-controlled partitioned table space 2-46 DSNTEJ76 11-182
Index-controlled partitioning 2-11, 2-51 DSNTEJ77 11-182
Partitioning indexes 2-29 DSNTEJ78 11-183
INDEXTYPE 2-25 DSNTEP2 11-175
Indoubt units of recovery 10-50 DSNTEP4 11-176
indoubt URs 10-51 DSNTIAD 11-177
In-flight 2-136 DSNTIAUL 11-177
Information Management System 11-12 Java stored procedures 11-184
Informational RI 9-41 Materialized Query Tables 11-180
CHECK DATA 8-61 IVP suite
LISTDEF 8-61 Version 8 11-128
NOT ENFORCED 8-60
QUIESCE TABLESPACESET 8-61
REPORT TABLESPACESET 8-61 J
Inherit WLM priority from lock waiter 2-147 J2EE 4-40
INITIAL_INSTS 9-82 JAAS 4-28
INITIAL_IOS 9-82 Java 5-26, 5-88
Inline statistics 9-90 Java Authentication and Authorization Service 4-28
INLISTP 9-152, 9-155 Java Common Connectivity 4-10
In-memory workfiles 9-134 Java Cryptography Extension 4-28
INSENSITIVE 3-11 Java Generic Security Service 4-28
INSERT within SELECT Java Native Interface 4-7, 4-10
INPUT SEQUENCE 3-184 Java stored procedures 11-184
OPEN CURSOR 3-183 JAVAENV DD statement 7-10
INST_PER_INVOC 9-82 JCC 4-10, 4-18


JCE 4-28 Log data sets 1-81


JCL jobs 11-172 LOG NO events 2-129
JCLIN 11-189 Log truncation point 2-136
JDB8812 4-54 Logical damage 2-7
JDBC 2.0 4-27 Logical locks 10-8
JDBC 3.0 4-11, 4-27, 4-35 Logical partition 2-109
JDBC-ODBC bridge 4-7 Logical partitions 2-33, 2-48
JDK 131 4-34 LOGICAL_PART 2-112, 2-113
JDK 141 4-34 Long running UR backout 2-145
JDK V1.4 4-52 Long-term page fixing 1-44
JGSS 4-28 Lookaside buffer 1-37
JNDI lookup 11-30 Lookaside pool 1-28
JNI 4-7, 4-10 LOOP 7-28
JOIN_TYPE 9-130 Lossless join 9-33
LOW2KEY 2-70
LPL 10-38
K LPL reason types 10-46
KEEPDYNAMIC(YES) 6-33 LPL recovery 2-136, 2-148
Accounting record 6-34 LPL recovery lock 10-44
LRDRTHLD 2-146
LRSN 2-132
L LRU 1-25
L* (Logical partition status) 2-48
LSTATS 1-45
Language Environment 5-21
LARGE 1-65
LDAP 11-154, 11-158 M
LEAST 9-94 MAC 4-90
LEAST frequently occurring values 9-91 MAINTYPE 9-24
Legacy driver 4-11, 11-26 Managed System Infrastructure for Setup 11-157
Lightweight Directory Access Protocol 11-154, Managed systems 4-116
11-158 Management Client Package 9-168
Limit key values 8-34 Mandatory access control 4-87, 4-90
Limited partition scan details 9-167 Manual LPL recovery 10-42
LIMITKEY 2-25 Mapping
LIMITKEY_INTERNAL 2-25 SQL character sets to XML character sets 4-77
LINKNAME 6-8 SQL to XML 4-77
List prefetch 9-118 Materialized query table 2-78
LISTDEF 8-61, 8-63 Materialized query tables 9-4
Little endian 5-52 Multilevel security 4-115
L-locks 10-8 Materialized snowflakes 9-133
LOAD Materialized views 9-6
Delimited input 8-2 MAX ABEND COUNT 7-5
FORMAT DELIMITED 8-13 MAX_NUM_CUR 11-41
POSITION 8-17 MAX_ST_PROC 7-6, 11-41
Work data sets 8-65 MAXARCH 11-124
LOAD PART concurrency 2-16 MAXASSIGNEDVAL 3-150, 3-157
LOAD PART job contention 2-39 MAXCSA 1-56, 11-43
LOB 1-53, 2-78 MAXDBAT 1-40, 6-21
Materialization storage 1-53 Maximum key length 1-98
LOB data space 1-53 Maximum sort key size 1-97
LOB locator 11-182 MAXKEEPD 2-142, 2-143
LOBVALA 1-53 MAXTYPE1 2-142
LOBVALS 1-54 MAXVALUE 3-109, 3-111, 3-126
Location alias 6-9, 6-18 MBCS 5-25
Location name 6-6 MDSNBP 4-99
Lock escalation 2-147 MDSNCL 4-99


MDSNDB 4-99 User-maintained 9-6


MDSNJR 4-99 MSTR address space 1-61
MDSNPK 4-99 MSTR SRB 10-55
MDSNPN 4-99 msys for Setup 11-150
MDSNSC 4-99 Customize 11-166
MDSNSG 4-99 Host program 11-158, 11-160
MDSNSM 4-99 Install Product Set 11-165
MDSNSP 4-99 JCL jobs 11-172
MDSNSQ 4-99 Management directory 11-158
MDSNTB 4-99 Plug-in 11-155
MDSNTS 4-99 Refresh Management Directory 11-165
MDSNUF 4-99 Update 11-166
MDSNUT 4-99 Update tasks window 11-171
Member routing in TCP/IP 6-14 Workplace 11-158
MEMLIM 1-39 msys for Setup DB2 Customization Center 11-191
MEMLIMIT 1-59, 11-43 msys for Setup Facility 11-150
Memory usage 11-9 Multi-Byte Character Set 5-25
MESSAGE_TEXT 7-16, 7-20 Multi-index access 9-118
MGEXTSZ 2-149 Multilevel security 4-90
MINVALUE 3-109, 3-111, 3-126 Requirements 4-114
Mixed data 5-24 Multilevel security at the object level 4-98, 4-99
MIXED_CCSID 5-70 Multilevel security with row-level granularity 4-98
MLMT 1-56, 11-43 Multilingual Plane 5-35
MLS 4-90, 4-105 Multiple calls to the same stored procedure 11-21
MLS requirements 4-114 Multiple CCSID set SQL statements
MLS with row granularity Derived value based on a column 5-103
Restrictions 4-114 Derived value not based on a column 5-103
MODE(C) 11-109 ORDER BY 5-113
MODE(E) 11-109 Pair-wise evaluation 5-105
MODE(N) 11-109 SET assignment 5-117
MODIFY 2-80, 2-90, 8-55 Multiple CCSID set statement 5-93
Monitoring Table UDF 5-95
Log offload activity 2-145 Multiple CCSIDs in the same encoding scheme
Long readers 2-146 11-23
Long running UR backout 2-145 Multiple CCSIDs per SQL statement 11-113
System checkpoint 2-145 Multiple open stored procedure result sets 4-34
MOST 9-94 Multiple physical partitions 2-33
MOST frequently occurring values 9-91 Multi-row FETCH 3-34, 11-177
MQT 9-4, 11-180 Multi-row fetch 11-176
ALTER WITH NO DATA table to MQT 9-19 Holes with scrollable cursors 3-46
Automatic query rewrite basics 9-29 Host variable array 3-41
Creation 9-10 Rowset 3-48
CURRENT MAINTAINED TABLE TYPES FOR Multi-row INSERT 3-33
OPTIMIZATION 9-24 ATOMIC 3-68
CURRENT REFRESH AGE 9-24 GET DIAGNOSTICS 3-68
DATA INITIALLY DEFERRED 9-12 Host variable array 3-41
DISABLE QUERY OPTIMIZATION 9-13 NOT ATOMIC CONTINUE ON SQLEXCEPTION
ENABLE QUERY OPTIMIZATION 9-13 3-68
Informational referential integrity 9-40 Rowset 3-48
No unique index 9-43 Multi-table composite 9-103
REFRESH DEFERRED 9-12 Multi-row FETCH
REFRESH TABLE 9-22 JDBC driver support 3-42
Refreshable table options 9-12 MVPG 1-27
RUNSTATS 9-28 MVS system failure 10-50
Segmented table space 9-22 MXTBJOIN 1-85
System-maintained 9-6


N NPI 2-13, 2-110


Native code 4-7 NPPI 2-48
NEARINDREF 9-149 NPSI 2-36
Nested elements 4-62 NUM_DEP_MQTS 9-45
Net driver 4-9 Numeric types comparison 9-64
Net Search Extender 11-198 NUMLKTS 1-55
Net Search Extender 11-191 NUMLKUS 1-55
Net.Data 11-191 NUMPARTS 1-66, 1-67, 2-10
NETACCESS statement 4-117 NUMTCB 7-6
Network access control 4-117 NUMTCB=1 11-127
Network security zones 4-116
NEWFUN=NO 11-60, 11-117
NEWFUN=YES 11-86, 11-116
O
OA03095 11-8
New-function mode 11-3, 11-35, 11-57
OA03519 11-8
NEXT VALUE
OA04069 5-21
Cursors 3-145
OA4581 11-162
NEXT VALUE FOR 3-120, 3-134, 3-138
OBDREC 2-98
NEXT VALUE invocation 3-139
OBIDXLAT 8-57
NO CACHE 3-111, 3-128
Object 4-89
NO CYCLE 3-111, 3-127
OCTETS 3-208, 5-81
NO MAXVALUE 3-109, 3-111, 3-126
ODBC 3-26
NO MINVALUE 3-109, 3-111, 3-126
OLDEST_VERSION 2-86, 2-90, 2-92, 2-94, 8-55,
NO ORDER 3-109, 3-111, 3-128
8-56
No outstanding Version 7 utilities 11-22
ON COMMIT PRESERVE ROWS 9-146
NO SCROLL 3-14
ON ROLLBACK RETAIN CURSORS 4-35
NO WLM ENVIRONMENT 7-8, 11-20
Online alter 2-60
NOALIAS 6-11
Online LOAD RESUME 2-57
NOAREORPENDSTAR 8-54 Online REORG 11-90
NOCACHE 3-155 All catalog table 8-79
Non-4K catalog objects 11-148
Online REORG SHRLEVEL REFERENCE 11-79
Non-clustering index 2-28
Online schema
Non-deterministic functions 3-24 Check constraint 2-75
Non-GBP-dependent 10-19 CURRENT PRECISION 2-74
Non-hierarchical categories 4-102
RUNSTATS considerations 2-70
Non-IBM REORG Trigger 2-75
REPAIR VERSIONS 8-55
Online schema changes 1-66, 2-60
Non-partitioned index 2-13, 2-28 Online schema evolution 2-60
Non-partitioned indexes 2-33
Versioning 2-86
Challenges in V7 2-14 Open Group
Non-partitioned partitioning index 2-34 DRDA V3 Technical Standard 1-95
Non-partitioned partitioning indexes 2-48
Open Group’s DRDA 4-9
Non-partitioned secondary index 2-36
OPERATIONAL CF LEVEL 10-58
Nonsargable 9-60
Operational utilities 11-190
NOREOPT(VARS) 9-145
Optional no-charge features 11-191
NOSYSREC 8-31
OPTOPSE 9-102
NOT ATOMIC CONTINUE ON SQLEXCEPTION
ORDER 3-109, 3-111, 3-128
3-71, 3-86
ORDER BY 4-70
NOT CLUSTER 2-51, 2-120
INPUT SEQUENCE 3-185
NOT ENFORCED 8-60, 9-41
OS/390 V2R10 1-15, 1-26
NOT PADDED 2-117, 2-118, 9-51, 11-104
Out of service
Rebuild pending 2-119 V6 11-50
NOT VOLATILE 9-113
Overflow pointer 9-149
NOTEPAD 5-83
OW56073 11-8
NPAGES 9-115
OW56074 11-8
NPGTHRSH 9-114


P Performance 4-8
Package versioning 2-56 Multi-row fetch in local applications 9-107
PACKAGE_NAME 3-210 Multi-row insert in a local environment 9-107
PACKAGE_SCHEMA 3-210 Multi-row operations in a distributed environment
PACKAGE_VERSION 3-210 9-108
PADDED 2-117, 2-118, 9-51 Multi-row update and delete operations in a local
Rebuild pending 2-119 environment 9-107
VARCHAR 2-119 PERMIT 4-94
PADIX 9-51 Persistent stored modules 7-14
Page addressable 1-24 PGFIX 1-44
Page P-locks 10-8 PGFIX(YES) 1-38
Page set close control log records 2-132 PGPROT 1-56
Page set P-lock negotiation 10-8 Physical damage 2-7
Page set P-locks 10-7 Physical data sets 2-13
Panel Physical locks 10-7
DSNTIPX 11-41 Physical partition number 2-104, 2-113
DSNPIT00 11-88 Physical partitions 2-13
DSNTIP4 2-142, 9-49 Physically partitioned 2-13
DSNTIP5 2-142 Piece-level
DSNTIP7 2-145, 11-78 rebuild 2-14
DSNTIPA1 11-35 Recovery 2-14
DSNTIPA2 11-43 PIECESIZE 2-150
DSNTIPC 2-142, 11-42 PKGLDTOL 11-74
DSNTIPD 1-52 PKLIST 7-45
DSNTIPE 2-142, 9-51, 11-42 PL/I support 11-11
DSNTIPF 5-53, 5-78, 11-24 PLAN_NAME 3-211
DSNTIPI 2-142 P-locking 2-40
DSNTIPN 6-25, 11-42 P-locks 10-7
DSNTIPO 11-71 Plug-in 11-157, 11-161
DSNTIPP 2-142 PMB 1-38
DSNTIPR 2-142 Point-in-time recovery
Parallel index build 8-31 Adding partitions 2-105
Parallelism details 9-167 ALTER TABLE ROTATE PARTITION 2-109
Parameter marker 2-42 POOLINAC 2-142
Parent lock contention 10-18 PORT 6-19
PART 2-11 POSITION 3-208, 5-81, 8-17
VALUES 2-13 Postponed abort UR 2-145
PART BY 2-20 PQ22895 10-55
Partial rowset 3-50 PQ25337 10-55
PARTITION 2-20, 2-28 PQ25914 1-20
PARTITION BY 2-18, 2-20, 2-29 PQ28813 1-85
Partition pruning 2-41 PQ30652 3-106
PARTITIONED 2-30, 2-31, 2-33, 2-37, 11-113 PQ31326 1-85
Partitioned index 2-28 PQ36933 1-20
Partitioned partitioning index 2-34 PQ48126 1-79
Displaying 2-46 PQ48486 11-7, 11-14, 11-56, 11-69
Partitioning index 2-28 PQ53067 2-149, 11-40
Partitioning key update 2-148 PQ54042 9-70, 9-71, 9-72
Partition-level operations 2-14, 2-40 PQ56323 7-15
PARTKEYCOLUMN 2-25 PQ56697 5-59
PARTKEYU 2-142 PQ57516 1-85
PARTLEVEL 8-63 PQ58787 7-53
PC 1-56, 11-43 PQ59207 11-73
PC=NO 11-15 PQ59805 11-14
PDSE data set 11-15 PQ61458 9-139
PQ68662 9-152, 9-156


PQ71079 5-59 Qualified partitions 2-42


PQ71925 9-156 QUALIFIER 7-42
PQ72337 8-29 Query block size increase 6-33
PQ73454 9-152 Query Management Facility 11-197
PQ73749 9-152, 9-153 Query parallelism 2-41
PQ80841 4-12 QUERYNO 3-199
PQ83744 11-11 QUIESCE 8-61
PQTY 2-150 Quiesce system activity 2-132
PRECISION 3-158 Quiescing 32K page writes 2-129
Precompiler 11-11 Quotation mark 8-19
NEWFUN 5-79 QWHCEUID 6-25
Precompiler services 5-78 QWHCEUTX 6-25
Predicate pushdown 9-152 QWHCEUWN 6-25
PREFETCH 9-156
Pre-migration checks 11-23
Preventing CCSID changes 11-24 R
PREVIOUS VALUE RACF 6-30, 11-9
Cursors 3-146 RACF MLS option 4-96
PREVIOUS VALUE FOR 3-121, 3-134, 3-139 RACROUTE 4-109
PREVIOUS VALUE invocation 3-139 RBA 2-132
Primary SQL PORT 6-20 RBDP 2-73, 2-76, 2-83, 2-85, 2-119, 2-121, 2-123,
Print Log Map utility 6-10, 6-12 8-46, 8-47
PRIQTY 2-105, 2-150 Limiting the scope for dynamic SQL 2-73
Private Protocol 6-33, 6-34 RBDP* 8-71
Limitations 6-33 RBLP 2-132, 2-133, 2-134, 2-138
Procedure body 7-21 RDS 9-60, 9-130
Property sheets 11-166 RDS sort 1-50
Protocol level(2) 10-25 Read For Castout Multiple 10-28, 10-33
PSM 7-14 Read-up 4-90, 4-96
PSMDEBUG 7-35 Read-write interest 2-14
PSRBD 8-71 Real contention 10-13
PT 1-52 Real memory 1-19
PTF Real storage 1-15
OA4581 11-162 Reasons for LPL 10-39
UQ57144 8-30 REBALANCE 2-115
UQ60475 7-56 INDREFLIMIT 8-36
UQ60476 7-56 OFFPOSLIMIT 8-36
UQ67466 11-14 REORG TABLESPACE 8-35
UQ67626 7-53 REPORTONLY 8-36
UQ72082 4-29 SCOPE PENDING 8-36
UQ72083 4-29 UNLOAD EXTERNAL 8-36
UQ81009 11-7 UNLOAD ONLY 8-36
Pure Java client 4-7 REBUILD INDEX 8-2
Push-down star join 9-130 Rebuild pending 2-73, 2-85, 2-119
RECOVER 8-68
CURRENTCOPYONLY 8-80
Q RECOVER INDOUBT 10-51
QMF 5-13, 11-197 Recovery base log point 2-132, 2-133
Classic Edition 11-198 RECP 2-122
Distributed Edition 11-198 Index avoidance 2-122
Enterprise Edition 11-198 Recursion depth 3-100
High Performance Option 11-198 Recursive SQL 3-92
TSO/CICS 11-198 Common table expressions 3-93
Visionary Studio 11-198 UNION ALL 3-96
WebSphere 11-198 Redbooks Web site X-3
Windows 11-198 REFRESH DEFERRED 9-12


REFRESH TABLE 9-7, 9-22 LOGONLY execution 2-138


REFRESH_TIME 9-25, 9-43 RESTRICT 3-134
REFSHAGE 9-24 RESYNC 2-142
REGION 1-39 RESYNCHMEMBER(YES) 10-52
Relational Data System 9-60, 9-130 Retained locks 10-8
RELBOUND 11-119 RETURN_STATUS 7-16
RELCURHL 3-20 RETURNED_SQLSTATE 3-86
Release dependency indicator 11-60 Returning from V8 new-function 11-60
Release incompatibilities 11-6 RETVLCFK 9-49
Release marker 11-119 Reverse dominance 4-96
RELEASE(DEALLOCATE) 10-8, 10-23 RFCOM 10-28, 10-33, 10-35
RELOAD phase 8-31 RI relationship 9-33
Remapping XES locks 10-17 RID 3-10
REMARKS 3-159 RID pool 1-48
REOPT(ALWAYS) 9-145 RIDBLOCK 1-48
REOPT(NONE) 9-145 RIDLIST 1-48
REOPT(VARS) 2-42, 9-145 RIDMAP 1-48
REORG 11-77 RIDs 1-48
Clustering order 2-51 RLFBIND 11-71
Implicit clustering index 8-30 RLFFUNC 11-71
Reload phase 8-31 RMF 1-59
REORG REBALANCE ROLLBACK 3-153
Overview 8-33 Roll-off partition 2-106
Physical partition range 8-36 Rotate partition
REORG SHRLEVEL CHANGE Referential integrity 2-111
Discard processing 8-43 Syntax 2-107
REORG SHRLEVEL REFERENCE Rotating partitions
Catalog table spaces 8-79 NPI 2-110
REORG TABLESPACE 8-2 Row fetch and rowset fetch mixing 3-40
REBALANCE 2-115 Row level security
REORP 2-108, 2-113, 2-115, 8-33, 8-40 DELETE rules 4-108
REPAIR 8-3, 8-69 LOAD REPLACE rules 4-111
NOAREORPENDSTAR 8-54 LOAD RESUME rules 4-110
REPAIR VERSIONS 2-80, 2-90, 8-55, 8-57 REORG DISCARD rules 4-112
REPEAT 7-28 REORG UNLOAD EXTERNAL rules 4-111
Replication Center 11-193 SELECT rules 4-105
REPORT 8-61, 8-69 UNLOAD rules 4-111
REQUEST=DIRAUTH 4-109 UPDATE rules 4-107
RESET 2-107, 2-110 ROWID 2-78, 3-213, 8-16
Residual predicates 9-60 Rowset 3-32, 3-75
RESIGNAL 7-23, 7-25 Rowset size
Resolving contention 10-13 Maximum 3-32
Resource RQRIOBLK 3-77, 6-33
Security label 4-94 RRS signon 6-24
Resource class RRSAF 6-34, 7-11
SECDATA 4-93 Accounting string 6-34
RESPORT 6-19 RRSAF SET_ID function 3-199
RESTART 8-28 RUNSTATS 2-70, 8-2, 8-69, 9-28, 9-82
RESTART WITH 3-113, 3-156 Extra statistics 9-88
RESTART(CURRENT) 8-28 UPDATE NO REPORT NO 2-85
RESTART(PHASE) 8-28 UPDATE NONE REPORT NO 2-124
RESTARTWITH 3-158 Work data sets 9-94
RESTORE SYSTEM 2-129, 8-2, 8-6
Example 2-138
Execution 2-137 S
LOGONLY 2-130 Same CCSID string comparisons 9-66


Sargable 9-167 Gaps 3-153


SAVEPOINT 9-107 GRANT 3-123
SAVEPOINT support 4-35 NEXT VALUE FOR 3-123, 3-138
SBCS 5-24 NEXT VALUE restrictions 3-141
SBCS 367 5-54 PREVIOUS VALUE FOR 3-123, 3-139
SCA 11-97 PREVIOUS VALUE restrictions 3-141
Scalability 1-2, 4-8 REVOKE 3-123
Scalar fullselect 3-162 ROLLBACK 3-139
CASE expressions 3-168 SYSIBM.SYSSEQUENCEAUTH 3-159
SCCSID 5-18, 11-18 SYSIBM.SYSSEQUENCES 3-158
Schema maintenance 2-57 SYSIBM.SYSSEQUENCESDEP 3-159
Schema versioning 2-57 Serialized profile 4-49, 11-30
SCOPE ALL 8-40 Upgrading 4-15
SCOPE PENDING 2-112, 8-40 SERVAUTH class 4-117
REBUILD INDEX 8-46 Session variables 3-210
SDK 4-9 DSNDSVS 3-211
SDSNEXIT 11-19 DB2 defined 3-210
SDSNLINK 11-14 SET CURRENT PACKAGE PATH 7-45
SDSNLOAD PDSE data set 11-15 SET CURRENT PACKAGESET 7-45
SDXRRESL 11-43 SET CURRENT PATH 7-30
SECDATA 4-93 SET DATA TYPE clause 2-68
SECLABEL 3-211, 4-93, 4-110 SET LOG RESUME 2-136
SECLABEL verification 4-95 SET LOG SUSPEND 2-129, 2-132, 2-138, 8-6
SECLEVEL 4-93 SET MAXERRORS 11-176
Secondary index 2-13, 2-28, 2-31 SET SYSPARM 2-141
SECQTY 2-149 SET_CLIENT_ID 6-34
Section 0 statements 11-73 setDB2ClientAccountingInformation 4-30
Secureway Server 11-9 setDB2ClientApplicationInformation 4-30
Security setDB2ClientUser 4-30
Object 4-89 setDB2ClientWorkstation 4-30
Security category 4-93 Shift-in 5-25
Security enhancements 4-86 Shift-out 5-25
Security label 4-92 SIGNAL 7-19
Caching 4-108 SIGNATURE 9-46
Security labels 4-90, 4-93 SIGNON 6-34
Assigning 4-94 Single CCSID views 5-109
Defining 4-93 Single composite table 9-103
Performance 4-108 Single-level security systems 4-116
Security level 4-93 SJMXPOOL 9-134, 9-135
Security policy 4-89 SKCT 1-52
Security Server 6-30 SKPT 1-52
SECURITY_OUT 6-29 Sliding secondary allocation 2-150
Security SLM 10-11
Subject 4-90 SQLCODE
Selective partition locking 11-23 -471 11-21
Self configuring 11-151 SMART DB2 extent sizes 2-149
Self healing 11-151 SMF 1-59, 6-24
Self optimizing 11-152 SMP/E 11-167
Self protecting 11-151 SMS-managed data sets 2-130
SENSITIVE DYNAMIC 3-15 SNA 4-118
SENSITIVE STATIC 3-11 Software Development Kit 4-9
Sequences Sort data buffers 1-50
ALTER 3-122, 3-131 Sort data length 9-168
COMMENT 3-123, 3-135 Sort key length 9-168
CREATE 3-122, 3-124 Sort pipe 8-80
DROP 3-123, 3-134 Sort tree nodes 1-50


SORTC_PGR_ID 9-104 -331 3-207


SORTDATA 8-29, 8-31 -353 3-75
SORTDEVT 8-31 -359 3-139
SORTKEYS 8-29, 8-30 -419 2-74
SORTN_PGR_ID 9-104 -438 7-20
SORTNUM 8-31 -4702 2-87
Spawned transaction 10-54 -802 3-86
Special register 2-42 -805 7-46
Special register changes 11-17 -811 3-167
SPLIT_ROWS 9-118 -845 3-139
SPRMMXT 1-85 -879 11-19
SPUFI 5-83, 11-71 -904 2-85, 2-124, 2-142, 6-33, 7-6, 11-16, 11-41
SQL Debugger 7-33 -922 7-53
SQL procedure -923 10-52
using RETURN 7-16 SQLColumns() 7-53
SQL stored procedures SQLDA 5-10
Condition handler 7-21 SQLDriverConnect 7-52
Enhanced label support 7-28 SQLERRD(0) 7-16
LOB SQL variables 7-29 SQLERRD(3) 3-54, 9-23
Relax restrictions on variable names 7-29 SQLERRD3 3-85, 3-190
SQL/DS 4-18 SQLERRDC 7-16
SQL/JRT 4-40 SQLERRMC 7-20
SQL/OLB 4-40 SQLESETI 4-30, 6-25
SQL/XML 4-56 SQLException 4-28, 4-34, 4-44
SQL/XML publishing functions 4-57 SQLExtendedFetch 3-76
SQL_ASCII_SCCSID 7-56 SQLFetch 3-76
SQL_ATOMIC_NO 3-76 SQLGetInfo() 7-56
SQL_ATTR_CURSOR_TYPE 3-26, 3-76 SQLJ 4-11
SQL_ATTR_PARAMOPT_ATOMIC 3-76 Catching errors 4-42
SQL_C_WCHAR 7-56 Easier to code 4-42
SQL_CURSOR_DYNAMIC 3-26, 3-76 Monitoring 4-43
SQLCA 3-71, 3-80, 3-86, 3-190, 7-16 Object Language Bindings 4-40
SQLCODE 3-85, 7-16 Portable customized profile 4-12
+20237 3-54 Predictible access path 4-43
+222 3-46 Preparation process 4-47
+231 3-18 Routines and Types 4-40
+347 3-103 Static SQL 4-41
+438 7-20 Static SQL authorization model 4-41
+802 3-86 SQLJ applications 7-45
-101 9-155 SQLJ translator 4-47
-102 5-67, 11-66 SQLNAME 5-11
-107 11-114 SQLScroll 3-76
-127 3-172 SQLSetStmtAttr 3-76
-129 1-85 SQLSTATE 3-85
-1403 7-53 01605 3-103
-171 4-83 02502 3-46
-173 3-20 21000 3-167
-189 11-19 22003 3-86
-20275 4-82 24519 3-47
-20283 7-43 24523 3-35
-244 3-17 42801 3-20
-246 3-70 42815 4-83
-247 3-47 42849 7-5
-249 3-35 42873 3-70
-30005 3-76 56702 3-76
-30082 7-53 FFFFF 11-27


SQLTables() 7-53 SYSIBM.LOCATION 6-6


SQLVAR 5-11 SYSIBM.LULIST 6-14
SQLWARN 3-85 SYSIBM.SQLCAMESSAGE 4-29
SQLX Group 4-56 SYSIBM.SYSCOLSTATS 2-70
SRTPOOL 2-142 SYSIBM.SYSCOLUMNS 2-70
SSID 7-39 SYSIBM.SYSCOLUMNS_HIST 2-70
Stage 1 9-167 SYSIBM.SYSCOPY 2-88
Stage 2 9-167 SYSIBM.SYSDUMMY1 5-61, 5-70, 11-78
Star join SYSIBM.SYSINDEXES 2-88
Controlled snowflake materialization 9-140 SYSIBM.SYSINDEXPART 2-88
In-memory workfile 9-134 SYSIBM.SYSLINKS 11-78
Selection of filtering dimensions 9-141 SYSIBM.SYSOBDS 2-88, 11-63, 11-142
Star property 4-96 SYSIBM.SYSPACKSTMT 5-74
Star schema 9-120 SYSIBM.SYSPROCEDURES 11-78
STARJOIN 9-134 SYSIBM.SYSROUTINES 9-79
START DATABASE 10-40 SYSIBM.SYSSEQ2 3-159
ACCESS(FORCE) 2-126 SYSIBM.SYSSEQUENCAUTH 11-140
START DB2 LIGHT(YES) 10-50 SYSIBM.SYSSEQUENCEAUTH 3-124, 3-159,
START WITH 3-112, 3-125 11-63
Static scrollable cursors 3-22 SYSIBM.SYSSEQUENCEDEP 3-124
STATSTIME 2-70 SYSIBM.SYSSEQUENCES 3-123, 3-124, 3-150,
STMT 5-83, 11-76 3-159
STMTID 3-198 SYSIBM.SYSSEQUENCESDEP 3-159
STMTTOKEN 3-198, 3-199 SYSIBM.SYSSTMT 5-74
STOP AFTER nn FAILURES 7-4 SYSIBM.SYSSTRINGS 5-20, 11-77
STOP AFTER SYSTEM DEFAULT FAILURES 7-4 SYSIBM.SYSTABLEPART 2-88
Storage threshold 6-26 SYSIBM.SYSTABLES 2-88, 9-14
Stored Procedure Builder 7-32 SYSIBM.SYSTABLESPACE 2-88
Stored procedures SYSIBM.SYSVIEWDEP 9-45
Migrating to LANGUAGE JAVA 7-9 SYSIBM.SYSVIEWS 5-110, 9-25, 9-45
Stored routines SYSIN 5-73
Failure management 7-3 SYSLGRNX 2-109
WLM resource management 7-6 SYSLISTD 5-73, 8-82
STORMXAB 7-4 SYSLOW 4-94
String types comparison 9-66 SYSMULTI 4-94, 4-116
Striping 2-145 SYSNONE 4-94
Strong typing 4-42 SYSOBDS 8-57, 11-138
STYPE 8-55 SYSOBDS 2-80, 2-88
Subject 4-90 SYSOPR 4-100
Subset location name 6-19 SYSOPR1 2-142
Substitution 5-22 SYSOPR2 2-142
SUBSTRING 3-208, 5-81 SYSPACKSTMT 9-171, 11-76
Surrogates 5-43 SYSPITR 2-136, 2-137
SYSADM 2-142, 3-129, 4-100 SYSPITR CRCR 2-137
SYSADM2 2-142 Sysplex distributor 6-18
SYSALTER 11-138 Sysplex query parallelism
SYSCOLDIST 2-70, 9-93 Multilevel security 4-115
SYSCOLDISTSTATS 9-93 SYSPRINT 5-73
SYSCOLUMNS 2-70, 9-93 SYSREC 8-31
SYSCOPY 2-87, 8-40 SYSSTMT 9-171, 11-76
LOGICAL_PART 2-112 SYSTABLEPART
STYPE 8-55 LOGICAL_PART 2-112
SYSCTRL 3-129, 4-100 System checkpoints 2-132
SYSHIGH 4-94, 4-100 System pages 2-69, 2-87
SYSIBM.IPLIST 6-14, 6-18, 11-63, 11-139 System recover pending mode 2-136
SYSIBM.IPNAMES 6-15 System Resource Manager 7-6


SYSTEM_ASCII_CCSID 3-211 True index-only access 9-55


SYSTEM_EBCDIC_CCSID 3-211 Two-phase commit 4-53, 11-74
SYSTEM_NAME 3-211 Type 1 4-7
SYSTEM_UNICODE_CCSID 3-211 Type 1 indexes 11-16
System-defined catalog index 2-84 Type 2 4-7, 4-10
System-generated DDNAMEs 1-54 Type 3 4-7
System-level point-in-time recovery Type 4 4-8, 4-10
BACKUP SYSTEM 2-128, 2-131 Type 4 JDBC driver 11-192
COPY NO index 2-138 Type 4 XA driver 4-53
Copy pool 2-129
Copy pool backup 2-130
DB2 Tracker site 2-139 U
Prerequisites 2-130 UCS 5-35
RBDP 2-138 UCS-2 5-35, 7-56
RECP 2-138 UCS-4 5-35
RESTORE SYSTEM 2-129 UGCCSID 5-53
Restoring a DB2 system to arbitrary PIT 2-136 UMCCSID 5-53
Restoring to prior system level backup 2-135 UNI=xx 5-21
SYSTEMPAGES 2-89, 2-98, 8-50 Unicode
SYSTEMPL 5-73, 8-82 CCSID in DSNHDECP 5-53
DB2 column type 5-55
Encoding scheme during CREATE 5-54
T SCCSID greater than 0 5-59
Table UDF UTF-8 encoding 5-36
Block fetch 9-81 Unicode conversion services 11-10
Table-based partitioning 11-113 Unicode parser 11-65, 11-114
Table-controlled partitioning Unicode standard 4.0 5-34
Adding limit keys afterwards 2-21 Unicode Transformation Format 5-35
Table-controlled partitioning 2-17, 2-19, 2-51, 2-100 Uniform Resource Identifier 4-72
Clustering 2-50 Uniform Resource Locator 4-73
CREATE example 2-20 UNION ALL 3-96
No partitioning index required 2-20 UNIQUE 2-37
Partitioning indexes 2-29 UNIQUE WHERE NOT NULL 2-37
TABLES_JOINED_THRESHOLD 1-88 Universal Character Set 5-35
TCB time 10-55 Universal Client 4-10
TCP/IP 4-116, 6-8, 6-21 Universal Driver 4-19
TCP/IP member routing 6-14 Auto-generated keys 4-35
TCP/IP port number 4-24, 6-19 Batched updates 4-28
TCP/IP socket calls 4-7 Enhanced LOB support 4-37
TCPALVER 2-142 fullyMaterializeLobData 4-37
TCPKPALV 2-142 Multiple open stored procedure result sets 4-34
TEMP database 3-10, 11-17, 11-82 SAVEPOINT support 4-35
TEMPLATE 8-29, 8-63 Scrollable cursor support 4-27
TEXT 5-83, 11-76 UNLOAD
Text extender 11-190 DELIMITED 8-13
Thread-related storage 1-50 Delimited input 8-2
TIME 2-78 UNLOAD DELIMITED 8-20
TIMESTAMP 2-78 Unload phase
Transaction locks 10-8 NOSYSREC 8-32
Transient data type 4-59 UQ57144 8-30
Transition variables 9-84 UQ60475 7-56
Trigger UQ60476 7-56
WHEN condition 9-84 UQ67466 11-14
Trigger body 7-18 UQ67626 7-53
Triggers UQ72081 4-29
Multilevel security 4-115 UQ72082 4-29


UQ72083 4-29 Version generating ALTER 2-87


UQ81009 11-7 Version limits 2-87
URCHKTH 2-146 VERSIONS
URI 4-72 REPAIR 8-55
URL 4-73 Video extender 11-190
URL syntax 4-19, 11-27 VIPA 6-16
URLGWTH 2-146 Visual Explain 5-83, 11-194
USAGE 3-129 Access plan graph 9-173
USCCSID 5-53 Analyzing parallel queries 9-179
User-defined catalog indexes 11-104 Browse subsystem parameters 9-170
User-defined functions 4-57 Enabling 9-169
User-defined objects 2-145 Filter factors 9-175
User-supplied DSNHDECP 11-17 List Static SQL statements 9-170
USING STOGROUP 2-104 Maintain 9-170
USING VCAT 2-104 Setting up 9-168
USS ODBC Sort information 9-178
2MB SQL statements 7-54 VOLATILE 9-113
CURRENTAPPENSCH 7-55 VOLTDEVT 8-80, 11-43
Long name support 7-53 Volume-level backups 2-128
SQLConnect 7-52 VPPSEQT 1-41
Wide APIs 7-56 VPSEQT 1-41
UTF-16 4-81, 5-35, 5-55, 5-82 VPSIZE 1-41
UTF-32 5-35 VPXSEQT 1-41
UTF-8 4-81, 5-35, 5-55, 5-81, 7-56 VSAM
Utility Striping 2-145
Automatic restart 8-28 VSAM CI size 2-129
DPSI support 8-62 VSTOR 1-28
RESTART 8-29 VTAM 6-21, 6-33
SORTDATA 8-29
SORTKEYS 8-29
TEMPLATE 8-29 W
UTLRSTRT 8-29 WARM 10-28, 10-33, 10-35
UTSTMT 5-73, 8-82, 8-83 Web-based wizards 11-152
UX 5-93 WebSphere 3-202
WebSphere Administrative console 4-26
WebSphere Application Server 11-192
V WebSphere Application Server for z/OS 4-53
Valid CCSIDs required 11-18 WebSphere Studio Application Developer 4-49
Validation procedures 4-115 WHILE 7-28
VALIDPROC 2-78 Wide API 7-56
VALUES 2-20 Windows Explorer 11-158
VARCHAR WITH 3-91
NOT PADDED 2-119 WITH NO DATA 9-11
VARGRAPHIC 5-55 WITH NO DATA table 9-19
Varying length index keys 11-112 WITH ROWSET POSITIONING 3-35, 3-39
VDWQT 11-44 WITHOUT ROWSET POSITIONING 3-35
VE 9-167 Wizard 11-166
VERSION 3-211, 8-57 WLM 7-6
Version 2-87 WLM compatability mode 7-12
Version 0 2-69, 2-77 WLM enclave 6-34
Version 8 IVP 11-89, 11-94 WLM enqueue management 2-147
Version generating ALTER 2-87 WLM ENVIRONMENT 7-11
Versioning 2-86 WLM goal mode 7-12
Storing version information 2-87 WLM priority 2-147
System pages 2-98 WLM resource group 6-19
SYSTEMPAGES 2-98 WLM-managed stored procedures 7-8


Workfile 9-84 z/OS conversion services 5-20


Workfile caching 9-135 z/OS Conversion Services 11-10
Workload Manager 7-6 z/OS Cryptographic Services 3-205
Write And Register Multiple 10-28, 10-33 z/OS Enablement 11-194
WRITE CLAIM 10-44 z/OS V1R3 1-31
Write down 4-90 z800 1-15
Write-down 4-96 z890 1-15
write-down 4-105 z900 1-15
write-down checking 4-106 zSeries 1-2
Write-down control 4-116
Write-down privilege 4-107, 4-111, 4-112
WSAD 4-49, 4-50

X
XA transactions 4-20, 4-23
XA two-phase commit 4-54
XES 10-6, 10-9
XES contention 10-12, 10-14
XLKUPDLT 2-142, 2-143
XML 4-56, 4-76, 5-26, 5-88
Built-in functions 4-56
XML built-in functions 4-57
XML composition 4-56
XML data type 4-59
XML documents 4-58
XML Extender 4-56
XML extender 11-190
XML file 11-165
XML fragments 4-58
XML publishing functions 4-57
XML2CLOB 4-57, 4-59
XMLAGG 4-57, 4-70
XMLATTRIBUTES 4-57, 4-65
XMLCONCAT 4-57, 4-68
XMLELEMENT 4-57, 4-61
XMLFOREST 4-57, 4-66
XMLNAMESPACES 4-57
XML Toolkit 4-58
XML transient data type 4-59
XML2CLOB 4-57, 4-62
XMLAGG 4-57
ORDER BY 4-70
XMLATTRIBUTES 4-57, 4-62
XMLCONCAT 4-57
XMLELEMENT 4-57
XMLFOREST 4-57
XMLNAMESPACES 4-57, 4-62, 4-67

Z
z/Architecture 1-2
z/OS 4-18
z/OS Application Connectivity to DB2 for z/OS
11-192
z/OS Conversion services 11-77
