Justin C. Haase
Dwight Harrison
Adam Lukaszewicz
David Painter
Tracy Schramm
Jiri Sochr
ibm.com/redbooks
International Technical Support Organization
December 2014
SG24-7858-03
Note: Before using this information and the product it supports, read the information in “Notices” on
page xix.
This edition applies to Version 7, Release 1, Modification 0 of IBM i (5770-SS1) and related
licensed programs.
© Copyright International Business Machines Corporation 2010, 2014. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxviii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxviii
5.2.2 The MERGE statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.2.3 Dynamic compound statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5.2.4 Creating and using global variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5.2.5 Support for arrays in procedures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
5.2.6 Result set support in embedded SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.2.7 FIELDPROC support for encoding and encryption . . . . . . . . . . . . . . . . . . . . . . . 174
5.2.8 Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.2.9 Generating field reference detail on CREATE TABLE AS . . . . . . . . . . . . . . . . . 185
5.2.10 Qualified name option added to generate SQL. . . . . . . . . . . . . . . . . . . . . . . . . 186
5.2.11 New generate SQL option for modernization . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5.2.12 OVRDBF SEQONLY(YES, buffer length) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5.3 Performance and query optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5.3.1 Methods and tools for performance optimization . . . . . . . . . . . . . . . . . . . . . . . . 189
5.3.2 Query optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.3.3 Global statistics cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.3.4 Adaptive query processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.3.5 Sparse indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.3.6 Encoded vector index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
5.3.7 Preserving EVI indexes on ALTER enhancement . . . . . . . . . . . . . . . . . . . . . . . 196
5.3.8 Keeping tables or indexes in memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.3.9 SQE optimization for indexes on SSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.3.10 SQE support of simple logical files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.3.11 Maximum size of an SQL index increased to 1.7 TB . . . . . . . . . . . . . . . . . . . . 197
5.3.12 QSYS2.INDEX_ADVICE procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.3.13 Improved index advice generation to handle OR predicates . . . . . . . . . . . . . . 198
5.3.14 SKIP LOCKED DATA and NC or UR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.3.15 SQL routine performance integer arithmetic (requires re-create) . . . . . . . . . . . 200
5.3.16 Automatic cancellation of QSQSRVR jobs when an application ends . . . . . . . 200
5.3.17 QAQQINI properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5.3.18 ALTER TABLE performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.3.19 Avoiding short name collisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.3.20 CREATE PROCEDURE (SQL) PROGRAM TYPE SUB. . . . . . . . . . . . . . . . . . 203
5.3.21 Referential integrity and trigger performance . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.3.22 QSQBIGPSA data area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.3.23 Validating constraints without checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.3.24 Limiting the amount of processing on an RGZPFM cancel. . . . . . . . . . . . . . . . 205
5.3.25 Database reorganization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.3.26 CPYFRMIMPF performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.3.27 QJOSJRNE API option to force journal entries without sending an entry. . . . . 206
5.3.28 QDBRTVSN API performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.3.29 Control blocking for a file using QSYS2.OVERRIDE_TABLE() . . . . . . . . . . . . 207
5.3.30 Improving JDBC performance with JTOpen . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.3.31 Adding total DB opens job level instrumentation to Collection Services . . . . . . 208
5.3.32 SYSTOOLS.REMOVE_INDEXES procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.3.33 Improved SQE statistics for INSERT, UPDATE, and DELETE statements . . . 209
5.3.34 QSYS2.Reset_Table_Index_Statistics procedure . . . . . . . . . . . . . . . . . . . . . . 209
5.3.35 Performance enhancements for large number of row locks . . . . . . . . . . . . . . . 210
5.3.36 Improved DSPJOB and CHKRCDLCK results for many row locks. . . . . . . . . . 211
5.3.37 Chart-based graphical interface SQL performance monitors . . . . . . . . . . . . . . 211
5.3.38 Enhanced analyze program summary detail. . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.3.39 Performance Data Investigator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.3.40 Index Advisor: Show Statements - Improved query identification . . . . . . . . . . . 214
5.3.41 Performance improvements for temporary tables . . . . . . . . . . . . . . . . . . . . . . . 214
5.4.53 Navigator – Improved Index Build information . . . . . . . . . . . . . . . . . . . . . . . . . 260
5.4.54 Improved performance for joins over partitioned tables . . . . . . . . . . . . . . . . . . 260
5.4.55 Navigator: Table list totals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
5.5 DB2 database management and recovery enhancements . . . . . . . . . . . . . . . . . . . . . 261
5.5.1 Preserving the SQL plan cache size across IPLs . . . . . . . . . . . . . . . . . . . . . . . . 261
5.5.2 Plan cache properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
5.5.3 Prechecking the physical file size during restore . . . . . . . . . . . . . . . . . . . . . . . . 264
5.5.4 Preventing index rebuild on cancel during catch up . . . . . . . . . . . . . . . . . . . . . . 264
5.5.5 QSYS2.SYSDISKSTAT view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.5.6 STRDBMON: FETCH statement shows failures and warnings. . . . . . . . . . . . . . 266
5.5.7 STRDBMON: QQI2 result rows for more statements . . . . . . . . . . . . . . . . . . . . . 266
5.5.8 Adding result set information to QUSRJOBI() and System i Navigator . . . . . . . 266
5.5.9 STRDBMON pre-filtering of QUERY/400 command usage . . . . . . . . . . . . . . . . 267
5.5.10 UNIT SSD supported on DECLARE GLOBAL TEMPORARY TABLE . . . . . . . 268
5.5.11 Adding Maintained Temporary Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
5.5.12 Adding the QSYS2.REMOVE_PERFORMANCE_MONITOR procedure . . . . . 269
5.5.13 STRDBMON: QQI1 fast delete reason code. . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5.5.14 Automatically increasing the SQE Plan Cache size . . . . . . . . . . . . . . . . . . . . . 270
5.5.15 Tracking important system limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
5.5.16 DB2 for i Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
5.5.17 DISPLAY_JOURNAL (easier searches of Audit Journal) . . . . . . . . . . . . . . . . . 275
5.5.18 IBM i Navigator improved ability to mine journals . . . . . . . . . . . . . . . . . . . . . . . 279
5.5.19 Navigator for i: A new look and no client to manage. . . . . . . . . . . . . . . . . . . . . 282
5.6 DB2 Web Query for i (5733-QU2, 5733-QU3, and 5733-QU4) . . . . . . . . . . . . . . . . . 282
5.6.1 DB2 Web Query for i (5733-QU2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
5.6.2 DB2 Web Query Report Broker (5733-QU3). . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
5.6.3 DB2 Web Query Software Developer Kit (5733-QU4) . . . . . . . . . . . . . . . . . . . . 284
5.6.4 DB2 Web Query for i Standard Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
5.7 OmniFind Text Search Server for DB2 for i (5733-OMF) . . . . . . . . . . . . . . . . . . . . . . 285
5.7.1 OmniFind for IBM i: Searching Multiple Member source physical files . . . . . . . . 285
5.7.2 Navigator for i - OmniFind Collection Management . . . . . . . . . . . . . . . . . . . . . . 286
5.8 WebSphere MQ integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
5.9 DB2 Connect system naming attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
8.1.17 1.5 TB RDX removable disk cartridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
8.1.18 VIOS support for RDX USB docking station for removable disk cartridge . . . . 391
8.1.19 Use of USB flash drive for IBM i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
8.1.20 POWER7+ 770/780 Native I/O support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
8.2 Using IBM i 520-byte sector SAS disk through VIOS . . . . . . . . . . . . . . . . . . . . . . . . . 394
8.3 SAN storage management enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
8.3.1 IBM SAN Volume Controller and IBM Storwize storage systems . . . . . . . . . . . . 395
8.3.2 Multipathing for virtual I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
8.3.3 DS5000 native attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
8.3.4 Level of protection reporting for multipath disk units. . . . . . . . . . . . . . . . . . . . . . 399
8.3.5 Library control paths for IOP-less Fibre Channel IOA tape attachment . . . . . . . 400
8.3.6 External disk storage performance instrumentation . . . . . . . . . . . . . . . . . . . . . . 401
8.3.7 Thin provisioning for DS8700, DS8800, and VIOS shared storage pools. . . . . . 406
8.4 SSD storage management enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
8.4.1 DB2 media preference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
8.4.2 ASP balancer enhancements for SSDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
8.4.3 User-defined file system media preference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
8.4.4 177 GB SFF SSD with eMLC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
8.4.5 IBM Disk Sanitizer PRPQ extended to include SSD devices . . . . . . . . . . . . . . . 418
Chapter 11. Integration with IBM BladeCenter and IBM System x . . . . . . . . . . . . . . . 475
11.1 iSCSI software targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
11.1.1 IBM i Integrated server object model with a hardware target . . . . . . . . . . . . . . 476
11.1.2 IBM i Integrated server object model with a software target . . . . . . . . . . . . . . . 477
11.1.3 Direct connect software targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
11.2 Defining iSCSI software target support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
11.2.1 CRTDEVNWSH CL command interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
11.2.2 IBM Navigator for i changes for iSCSI software target support. . . . . . . . . . . . . 479
11.3 Service Processor Manager function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
11.4 VMware support changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
11.4.1 New NWSD types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
11.4.2 VMware ESX server management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
11.4.3 SWA storage spaces for VMware ESX servers . . . . . . . . . . . . . . . . . . . . . . . . 482
11.5 Microsoft Windows support changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
11.6 New planning worksheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
11.7 IBM Navigator for i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
11.7.1 Create Server task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
11.7.2 Clone Integrated Windows Server task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
11.7.3 Delete Server task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
11.7.4 Launch Web Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
11.7.5 Simplified Windows File Level Backup (FLB) from IBM i . . . . . . . . . . . . . . . . . 490
11.8 New IBM i CL commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
11.8.1 Install Integrated Server (INSINTSVR) command. . . . . . . . . . . . . . . . . . . . . . . 491
11.8.2 Delete Integrated Server (DLTINTSVR) command. . . . . . . . . . . . . . . . . . . . . . 491
11.9 IBM i changed CL commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
11.9.1 Install Windows Server (INSWNTSVR) CL command . . . . . . . . . . . . . . . . . . . 492
11.9.2 Create NWS Configuration (CRTNWSCFG) and Change NWS Configuration
(CHGNWSCFG) CL commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
11.9.3 Install Linux Server (INSLNXSVR) CL command . . . . . . . . . . . . . . . . . . . . . . . 493
11.9.4 No new integrated Linux servers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
11.10 Fewer IBM i licensed programs are required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
11.11 Changes to IBM i integration with BladeCenter and System x documentation . . . . 493
11.11.1 A new IBM i iSCSI Solution Guide PDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
11.11.2 IBM i 7.1 Knowledge Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
11.11.3 IBM i integration with BladeCenter and System x on IBM developerWorks . . 495
11.11.4 New IBM i Technology Updates page on developerWorks. . . . . . . . . . . . . . . 495
11.11.5 IBM i integration with BladeCenter and System x Marketing website . . . . . . . 495
16.1.4 Java tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
16.1.5 EGL tools and IBM Rational Business Developer V9.0 . . . . . . . . . . . . . . . . . . 623
16.1.6 Rational Team Concert client integration for IBM i . . . . . . . . . . . . . . . . . . . . . . 625
16.1.7 Version 9.0 fix packs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
16.1.8 Migration to Rational Developer for i v9.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
16.1.9 Upgrades to Rational Developer for i v9.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
16.2 IBM Rational Team Concert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
16.2.1 Integration with Rational Developer for i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
16.2.2 What is new in the latest releases of Rational Team Concert. . . . . . . . . . . . . . 632
16.2.3 Rational Team Concert and other Rational products interoperability . . . . . . . . 634
16.2.4 General links for more information about Rational Team Concert . . . . . . . . . . 635
16.3 IBM Rational Development Studio for i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
16.3.1 Source code protection option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
16.3.2 ILE RPG IV compiler and programming language . . . . . . . . . . . . . . . . . . . . . . 637
16.3.3 Sorting and searching data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
16.3.4 ALIAS support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
16.3.5 Performance improvement when returning large values. . . . . . . . . . . . . . . . . . 645
16.3.6 ILE COBOL compiler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
16.3.7 ILE C and ILE C++ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
16.4 IBM Rational Open Access: RPG Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
16.4.1 How to use Rational Open Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
16.4.2 IBM Rational Open Access: RPG Edition withdrawn . . . . . . . . . . . . . . . . . . . . 657
16.4.3 Open Access requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
16.5 Other Rational and RPG related tools - ARCAD . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
16.5.1 ARCAD-Transformer RPG tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
16.5.2 ARCAD Pack for Rational . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
16.6 IBM Rational Application Management Toolset for i . . . . . . . . . . . . . . . . . . . . . . . . . 660
16.6.1 Application Management Toolset for i Licensing. . . . . . . . . . . . . . . . . . . . . . . . 662
16.6.2 Application Management Toolset for i Requirements . . . . . . . . . . . . . . . . . . . . 662
16.6.3 Accessing Application Management Toolset for i . . . . . . . . . . . . . . . . . . . . . . . 662
16.7 IBM Rational Host Access Transformation Services (HATS) . . . . . . . . . . . . . . . . . . 663
16.7.1 HATS general description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
16.7.2 HATS basic functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
18.1.4 5250 Display Emulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
18.1.5 5250 Data transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
18.1.6 Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
18.1.7 5250 HMC console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
18.1.8 Virtual Control Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
18.1.9 Hardware Management Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
18.1.10 ACS mobile solutions to connect to IBM i, real example. . . . . . . . . . . . . . . . . 819
18.2 IBM i Access for Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
18.2.1 Installation enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
18.2.2 .NET Data Provider enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
18.2.3 OLE Data Provider enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
18.2.4 Windows ODBC Driver enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
18.2.5 Data Transfer enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
18.2.6 Personal Communications Emulator enhancements . . . . . . . . . . . . . . . . . . . . 821
18.2.7 Direct Attach Operations Console withdrawal. . . . . . . . . . . . . . . . . . . . . . . . . . 822
18.3 IBM Navigator for i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
18.4 System i Navigator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 823
18.5 IBM i Access for Web . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
18.5.1 Requirements for using IBM i Access for Web . . . . . . . . . . . . . . . . . . . . . . . . . 825
18.5.2 AFP to PDF transform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
18.6 IBM System i Access for Wireless . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
18.7 IBM i Access references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Active Memory™ IMS™ POWER®
AIX 5L™ InfoSphere® Print Services Facility™
AIX® Integrated Language Environment® ProtecTIER®
AnyNet® iSeries® PureFlex®
AS/400e™ Jazz™ Quickr®
AS/400® Language Environment® Rational Team Concert™
DataMirror® Lotus Enterprise Integrator® Rational®
DB2 Connect™ Lotus Notes® Redbooks®
DB2® Lotus® Redbooks (logo) ®
developerWorks® Notes® RPG/400®
Domino® OmniFind® RS/6000®
DRDA® OS/400® Sametime®
DS5000™ Passport Advantage® Storwize®
DS6000™ POWER Hypervisor™ System i®
DS8000® Power Systems™ System p®
Electronic Service Agent™ Power Systems Software™ System Storage®
EnergyScale™ POWER6+™ System z®
eServer™ POWER6® SystemMirror®
Express Servers™ POWER7 Systems™ Tivoli®
FlashCopy® POWER7+™ WebSphere®
Guardium® POWER7® Worklight®
i5/OS™ PowerHA® z/OS®
IBM Flex System® PowerSC™
IBM® PowerVM®
Adobe, the Adobe logo, and the PostScript logo are either registered trademarks or trademarks of Adobe
Systems Incorporated in the United States, and/or other countries.
Netezza, and N logo are trademarks or registered trademarks of IBM International Group B.V., an IBM
Company.
Itanium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
LTO, the LTO Logo and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S. and other
countries.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
This IBM® Redbooks® publication provides a technical overview of the features, functions,
and enhancements available in IBM i 7.1, including all the Technology Refresh (TR) levels
from TR1 to TR7. It provides a summary and brief explanation of the many capabilities and
functions in the operating system. It also describes many of the licensed programs and
application development tools that are associated with IBM i.
The information provided in this book is useful for clients, IBM Business Partners, and IBM
service professionals who are involved with planning, supporting, upgrading, and
implementing IBM i 7.1 solutions.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Rochester Center.
The fourth edition of this IBM Redbooks project was led by:
Debbie Landon
International Technical Support Organization, Rochester Center
Terry D. Ackman, Mark J. Anderson, Sue Baker, Bob Baron, Stacy L Benfield, Robert J
Bestgen, Chris Beyers, David R Bhaskaran, John Bird, Brian K. Bratager, Kent L Bruinsma,
Dan Boyum, Lilo Bucknell, Tony Cairns, Natalie Campbell, Bunny Chaney, David S Charron,
Armin Christofferson, Jason Clegg, Tom Crowley, Jenny Dervin, Collin DeVillbis, Jessica
Erber-Stark, Jerry Evans, Margaret Fenlon, Steve Finnes, Jim Flanagan, Terry A. Ford, Ron
Forman, Scott Forstie, Christopher Francois, Dave Johnson, Robert Gagliardi, Mark Goff,
Maryna Granquist, Roger Guderian, Kristi Harney, Stacy L. Haugen, Terry Hennessy, Mark J
Hessler, Wayne Holm, Steven M Howe, Rafique Jadran, Allan E Johnson, Randy Johnson,
Rodney Klingsporn, Tim Klubertanz, Stephen A Knight, Joe Kochan, Joseph Kochan, Tim
Simon Webb
IBM Marlborough
Zhu Bing, Li LL Guo, Xi R Chen, Sheng Li Li, Jian Sang, Gang Shi, Meng MS Su, Dong Dong
DD Su, Ting Ting Sun, Wei Sun, Gang T Tian, Nan Wang, Shuang Hong Wang, Gan Zhang
IBM China
Vladimir Pavlisin
IBM Slovakia
Sean Babineau, Alison Butterill, Rob Cecco, Phil Coulthard, Mark M. Evans, George G Farr,
Philip Mawby, Barbara Morris
IBM Toronto
Chris Trobridge
IBM UK
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review IBM Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Rochester, MN 55901
This section describes the technical changes that were made in this edition of the book and in
previous editions. This edition might also include minor corrections and editorial changes that
are not identified.
Summary of Changes
for SG24-7858-03
for IBM i 7.1 Technical Overview with Technology Refresh Updates
as created or updated on October 29, 2015.
The second edition of this book included all the enhancements that were available in the first
three Technology Refreshes (TR1, TR2, and TR3) that were made available since the product
went to General Availability.
The third edition of this book included all the enhancements that were available in Technology
Refreshes TR4 and TR5 that were announced in 2012.
This fourth edition of the book includes all the enhancements that are available in the next two
Technology Refreshes (TR6 and TR7) that were announced in 2013. There are
enhancements basically in every topic that is related to IBM i. For this reason, every chapter
was modified to include these enhancements.
A valuable starting point for readers of this publication, and for anyone involved with the
installation of or an upgrade from a previous release of the IBM i operating system, is the
IBM i Memo to Users. It is available at the following website:
https://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzaq9/rzaq9.pdf
When acquiring the Memo to Users, make sure that you always download a current copy.
Updates are occasionally made to the document, with the cover page specifying the version
with a month and year of publication.
More detailed information about IBM i 7.1 enhancements can be found at these websites:
IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp
Upgrade planning:
http://www-947.ibm.com/systems/support/i/planning/upgrade/v7r1/planstmts.html
Planning - Customer Notices and information:
http://www-947.ibm.com/systems/support/planning/notices71.html
The primary way that information has been communicated to the user community is through
the IBM i Knowledge Center. The Knowledge Center contains topics that help you with basic
and advanced tasks. It remains the reference for the platform. The Knowledge Center is
updated periodically, but is not able to react quickly to change. You can find the Knowledge
Center for IBM i 7.1 at:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp
The other communication method the IBM i lab uses is the Request for Announcement (RFA).
The RFA is the formal document that contains information about a function that is being
delivered to the market. For more information and to search for various RFAs, go to the IBM
Offering Information - Announcement letters website at:
http://www-01.ibm.com/common/ssi/
Today, the IBM i zone in IBM developerWorks® and social media are part of the platform
strategy for getting information to Business Partners and clients. The information in
developerWorks is for everyone, not just developers. The IBM developerWorks website can
be found at:
http://www.ibm.com/developerworks/ibmi/
With developerWorks, if there is new information to share with IBM customers, it can be
shared immediately. The following are different ways that information is delivered through
developerWorks:
Article-based Information
Much of the information on developerWorks is in the form of short articles that define a
task or technology. The content provides not only information about a topic, but also tries
to answer the question, “Why is this topic important to me?”. These articles are written by
many developers.
Technology Updates Wiki
The IBM i Technology Updates page in developerWorks is the home for detailed
information about the functions that are delivered with each new Technology Refresh, and
the functions that are delivered through other means between releases. The technology
updates section is organized for easy navigation, searches, and subscription. At the
bottom of these pages is a button that allows you to subscribe to the page so that you are
notified when updates are made to it. Because this page is updated every time a new
program temporary fix (PTF) Group is updated, you can track and monitor new
enhancements and fixes as they are delivered.
developerWorks has many links to other topic and technology areas that IBM i users need,
and is organized to help you get the information that you need when you need it. It also is a
great place for helping you stay informed about what is happening with IBM i.
Figure 1-1 illustrates the various methods that are used to communicate IBM i enhancements
to IBM i users.
Figure 1-1 Methods for communicating IBM i enhancements
IBM uses Twitter extensively. Steve Will, Chief Architect for IBM i, uses Twitter to notify
followers whenever his blog has something new, what is going on in IBM i development, and
to point to a webcast, article, or blog that might be useful. Follow @Steve_Will_IBMi on
Twitter.
When IBM introduced POWER5 servers, OS/400 was renamed i5/OS. When the IBM
POWER6® platform became available in January 2008, IBM announced a major new release
called i5/OS V6R1. Later that same year, the name was changed to IBM i to disassociate any
presumed dependency of i5/OS on the POWER5 hardware platform. The notations 5.4 and
6.1 were introduced to indicate operating systems release levels V5R4 and V6R1.
Some user documentation, web page links, and programmed interfaces use IBM i
terminology, and others still use the i5/OS nomenclature. This publication uses IBM i
terminology, but occasionally also uses i5/OS, typically where it is part of a product name or
appears in a window.
A full convergence of the platforms in 2008 removed the “i” and “p” from the names,
consolidating machine types and redefining the name to be IBM Power Systems. The Power
Systems family of servers can run IBM i, AIX®, and Linux with version requirements for each
OS dependent on the generation of processor that is installed in the system.
The term Technology Refresh refers to the set of PTFs required to support new hardware and
firmware functionality and is one part of the Technology Update. Technology Update refers to
multiple PTFs or PTF Groups that provide additional functions in IBM i and related products.
Moving up to a Technology Refresh is simpler and cheaper than qualifying a point release, so
you can take advantage of new functions and support sooner than in the past.
Backing out of a point or modification release requires a scratch installation of the system.
With a Technology Refresh, it is possible to return to an earlier level of IBM i by simply slip
installing the Licensed Internal Code only.
You can install new function for IBM i 7.1 after general availability as a Technology Refresh
PTF Group, represented by SF99707. The machine code level does not change (V7R1M0).
The new hardware-related and firmware-related machine code content is contained within
PTFs in this Technology Refresh PTF Group. The content is referred to as IBM i 7.1
Technology Refresh 1, IBM i 7.1 Technology Refresh 2, and so on. The current Technology
Refresh level is 7 (TR7) identified by PTF MF99007.
It is important to keep systems up to date with the latest Technology Refresh PTF available.
Subsequent PTFs might depend on it, and those PTFs cannot be loaded until the prerequisite
Technology Refresh PTF is permanently applied, which requires an IPL. Therefore, it is a
preferred practice to keep systems current with the latest Technology Refresh PTFs, whether
through the Technology Refresh PTF Group, a Resave, or the Technology Refresh PTF itself.
Subsequent Technology Refreshes for a release are supersets of previous ones, so you need
apply only the latest Technology Refresh to keep the system current.
Figure 1-2 on page 6 illustrates the PTF dependencies between the individual Technology
Refresh (TR) PTFs. The current TR level is TR7 (PTF MF99007 is level 7); the Technology
Refresh Requisite (TR Reqs) and Managed Added Function PTFs, together with other PTFs,
are included in a group collection called the Technology Refresh PTF Group, SF99707.
Figure 1-2 PTF dependencies: the Technology Refresh PTF Group SF99707 collects the TR PTF
(MF99007), TR-Reqs, Managed Added Function PTFs, and other PTFs, alongside the CUM package
For more information about IBM i 7.1 Resaves, see the following IBM i website:
http://www-947.ibm.com/systems/support/i/planning/resave/v7r1.html
To determine the minimum Resave levels that are needed for hardware, see the IBM
Prerequisite website:
https://www-912.ibm.com/e_dir/eServerPrereq.nsf
Instructions for installing Resaves can be found in the Replacing Licensed Internal Code and
IBM i of the same version and release topic in the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frzahc%2Fupgradesameversion.htm
Table 1-1 IBM i 7.1 Technology Refresh history (with 7.1.0 Machine Code)
Technology Refresh release date | Description | Technology Refresh PTF Group Level |
Corresponding 5770-999 Resave Level | 5770-999 Technology Refresh PTF and Marker PTF
The following list describes the columns in Table 1-1 in more detail:
Technology Refresh release date
Date when the Technology Refresh was made available.
Description
Description of the Technology Refresh.
For example, you can run WRKPTFGRP to find the PTF group called SF99707, as shown in
Figure 1-3.
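For example, the group can be checked from a command line as follows (a minimal sketch;
the output depends on the PTF levels installed on your system):

WRKPTFGRP PTFGRP(SF99707)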
Multiple different levels of the group might be installed on the system. The latest level (the one
with the highest level number) with the status of Installed is the level of the fix group that is
active.
Technology Refresh PTFs for this product are in the format of MF99nnn. The highest number
Technology Refresh PTF on your system, matched with Table 1-1 on page 7, indicates the
Technology Refresh level for this product.
PTF IPL
Opt ID Status Action
RE13260 Permanently applied None
QLL2924 Permanently applied None
MF99007 Permanently applied None
MF99006 Permanently applied None
MF99005 Permanently applied None
MF99004 Permanently applied None
MF99003 Permanently applied None
MF99002 Permanently applied None
More...
F3=Exit F11=Display alternate view F17=Position to F12=Cancel
Figure 1-4 Displaying the Technology Refresh PTF level installed
Marker PTFs for this product are in the format of REnnnnn. The highest number Marker PTF
on your system, matched with Table 1-1 on page 7, indicates the Resave level for this
product.
PTF IPL
Opt ID Status Action
RE13260 Permanently applied None
RE13015 Permanently applied None
RE12249 Permanently applied None
RE12066 Permanently applied None
RE11221 Permanently applied None
RE11195 Superseded None
RE11067 Permanently applied None
RE10187 Permanently applied None
More...
F3=Exit F11=Display alternate view F17=Position to F12=Cancel
Figure 1-5 Displaying the IBM i Resave level installed
Tip: If you are skipping one or more Technology Refresh levels, in certain environments,
installation time could be shorter by first installing the latest LIC Resave. For example,
when upgrading from TR4 to TR7, first restore LIC from the latest Resave, apply the latest
Cumulative PTF package, and then install the Technology Refresh PTF Group SF99707.
A Technology Refresh PTF is a PTF that can be installed just like any other PTF.
A Technology Refresh PTF must be permanently applied before subsequent PTFs that
require it can be applied. It is considered a preferred practice to apply the Technology Refresh
PTF permanently when it is first applied.
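As a sketch, permanently applying the current Technology Refresh PTF (MF99007, the TR7
PTF that is named in this chapter) at the next unattended IPL could look like the following
command; verify the options with the command prompter before use:

APYPTF LICPGM(5770999) SELECT(MF99007) APY(*PERM) DELAYED(*YES)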
For more information about how to avoid or reduce the impact of a double IPL during PTF
installation, see “Preventing or reducing the impact of a double IPL” on page 13.
Tip: Before you order a Technology Refresh PTF Group, verify that the level of the PTF
Group you need is not already on your system.
A Technology Refresh PTF Group is a set of PTFs that is installed like any other IBM i PTF
Group. You can use the Install Program Temporary Fix (INSPTF) command or Option 8 from
the GO PTF menu.
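As a sketch, the group could be installed from a virtual optical device with the following
command; the device name OPTVRT01 is an assumption, and INSTYP(*DLYALL) marks the PTFs
as delayed for apply at the next IPL (verify the parameters with the F4 prompter):

INSPTF LICPGM((*ALL)) DEV(OPTVRT01) INSTYP(*DLYALL)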
Important: The Technology Refresh PTF must be permanently applied before subsequent
PTFs can be loaded, which requires an IPL.
For more information about how to avoid or reduce the impact of a double IPL during PTF
installation, see “Preventing or reducing the impact of a double IPL” on page 13.
To install the latest Technology Refresh PTF Group, check the Cover Letter and Preventive
Service Planning (PSP) information for SF99707 on the following websites:
PTF Cover Letters
http://www-912.ibm.com/a_dir/as4ptf.nsf/as4ptfhome
Preventive Service Planning - PSP
http://www-912.ibm.com/systems/electronic/support/s_dir/sline003.nsf/sline003home
You install a Resave by following the instructions in the IBM Software Installation Manual.
The Technology Refresh PTF must be permanently applied on the system before the PTF
that requires it can be loaded. It is a preferred practice to keep a system up to date on
Technology Refresh PTFs to avoid the additional time it would take to apply the Technology
Refresh PTF. PTFs that do not involve parts or modules that are contained in a Technology
Refresh PTF do not require the Technology Refresh PTF to be applied before they can be
loaded.
Ordering and installing the Technology Refresh Resave also ensures that the Technology
Refresh PTF is permanently applied and that the double IPL is avoided. The new function
PTF SI43585 is available to automate, but not eliminate, any additional IPLs required during
PTF installation. When you are installing PTFs, there are two conditions under which you
must perform an IPL to apply some of the PTFs, restart the PTF installation after that first
IPL, and then perform another IPL to apply the delayed PTFs:
When installing a cumulative PTF package that contains special handling pre-apply PTFs
When installing a technology refresh PTF at the same time as a technology refresh
requisite PTF
If an additional IPL is required, the PTF installation parameters are saved and used during the
next IPL. Instead of seeing the Confirm IPL for Technology Refresh or Special Handling PTFs
window, you see a new message CPF362E: “IPL required to complete PTF install
processing”. However, if you select Automatic IPL=Y on the Install Options for PTFs window,
you do not see any other messages or windows because a power down then occurs. On the
next normal IPL, your second “GO PTF” completes during the “PTF Processing” IPL step in
the SCPF job, and then a second IPL of the partition is done automatically. By the time you
sign on after that second IPL, your PTFs are all activated and ready to go.
If an IPL is required for a technology refresh, PTF SI43585 supports installing only from a
virtual optical device or *SERVICE (PTFs downloaded electronically to save files). If you are
installing from a physical optical device, you must perform the additional IPL and second GO
PTF manually. If you received your PTFs on physical DVDs, create an image catalog from the
DVDs and use the new support.
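As a sketch, creating and loading an image catalog from physical DVDs could look like the
following commands; the catalog name, directory, and device names are assumptions:

CRTIMGCLG IMGCLG(PTFCLG) DIR('/ptfimages') CRTDIR(*YES)
ADDIMGCLGE IMGCLG(PTFCLG) FROMDEV(OPT01)
CRTDEVOPT DEVD(OPTVRT01) RSRCNAME(*VRT)
VRYCFG CFGOBJ(OPTVRT01) CFGTYPE(*DEV) STATUS(*ON)
LODIMGCLG IMGCLG(PTFCLG) DEV(OPTVRT01)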
Some large and complex projects are better suited for an actual release, where the entire
body of code in IBM i is rebuilt. Developers are working on the next release of IBM i and
architects are looking at possible content for the next release.
Now, Technology Refreshes include only LIC. Enhancements for other levels of IBM i require
a release. This situation is similar to point/modification releases, which contained only LIC.
A Technology Refresh is an update of an existing release through a PTF Group that contains
PTFs in that release's code stream. When an IBM i Technology Refresh is installed, the
release level of the system does not change, and the system continues to use PTFs for that
release.
IBM i is one of the most secure operating systems in the industry. From the beginning of its
development, security has been an important part of its design.
IBM i 7.1 provides an important set of enhancements that, with leading-edge security
solutions provided by IBM and Business Partners, not only reduce risk but also simplify
security management and facilitate compliance requirements.
This chapter describes the following security enhancements for IBM i 7.1:
User profile enhancements
Object audit enhancements
Data encryption enhancements
Security enhancements for DB2 for i
DB2 for i security services
Real-time database activity monitoring
Security enhancements for printing
TLS V1.1 and V1.2 support
Java security information
PowerSC Tools for IBM i
For more information about IBM and Business Partner security solutions for IBM i, go to:
http://www-03.ibm.com/systems/power/software/i/security/partner_showcase.html
The User Profile controls can be used with the Display Expiration Schedule (DSPEXPSCD)
command. This combination simplifies the task of administering temporary user profiles.
*USREXPITV Calculated based on the value that is entered in the user expiration interval
parameter.
Date Specifies a date when the user profile expires. The date must be in the job date
format.
Remember: The parameters can be seen only when you use the 5250 user interface.
Display Expiration Schedule (DSPEXPSCD) shows a list of user profiles and their expiration date
(Figure 2-1 on page 17). If no user profiles are set to automatically expire, an empty panel is
generated.
Deleting a profile: The option to delete a profile on an expiration date is only available
through CHGEXPSCDE. Be careful when you use the *DELETE option.
Owned
User Expiration Object New
Profile Date Action Option Owner
CHUA 12/23/10 *DELETE *CHGOWN PREMA
MARIE 12/23/10 *DISABLE
Bottom
F3=Exit F11=Primary group info F12=Cancel F17=Top F18=Bottom
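The entries in this panel could be created with the Change Expiration Schedule Entry
(CHGEXPSCDE) command. The following is a minimal sketch based on the profiles shown;
verify the parameter names with the command prompter on your system:

CHGEXPSCDE USRPRF(MARIE) EXPDATE('12/23/10') ACTION(*DISABLE)
CHGEXPSCDE USRPRF(CHUA) EXPDATE('12/23/10') ACTION(*DELETE) OWNOBJOPT(*CHGOWN PREMA)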
For each regular CL command, and proxy CL commands in the QSYS library, one exit
program can be registered for the Change Exit Point, and up to 10 exit programs can be
registered for the Retrieve Exit Point:
Change Exit Point The exit program is called by the command analyzer before it passes
control to the prompter.
Retrieve Exit Point The exit program is called by the command analyzer before or after
execution of the command.
You can use the enhancement to register an exit program for the
QIBM_QCA_RTV_COMMAND exit point and indicate that you want the exit program to be
called after control returns from the CPP.
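As a sketch, registering such an exit program could look like the following command; the
library, program name, and program number are assumptions, and the exit point format name
should be verified in the Knowledge Center:

ADDEXITPGM EXITPNT(QIBM_QCA_RTV_COMMAND) FORMAT(RTVC0100) PGMNBR(1) PGM(MYLIB/CMDEXIT)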
These commands generate CD (Command String) audit records for each CL command that
is run by the specified user profile. The model file QASYCDJ5 describes the fields in the CD
audit records. One of these fields, CDCLP, is redefined to convey more information about how
the audited CL command was run.
Before this enhancement, CDCLP had only two possible values, as shown in Table 2-2.
Table 2-2 Original CDCLP field values for a QASYCDJ5 model file
Value Description
Y If the command was run from a compiled OPM CL program, a compiled ILE CL module
that is part of an ILE program or service program, or an interpreted REXX procedure.
Now, the CDCLP field has the values that are shown in Table 2-3.
Table 2-3 New CDCLP field values for a QASYCDJ5 model file
Value Description
Y If the CL command is run from a compiled CL object, for instance an OPM CL program or
an ILE CL module that is bound into an ILE program or service program.
R Indicates that the CL command is being run from an interpreted REXX procedure.
E Indicates that the command was submitted by passing the command string as a parameter
to one of the Command Analyzer APIs: QCMDEXC, QCAPCMD, or QCAEXEC.
B When the command is not being run from compiled CL or interpreted REXX or through a
Command Analyzer API, and is in a batch job. The typical case for a B value is when the
command is in a batch job stream that is run by using the Start Database Reader (STRDBRDR)
or Submit Database Job (SBMDBJOB) command, or is specified for the CMD (Command to run)
parameter on a Submit Job (SBMJOB) command.
N Indicates that the command was run interactively from a command line or by choosing a
menu option that runs a CL command.
The new values for the CDCLP field map to the values for the ALLOW (Where allowed to run)
parameter on the Create Command (CRTCMD) command as follows:
'Y' maps to *IPGM, *BPGM, *IMOD, and *BMOD
'R' maps to *IREXX and *BREXX
'E' maps to *EXEC
'B' maps to *BATCH
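As a sketch, the CD audit records can be extracted from the audit journal into an outfile in
the QASYCDJ5 format for analysis; the library and file names are assumptions:

DSPJRN JRN(QSYS/QAUDJRN) ENTTYP(CD) OUTPUT(*OUTFILE) OUTFILFMT(*TYPE5)
       OUTFILE(MYLIB/CDENTRIES)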
One primary mechanism that is used to provide this added security is to obtain control
through various exit points and to parse the SQL statements. However, SQL parsing is not
only complicated, but continually changing as new SQL functions are introduced. In some
cases, it is impossible for a SQL parsing solution to handle concepts such as aliases,
user-defined functions, and user-defined table functions. For those reasons, this approach is
not preferred.
Several releases ago, a single open database exit was implemented in IBM i to relieve user
and third-party software from having to parse SQL statements by providing a list of the files
that are referenced in the SQL statement. Although this exit solved the parsing problem, the
exit was started for every full open. Depending on the amount of resources available and the
number of full opens per second in a system, performance can be a problem.
IBM i 7.1 added the capability to have an exit program that is registered for the
QIBM_QDB_OPEN exit point called only when a full open occurs where at least one of the
tables that are referenced by the query has object auditing enabled.
Following are three examples that show how to register an exit program for the
QIBM_QDB_OPEN exit point:
The exit program is called if any object auditing is encountered by specifying *OBJAUD in
the PGMDTA parameter:
ADDEXITPGM EXITPNT(QIBM_QDB_OPEN) FORMAT(DBOP0100) PGMNBR(7)
PGM(MJATST/OPENEXIT2) THDSAFE(*YES) TEXT('MJA') REPLACE(*NO)
PGMDTA(*JOB *CALC '*OBJAUD')
The exit program is called when *ALL object auditing is encountered by specifying
*OBJAUD(*ALL) in the PGMDTA parameter:
ADDEXITPGM EXITPNT(QIBM_QDB_OPEN) PGMDTA(*JOB *CALC '*OBJAUD(*ALL)')
The exit program is called when *CHANGE object auditing is encountered by specifying
*OBJAUD(*CHANGE) in the PGMDTA parameter:
ADDEXITPGM EXITPNT(QIBM_QDB_OPEN) PGMDTA(*JOB *CALC '*OBJAUD(*CHANGE)')
For performance reasons, the open exit information is cached. Whenever an exit program is
added to or removed from the QIBM_QDB_OPEN exit point, only new jobs pick up the change.
With IBM i 7.1 enhancements, ASP encryption can now be turned on and off and the data
encryption key can be changed for an existing user ASP. These changes take a significant
amount of time because all the data in the disk pool must be processed. This task is done in
the background at low priority with a minimal impact on performance.
For more information about ASP encryption enhancements, see 8.1.4, “Encrypted ASP
enhancements” on page 381.
For more information about field procedures, see 5.2.7, “FIELDPROC support for encoding
and encryption” on page 174.
In addition to the self-protecting security features, the operating system and DB2 for i include
built-in encryption capabilities that enable customers to add an additional layer of security
around their data.
You can use Query Manager to design and format printed reports from processed queries.
Those queries can be included in programs that are written in several high-level languages.
For this reason, more user profile support is also provided to allow administrators to tailor QM
defaults, limits, and privileges for each user. QM is the only interface on IBM i with an option
to grant and revoke permission to run a specific SQL statement (per user); this is a different
capability and concept because it is not object based.
Note: The IBM licensed program 5770-ST1 “IBM DB2 Query Manager and SQL
Development Kit for IBM i” is required only for Query Manager and SQL application
development. After the applications are created, they can be run on other servers that are
running IBM i that do not have this product installed by using DB2 for i database manager
support.
Figure 2-2 shows an example of the Work with Query Manager profiles option from the
STRQM menu. To get there, choose a user profile, select Y in the Select allowed SQL
statements parameter, and press Enter.
SQL SQL
Opt Statement Opt Statement
MERGE SET ENCRYPTION PASSWORD
REFRESH TABLE SET PATH
RELEASE SAVEPOINT SET SCHEMA
RENAME SET TRANSACTION
REVOKE UPDATE
ROLLBACK
SAVEPOINT
1 SELECT
SET CURRENT DEGREE
Bottom
F3=Exit F12=Cancel F21=Select all F22=QM Statement
Figure 2-2 Select Allowed SQL Statements panel
With IBM i 7.1, you can now audit changes that are made to a Query Manager profile if
auditing is enabled for AUDLVL(*SECURITY). A new journal entry type of X2 contains the old
and new Query Manager profile values.
An outfile is not provided for this journal entry. Instead, the QSYS2.SQLQMProfilesAudit view
can be queried as shown in Example 2-1.
Example 2-1 Creating a permanent table that contains the new journal entries
CREATE TABLE mytablename AS
(SELECT * FROM QSYS2.SQLQMProfilesAudit) WITH DATA
The audit journal entry is externalized using a DB2 for i supplied view in QSYS2, similar to how
the current values of profiles are provided using the QSYS2.SQLQMprofiles view.
Each view entry returns a set of data that is available for all journal entries, identifying when
the change was made and by whom (a query sketch follows this list):
Journal entry time stamp
Current user
Job name, job user, and job number
Thread
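For example, a minimal sketch that lists the most recent Query Manager profile changes
first, assuming that the journal entry time stamp is the first column of the view:

SELECT * FROM QSYS2.SQLQMProfilesAudit ORDER BY 1 DESC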
Most of the values that are stored in the QM profile have only two possible values. For
example, the values for the authority to use the INSERT statement are Y or N.
The following QM profile values have more than two possible values:
Default Library
Default object creation authority
Relational database connection
Sample size of Query
Maximum records that are allowed on an interactive run
Default collection for QM tables
Query Data Output Type
Table and library for output
Job description and library for batch run
Commitment control lock level
Default printer name
When an SQL statement refers to any column that has the SECURE attribute that is set to YES,
all host variable values appear as “*SECURE” when examined from the database monitor
and plan cache, unless the security officer has started the database monitor or the security
officer is accessing the plan cache.
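A column is given the SECURE attribute by calling the QSYS2.SET_COLUMN_ATTRIBUTE() procedure
that is described later in this chapter. The following call is a minimal sketch, assuming the
(table schema, table name, column name, attribute) parameter order and the PRODLIB.EMPLOYEE
table from the query that is shown below:
CALL QSYS2.SET_COLUMN_ATTRIBUTE('PRODLIB', 'EMPLOYEE', 'SALARY', 'SECURE YES')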
Access: Only the QSECOFR user can see the variable values. Users that have the
*SECOFR user special authority but are not the QSECOFR user see “*SECURE” for the
variable values.
Figure 2-3 Results of the previous SELECT command that shows the SECURE columns
The database performance analyst sees the output that is shown in Figure 2-4 with the
performance analysis results for this query:
select * from prodlib.employee where salary > 20000;
Function usage support on IBM i has been in place for many releases, providing an
alternative security control for Job Watcher, Cluster Management, IBM Tivoli® Directory
Server administration, Backup Recovery and Media Services for i (BRMS), and other
components.
DB2 for i includes several functions for database administration, monitoring, and access
control.
Because *JOBCTL authority allows a user to change many system critical settings that are
unrelated to database activity, it is not an easy decision for security officers to grant this
authority. In many cases, the request for *JOBCTL is not granted to database analysts, thus
prohibiting the usage of the full set of database tools.
IBM i 7.1 provides a new function usage group called QIBM_DB. The function IDs included for
SQL analysis and tuning are:
QIBM_DB_SQLADM Function usage for IBM i Database Administrator tasks
QIBM_DB_SQLMON Function usage for IBM i Database Information tasks
These new function IDs allow configurations in which even users with the *ALLOBJ special
authority cannot perform DB2 administration or DB2 monitoring tasks. A group profile can
also be specified in a function's setting. If a user profile is associated with several
supplemental group profiles, access is granted if at least one of those group profiles is set to
*ALLOWED for a particular function. Adopted authority from program owner profiles has no
effect on allowing access to DB2 administration and DB2 monitoring; access is always
determined by the user profile under which a program with adopted authority runs.
This is an alternative to a user exit program approach. Using a function usage ID has several
advantages including the fact that no coding is required, and it is easy to change and is
auditable.
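For example, the Change Function Usage (CHGFCNUSG) command can grant one of these function
IDs to a database analyst who does not have *JOBCTL. The following command is a minimal
sketch, assuming a hypothetical DBANALYST user profile:
CHGFCNUSG FCNID(QIBM_DB_SQLADM) USER(DBANALYST) USAGE(*ALLOWED)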
The dilemma when you choose to enforce password validation on the servers is that every
user who needs to connect to the server requires administrative work to be done on their
behalf. This work uses the Add Server Authentication Entry (ADDSVRAUTE) command for every
target server, or requires every user to supply a user ID and password on each CONNECT
statement.
You can use the special value QDDMDRDASERVER, which is added to the Add Server
Authentication Entry (ADDSVRAUTE) command SERVER parameter, to simplify this process. This
special value allows an administrator to configure a user to work with all possible DDM or
DRDA connections to any system in the Internet Protocol network through a common user ID
and password. After it is configured for a specific user, no additional changes need to be
made for that user, as systems are added to the Relational Database Directory.
As before, this setup does not allow a customer to connect over DRDA/DDM unless they
specify a valid user ID and password on the server authentication entry or
CONNECT statement.
When you attempt a DRDA connection over TCP/IP without specifying a user ID and
password, the DB2 for i client (AR) checks the server authentication list for the user profile
under which the client job is running. If it finds a match between the RDB name on the
CONNECT statement and the SERVER name in an authentication entry (which must be in
uppercase), the associated USRID parameter in the entry is used for the connection user ID. If
a PASSWORD parameter is stored in the entry, that password is also sent on the connection
request.
For DRDA connection requests, if a server authentication entry that specifies the system
name exists and a user ID and password are not passed in a CONNECT statement, the user
ID and password that are associated with that server authentication entry take precedence
over the server authentication entry for QDDMDRDASERVER.
For DRDA connection requests, if a server authentication entry that specifies the system
name exists and a user ID and password are passed in a CONNECT statement, the user ID and
password that are associated with the CONNECT statement take precedence over any
server authentication entry.
For RDB DDM file connection requests, the server authentication entry that specifies the
system name takes precedence over the server authentication entry for QDDMDRDASERVER. For
non-RDB DDM file connection requests, the server authentication entry QDDMSERVER takes
precedence over the server authentication entry for QDDMDRDASERVER.
For example, suppose that you have an environment with three systems (SYSA, SYSB, and
SYSC), where:
SYSA is the application requester (AR).
SYSB and SYSC are the application servers (AS).
You now have two connections with the user ID yourotheruid and password yourotherpwd.
This situation occurs because server authentication entries that specify the real system name
take precedence over server authentication entries that specify QDDMDRDASERVER. You run the
following commands on SYSA:
ADDSVRAUTE USRPRF(YOURPRF) SERVER(QDDMDRDASERVER) USRID(youruid) PASSWORD(yourpwd)
ADDSVRAUTE USRPRF(YOURPRF) SERVER(SYSB) USRID(yourotheruid) PASSWORD(yourotherpwd)
STRSQL
CONNECT TO SYSB user testuserid using 'testpassword'
CONNECT TO SYSC
You now have two connections. The connection to SYSB is made with the user ID testuserid
and password testpassword. This situation occurs because specifying the user ID and
password on a CONNECT statement takes precedence over server authentication entries.
The connection to SYSC is made with user ID 'youruid' and password 'yourpwd', because it
uses the QDDMDRDASERVER authentication entry when no other server authentication entry
exists specifying the system name. You run the following commands on SYSA:
ADDSVRAUTE USRPRF(YOURPRF) SERVER(QDDMDRDASERVER) USRID(youruid) PASSWORD(yourpwd)
ADDSVRAUTE USRPRF(YOURPRF) SERVER(QDDMSERVER) USRID(youruid2) PASSWORD(yourpwd2)
ADDSVRAUTE USRPRF(YOURPRF) SERVER(SYSC) USRID(yourotheruid) PASSWORD(yourotherpwd)
CRTDDMF FILE(QTEMP/DDMF) RMTFILE(FILE) RMTLOCNAME(SYSB *IP)
CRTDDMF FILE(QTEMP/DDMF2) RMTFILE(FILE) RMTLOCNAME(*RDB) RDB(SYSB)
CRTDDMF FILE(QTEMP/DDMF3) RMTFILE(FILE) RMTLOCNAME(*RDB) RDB(SYSC)
The authorization list catalogs are shown in Table 2-4. Each catalog is specific to a different
object type; for example, the QSYS2/SYSCOLAUTH catalog covers columns.
Example 2-4 shows one of the SQL statements that can be used to query one of the
predefined catalogs that are listed in Table 2-4.
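As a hedged sketch of such a query (the exact statement in Example 2-4 can differ), assuming
the QSYS2.SYSTABAUTH catalog with its TABLE_SCHEMA and TABLE_NAME columns and the QGPL/CUST
table that Figure 2-6 describes:
SELECT * FROM QSYS2.SYSTABAUTH
WHERE TABLE_SCHEMA = 'QGPL' AND TABLE_NAME = 'CUST'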
Figure 2-6 shows the results: QSECOFR, as the owner of the QGPL/CUST SQL table, has all
authorities to the table, and the authority for user CZZ62690 is derived from the authorization
list CUSTAUTL.
For more information about DB2 for i services, see 5.5.16, “DB2 for i Services” on page 272.
QSYS2.USER_INFO view: This view contains information about user profiles. For more
information, see 2.5.1, “QSYS2.USER_INFO view” on page 30.
QSYS2.FUNCTION_INFO view: This view contains details about function usage identifiers. For
more information, see 2.5.2, “QSYS2.FUNCTION_INFO view” on page 30.
QSYS2.FUNCTION_USAGE view: This view contains function usage configuration details. For
more information, see 2.5.3, “QSYS2.FUNCTION_USAGE view” on page 31.
QSYS2.GROUP_PROFILE_ENTRIES view: This view contains one row for each user profile that is
part of a group profile. For more information, see 2.5.4, “QSYS2.GROUP_PROFILE_ENTRIES
view” on page 31.
QSYS2.SQL_CHECK_AUTHORITY() UDF: This scalar function indicates whether the user is
authorized to query the specified *FILE object. For more information, see 2.5.5,
“SQL_CHECK_AUTHORITY() UDF procedure” on page 31.
QSYS2.SET_COLUMN_ATTRIBUTE() procedure: This procedure sets the SECURE attribute for a
column so that variable values used for the column cannot be seen in the database monitor
or plan cache. For more information, see 2.4.2, “Database Monitor and Plan Cache variable
values masking” on page 22.
See the following website for more information about the column names, types, and
information returned:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20i%20Technology%20Updates/page/QSYS2.USER_INFO%20catalog
See the following website for more information about the column names, types, and
information returned:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20i%20Technology%20Updates/page/QSYS2.FUNCTION_INFO%20catalog
Only users with *SECADM special authority can examine the function usage configuration
details that are returned with this view. Users without *SECADM authority who attempt to
reference this view get an SQLCODE -443 error.
See the following website for more information about the column names, types, and
information returned:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20i%20Technology%20Updates/page/QSYS2.FUNCTION_USAGE%20catalog
Note: A *USRPRF is considered to be a group profile when at least one user profile refers
to it by name in the GRPPRF or SUPGRPPRF fields.
This function has two parameters, library name and file name, which specify the *FILE object.
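For example, the following query is a minimal sketch that checks authority to the QGPL/CUST
file that is used earlier in this chapter, assuming that the returned indicator can be selected
from SYSIBM.SYSDUMMY1:
SELECT QSYS2.SQL_CHECK_AUTHORITY('QGPL', 'CUST')
FROM SYSIBM.SYSDUMMY1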
Existing security solutions, such as intrusion detection systems (IDSs), lack knowledge about
the database protocols and structures, which is required to detect inappropriate activities. For
more information about IDS support on IBM i, go to the following website:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp
Today, an increasing number of industries are subject to compliance mandates that generally
require organizations to detect, record, and remediate unauthorized access or changes to
sensitive data, including access or changes by privileged users, while providing a secure audit
trail to validate compliance.
Information security and database managers struggle to implement these types of controls,
especially regarding monitoring privileged users. Heightened focus on business reputation
risk and sensitive data protection is also driving closer internal scrutiny of controls. The result
is clear: Providing effective database security and compliance is not easy.
The following sections discuss more logging and prefiltering capabilities that are supplied for
the database monitor.
Prefilter: The term prefilter is related to filtering capabilities for a database monitor in the
recording process and the term filter is applied to the selection criteria in the collected
data.
Account string (ACCTNG), FTRCLTACG parameter: The user's accounting code (the ACGCDE
parameter on the user profile object).
There is support for a client-specific filter using the COMMENT parameter in Start Database
Monitor (STRDBMON). However, this support is limited to only one parameter, which is a
character value up to 50 characters long.
Figure 2-7 STRDBMON filter parameters
Example 2-5 shows how to use the FTRSQLCODE parameter to collect QQRID=1000 DBMON
records for all instances of SQL failures due to lock contention.
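A sketch of such a command, assuming SQLCODE -913 (row or object in use) as the lock
contention code of interest and a hypothetical QGPL/LCKMON output file, might look like this:
STRDBMON OUTFILE(QGPL/LCKMON) JOB(*ALL) FTRSQLCODE(-913)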
For more information about SQLSTATEs and SQLCODEs for DB2 for IBM i 7.1, see the SQL
messages and codes topic in the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/rzala/rzalakickoff.htm
When determining whether the current user's SQL must be captured in the SQL
Performance Monitor (database monitor) output, the command now also checks whether the
user is a member of a specified group profile.
Wildcard group profile names are allowed. For example, if you specify FTRUSER(ADMIN*) and
both ADMINGRP and ADMINGRP2 are group profiles, any SQL run by users in either group
is captured.
The SQL Performance Monitor interface field for “User” can also be used to specify the group profile.
For more information, see 5.5.9, “STRDBMON pre-filtering of QUERY/400 command usage”
on page 267.
Two view mechanisms can be used to start a database performance monitor on a view,
which improves performance and saves storage.
For the first mechanism, use input only columns to capture only a subset of the monitor data
in an underlying table.
In DB2 for i, the database performance monitor table has 276 columns. Assume that an
auditing application is interested in collecting only the SQL statement, the variable values,
and the information that identifies the user and job. This information is contained
in only 20 out of the 276 columns (the columns QQRID, QQJFLD, and QQI5 must also be
added to process the resulting view).
2. Create a view that has 276 columns that match the database monitor table columns, as
shown in Example 2-7. Only the 23 wanted columns are input / output columns; the others
are input only columns (those columns that are just CAST as NULL). The columns must
have the same attributes and be in the same order as in the base database monitor table.
FROM MJATST.looptable4
RCDFMT QQQDBMN;
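The two lines above are the tail of the CREATE VIEW statement in Example 2-7. As a much
reduced illustration of the pattern (a hypothetical five-column view standing in for the
276-column one), the wanted columns pass straight through, while the unwanted columns are
CAST as NULL so that values inserted through them are never stored:
CREATE VIEW MJATST.DBMONV (QQRID, QQJFLD, QQI5, QQ1000, QQSTIM) AS
SELECT QQRID,                   -- wanted: record identifier
       QQJFLD,                  -- wanted: join field
       QQI5,                    -- wanted: needed to process the view
       QQ1000,                  -- wanted: SQL statement text
       CAST(NULL AS TIMESTAMP)  -- unwanted: input only, never stored
FROM MJATST.looptable4
RCDFMT QQQDBMN;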
By enhancing the database product to allow this view, any data that is written to the database
performance monitor view results in only 23 columns in the underlying base table
(mjatst.looptable4). The storage that is used with this technique is a small fraction of a
traditional monitor, so the performance is better. The resulting smaller table contains only the
information necessary for auditing.
For the second view mechanism, use an INSTEAD OF trigger on the view to immediately
process a row of monitor data without storing any data.
By enhancing the database product to allow this view, any rows that are written to the
database monitor file are passed directly to the INSTEAD OF trigger so no monitor storage is
used.
For the view with an INSTEAD OF trigger, the elapsed time and the amount of storage that is
used is under the control of the INSTEAD OF trigger. For example, in Figure 2-8, the
INSTEAD OF trigger sends the data to another system. This action takes some processing
time, but no persistent storage is used on the system that is monitored.
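As a hedged sketch of this second mechanism (the trigger body, the MJATST.AUDIT_LOG table,
and the exact clause syntax are hypothetical), an INSTEAD OF trigger over the same view can
process each monitor row as it arrives:
CREATE TRIGGER MJATST.PROCESS_MON
INSTEAD OF INSERT ON MJATST.DBMONV
REFERENCING NEW AS N
FOR EACH ROW MODE DB2ROW
BEGIN
  -- Hypothetical handling: keep only full SQL statement records (QQRID 1000);
  -- the trigger in Figure 2-8 instead sends the data to another system.
  IF N.QQRID = 1000 THEN
    INSERT INTO MJATST.AUDIT_LOG VALUES (N.QQJFLD, N.QQ1000);
  END IF;
END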
2.6.3 InfoSphere Guardium database activity monitoring support for DB2 for i
IBM InfoSphere® Guardium® is an enterprise information database audit and protection
solution that helps enterprises protect and audit information across a diverse set of relational
and non-relational data sources. These sources include Oracle, Teradata, IBM IMS™, VSAM,
Microsoft Sharepoint, IBM Netezza®, DB2 for z/OS®, and DB2 for Linux, UNIX, and
Windows.
With InfoSphere Guardium V9.0, DB2 for i can now be included as a data source. You can
use this configuration, plus the database security enhancements included in DB2 for i, to
monitor database access from native interfaces and through SQL in real time, without
changes to databases or applications and without affecting performance.
The probes forward transactions to a hardened collector in the network, where they are
compared to previously defined policies to detect violations. The system can respond with
policy-based actions, such as generating real-time alerts.
Figure 2-9 shows a typical deployment of InfoSphere Guardium database activity monitoring.
The new method is primarily for auditing database access. If you require auditing on a greater
variety of non-database object access, the existing IBM i auditing support of exporting and
importing the audit journal can still be used.
More information about the capabilities of InfoSphere Guardium can be found on the following
website:
http://www-01.ibm.com/software/data/guardium/
Also, see the SSL concepts topic in the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/rzain/rzainconcepts.htm
For more information about how to configure this support, see 13.5, “IBM HTTP Server for i
support for TLSv1.1 and TLSv1.2” on page 544.
For more information about Java for IBM i, see 15.7, “Java for IBM i 7.1” on page 598.
PowerSC Tools for IBM i helps IBM i clients ensure a higher level of security and
compliance by:
Simplifying the management and measurement of security and compliance
Reducing costs of security and compliance
Reducing security exposures
Improving audit capabilities to satisfy reporting requirements
Following are the IBM Systems Lab Services that are related to IBM i security:
IBM i Security Assessment (iSAT)
An experienced IBM i consultant will collect and analyze data using PowerSC Tools for
IBM i. The engagement results in a comprehensive report with findings and
recommendations for improved compliance and security remediation.
IBM i Single Sign On (SSO) Implementation
SSO improves user productivity and saves help desk costs. In this services engagement,
an experienced IBM consultant will advise you on SSO options and provide
implementation assistance on using the SSO suite components of the PowerSC Tools for
IBM i.
IBM i Security Remediation
An experienced IBM consultant will advise you on the best practices to address IBM i
security and compliance issues. The consultant will provide remediation assistance on
using the PowerSC Tools for IBM i.
IBM i Encryption
An experienced IBM consultant will advise you on best practices to implement data
encryption on IBM i using the PowerSC Tools for IBM i Encryption Suite. Tape encryption
implementation services are also available.
Compliance Assessment and Reporting Tool: Daily compliance dashboard reports at the LPAR,
system, or enterprise level. Enables the compliance officer to demonstrate adherence to
predefined security policies.
Security Diagnostics: Reports detailing security configuration settings and identifying
deficiencies. Reduces operator time involved in remediating security exposures.
Privileged Access Control: Controls the number of privileged users. Ensures compliance with
industry guidelines on privileged users.
Secure Administrator for SAP: Manages and controls access to powerful SAP administrative
profiles. Eliminates sharing of SAP administrative profiles with enhanced security auditing.
Access Control Monitor: Monitors security deviations from application design. Prevents user
application failures due to inconsistent access controls.
Network Interface Firewall for IBM Exit Points: Controls access to exit point interfaces
such as ODBC, FTP, RMTCMD, and so on. Reduces the threat of unauthorized security breaches
and data loss.
Audit Reporting: Consolidates and reduces security audit journal information. Simplifies audit
analysis for compliance officers and auditors.
Certificate Expiration Manager: Simplifies management of digital certificate expiration.
Helps operators prevent system outages due to expired certificates.
Password Validation: Enhances IBM i operating system protection with stricter password
validation. Enables security officers to ensure that user passwords are not trivial.
Single Sign On (SSO) Suite: Simplifies implementation of SSO and password synchronization.
Reduces password resets and simplifies the user experience.
Encryption Suite: Simplifies implementation of cryptography using IBM i operating system
capabilities. Helps application developers meet data security standards and protect critical
data.
For more information about PowerSC Tools for IBM i, see the PowerSC Tools for IBM i
presentation available at the following website:
https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/c9b3caa2-f760-48ec-8307-46c67391cb2e/page/3315381b-b389-4c02-a303-3122ece9d777/attachment/996a6920-646d-420a-ae24-10724e47e4ae/media/PowerSCTools%20forIBM%20i.pdf
After it describes the basics, this chapter describes the more advanced Backup Recovery and
Media Services (BRMS) product in 3.2, “New and improved BRMS for IBM i functions” on
page 50. The chapter then addresses the new BRMS functions and capabilities that were
added to IBM Navigator for i and IBM Systems Director in 3.3, “BRMS enhancements to GUI
and web interfaces” on page 84.
A list of references to more information about these topics is included at the end of the
chapter.
Although restore time savings vary depending on the device, media format, and position of
the object on tape, tests restoring the last object from a tape that contains 1.1 million IFS
objects reduced object restore time from 22 minutes to less than 3 minutes.
Save operations now track the physical media position of each saved object. This media
position is a 32-character hexadecimal field in the output files of the various save commands.
Restore commands have a new POSITION parameter, which is used to specify the
hexadecimal position value that appears in those output files.
The following restore interfaces support the POSITION parameter:
Restore Library (RSTLIB), Restore Object (RSTOBJ) and Restore IFS Object (RST)
commands.
QsrRestore and QSRRSTO application programming interfaces.
QsrCreateMediaDefinition application programming interface to create a media definition
for use by parallel restores.
BRMS supports the POSITION parameter.
The default value for the POSITION parameter is the special value *FIRST, which restores
using the current search-from-the-beginning mode. When you use the POSITION (object
location) parameter and value, you must also specify the SEQNBR parameter with the correct
sequence number of the saved object.
In Example 3-1, the Restore Object (RSTOBJ) command restores the SYSTEMS file to the
HARDWARE library. The saved object is sequence number 547 on the tape, the position of
the file on tape is 0000000000190490000000AB430009CA, and the tape device name is
TAP01.
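Assembled from those values, the command in Example 3-1 presumably looks like the
following sketch:
RSTOBJ OBJ(SYSTEMS) SAVLIB(HARDWARE) DEV(TAP01) OBJTYPE(*FILE)
       SEQNBR(547) POSITION(0000000000190490000000AB430009CA)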
Using ALWOBJDIF(*ALL) for database files is undesirable for the following reasons:
When a file-level difference occurs, the original file is renamed and the saved file
is restored.
When a member level difference occurs, the existing member is renamed and the saved
member is restored.
Because of the duplicated files and members, system resources are wasted and applications
might produce unpredictable results. This situation leaves you with a perplexing choice
between the renamed data or the restored data and leaves clean-up activities to perform.
The following restore menu options now default to ALWOBJDIF(*COMPATIBLE) when you restore
to another system:
21: Restore entire system
22: Restore system data only
23: Restore all user data
This function is available in SF99369 - IBM i integration with BladeCenter and System x
Group PTF Level 6. For more information, see the IBM i integration with BladeCenter and
System x website at:
http://www-03.ibm.com/systems/i/advantages/integratedserver/iscsi/solution_guide.html
To use this feature, specify the Defer ID (DFRID) parameter on the restore operation. DFRID is
an optional parameter on the Restore Library (RSTLIB) and Restore Object (RSTOBJ) commands
and on the Restore Object List (QSRRSTO) API.
In previous releases, the Defer ID parameter was used to defer the restore of views (logical
files) and Materialized Query Tables (MQTs) that were restored before their based-on files.
Note for BRMS users: If you were using the Recovery Order List to self-manage the order
of a BRMS restore to compensate for journal dependencies, that control is no longer
needed. Consider changing OVERRIDE(*YES) and
RCYORDLST(<user-specified-restore-order>) to RCYORDLST(*NONE).
Before DAOS, attachments were part of each Domino database (.nsf) file. If a large
attachment is sent to 40 mail users, there are 40 occurrences, one in each mail file.
With DAOS, attachments that exceed a configured size are pulled out of the .nsf files and are
placed as Notes Large Objects (NLOs). In the example that is described in the previous
paragraph, rather than one occurrence of the attachment being stored in each mail file, there
is one NLO stored per Domino server, thus saving storage space.
BRMS DAOS support was made available through PTFs for V6R1 (SI34918) and
V5R4 (SI31916).
When you configure DAOS on Domino servers, be careful with the configuration of the
attachment size that is externalized into NLOs. If you select a small size, many NLO
objects can be created, each of which is an IFS object, which can significantly lengthen the
IFS backup time. The default is 4096 bytes, but consider using 1,000,000 bytes or larger.
DAOS references
The following references provide more information about DAOS:
DAOS Quick Start Guide:
http://www.lotus.com/ldd/dominowiki.nsf/dx/daos-quick-start-guide
DAOS Best Practices:
http://www.lotus.com/ldd/dominowiki.nsf/dx/daos-best-practices
DAOS Estimator:
http://www.ibm.com/support/docview.wss?rs=463&uid=swg24021920
BRMS Online Lotus Server Backup Reference:
http://www-03.ibm.com/systems/i/support/brms/domdaos.html
The QIBMLINK link list for IBM IFS directories is now automatically added to the supplied
system backup control group *SYSGRP for new installations only. In V5R4 i5/OS and IBM i
6.1, QIBMLINK existed, but was not automatically added to *SYSGRP. Add QIBMLINK
manually to *SYSGRP in existing installations. QIBMLINK is used to save system IFS files
and directories.
The QALLUSRLNK link list was added in IBM i 7.1. QALLUSRLNK is used to save user IFS
directories and files. QALLUSRLNK is used with the QIBMLINK link list. QALLUSRLNK omits
the following directories:
/QSYS.LIB
/QDLS
/TMP/BRMS
/QIBM/ProdData
/QOpenSys/QIBM/ProdData
Usage of QIBMLINK followed by QALLUSRLNK enables more granularity than the *LINK
control group entry and ensures that IBM directories are restored before user directories if a
system restore is necessary. The usage of the QALLUSRLNK link list with the QIBMLINK link
list also avoids the duplication of saved data that occurs with the combination of using
QIBMLINK and *LINK.
The WRKMEDIBRM command previously could not show more than 999,999 objects in the saved
objects field. In IBM i 7.1, if more than 999,999 objects or files are saved in a single library or
save command, BRMS lists the actual number rather than 999,999 objects on the
WRKMEDIBRM Object Detail panel.
Figure 3-1 shows a WRKMEDIBRM Object Detail panel. The circled field shows a saved item
with more than 999,999 objects.
Figure 3-1 WRKMEDIBRM Object Detail panel with more than 999,999 objects
The BRMS recovery report QP1ARCY previously could not show more than 999,999 in the
saved objects column. In IBM i 7.1, if more than 999,999 objects or files are saved in a single
library or save command, BRMS lists the actual number, rather than 999,999 objects on the
BRMS recovery report QP1ARCY.
Figure 3-2 BRMS recovery report showing more than 999,999 objects saved
Additionally, the STRRCYBRM command was enhanced to override specific recovery elements
so that they use another time period. This process requires that the override recovery
element (OVERRIDE) parameter is set to *YES. This action affects the ACTION parameter values
of *REPORT and *RESTORE.
The STRRCYBRM command keywords that enable overriding recovery elements are as follows:
OVERRIDE: This keyword specifies whether you want to use another time period for a
specific recovery element.
– *NO: This value indicates that you do not want to specify another date and time
range for a specific recovery element. Recovery elements and overrides are ignored if
specified.
– *YES: This value indicates that you want to specify another date and time range for
a specific recovery element.
In Example 3-2, the STRRCYBRM command selects all restore items that are found regardless of
time, except for the operating system restore items, which select nothing newer than 6 p.m.
on 03/01/2010 because of the *SAVSYS override.
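Following the syntax that Example 3-3 shows, the command in Example 3-2 presumably
resembles this sketch (assuming an 18:00:00 time value and the OVERRIDE(*YES) keyword that
the override function requires):
STRRCYBRM PERIOD((*AVAIL *BEGIN) (*AVAIL *END)) OVERRIDE(*YES)
          RCYELEM((*SAVSYS ((*AVAIL *BEGIN) ('18:00:00' '03/01/10'))))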
In Example 3-3, the STRRCYBRM command selects all restore items found up to and including
03/01/2010, except for security data and configuration data, which is restored through the
current date.
Example 3-3 STRRCYBRM command using recovery element override of *SECDTA and *SYSCFG
STRRCYBRM PERIOD((*AVAIL *BEGIN) (*AVAIL '03/01/10'))
RCYELEM((*SECDTA ((*AVAIL *BEGIN) (*AVAIL *END)))
(*SYSCFG ((*AVAIL *BEGIN) (*AVAIL *END))))
Figure 3-5 BRMS recovery report updates for support of ALWOBJDIF(*COMPATIBLE) special value
Figure 3-6 OUTPUT support parameters in the BRMS Change Recovery Policy function
The output parameters are also available in the recovery defaults of the Work with Media
Information (WRKMEDIBRM) command. The WRKMEDIBRM recovery defaults apply only to the
current session and are not permanent.
Figure 3-10 shows the Recovery Defaults panel. The various output selection fields are in the
box. The output fields are nearly identical to the output keywords of the RSTLIBBRM
command.
Suppose that a system has a system ASP and three independent ASPs. Each of the ASPs
has library TOOLS and the entire system, including the IASPs, was saved. There are three
saves of library TOOLS. You can use this function to select which of the saves to restore.
The new keyword is SAVASP. The values for this parameter on the RSTLIBBRM command are as
follows:
*ANY
The saved library and objects are restored from any ASP's save. This value is the
default value, which works as it did before IBM i 7.1.
*SYSTEM
The saved library and objects are restored from the system ASP save.
ASP number 1 - 32
The library and objects are restored from the specified user ASP, or the system ASP if 1 is
specified.
ASP name
The library and objects are restored from the specified ASP save.
There are limitations on which objects can be restored to non-system ASPs; some object
types are not allowed in user or independent ASPs.
The red circle in Figure 3-13 shows the SAVASP keyword for the RSTLIBBRM command.
Figure 3-13 Save ASP (SAVASP) keyword of the BRMS RSTLIBBRM command
In IBM i 7.1, there are now options to configure which systems receive information about
backups and which do not. Distributed backup support reduces the data on systems that have
no need to know about the saved history from other systems. The distributed backup function
is available through the Change Network Group menu option of the BRMS System Policy
(BRMSSYSPCY) menu.
Suppose that maintenance is running and a second job issues a BRMS command that
attempts to use files in the QUSRBRM library that is used by the maintenance job. In this
case, a BRM6714 Job (job-name) is being held by job (maintenance-job-name)
message is issued to that second job’s message queue and is displayed (Figure 3-17).
The job that is running the STRMNTBRM command during the period where maintenance
requires exclusive use of the BRMS files lists, but does not display, message BRM6715 BRM
restricted procedure started and message BRM6716 BRM restricted procedure ended,
as shown in Figure 3-18.
Figure 3-18 BRM restricted procedure messages in the STRMNTBRM job message queue
In a typical maintenance run, you might see several pairs of these messages.
The Print Media Movement (PRTMOVBRM) command has a new *NEXT value on its TYPE
parameter. TYPE(*NEXT), combined with a future date specified in the select date (SLTDATE)
parameter, generates a report of future media moves.
Figure 3-19 Print Media Movement panel using the TYPE parameter value of *NEXT
Tip: The PRTMOVBRM command has multiple functions for date entry and calculations in
the “Select dates” section. Be sure to review the details of these options in the help text by
pressing F1 with the cursor in the field.
Figure 3-20 WRKMEDBRM Work with Media Panel with Remove volume error status option
When it is in the *INZ status, the media volume can be reinitialized by running one of the
following commands:
Work with Media using BRM (WRKMEDBRM) command option 10
Work with Media Library Media (WRKMLMBRM) command option 5
Initialize Media using BRM (INZMEDBRM) command
This new function is also available through the IBM Systems Director Navigator for i web
interface and the IBM i Access graphical user interface (GUI) client.
When media is marked for duplication, BRMS no longer expires the media when the Start
Maintenance for BRM (STRMNTBRM) command, the Start Expiration for BRM (STREXPBRM)
command, or the Work with Media using BRM (WRKMEDBRM) command option 7 (expire) is run.
Figure 3-22 Error message when you attempt to expire a volume marked for duplication
The media position function is automatically and invisibly used by BRMS, but requires object
level detail (*YES, *OBJ, *MBR) specified for the saved items in the control group, or on the
Save Library using BRM (SAVLIBBRM) command. BRMS saves retain the media positions in
the BRMS database files and BRMS restores retrieve the media positions from the BRMS
database files.
3.2.14 BRMS support for the special value *COMPATIBLE for ALWOBJDIF
BRMS restore functions support the *COMPATIBLE special value for the ALWOBJDIF parameter
described in 3.1.2, “New ALWOBJDIF (*COMPATIBLE) restore option” on page 47.
RDX is available natively attached to an IBM i partition by using either USB or SATA
connectivity, or it can be virtualized through VIOS or iVirtualization. The SATA support
was implemented back to IBM i 6.1 through a PTF. The RDX dock is available in either a
5.25 inch internal (SATA or USB) or an external USB version. The dock supports all RDX
cartridges. The cartridges are reliable and rugged, and they can be shipped through courier
transport. Because the media is not tape-based, there is no requirement to clean the drive.
For more information about the RDX standard, see the RDX consortium website at:
http://www.rdxstorage.com/rdx-technology
RDX devices show up as RMSxx devices, but are classified as optical (random access
spinning media) devices and can be used by optical, not tape, commands.
BRMS supports these devices as optical devices as well. The following limitations apply:
The Media Policy option ‘Mark history for duplication’ is restricted to *NO. BRMS does not
allow the duplication of optical history items.
BRMS does not track opposite side volume identifiers or double-sided volumes. Each
piece of media is viewed as a single volume.
There is no optical support for the following BRMS functions: Dump BRMS (DMPBRM), add
media information to BRMS (ADDMEDIBRM), extract media information (EXTMEDIBRM), and
print media exceptions (PRTMEDBRM). There is no reclaim support for optical / RDX devices.
You can specify only one device parameter per BRMS operation; optical does not support
cascading or parallel devices.
Optical media also cannot be shared on a BRMS network. Remote duplication is not
supported.
DUPMEDBRM duplicates only entire optical volumes. The output volume must have the exact
same physical characteristics as the input volume.
Optical devices do not support software encryption.
To remove all overrides for move policies that allow movement when a volume is marked
for duplication, run the following command:
CALL QBRM/Q1AOLD PARM('MOVMRKDUP ' '*CLEAR')
Statement of direction: In releases that follow IBM i 7.1, this limitation will be removed.
The function will be integrated into any relevant CL commands and the GUI.
All statements about the future direction and intent of IBM are subject to change or
withdrawal without notice, and represent goals only.
Statement of direction: In releases that follow IBM i 7.1, this limitation will be removed. All
statements about the future direction and intent of IBM are subject to change or withdrawal
without notice, and represent goals only.
Leading zeros: XXX must always be three digits, so leading zeros must be added to the
front of the numbers.
Statement of direction: In releases that follow IBM i 7.1, interfaces will be provided on the
WRKPCYBRM TYPE(*SYS) work panel.
All statements about the future direction and intent of IBM are subject to change or
withdrawal without notice, and represent goals only.
The following command can be used to display the maximum flight recorder size:
CALL PGM(QBRM/Q1AOLD) PARM('FRSIZE ' '*DISPLAY')
Significant compression can be reached with the *HIGH setting, but at the cost of a longer
save time.
To show the current override that is being used, run the following command:
CALL PGM(QBRM/Q1AOLD) PARM('PARMOVR' '*DISPLAY' 'SAVFDTACPR')
Statement of direction: In releases that follow IBM i 7.1, the function will be integrated
into any relevant CL commands and the GUI.
All statements about the future direction and intent of IBM are subject to change or
withdrawal without notice, and represent goals only.
In releases V6R1M0 and later, the ASYNCBRING parameter can be overridden to help improve
IFS save performance. Use the following commands to work with this parameter.
To override the new ASYNCBRING parameter, run the following command:
CALL PGM(QBRM/Q1AOLD) PARM('PARMOVR' '*ADD' 'ASYNCBRING' '*YES')
To disable the ASYNCBRING parameter, run the following command:
CALL PGM(QBRM/Q1AOLD) PARM('PARMOVR' '*ADD' 'ASYNCBRING' '*NO ')
To remove the override for the ASYNCBRING parameter, run the following command:
CALL PGM(QBRM/Q1AOLD) PARM('PARMOVR' '*REMOVE ' 'ASYNCBRING')
To show the current override for the ASYNCBRING parameter, as shown in Figure 3-25,
run the following command:
CALL PGM(QBRM/Q1AOLD) PARM('PARMOVR' '*DISPLAY' 'ASYNCBRING')
Statement of direction: In releases that follow IBM i 7.1, interfaces will be provided on
BRMS commands and on the control groups to use the new Asynchronous Bring
parameters, and these calls will no longer work. If *YES was set in the call statement, the
backup policy reflects that action in a new field after the upgrade finishes.
All statements about the future direction and intent of IBM are subject to change or
withdrawal without notice, and represent goals only.
To disable this option for all media classes that have this option enabled, run the following
command:
CALL QBRM/Q1AOLD PARM('INZONEXP ' '*CLEAR ')
Initializing volumes: For IBM ProtecTIER® or other virtual tape library technologies, if
volumes are initialized when they expire, performance for operations using the library might
be temporarily degraded as the device frees the storage that was being used by the
volume.
For native IBM i virtual tape, if volumes are initialized when they expire, system
performance might be temporarily degraded as the system frees the storage that was
being used by the volume.
For native IBM i virtual tape, if multiple virtual tape devices are registered with BRMS, the
STRMNTBRM job log might contain CPF41B0 (‘Incorrect image catalog name specified.’)
messages. These messages can be safely ignored.
To run PRTRPTBRM TYPE(*CTLGRPSTAT) with a FROMSYS parameter other than *LCL, the
following PTFs are required: IBM i 7.1: SI50292, IBM i 6.1: SI50291.
Additionally, for systems other than *LCL, the BRMS Network Feature licensed program is
required.
The retention section starts on a new page with a heading of *RETENTION. Each control
group has an entry with full and incremental media retention information.
Selection Criteria
Start date and time . . . . . . : 10/15/13 *AVAIL
Ending date and time . . . . . . : *END 23:59:59
Auxiliary storage pool . . . . . : *ALL
Library . . . . . . . . . . . . : *CTLGRP
From System . . . . . . . . . . : SPEED
Number of Number of Save
Control Start Start End End Objects Objects Size Save Media Save
Group Date Time Date Time Duration Saved Not saved (MB) GB/HR Class Volume Status
BIGRETAIN 10/16/13 11:11:34 10/16/13 11:12:37 0:01:03 30 0 2055 115 VRTTAP GEN008 *NOERR
LOWRETAIN 10/16/13 11:08:59 10/16/13 11:10:00 0:01:01 30 0 2055 118 VRTTAP GEN006 *NOERR
MEDRETAIN 10/16/13 11:10:13 10/16/13 11:11:20 0:01:07 30 0 2055 108 VRTTAP GEN007 *NOERR
5770BR1 V7R1M0 100416 Backup Statistic Report 10/10/13 12:00:00 Page 2
Number of Number of Save
Control Start Start End End Objects Objects Size Save Media Save
Group Date Time Date Time Duration Saved Not saved (MB) GB/HR Class Volume Status
*RETENTION *FULL 0:00:00 *INCR 0:00:00 0:00:00 0 0 0 0
BIGRETAIN 2100 D 0:00:00 2100 D 0:00:00 0:00:00 0 0 0 0
LOWRETAIN 0007 D 0:00:00 0007 D 0:00:00 0:00:00 0 0 0 0
MEDRETAIN 0090 D 0:00:00 0090 D 0:00:00 0:00:00 0 0 0 0
* * * * * E N D O F L I S T I N G * * * * *
Tip: Be aware that *RDB does not appear as an option and is not listed in the online help
text, but it is a valid entry and functions as described here.
Use the WRKRDBDIRE command to add or modify existing remote database entries as
needed.
This function can be useful in an environment where the initial save is performed in a
multi-stream method to a virtual tape library with multiple virtual tape drives. In many cases,
saving to multiple virtual tape drives within a VTL can provide better performance. However,
duplicating those virtual tapes to the same quantity of physical media might not be wanted,
cost-effective, or a good use of high-capacity media.
To enable this function, create the QUSRBRM/Q1AALWMFDP data area (the data area that is
named in the following tip) by running the following command:
CRTDTAARA DTAARA(QUSRBRM/Q1AALWMFDP) TYPE(*CHAR)
Tip: To revert BRMS back to the prior method of duplication (a one-to-one volume
relationship), delete the QUSRBRM/Q1AALWMFDP data area.
The job log from the DUPMEDBRM command contains messages that indicate which batch
jobs are running the duplication. Each of these duplication jobs sends completion or error
messages to the BRMS log, so monitor the BRMS log to verify that each of the duplication
jobs has completed successfully.
To enable DUPMEDBRM batch job options for the current job, run the following command:
CALL PGM(QBRM/Q1AOLD) PARM('DUPBATCH' '*SET' 'nn')
Where nn is the number of batch jobs to use. This value must be greater than or equal to '00',
and less than or equal to the number of device resources available to be used during the
duplication. The value of '00' indicates to use the default behavior.
To display the current job's DUPMEDBRM batch job options, run the following command:
CALL PGM(QBRM/Q1AOLD) PARM('DUPBATCH' '*DISPLAY')
To remove the current job's DUPMEDBRM batch job options, run the following command:
CALL PGM(QBRM/Q1AOLD) PARM('DUPBATCH' '*REMOVE')
3.2.27 Ability to use save files in independent ASPs as targets for save
operations
Media policies that have the “Save to save file” field set to *YES allow independent auxiliary
storage pool (IASP) names to be specified for the ASP for save files field.
The IASP that is specified for the ASP for save files field must have an ASP number in the
range 33 - 99.
Note: If the IASP did not exist before installation and configuration of BRMS, running either
the DSPASPBRM or WRKASPBRM command is necessary to make the IASP known to
BRMS. After it is known to BRMS, the IASP name can be specified in the “ASP for save
files” parameter.
3.2.28 Move media using BRM (MOVMEDBRM) allows for multiple locations
The From location (LOC) parameter on the MOVMEDBRM command allows multiple values
(up to 10) to be specified.
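For example, media from several locations can be selected for movement in a single run. The
following command is a minimal sketch, assuming hypothetical VAULT1 and VAULT2 location
names:
MOVMEDBRM LOC(VAULT1 VAULT2)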
Tip: To revert to the previous MONSWABRM behavior, run the following commands:
CRTDTAARA DTAARA(QUSRBRM/Q1ASYNCMSG) TYPE(*CHAR)
CHGOBJOWN OBJ(QUSRBRM/Q1ASYNCMSG) OBJTYPE(*DTAARA) NEWOWN(QBRMS)
IBM Navigator for i and IBM Systems Director are web-based interfaces that had limited
BRMS function in IBM i 6.1. The capabilities of these interfaces were greatly expanded into a
full-featured BRMS interface, bringing these web interfaces into parity with the client-based
System i Navigator product.
Note: IBM Navigator for i is the current name for the product that was previously known as
IBM Systems Director Navigator for i.
This section describes the new capabilities and changes in the IBM Systems Director web
interfaces and points out which ones are also new to the System i Navigator product. This
section describes the following enhanced functions:
Added support for the IBM Systems Director web browser environment:
– IBM Systems Director navigation to BRMS functions
– IBM Navigator for i navigation to BRMS functions
Enhancements to the BRMS initial window
BRMS advanced functions window
Scheduling support for BRMS
Added option to the BRMS Log to filter messages by control groups
Ability to mark and unmark volumes for duplication
Multiple email address support
High availability support for independent ASPs in a BRMS network
Required features: To use the IBM i 7.1 enhancements, you must install the BRMS 7.1
plug-ins. There are instructions at the following link for installing the plug-ins for the
client-based System i Navigator:
http://www-03.ibm.com/systems/i/support/brms/pluginfaq.html
Also in IBM i 7.1, management of tape devices and libraries was added to IBM Systems
Director and IBM Navigator for i. For more information, see 17.7, “New journal management
enhancements” on page 700.
3.3.1 Added support for the IBM Systems Director web browser environment
Functions previously available and new IBM i 7.1 functions accessible through IBM Navigator
for i are now also available through IBM Systems Director.
Both products’ BRMS functions are almost functionally and visually identical. The major
differences are the navigation steps to get to the BRMS functions and the main BRMS
window.
IBM Systems Director is intended for multiple systems and multiple system platforms. IBM
Navigator for i is intended for managing environments that are running IBM i.
To access the BRMS functions for a particular IBM eServer iSeries or IBM i, complete the
following steps:
1. Log on to IBM Systems Director.
2. Select a system resource that is an IBM i system or partition.
3. Access the IBM i resource.
4. Navigate to the BRMS function.
Figure 3-32 IBM Systems Director Navigate Resources group list window
Figure 3-33 IBM Systems Director Navigate Resources Operating System group list
To access the IBM Navigator for i Welcome window (Figure 3-37 on page 91), complete the
following steps:
1. Ensure that the *ADMIN HTTP server is started on the IBM i system, open a web browser
to http://system-name:2001 (where system-name is the host name of the IBM i system),
and log on with an IBM i user profile that has sufficient privileges.
2. After successfully logging in, the Welcome window opens, as shown in Figure 3-37. If the
section for “IBM i Management” is collapsed, click the plus to the left of the text to expand
the list.
Figure 3-39 Enhancements to BRMS web initial window as shown in IBM Systems Director
The window is formatted as a set of tabbed pages. Selecting a tab (top arrow) brings that page
to the foreground and places the others in the background.
The arrow at the left points to an object that, when clicked, can hide the left navigation pane.
The remaining figures in this chapter do not show the navigation pane.
The small circled icon, when selected, opens a menu of actions. In Figure 3-40, the menu is
shown for the BRMS Backup Control Groups field.
Figure 3-45 Selecting Run Maintenance from the Select Action drop-down menu
Figure 3-53 shows the window that opens, which is a list of scheduled BRMS maintenance
tasks.
In IBM i 6.1, only active or completed tasks can be viewed or monitored. In IBM i 7.1,
scheduled tasks can be viewed, including those tasks that are scheduled by System i
Navigator.
BRMS 5250 support lists System i Navigator, IBM Systems Director, and IBM Navigator for i
BRMS tasks and jobs.
3.3.7 Added option to the BRMS Log to filter messages by control groups
In IBM i 7.1, the BRMS Log can now be filtered by control group. You can filter by one control
group at a time. Similar functionality is now available in the System i Navigator client.
To use this option, you can select BRMS Log from the BRMS initial menu that is shown in
Figure 3-38 on page 92. You can also navigate to the Task list menu shown in Figure 3-52 on
page 103, except that instead of clicking Open, click BRMS Log.
Another way that you can accomplish the same objective is by choosing BRMS Log from the
Select Action menu of the BRMS advanced function menu page, as shown in Figure 3-54.
Figure 3-54 Selecting BRMS Log from the Select Action drop-down menu
Figure 3-55 New Control group selection of BRMS Log - Include window
The new control group selection parameters are shown. The Browse button displays a list of
control groups from which you can make selections.
In Figure 3-57, the Volume list menu is displayed with the Open option specified.
Figure 3-58 Volumes table with the menu displayed for volume GEN008
Click OK and the volume is marked for duplication. The Volumes window opens again.
Figure 3-60 Volume menu that shows Unmark volume(s) for duplication
Because the Unmark volume(s) for duplication option is shown, you know that the volume
is marked for duplication.
The Image Catalog column is removed and the Marked for Duplication column is shown. You
can now see the Marked for Duplication status of each volume without selecting each one.
To configure this support, access the Global Policy Properties menu from the BRMS
advanced menu, as shown in Figure 3-64.
Figure 3-66 Email address field with multiple entries on the Network policy properties window
This feature is only available through the BRMS graphical user interfaces of IBM Systems
Director web environment, IBM i Navigator web environment, or System i Navigator running
on a PC.
4. On the Manage Disk Pool History to Send window, click List actions from the menu bar
and select New, as shown in Figure 3-68.
To determine what the Remote Receives value is for the remote system, view it by going
back to the Global Policy Properties window, clicking Network properties, and clicking
Manage Systems. The value is listed under the Remote Receives column for that remote
system, as shown in Figure 3-70.
6. Return to the Send Disk Pool History addition of a new disk pool window and click OK to
complete the addition.
Figure 3-71 shows the Global Policy Properties - Backup Maintenance Options window. The
new Run move policies and Expire partial volume sets options are circled.
Figure 3-72 Global Policy Properties - Backup Maintenance Options, Reorganize BRMS database option
(The figures here show variations of a BRMS network with systems iSeries A, iSeries B, and
iSeries C, each holding a BRMS media database; the media database that collects the
information for the whole BRMS network is marked as large.)
The central “Enterprise System” (HUB) pulls important information from the systems (NODES)
that are defined in its “Enterprise” network. From this information, specific notifications,
verifications, and various other functions can be run, helping an administrator manage the
health of their BRMS backups and recoveries from one central server. This capability is
beneficial for customers with multiple BRMS systems or BRMS networks.
To access this feature, point your web browser to http://<systemname>:2001. Sign on with
your IBM i user profile and password, then click IBM i Management → Backup, Recovery
and Media Services → Advanced → Enterprise Services. You can see the initial
Enterprise Network window in Figure 3-74.
For more information about what you can do with BRMS Enterprise, see BRMS Enterprise
Enhancements, REDP-4926.
IBM PowerHA SystemMirror for i is offered in two editions for IBM i 7.1:
IBM PowerHA SystemMirror for i Standard Edition (5770-HAS *BASE) for local data center
replication only
IBM PowerHA SystemMirror for i Enterprise Edition (5770-HAS option 1) for local or
multi-site replication
Customers already using PowerHA for i with IBM i 6.1 are entitled to an upgrade to PowerHA
SystemMirror for i Enterprise Edition with IBM i 7.1.
As PowerHA SystemMirror for i now has N-2 support for clustering, it is possible to skip one
level of IBM i just by running the earlier command twice. As such, a V5R4M0 system within a
clustered environment can be upgraded towards IBM i 7.1 by skipping IBM i 6.1.
The following subsections provide a brief overview of these enhancements. For more
information, see PowerHA SystemMirror for IBM i Cookbook, SG24-7994.
The available commands are similar to the ones that you use for IBM DS8000® Copy
Services, but some parameters are different:
Add SVC ASP Copy Description (ADDSVCCPYD): This command is used to describe a single
physical copy of an auxiliary storage pool (ASP) that exists within an SAN Volume
Controller and to assign a name to the description.
Change SVC Copy Description (CHGSVCCPYD): This command changes an existing auxiliary
storage pool (ASP) copy description.
Remove SVC Copy Description (RMVSVCCPYD): This command removes an existing ASP
copy description. It does not remove the disk configuration.
Display SVC Copy Description (DSPSVCCPYD): This command displays an ASP copy
description.
Work with ASP Copy Description (WRKASPCPYD): This command shows both DS8000 and SAN
Volume Controller / V7000 copy descriptions.
Start SVC Session (STRSVCSSN): This command assigns a name to the Metro Mirror,
Global Mirror, or FlashCopy session that links the two ASP copy descriptions for the
source and target IASP volumes and starts an ASP session for them.
Change SVC Session (CHGSVCSSN): This command is used to change an existing Metro
Mirror, Global Mirror, or FlashCopy session.
End SVC ASP Session (ENDSVCSSN): This command ends an existing ASP session.
Display SVC Session (DSPSVCSSN): This command displays an ASP session.
If you use this command with the *CREATE action, it does the following actions:
Creates the IASP using the specified non-configured disk units.
Creates an ASP device description with the same name if one does not exist yet.
If you use this command with the *DELETE action, it does the following actions:
Deletes the IASP.
Deletes the ASP device description if it was created by this command.
CFGGEOMIR command
The Configure Geographic Mirror (CFGGEOMIR) command that is shown in Figure 4-3 can be
used to create a geographic mirror copy of an existing IASP in a device cluster resource
group (CRG).
The command can also create ASP copy descriptions if they do not exist yet and can start an
ASP session. It performs all the necessary configuration steps to take an existing stand-alone
IASP and create a geographic mirror copy. To obtain this command, the 5770-HAS PTF
SI44148 must be on the system that is running IBM i 7.1.
Section 4.1.4, “PowerHA SystemMirror for i graphical interfaces” on page 131 introduces
GUIs that are associated with a high availability function, including two existing interfaces and
the PowerHA GUI.
After this PTF is installed, a 7.1 node can be added to a 5.4 cluster. A node can also be
upgraded from a 5.4 cluster node directly to a 7.1 cluster node if this PTF is installed during
the upgrade.
The main intent of this enhancement is to ensure that nodes can be upgraded directly from
5.4 to 7.1. PowerHA replication of the IASP still does not allow replication to an earlier
release, so for a complete high availability solution, other than during an upgrade of the HA
environment, keep all nodes at the same release level.
This enhancement is available in both 6.1 with 5761-SS1 PTF SI44564 and 7.1 with
5770-SS1 PTF SI44326.
Removed feature: The clustering GUI plug-in for System i Navigator from High
Availability Switchable Resources licensed program (IBM i option 41) was removed in
IBM i 7.1.
The High Availability Solutions Manager GUI has the following characteristics:
“Dashboard” interface
No support for existing environments
Cannot choose names
Limited to four configurations
You can access the new GUI by completing the following steps, as shown in Figure 4-7:
1. Expand IBM i Management.
2. Select PowerHA.
The PowerHA GUI handles the high availability solution from one single window. It supports
the following items:
Geographic mirroring
Switched disk (IOA)
SVC/V7000/DS6000/DS8000 Metro Mirror
SVC/V7000/DS6000/DS8000 Global Mirror
SVC/V7000/DS6000/DS8000 FlashCopy
SVC/V7000/DS6000/DS8000 LUN level switching
For more information about the PowerHA GUI, see Chapter 9, “PowerHA User Interfaces” in
the PowerHA SystemMirror for IBM i Cookbook, SG24-7994.
Figure 4-8 Main differences between the graphical interfaces. The figure is a table that compares the Cluster Resource Services GUI, the High Availability Solutions Manager GUI, and the PowerHA GUI on these criteria: single node management, quick problem determination, flexible configuration, IASP configuration and management, adding and removing multiple monitored resources, and guided wizards.
Note: Because the PowerHA GUI combines the functions of the other two interfaces, those GUIs will be withdrawn in a later release.
Using NPIV with PowerHA SystemMirror for i does not require dedicated Fibre Channel IOAs
for each SYSBAS and IASP because the (virtual) IOP reset that occurs when you switch the
IASP affects the virtual Fibre Channel client adapter only, instead of all ports of the physical
Fibre Channel IOA, which are reset in a native-attached storage environment.
For an overview of the new NPIV support by IBM i, see Chapter 7, “Virtualization” on
page 319.
For more information about NPIV implementation in an IBM i environment, see DS8000 Copy
Services for IBM i with VIOS, REDP-4584.
Asynchronous delivery, which also requires the asynchronous mirroring mode, works by duplicating any changed IASP disk pages in the *BASE memory pool on the source system and sending them asynchronously to the target system while preserving the write order.
With the source system available, you can check the currency of the target system and the memory impact on the source system that is caused by asynchronous geographic mirroring. Use
the Display ASP Session (DSPASPSSN) command to show the total data in transit, as shown in
Figure 4-9.
Copy Descriptions
Session . . . . . . . . . . . . SSN
Option . . . . . . . . . . . . . OPTION
ASP copy: ASPCPY
Preferred source . . . . . . . *SAME
Preferred target . . . . . . . *SAME
+ for more values
Suspend timeout . . . . . . . . SSPTIMO *SAME
Transmission delivery . . . . . DELIVERY *ASYNC
Mirroring mode . . . . . . . . . MODE *SAME
Synchronization priority . . . . PRIORITY *SAME
Tracking space . . . . . . . . . TRACKSPACE *SAME
FlashCopy type . . . . . . . . . FLASHTYPE *SAME
Persistent relationship . . . . PERSISTENT *SAME
ASP device . . . . . . . . . . . ASPDEV *ALL
+ for more values
Track . . . . . . . . . . . . . TRACK *YES
More...
Figure 4-10 CHGASPSSN command - *ASYNC Transmission delivery parameter
Note: You must stop the geographic mirroring session by running the ENDASPSSN command
before you change this setting.
With LUN level switching, single-copy (that is, non-replicated) IASPs that are managed by a cluster resource group device domain and that reside in supported storage can be switched between IBM i systems in a cluster.
A typical implementation scenario for LUN level switching is one where multi-site replication through Metro Mirror or Global Mirror is used for disaster recovery and protection against storage subsystem outages. In this scenario, additional LUN level switching at the production site provides local high availability protection, eliminating the requirement for a site switch if there are IBM i server outages.
Figure 4-13 IBM i ADDASPCPYD enhancement for DS8000, DS6000 LUN level switching
An ASP session is not required for LUN level switching, as there is no replication for the IASP
involved.
Important: For LUN level switching, the backup node host connection on the DS8000 or
DS6000 storage system must not have a volume group (VG) assigned. PowerHA
automatically unassigns the VG from the production node and assigns it to the backup
node at site-switches or failovers.
Figure 4-14 IBM i ADDSVCCPYD enhancement for V7000, V3700, SVC LUN level switching
An ASP session is not required for LUN level switching, as there is no replication for the IASP
involved.
Important: For LUN level switching, the backup node host connection on the V7000,
V3700, or SAN Volume Controller storage system must not have a host connection
assigned. PowerHA automatically unassigns the host connection from the production node
and assigns it to the backup node at site-switches or failovers.
4.1.8 IBM System SAN Volume Controller and IBM Storwize V7000 split cluster
Support is now also added to use the split cluster function of the IBM System Storage SAN
Volume Controller and IBM Storwize V7000. The split cluster environment is commonly used
on other platforms. This support enables IBM i customers to implement the same
mechanisms as they use on those platforms.
A split-cluster setup uses a pair of storage units in a cluster arrangement. These storage units
present a copy of an IASP to one of two servers on their local site, with PowerHA managing
the system side of the takeover. As with any split cluster environment, you can end up with a
“split brain” or partitioned state. To avoid this situation, the split cluster support requires the use of a quorum disk at a third site.
For more information, see IBM i and IBM Storwize Family: A Practical Guide to Usage
Scenarios, SG24-8197.
You can use the IBM System Storage DS8000 series FlashCopy SE licensed feature to
create space-efficient FlashCopy target volumes that can help you reduce the required
physical storage space for the FlashCopy target volumes. These volumes are typically
needed only for a limited time (such as during a backup to tape).
A space-efficient FlashCopy target volume has a virtual storage capacity, which is reported to the host, that matches the physical capacity of the fully provisioned FlashCopy source volume, but no physical storage space is allocated up front. Physical storage space for space-efficient
FlashCopy target volumes is allocated in 64-KB track granularity. This allocation is done on
demand for host write operations from a configured repository volume that is shared by all
space-efficient FlashCopy target volumes within the same DS8000 extent pool, as shown in
Figure 4-15.
Figure 4-15 DS8000 Space-Efficient FlashCopy. The figure shows non-provisioned space-efficient volumes (no space allocated up front) that draw their on-demand storage from a shared, over-provisioned repository volume (for example, 500 GB virtual and 100 GB real capacity).
From a user perspective, the PowerHA setup (not the DS8000 FlashCopy setup) for
space-efficient FlashCopy is identical to the setup for traditional FlashCopy with the nocopy
option. The reason is that PowerHA SystemMirror for i internally interrogates the DS8000 to determine the type of FlashCopy relationship, and it then uses the correct DS CLI command syntax, for either traditional FlashCopy or FlashCopy SE, when it runs the mkflash and rmflash commands.
The reverse of the FlashCopy is performed by using the Change ASP Session (CHGASPSSN)
command with OPTION(*REVERSE) as shown in Figure 4-16.
Figure 4-16 IBM i command CHGASPSSN
Note: The ability to reverse the FlashCopy is also available with the “no copy” option, in
which case the FlashCopy relationship is removed as well.
This improvement removes the need to manually detach and reattach the Global Mirror
session that existed on previous releases. PowerHA now handles the entire process in a
single command.
With IBM i 7.1, PowerHA SystemMirror for i allows advanced node failure detection by cluster nodes. This detection is accomplished by registering with an HMC, or with a Virtual I/O Server (VIOS) management partition on IVM-managed systems. The cluster is then notified when a severe partition or system failure occurs, so that the event triggers a cluster failover instead of causing a cluster partition condition.
For LPAR failure conditions, it is the IBM POWER® Hypervisor™ (PHYP) that notifies the
HMC that an LPAR failed. For system failure conditions other than a sudden system power
loss, it is the flexible service processor (FSP) that notifies the HMC of the failure. The CIM server on the HMC or VIOS can then generate a power state change CIM event for any registered CIM clients.
Whenever a cluster node is started, for each configured cluster monitor, IBM i CIM client APIs
are used to subscribe to the particular power state change CIM event. The HMC CIM server
generates such a CIM event and actively sends it to any registered CIM clients (that is, no
heartbeat polling is involved with CIM). On the IBM i cluster nodes, the CIM event listener
compares the events with available information about the nodes that constitute the cluster to
determine whether it is relevant for the cluster to act upon. For relevant power state change
CIM events, the cluster heartbeat timer expiration is ignored (that is, IBM i clustering
immediately triggers a failover condition in this case).
Using advanced node failure detection requires SSH and CIMOM TCP/IP communication to
be set up between the IBM i cluster nodes and the HMC or VIOS. Also, a cluster monitor must
be added to the IBM i cluster nodes, for example, through the new Add Cluster Monitor
(ADDCLUMON) command, as shown in Figure 4-17. This command enables communication with
the CIM server on the HMC or VIOS.
Figure 4-17 Add Cluster Monitor (ADDCLUMON) command
Changes within the System Licensed Internal Code (SLIC) provide more efficient processing
of data that is sent to the target system if there is a full resynchronization. Even with source
and target side tracking, some instances require a full synchronization of the production copy,
such as any time that the IASP cannot be normally varied off, because of a sudden cluster
node outage.
The achievable performance improvement varies based on the IASP data. IASPs with many
small objects see more benefit than those IASPs with a smaller number of large objects.
Figure 4-18 Geographic mirroring for an IBM i hosted IBM i client partition environment
The advantage of this solution is that no IASP is needed on the production (client) partition,
so no application changes are required.
Since October 2012, geographic mirroring with PowerHA SystemMirror for i can eliminate the transfer of temporary storage spaces for the IBM i client partition. This enhancement reduces the amount of network traffic between the IBM i host partitions on the production node side and the backup node side.
For more information about Suspend/Resume and Live Partition Mobility, see Chapter 7,
“Virtualization” on page 319.
PowerHA SystemMirror for i is required to support these two new administration domain
monitored resource entries.
For a complete list of attributes that can be monitored and synchronized among cluster nodes by the cluster administrative domain, see the Attributes that can be monitored topic in the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/rzaig/rzaigrefattribmon.htm
Additional enhancements are made to adding and removing monitored resource entries in the cluster administrative domain:
New entries can now be added to the administrative domain even if the object cannot be created on all nodes. If the creation is not possible on all of the nodes in the administrative domain, the MRE is placed in an inconsistent state to remind you that the object must still be created manually.
Entries can now be removed from the cluster administrative domain even when some of the nodes in the administrative domain are not active.
The processing that is associated with cluster administrative domains has also been
enhanced by the use of the QCSTJOBD job description, which allows any IBM-initiated jobs to run in the QSYSWRK subsystem from the QSYSNOMAX job queue. This improves processing by eliminating potential contention with customer jobs that use the QBATCH subsystem and job queue.
Figure 4-19 Work with Monitored Resources (WRKCADMRE) command
The default for the ADMDMN parameter is to use the administrative domain that the current
node is a part of. You are then presented with the panel shown in Figure 4-20. From this list,
you can sort the entries or go to an entry of interest.
Figure 4-20 Example of output from WRKCADMRE command
Figure 4-21 IBM i change cluster node entry. The figure shows the help for the New IP address (NEWINTNETA) parameter, which specifies the cluster interface address that is being added to the node information or that is replacing an old cluster interface address. The interface address can be an IPv4 address (for any cluster version) or an IPv6 address (if the current cluster version is 7 or greater).
Table 4-1 Cluster commands enabled to run from any active cluster node
ADDASPCPYD ADDSVCCPYD CHGASPSSN CHGSVCSSN
CHGASPCPYD CHGSVCCPYD DSPASPSSN ENDSVCSSN
DSPASPCPYD DSPSVCCPYD ENDASPSSN STRSVCSSN
RMVASPCPYD RMVSVCCPYD STRASPSSN
WRKASPCPYD
In addition, the administrative domain commands that are listed in Table 4-2 have been enhanced to run from any active node in the cluster, provided that at least one eligible node is active. A parameter has been added to these commands to specify the active cluster node to be used as the source for synchronization to the other nodes in the administrative domain.
Table 4-2 Administrative domain commands enabled to run from any active cluster node
ADDCADMRE RMVCADMRE
WRKCADMRE PRTCADMRE
Several enhancements were made in the area of integrity preservation and journaling. The main objectives of these enhancements are to provide easier interfaces for setting up and monitoring the persistence of the database, including in HA setups.
The Start Journal Library (STRJRNLIB) command was introduced in IBM i 6.1. This command
defines one or more rules at a library or schema level. These rules are used, or inherited, for
journaling objects.
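For example, the following command is a minimal sketch (the library APPLIB and journal APPJRN are illustrative names, and the journal is assumed to exist) that starts journaling at the library level so that new objects inherit the journaling rules:
STRJRNLIB LIB(APPLIB) JRN(APPLIB/APPJRN)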
There is an equivalent way to do the same task in IBM Navigator for i. Expand File Systems → Integrated File System → QSYS.LIB, and then select the library that you want to journal, as shown in Figure 4-23.
For more information about journal management, see the IBM i 7.1 Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzaki/rzakikickoff.htm
You can also filter the journal entries that are replicated to remote journals. Filtering out journal entries that are not needed on the target system can decrease the amount of data that is sent across the communication line.
This remote journal filtering feature is available with option 42 of IBM i, that is, feature 5117 (HA Journal Performance). Ensure that critical data is not filtered when you define remote journal filtering. Three criteria can be used to filter entries that are sent to the remote system:
Before images
Individual objects
Name of the program that deposited the journal entry on the source system
The filtering criteria are specified when you activate a remote journal. Different remote
journals that are associated with the same local journal can have different filtering criteria.
Remote journal filtering can be specified only for asynchronous remote journal connections.
Because journal entries might be missing, filtered remote journal receivers cannot be used
with the Remove Journaled Changes (RMVJRNCHG) command. Similarly, journal receivers that
filtered journal entries by object or by program cannot be used with the Apply Journaled
Change (APYJRNCHG) command or the Apply Journaled Change Extend (APYJRNCHGX)
command.
The Work with Journal Attributes (WRKJRNA) command can now monitor, from the target side,
how many seconds the target is behind in receiving journal entries from the source system.
Also, new in IBM i 7.1 is the ability, from the source side, to view the number of
retransmissions that occur for a remote journal connection.
QSYS2.Display_Journal is a new table function that you can use to view entries in a journal
by running a query.
There are many input parameters of the table function that can (and should) be used for best
performance to return only those journal entries that are of interest. For more information
about the special values, see the QjoRetrieveJournalEntries API topic in the IBM i 7.1
Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/apis/QJORJRNE.htm?lang=en
Unlike many other UDTFs in QSYS2, this one has no DB2 for i provided view.
This function provides a result table with data similar to what you get from using the Display Journal (DSPJRN) command.
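As a minimal sketch (the journal APPLIB/APPJRN is an illustrative name), the following query returns the journal entries through SQL. In practice, supply as many of the optional selection parameters as possible so that only the entries of interest are materialized:
SELECT *
  FROM TABLE (QSYS2.Display_Journal('APPLIB', 'APPJRN')) AS JT;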
With commitment control, you have assurance that when the application starts again, no
partial updates are in the database because of incomplete transactions from a prior failure.
As such, it is one of the building blocks of any highly available setup and it identifies the
recovery point for any business process.
If your application was deployed using independent ASPs (IASPs), you are using a database
instance that is in that IASP. This situation has an impact on how commitment control works.
However, if you switch from the system disk pool (ASP group *NONE), commitment control is
not affected. The commitment definitions stay on the system disk pool. A new feature in
IBM i 7.1 is that if you later place independent disk pool resources under commitment control
before system disk pool resources, the commitment definition is moved to the independent
disk pool. This situation means that if your job is not associated with an independent ASP, the
commitment definition is created in *SYSBAS; otherwise, it is created in the independent
ASP. If the job is associated with an independent ASP, you can open files under commitment
control that are in the current library name space. For example, they can be in the
independent ASP or *SYSBAS.
If the first resource that is placed under commitment control is not in the same ASP as the
commitment definition, the commitment definition is moved to the resource's ASP. If both
*SYSBAS and independent ASP resources are registered in the same commitment definition,
the system implicitly uses a two-phase commit protocol to ensure that the resources are
committed atomically in the event of a system failure. Therefore, transactions that involve data
in both *SYSBAS and an independent ASP have a small performance degradation versus
transactions that are isolated to a single ASP group.
When recovery is required for a commitment definition that contains resources that are in
both *SYSBAS and an independent ASP, the commitment definition is split into two
commitment definitions during the recovery. One is in *SYSBAS and one in the independent
ASP, as though there were a remote database connection between the two ASP groups.
Resynchronization can be initiated by the system during the recovery to ensure that the data
in both ASP groups is committed or rolled back atomically.
SMAPP affects the overall system performance. The lower the target recovery time that you
specify for access paths, the greater this effect can be. Typically, the effect is not noticeable,
unless the processor is nearing its capacity.
The journal receiver threshold value influences the number of parallel writes that journal
allows. The higher the journal receiver threshold value, the more parallel I/O requests are
allowed. Allowing more parallel I/O requests can improve performance.
For more information, see the TechDocs “IBM i 7.1 and changes for journaling” at:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105661
If you saved any journal receivers using SAVOBJ or SAVLIB with the STG(*FREE) option, the
receiver chain is effectively broken and the *CURCHAIN option fails to retrieve journal entries.
By specifying the *CURAVLCHN option, if journal receivers exist in the receiver chain that are not
available because they were saved with the storage freed option, those journal receivers are
ignored and the entries are retrieved starting with the first available journal receiver in the
chain.
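For example, the following sketch (the journal name is illustrative) retrieves entries even when earlier receivers in the chain were saved with STG(*FREE):
DSPJRN JRN(APPLIB/APPJRN) RCVRNG(*CURAVLCHN)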
For more information, see Chapter 17, “IBM Navigator for i 7.1” on page 667.
For more information about external storage capabilities with IBM i, see IBM i and IBM
Storwize Family: A Practical Guide to Usage Scenarios, SG24-8197.
Because of its unique self-managing computing features, DB2 for i offers a low cost of ownership. The sophisticated cost-based query optimizer, the unique single-level store architecture of the operating system, and the database parallelism feature of DB2 for i allow it to scale almost linearly. Rich SQL support not only makes it easier for software vendors to port their applications and tools to IBM i, but also enables developers to use industry-standard SQL for their data access and programming. The IBM DB2 Family shares this focus on SQL standards with DB2 for i, so investment in SQL enables DB2 for i to use the relational database technology leadership position of IBM and maintain close compatibility with the other DB2 Family products.
Reading through this chapter, you find many modifications and improvements as part of the
new release. All of these features are available to any of the development and deployment
environments that are supported by the IBM Power platforms on which IBM i 7.1 can be
installed.
Many DB2 enhancements for IBM i 7.1 are also available for Version 6.1. To verify their availability, go to:
https://www.ibm.com/developerworks/ibmi/techupdates/db2
This link takes you to the DB2 for i section of the IBM i Technology Updates wiki.
Previously, XML data types were supported only through user-defined types and any handling
of XML data was done using user-defined functions. In IBM i 7.1, the DB2 component is
complemented with support for XML data types and publishing functions. It also supports
XML document and annotation, document search (IBM OmniFind®) without decomposition,
and client and language API support for XML (CLI, ODBC, JDBC, and so on).
For more information about moving from the user-defined function support provided through
the XML Extenders product to the built-in operating support, see the Replacing DB2 XML
Extender With integrated IBM DB2 for i XML capabilities white paper:
https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler?contentId=K$63TzTFkZwiPCA$cnt&roadMapId=IbOtoNReUYN4MDADrdm&roadMapName=Education+resources+for+IBM+i+systems&locale=en_US
An XML value can be transformed into a serialized string value that represents an XML
document using the XMLSERIALIZE (see “XML serialization” on page 163) function.
Similarly, a string value that represents an XML document can be transformed into an XML
value using the XMLPARSE (see “XML publishing functions” on page 162) function. An XML
value can be implicitly parsed or serialized when exchanged with application string and binary
data types.
The XML data type has no defined maximum length. It does have an effective maximum
length of 2 GB when treated as a serialized string value that represents XML, which is the
same as the limit for Large Object (LOB) data types. Like LOBs, there are also XML locators
and XML file reference variables.
With a few exceptions, you can use XML values in the same contexts in which you can use
other data types. XML values are valid in the following circumstances:
CAST a parameter marker, XML, or NULL to XML
XMLCAST a parameter marker, XML, or NULL to XML
IS NULL predicate
COUNT and COUNT_BIG aggregate functions
COALESCE, IFNULL, HEX, LENGTH, CONTAINS, and SCORE scalar functions
XML scalar functions
A SELECT list without DISTINCT
INSERT VALUES clause, UPDATE SET clause, and MERGE
SET and VALUES INTO
XML values cannot be used directly in the following places. Where expressions are allowed,
an XML value can be used, for example, as the argument of XMLSERIALIZE.
A SELECT list that contains the DISTINCT keyword
A GROUP BY clause
An ORDER BY clause
A subselect of a fullselect that is not UNION ALL
A basic, quantified, BETWEEN, DISTINCT, IN, or LIKE predicate
An aggregate function with the DISTINCT keyword
A primary, unique, or foreign key
A check constraint
An index column
No host languages have a built-in data type for the XML data type.
XML data can be defined with any EBCDIC single-byte or mixed CCSID, or with a Unicode CCSID of 1208 (UTF-8), 1200 (UTF-16), or 13488 (UCS-2). 65535 (no conversion) is not allowed as a CCSID value for XML data. The CCSID can be explicitly specified when
you define an XML data type. If it is not explicitly specified, the CCSID is assigned using the
value of the SQL_XML_DATA_CCSID QAQQINI file parameter (5.3.17, “QAQQINI properties” on
page 200). If this value is not set, the default is 1208. The CCSID is established for XML data
types that are used in SQL schema statements when the statement is run.
XML host variables that do not have a DECLARE VARIABLE that assigns a CCSID have their
CCSID assigned as follows:
If it is XML AS DBCLOB, the CCSID is 1200.
If it is XML AS CLOB and the SQL_XML_DATA_CCSID QAQQINI value is 1200 or 13488, the
CCSID is 1208.
Otherwise, the SQL_XML_DATA_CCSID QAQQINI value is used as the CCSID.
Because all implicit and explicit XMLPARSE functions are run by using UTF-8 (1208), defining
data in this CCSID removes the need to convert the data to UTF-8.
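As a minimal sketch (the library, table, and column names are illustrative), the following statement defines an XML column explicitly as UTF-8 so that implicit and explicit XMLPARSE operations need no conversion:
CREATE TABLE MYLIB.CUSTOMER_DOCS (
  CUSTID INTEGER NOT NULL PRIMARY KEY,
  DOC    XML CCSID 1208
);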
xmlagg: Combines a collection of rows, each containing a single XML value, to create an XML sequence that contains an item for each non-null value in a set of XML values.
xmlattributes: Returns XML attributes from columns, using the name of each column as the name of the corresponding attribute.
xmlcomment: Returns an XML value with the input argument as the content.
xmlconcat: Returns a sequence that contains the concatenation of a variable number of XML input arguments.
xmlgroup: Returns a single top-level element to represent a table or the result of a query.
xmlparse: Parses the argument as an XML document and returns an XML value.
xmlrow: Returns a sequence of row elements to represent a table or the result of a query.
xmlserialize: Returns a serialized XML value of the specified data type that is generated from the XML-expression argument.
xmltext: Returns an XML value that has the input argument as the content.
xmlvalidate: Returns a copy of the input XML value, augmented with information that is obtained from XML schema validation, including default values and type annotations.
xsltransform: Converts XML data into other forms, including but not limited to XML, HTML, and plain text, by using the XSLT processor.
You can use the SET CURRENT IMPLICIT XMLPARSE OPTION statement to change the value of
the CURRENT IMPLICIT XMLPARSE OPTION special register to STRIP WHITESPACE or to PRESERVE
WHITESPACE for your connection. You can either remove or maintain any white space on an
implicit XMLPARSE function. This statement is not a committable operation.
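For example, the following statement (shown as a sketch of the syntax) directs implicit XMLPARSE operations for the current connection to remove insignificant white space:
SET CURRENT IMPLICIT XMLPARSE OPTION = 'STRIP WHITESPACE'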
XML serialization
XML serialization is the process of converting XML data from the format that it has in a DB2
database to the serialized string format that it has in an application.
You can allow the DB2 database manager to run serialization implicitly, or you can start the
XMLSERIALIZE function to request XML serialization explicitly. The most common usage of
XML serialization is when XML data is sent from the database server to the client.
Implicit serialization is the preferred method in most cases because it is simpler to code, and
sending XML data to the client allows the DB2 client to handle the XML data properly. Explicit
serialization requires extra handling, which is automatically handled by the client during
implicit serialization.
In general, implicit serialization is preferable because it is more efficient to send data to the
client as XML data. However, under certain circumstances (for example, if the client does not
support XML data) it might be better to do an explicit XMLSERIALIZE.
With implicit serialization for DB2 CLI and embedded SQL applications, the DB2 database
server adds an XML declaration with the appropriate encoding specified to the data. For .NET
applications, the DB2 database server also adds an XML declaration. For Java applications,
depending on the SQLXML object methods that are called to retrieve the data from the
SQLXML object, the data with an XML declaration added by the DB2 database server is
returned.
After an explicit XMLSERIALIZE invocation, the data has a non-XML data type in the database
server, and is sent to the client as that data type. You can use the XMLSERIALIZE scalar
function to specify the SQL data type to which the data is converted when it is serialized
(character, graphic, or binary data type) and whether the output data includes the explicit XML declaration.
Although implicit serialization is preferable because it is more efficient to send data to the client as XML data, when the client does not support XML data, you can consider doing an explicit XMLSERIALIZE. If you use implicit XML serialization for this type of client, the DB2 database server converts the data to a CLOB (Example 5-1) or DBCLOB before it sends the data to the client.
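The following query is a minimal sketch of an explicit XMLSERIALIZE (the MYLIB.EMPLOYEE table and its columns are illustrative). It publishes each row as an XML row element and serializes the result to a character type for a client that does not support XML:
SELECT XMLSERIALIZE(XMLROW(EMPNO, LASTNAME) AS CLOB(1K))
  FROM MYLIB.EMPLOYEE;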
The DB2 XML schema repository (XSR) stores the XML schemas that are needed to process your XML data. Without this mechanism to store associated XML schemas, an external resource might not be accessible when needed by the database. The XSR also removes the additional processing that is required to locate external documents, along with the possible performance impact.
An XML schema consists of a set of XML schema documents. To add an XML schema to the
DB2 XSR, you register XML schema documents to DB2 by calling the DB2 supplied stored
procedure SYSPROC.XSR_REGISTER to begin registration of an XML schema.
To remove an XML schema from the DB2 XML schema repository, you can call the
SYSPROC.XSR_REMOVE stored procedure or use the DROP XSROBJECT SQL statement.
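For example, a registered schema that is no longer needed can be removed with a single statement (the name MYLIB.CUSTSCHEMA is illustrative):
DROP XSROBJECT MYLIB.CUSTSCHEMA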
An XML schema consists of one or more XML schema documents. In annotated XML
schema decomposition, or schema-based decomposition, you control decomposition by
annotating a document’s XML schema with decomposition annotations. These annotations
specify the following details:
The name of the target table and column in which the XML data is to be stored
The default SQL schema for when an SQL schema is not identified
Any transformation of the content before it is stored
The annotated schema documents must be stored in and registered with the XSR. The
schema must then be enabled for decomposition. After the successful registration of the
annotated schema, decomposition can be run by calling the decomposition stored procedure
SYSPROC.XDBDECOMPXML.
The data from the XML document is always validated during decomposition. If information in
an XML document does not comply with its specification in an XML schema, the data is not
inserted into the table.
Annotated XML schema decomposition can become complex. To make the task more
manageable, take several things into consideration. Annotated XML schema decomposition
requires you to map possible multiple XML elements and attributes to multiple columns and
tables in the database. This mapping can also involve transforming the XML data before you insert it, or applying conditions for insertion.
Here are items to consider when you annotate your XML schema:
Understand what decomposition annotations are available to you.
Ensure, during mapping, that the type of the column is compatible with the XML schema
type of the element or attribute to which it is being mapped.
Ensure complex types that are derived by restriction or extension are properly annotated.
Confirm that no decomposition limits and restrictions are violated.
Ensure that the tables and columns that are referenced in the annotation exist at the time
the schema is registered with the XSR.
The MERGE statement is particularly useful in a Business Intelligence data load scenario, where it can populate the data in both the fact and the dimension tables upon a refresh of the data warehouse. It can also be used for archiving data.
In Example 5-2, the MERGE statement updates the list of activities that are organized by Group
A in the archive table. It deletes all outdated activities and updates the activities information
(description and date) in the archive table if they were changed. It inserts new upcoming
activities into the archive, signals an error if the date of the activity is not known, and requires
that the date of the activities in the archive table be specified.
Each group has an activities table. For example, activities_groupA contains all activities
Group A organizes, and the archive table contains all upcoming activities that are organized
by groups in the company. The archive table has (group, activity) as the primary key, and date
is not nullable. All activities tables have activity as the primary key. The last_modified column
in the archive is defined with CURRENT TIMESTAMP as the default value.
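The following statement is a minimal sketch of that MERGE (the table and column names are simplified and illustrative, and some of the conditions of the original example are omitted):
MERGE INTO ARCHIVE AR
  USING (SELECT ACTIVITY, DESCRIPTION, ACTDATE
           FROM ACTIVITIES_GROUPA) AC
  ON (AR.ACTIVITY = AC.ACTIVITY) AND AR.GRP = 'A'
  WHEN MATCHED AND AC.ACTDATE IS NULL THEN
    SIGNAL SQLSTATE '70001'
      SET MESSAGE_TEXT = 'The activity date is required'
  WHEN MATCHED AND AC.ACTDATE < CURRENT DATE THEN
    DELETE
  WHEN MATCHED THEN
    UPDATE SET (DESCRIPTION, ACTDATE) = (AC.DESCRIPTION, AC.ACTDATE)
  WHEN NOT MATCHED THEN
    INSERT (GRP, ACTIVITY, DESCRIPTION, ACTDATE)
      VALUES ('A', AC.ACTIVITY, AC.DESCRIPTION, AC.ACTDATE)
  ELSE IGNORE;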
There is a difference in how many updates are done depending on whether a NOT ATOMIC
MERGE or an ATOMIC MERGE was specified:
In an ATOMIC MERGE, the source rows are processed as though a set of rows is processed
by each WHEN clause. Thus, if five rows are updated, any row level update trigger is fired
five times for each WHEN clause. This situation means that n statement level update triggers
are fired, where n is the number of WHEN clauses that contain an UPDATE, including any WHEN
clause that contains an UPDATE that did not process any of the source rows.
In a NOT ATOMIC MERGE setting, each source row is processed independently as though a
separate MERGE statement ran for each source row, meaning that, in the previous case, the
triggers are fired only five times.
After running a MERGE statement, the ROW_COUNT statement information item in the SQL
Diagnostics Area (or SQLERRD(3) of the SQLCA) is the number of rows that are operated on
by the MERGE statement, excluding rows that are identified by the ELSE IGNORE clause.
The ROW_COUNT item and SQLERRD(3) do not include the number of rows that were operated
on as a result of triggers. The value in the DB2_ROW_COUNT_SECONDARY statement information
item (or SQLERRD(5) of the SQLCA) includes the number of these rows.
No attempt is made to update a row in the target that did not exist before the MERGE statement ran; that is, rows that were inserted by the MERGE statement are not updated.
If an error occurs during the operation for a row of source data, the row being processed at
the time of the error is not inserted, updated, or deleted. Processing of an individual row is an
atomic operation. Any other changes that are previously made during the processing of the
MERGE statement are not rolled back. If CONTINUE ON EXCEPTION is specified, execution
continues with the next row to be processed.
Global variables have a session scope, which means that although they are available to all
sessions that are active on the database, their value is private for each session. Modifications
to the value of a global variable are not under transaction control. The value of the global
variable is preserved when a transaction ends with either a COMMIT or a ROLLBACK statement.
When a global variable is instantiated for a session, changes to the global variable in another
session (such as DROP or GRANT) might not affect the variable that is instantiated. An attempt to
read from or to write to a global variable created by this statement requires that the
authorization ID attempting this action holds the appropriate privilege on the global variable.
The definer of the variable is implicitly granted all privileges on the variable.
A created global variable is instantiated to its default value when it is first referenced within its
given scope. If a global variable is referenced in a statement, it is instantiated independently
of the control flow for that statement.
A global variable is created as a *SRVPGM object. If the variable name is a valid system
name but a *SRVPGM exists with that name, an error is generated. If the variable name is not
a valid system name, a unique name is generated by using the rules for generating system
table names.
If a global variable is created within a session, it cannot be used by other sessions until the
unit of work is committed. However, the new global variable can be used within the session
that created the variable before the unit of work commits.
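A minimal sketch (the names and default are illustrative): the variable is created once, keeps a private value per session, and can be referenced anywhere an expression is allowed:
CREATE VARIABLE MYLIB.USER_DEPT CHAR(3) DEFAULT 'ADM';

SET MYLIB.USER_DEPT = 'SLS';

SELECT * FROM MYLIB.SALES
  WHERE DEPT = MYLIB.USER_DEPT;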
An array type is a data type that is defined as an array of another data type. Every array type
has a maximum cardinality, which is specified on the CREATE TYPE (Array) statement. If A is an
array type with maximum cardinality M, the cardinality of a value of type A can be any value 0
- M inclusive. Unlike the maximum cardinality of arrays in programming languages such as C,
the maximum cardinality of SQL arrays is not related to their physical representation. Instead,
the maximum cardinality is used by the system at run time to ensure that subscripts are within
bounds. The amount of memory that is required to represent an array value is proportional to
its cardinality, and not to the maximum cardinality of its type.
SQL procedures support parameters and variables of array types. Arrays are a convenient
way of passing transient collections of data between an application and a stored procedure or
between two stored procedures.
If WITH ORDINALITY is specified, an extra counter column of type BIGINT is appended to the
temporary table. The ordinality column contains the index position of the elements in the
arrays. See Example 5-6.
The ARRAY UNNEST temporary table is an internal data structure and can be created only
by the database manager.
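The following sketch (the type, procedure, and column names are illustrative) creates an array type and a procedure that returns the elements of an array parameter together with their positions:
CREATE TYPE MYLIB.INTARRAY AS INTEGER ARRAY[100];

CREATE PROCEDURE MYLIB.LIST_ELEMENTS (IN NUMS MYLIB.INTARRAY)
  DYNAMIC RESULT SETS 1
BEGIN
  -- Expand the array into rows; WITH ORDINALITY adds the element position
  DECLARE C1 CURSOR WITH RETURN FOR
    SELECT T.VAL, T.POS
      FROM UNNEST(NUMS) WITH ORDINALITY AS T(VAL, POS);
  OPEN C1;
END;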
Returning a known number of result sets is simpler. However, if you write the code to handle a
varying number of result sets, you do not need to make major modifications to your program if
the stored procedure changes.
This support allows for transparent encryption/decryption or encoding/decoding of data, whether that data is accessed through SQL or through any other (native) interface.
When values in the column are changed, or new values are inserted, the field procedure is
started for each value, and can transform that value (encode it) in any way. The encoded
value is then stored. When values are retrieved from the column, the field procedure is started
for each value, which is encoded, and must decode it back to the original value. Any indexes
that are defined on a non-derived column that uses a field procedure are built with encoded
values.
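As an illustration only (the field procedure program MYLIB.CARDFP is a hypothetical name; the program itself must be written separately to the field procedure interface), a column can be associated with a field procedure when the table is created:
CREATE TABLE MYLIB.PAYMENTS (
  PAYMENT_ID INTEGER NOT NULL,
  CARD_NBR   CHAR(16) FIELDPROC MYLIB.CARDFP
);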
The transformation your field procedure performs on a value is called field-encoding. The
same routine is used to undo the transformation when values are retrieved, which is called
field-decoding. Values in columns with a field procedure are described to DB2 in two ways:
The description of the column as defined in CREATE TABLE or ALTER TABLE appears in the
catalog table QSYS2.SYSCOLUMNS. This description is the description of the
field-decoded value, and is called the column description.
The description of the encoded value, as it is stored in the database, appears in the
catalog table QSYS2.SYSFIELDS. This description is the description of the field-encoded
value, and is called the field description.
The field procedure is also started during the processing of the CREATE TABLE or ALTER TABLE
statement. That operation is called a field-definition. When so started, the procedure
provides DB2 with the column’s field description. The field description defines the data
characteristics of the encoded values. By contrast, the information that is supplied for the
column in the CREATE TABLE or ALTER TABLE statement defines the data characteristics of the
decoded values.
The data type of the encoded value can be any valid SQL data type except ROWID or
DATALINK. Also, a field procedure cannot be associated with any column that has values that
are generated by IDENTITY or ROW CHANGE TIMESTAMP.
If a DDS-created physical file is altered to add a field procedure, the encoded attribute data
type cannot be a LOB type or DataLink. If an SQL table is altered to add a field procedure, the
encoded attribute precision field must be 0 if the encoded attribute data type is any of the
integer types.
A field procedure cannot be added to a column that has a default value of CURRENT DATE,
CURRENT TIME, CURRENT TIMESTAMP, or USER. A column that is defined with a user-defined data
type can have a field procedure if the source type of the user-defined data type is any of the
allowed SQL data types. DB2 casts the value of the column to the source type before it
passes it to the field procedure.
The FIELDPROC support is extended to allow masking to occur to that same column data
(typically based on which user is accessing the data). For example, only users that need to see the actual credit card number see the value, whereas other users see masked data, such as XXXX XXXX XXXX 1234.
The new support is enabled by allowing the FIELDPROC program to detect masked data on
an update or write operation and returning that indication to the database manager. The
database manager then ignores the update of that specific column value on an update
operation and replaces it with the default value on a write.
A new parameter is also passed to the FIELDPROC program. For field procedures that mask
data, the parameter indicates whether the caller is a system function that requires that the
data is decoded without masking. For example, in some cases, RGZPFM and ALTER TABLE
might need to copy data. If the field procedure ignores this parameter and masks data when
these operations are run, the column data is lost. Hence, it is critical that a field procedure
that masks data properly handles this parameter.
5.2.8 Miscellaneous
A number of functions are aggregated under this heading. Most are aimed at extending existing functions or improving their ease of use.
If you specify a referential constraint where the parent is a partitioned table, the unique index that enforces the parent unique constraint must be non-partitioned. Likewise, an identity column cannot be a partitioning key.
All object references in a single SQL statement must be in a single relational database. When
you create an alias for a table on a remote database, the alias name must be the same as the
remote name, but can point to another alias on the remote database. See Example 5-10.
To do this, the SQL statement is coded to refer to the RDB directory entry alias name as the
first portion (RDB target) of a 3-part name. By changing the RDB directory entry to have a
different destination database using the Remote location (RMTLOCNAME) parameter, the
SQL application can target a different database without having to change the application.
Example 5-11 shows some sample code that pulls daily sales data from different locations.
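A minimal sketch of the pattern (the RDB directory entry CHICAGO and the object names are illustrative): the application always references the CHICAGO three-part name, and retargeting it is a matter of changing the directory entry, not the SQL:
INSERT INTO DATALIB.ALL_SALES
  SELECT * FROM CHICAGO.SALESLIB.DAILY_SALES;
To point the same statement at a different database, change only the directory entry, for example, CHGRDBDIRE RDB(CHICAGO) RMTLOCNAME('chicago2.example.com' *IP).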
The concurrent access resolution option can have one of the following values:
Wait for outcome
This value is the default. This value directs the database manager to wait for the commit or
rollback when it encounters locked data that is being updated or deleted. Locked rows that
are being inserted are not skipped. This option does not apply for read-only queries that
are running under COMMIT(*NONE) or COMMIT(*CHG).
Use currently committed
This value allows the database manager to use the currently committed version of the data
for read-only queries when it encounters locked data being updated or deleted. Locked
rows that are being inserted can be skipped. This option applies where possible when it is
running under COMMIT(*CS) and is ignored otherwise. It is what is referred to as “Readers
do not block writers and writers do not block readers.”
The concurrent access resolution values of USE CURRENTLY COMMITTED and SKIP LOCKED DATA
can be used to improve concurrency by avoiding lock waits. However, care must be used
when you use these options because they might affect application functions.
You can specify the usage for concurrent access resolution in several ways:
By using the concurrent-access-resolution clause at the statement level for a
select-statement, SELECT INTO, searched UPDATE, or searched DELETE
By using the CONACC keyword on the CRTSQLxxx or RUNSQLSTM commands
With the CONACC value in the SET OPTION statement
In the attribute-string of a PREPARE statement
Using the CREATE or ALTER statement for a FUNCTION, PROCEDURE, or TRIGGER
If the concurrent access resolution option is not directly set by the application, it takes on the
value of the SQL_CONCURRENT_ACCESS_RESOLUTION option in the QAQQINI query
options file.
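At the statement level, the clause is appended to the query. A minimal sketch (the table and column names are illustrative):
SELECT ORDER_ID, STATUS
  FROM MYLIB.ORDERS
  WHERE STATUS = 'OPEN'
  USE CURRENTLY COMMITTED;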
CREATE statement
Specifying the CREATE OR REPLACE statement makes it easier to create an object without
having to drop it when it exists. This statement can be applied to the following objects:
ALIAS
FUNCTION
PROCEDURE
SEQUENCE
TRIGGER
VARIABLE
VIEW
To replace an object, the user must have both *OBJEXIST rights to the object and *EXECUTE
rights for the schema or library, and privileges to create the object. All existing privileges on
the replaced object are preserved.
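A minimal sketch (the names are illustrative): the second and later executions replace the view instead of failing because the object exists:
CREATE OR REPLACE VIEW MYLIB.OPEN_ORDERS AS
  SELECT ORDER_ID, CUSTID
    FROM MYLIB.ORDERS
    WHERE STATUS = 'OPEN';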
BITAND: Runs a bitwise AND operation. A result bit is 1 only if the corresponding bits in both arguments are 1.
BITANDNOT: Clears any bit in the first argument that is set in the second argument. A result bit is zero if the corresponding bit in the second argument is 1; otherwise, the result is copied from the corresponding bit in the first argument.
BITOR: Runs a bitwise OR operation. A result bit is 1 unless the corresponding bits in both arguments are zero.
BITXOR: Runs a bitwise exclusive OR operation. A result bit is 1 unless the corresponding bits in both arguments are the same.
BITNOT: Runs a bitwise NOT operation. A result bit is the opposite of the corresponding bit in the argument.
The arguments must be integer values that are represented by the data types SMALLINT,
INTEGER, BIGINT, or DECFLOAT. Arguments of type DECIMAL, REAL, or DOUBLE are
cast to DECFLOAT. The value is truncated to a whole number.
The bit manipulation functions can operate on up to 16 bits for SMALLINT, 32 bits for INTEGER, 64 bits for BIGINT, and 113 bits for DECFLOAT. The range of supported DECFLOAT values includes the integers -2^112 to 2^112 - 1; special values such as NaN (Not a Number) or INFINITY are not supported (SQLSTATE 42815). If the two arguments have
different data types, the argument that is supporting fewer bits is cast to a value with the data
type of the argument that is supporting more bits. This cast affects the bits that are set for
negative values. For example, -1 as a SMALLINT value has 16 bits set to 1, which when cast
to an INTEGER value has 32 bits set to 1.
The result of the functions with two arguments has the data type of the argument that is
highest in the data type precedence list for promotion. If either argument is DECFLOAT, the
data type of the result is DECFLOAT(34). If either argument can be null, the result can be
null. If either argument is null, the result is the null value.
The result of the BITNOT function has the same data type as the input argument, except that
DECIMAL, REAL, DOUBLE, or DECFLOAT(16) returns DECFLOAT(34). If the argument can
be null, the result can be null. If the argument is null, the result is the null value.
Use the BITXOR function to toggle bits in a value. Use the BITANDNOT function to clear bits.
BITANDNOT(val, pattern) operates more efficiently than BITAND(val, BITNOT(pattern)).
Example 5-12 is an example of the result of these operations.
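As an illustration (the values are chosen for this sketch), with 12 = binary 1100 and 10 = binary 1010:
SELECT BITAND(12, 10),    -- 8  (binary 1000)
       BITANDNOT(12, 10), -- 4  (binary 0100)
       BITOR(12, 10),     -- 14 (binary 1110)
       BITXOR(12, 10),    -- 6  (binary 0110)
       BITNOT(12)         -- -13 (two's complement)
  FROM SYSIBM.SYSDUMMY1;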
This change has the potential of improving performance for queries that perform these types of calculations. Example 5-13 shows the syntax for constructing a simple INCLUDE clause when you create such an index.
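A minimal sketch of such an index (the names are illustrative); the INCLUDE aggregate is maintained in the EVI symbol table, so a grouping query over REGION can be answered without touching the table rows:
CREATE ENCODED VECTOR INDEX MYLIB.SALES_EVI
  ON MYLIB.SALES (REGION)
  INCLUDE (SUM(AMOUNT));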
This enhancement is the second installment in extending DB2 for i on 7.1 to use implicit or
explicit remote three-part names within SQL.
Example 5-14 declares the global temporary table from a remote subselect, which is followed
by the insert.
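A minimal sketch of the same pattern (the CHICAGO three-part names are illustrative):
DECLARE GLOBAL TEMPORARY TABLE TEMP_SALES AS
  (SELECT * FROM CHICAGO.SALESLIB.DAILY_SALES)
  WITH NO DATA WITH REPLACE;

INSERT INTO SESSION.TEMP_SALES
  SELECT * FROM CHICAGO.SALESLIB.DAILY_SALES;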
Figure 5-3 displays the output that is generated from Example 5-14 on page 181.
Figure 5-3 Clearly showing the result was from a remote subselect
ISVs can use this support to prevent their customers from seeing or changing SQL routines
that are delivered as part of their solution.
Figure 5-5 Obfuscate (for SQL function and procedure objects) check box
This enhancement is the third installment for extending DB2 for i on 7.1 to use implicit or
explicit remote three-part names within SQL.
Example 5-16 Create a table in the local database that references a remote database with the AS
clause
CREATE TABLE DATALIB.MY_TEMP_TABLE AS
  (SELECT CURRENT_SERVER CONCAT ' is the Server Name', IBMREQD
     FROM X1423P2.SYSIBM.SYSDUMMY1)
  WITH DATA
Running the example SQL produces the output that shows that a remote table was accessed,
as shown in Figure 5-7.
Figure 5-7 Output from the SQL showing that the remote table was accessed
CREATE TABLE AS is enhanced to store the originating column and table as the reference
information in the file object.
When using LIKE to copy columns from another table, REFFLD information is copied for each
column that has a REFFLD in the original table.
When using AS to define a new column, any column that directly references a table or view column (that is, one that is not used in an expression) has a REFFLD defined that refers to that column. A simple CAST also generates REFFLD information (for example, CAST(PARTNAME AS VARCHAR(50))).
System i Navigator and the QSQGNDDL() API are enhanced to include the qualified name option, making it easier to redeploy generated SQL. Any three-part names (object and column) are left unchanged. Any schema qualification within the object that does not match the database object library name is left unchanged.
The qualified name option specifies whether qualified or unqualified names should be
generated for the specified database object. The valid values are:
‘0’ Qualified object names should be generated. Unqualified names
within the body of SQL routines remain unqualified (by default).
‘1’ Unqualified object names should be generated when a library is found
that matches the database object library name. Any SQL object or
column reference that is RDB qualified is generated in its fully qualified
form. For example, rdb-name.schema-name.table-name and
rdb-name.schema-name.table-name.column-name references retain
their full qualification. This option also appears on the Generate SQL
dialog within System i Navigator, as shown in Figure 5-9 on page 187.
The default behavior is to continue to generate SQL with schema
qualification.
This enhancement makes it easier to proceed with DDS to SQL DDL modernization.
The following examples provide samples of the new generate SQL option for modernization.
There is a generate additional indexes option, which specifies whether additional CREATE INDEX statements are generated for DDS-created keyed physical, keyed logical, or join logical files.
The resulting CREATE VIEW statement after using the generate SQL for modernization option is shown in Example 5-18.
Example 5-18 Resulting CREATE VIEW statement after using Generate SQL for modernization option
CREATE VIEW MJATST.GVJ (
F1_5A , F2_5A )
AS
SELECT
Q01.F1_5A , Q02.F2_5A
FROM MJATST.GT AS Q01 INNER JOIN
MJATST.GT2 AS Q02 ON ( Q01.F1_5A = Q02.F1_5A )
RCDFMT FMT;
There is also a generate index instead of view option, which specifies whether a CREATE
INDEX or CREATE VIEW statement is generated for a DDS-created keyed logical file.
Example 5-19 shows the DDS created keyed logical file.
This setting means that the number of records is set to the number of records that fit into a 32 KB, 64 KB, 128 KB, or 256 KB buffer.
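For example (a sketch; the file name is illustrative, and the buffer special value is an assumption based on this enhancement), a batch job that reads the file sequentially can request the largest blocking:
OVRDBF FILE(SALESHST) SEQONLY(*YES *BUF256KB)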
Runtime performance is affected by many issues, such as the database design (the entity-relationship model, which is a conceptual schema or semantic data model of a relational database), the redundancy between functional environments in a composite application environment, the level of normalization, and the size and volumes processed. All of these items influence the run time, throughput, or response time, which is supported by the IT components and is defined by the needs of the business. Performance optimization for database access must address all the components that are used to obtain acceptable and sustainable results, covering the functional aspects and the technical components that support them.
This section describes the query optimization method. It describes what is behind the changes that are implemented in the database management components to relieve the burden that is associated with the tools and processes a database administrator uses or follows to meet the non-functional requirements for performance and scalability. These changes include the following ones:
Global Statistics Cache (GSC)
Adaptive Query Processing
Sparse indexes
Encoded vector index-only access, symbol table scan, symbol table probe, and
INCLUDE aggregates
Keeping tables or indexes in memory
In today’s business world, the dynamics of a business environment demand quick adaptation
to changes. You might face issues by using a too generic approach in using these facilities.
Consider that you made the architectural decision for a new application to use a stateless
runtime environment and that your detailed component model has the infrastructure for it. If the business processes it supports change and require a more stateful design, you might have to revisit that decision.
When you define components for a database support, develop a methodology and use
preferred practices to obtain the best results. Any methodology must be consistent,
acceptable, measurable, and sustainable. You want to stay away from ad hoc measures or
simple bypasses.
IBM i provides statistics about I/O operations on tables and indexes, supplied by the database management function. These statistics show accumulated values, from which you can derive averages. However, they do not take into account the variability and the dynamic nature of the business functions that these objects support. So, if you want to use these statistics to decide which objects to place either in memory or on faster disks, you must consider a larger scope.
For example, since the introduction of solid-state drives (SSDs), which have low latency, the IBM i storage manager is aware of this technology and uses it as appropriate. Since release 6.1, you can specify the media preference on the CREATE TABLE/INDEX and ALTER TABLE/INDEX statements, along with the DECLARE GLOBAL TEMPORARY TABLE statement (see 5.3.9, “SQE optimization for indexes on SSD” on page 196). The SYSTABLESTAT and SYSINDEXSTAT catalog tables provide more I/O statistics (SEQUENTIAL_READS and RANDOM_READS) in release 7.1 on these objects. These statistics, which are generated by the database manager, indicate only possible candidates to be housed on SSD hardware. Further investigation of the run time and the contribution to the performance and capacity of the infrastructure reveals whether they are eligible for those settings.
For more information about SSDs, see Chapter 8, “Storage and solid-state drives” on
page 373.
Finally, as a last resort, you can use the QSYS2.CANCEL_SQL stored procedure to cancel long-running SQL jobs.
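The procedure takes a qualified job name (a sketch; the job name is illustrative):
CALL QSYS2.CANCEL_SQL('123456/QUSER/QZDASOINIT')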
Even with all the technologies that are used, the access plans might still yield an incorrect
(that is, not obeying the rule of capping the cost) result. This situation can, for example, be the
result of not having an index to navigate correctly through the data. For that reason, IBM i
supports the technology to create temporary indexes autonomically; these indexes persist until the system undergoes an IPL. Such an index can be used by any query that might benefit from its existence. These autonomously created indexes can be viewed, and they carry information that a database administrator can use when deciding whether to create a permanent index.
Other elements that can contribute to incorrect access plans are as follows:
Inclusion of complex or derived predicates, which are hard to predict without running the query
The existence of stale statistics on busy systems
Hidden correlations in the data, often because of a poor design, data skew, and data volatility
Changes in the business or infrastructure environment
In the last case, this situation is more likely to happen with variations in both memory and
processor allocations on partitioned systems, which are reconfigured using dynamic
partitioning. It can also be caused when the data is changed frequently in bulk.
If you want to read more about the database query engine, see Preparing for and Tuning the
SQL Query Engine on DB2 for i5/OS, SG24-6598.
To reduce this labor-intensive work, the DB2 Statistics Manager was revised. By default, it
now collects data about observed statistics in the database and from partially or fully
completed queries. This data is stored in the Global Statistics Cache (GSC), which is a
system-wide repository, containing those complex statistics. The adaptive query processing
(AQP) (see 5.3.4, “Adaptive query processing” on page 192) inspects the results of queries
and compares the estimated row counts with the actual row counts. All of the queries that are
processed by the SQL Query Engine (SQE) use this information to increase overall efficiency.
One of the typical actions the SQE can take is to use the live statistics in the GSC, compare
the estimated row count with the actual row count, and reoptimize and restart the query using
the new query plan. Furthermore, if another query asks for the same or a similar row count,
the Statistics Manager (SM) can return the stored actual row count from the GSC. This action
allows the query optimizer to generate query plans faster.
Typically, observed statistics are for complex predicates, such as a join. A simple example is a
query that joins three files, A, B, and C. There is a discrepancy between the estimate and
actual row count of the join of A and B. The SM stores an observed statistic into the GSC.
Later, if a join query of A, B, and Z is submitted, SM recalls the observed statistic of the A and
B join. The SM considers that observed statistic in its estimate of the A, B, and Z join.
The GSC is an internal DB2 object, and the contents of it are not directly observable. You can
harvest the I/O statistics in the database catalog tables SYSTABLESTAT and
SYSINDEXSTAT or by looking at the I/O statistics using the Display File Description (DSPFD)
command. This command reports only a limited set of I/O counters. Both counters
(catalog tables and the object description) are reset at IPL time.
When the query compiler optimizes the query plans, its decisions are heavily influenced by
statistical information about the size of the database tables, indexes, and statistical views.
The optimizer also uses information about the distribution of data in specific columns of
tables, indexes, and statistical views if these columns are used to select rows or join tables.
The optimizer uses this information to estimate the costs of alternative access plans for each
query.
In IBM i 7.1, the SQE query engine uses a technique called adaptive query processing (AQP).
AQP analyzes actual query runtime statistics and uses that information to correct previous
estimates. These updated estimates can provide better information for subsequent
optimizations. AQP also focuses on optimizing join statements to improve join orders and to
minimize the creation of large dials for sparsely populated join results. This inspection is
done while a query request runs and observes its progress. A task called the AQP Handler
wakes up after a query runs for at least 2 seconds without returning any rows. Its mission is
to analyze the actual statistics from the partial query run, diagnose join order problems, and
possibly recover from them. These join order problems are caused by inaccurate statistical
estimates.
After a query completes, another task, the AQP Request Support, starts and runs in a system
task so that it does not affect the performance of user applications. Estimated record counts
are compared to the actual values. If significant discrepancies are noted, the AQP Request
Support stores the observed statistic in the GSC. The AQP Request Support might also make
specific recommendations for improving the query plan the next time the query runs.
Both tasks collect enough information to reoptimize the query using partially observed
statistics or specific join order recommendations or both. If this optimization results in a new
plan, the old plan is stopped and the query is restarted with the new plan, provided that the
query has not returned any results. The restart can be done for long running queries during
the run time itself.
AQP looks for an unexpected starvation join condition when it analyzes join performance.
Starvation join is a condition where a table late in the join order eliminates many records from
the result set. In general, the query can run better if the table that eliminates the large number
of rows is first in the join order. When AQP identifies a table that causes an unexpected
starvation join condition, the table is noted as the forced primary table. The forced primary
table is saved for a subsequent optimization of the query. That optimization with the forced
primary recommendation can be used in two ways:
The forced primary table is placed first in the join order, overriding the join order that is
implied by the statistical estimates. The rest of the join order is defined by using existing
techniques.
The forced primary table can be used for LPG preselection against a large fact table in the
join.
Figure 5-10 provides a sample of how a join can be optimized. The estimated return of rows
on table C proved to be much smaller during the execution of the query, forcing the SQE to
recalculate the number of rows that are returned and dramatically reduced the size of the
result set.
Figure 5-10 AQP join optimization: the original and reoptimized nested loop join plans for tables A, B, and C
The reason for creating a sparse index is to provide performance enhancements for your
queries. The performance enhancement is done by precomputing and storing results of the
WHERE selection in the sparse index. The database engine can use these results instead of
recomputing them for a user-specified query. The query optimizer looks for any applicable
sparse index and can choose to implement the query by using a sparse index. The decision is
based on whether using a sparse index is a faster implementation choice.
Besides the comparison of the WHERE selection, the optimization of a sparse index is identical
to the optimization that is run for any Binary Radix index.
Example 5-21 shows creating a sparse index over a table in which events are stored. These
events can be of four types:
On-stage shows (type OSS)
Movies (type MOV)
Broadcasts (BRO)
Forums (FOR)
In the first index, select type OSS, MOV, and BRO, and in the second index, all of the types. In
the first index, the query selection is a subset of the sparse index selection and an index scan
over the sparse index is used. The remaining query selection (EVTYPE=FOR) is run following
the index scan. For the second index, the query selection is not a subset of the sparse index
selection and the sparse index cannot be used.
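Because the content of Example 5-21 is not reproduced here, the following is a minimal
sketch of what the two sparse indexes could look like. It assumes a table named EVENTS
with an event-type column EVTYPE; the library EVTLIB and the key column EVTITLE are
illustrative names, not from the original example. The WHERE clause is what makes each
index sparse.

-- Sparse index 1: contains only entries for three of the four event types
CREATE INDEX evtlib.events_sparse_ix1
    ON evtlib.events (evtitle)
    WHERE evtype IN ('OSS', 'MOV', 'BRO');

-- Sparse index 2: contains entries for all four event types
CREATE INDEX evtlib.events_sparse_ix2
    ON evtlib.events (evtitle)
    WHERE evtype IN ('OSS', 'MOV', 'BRO', 'FOR');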
These two methods can be used with GROUP BY or DISTINCT queries that can be satisfied by
the symbol table. This symbol table-only access can be further employed in aggregate
queries by adding INCLUDE values to the encoded vector index.
Selection is applied to every entry in the symbol table. The selection must be applied to the
symbol table keys unless the EVI was created as a sparse index with a WHERE clause. In that
case, a portion of the selection is applied as the symbol table is built and maintained. The
query request must include matching predicates to use the sparse EVI.
For grouping queries where the resulting number of groups is relatively small compared to
the number of records in the underlying table, the performance improvement is high. By
contrast, the technique can perform poorly when many groups are involved, because the
symbol table becomes large. You are likely to experience poor performance if a large portion
of the symbol table is put into the overflow area. Alternatively, you experience a significant
performance improvement for grouping queries when the aggregate is specified as an
INCLUDE value of the symbol table.
INCLUDE aggregates
To enhance the ability of the EVI symbol table to provide aggregate answers, the symbol table
can be created to contain more INCLUDE values. These results are ready-made numeric
aggregate results, such as SUM, COUNT, AVG, or VARIANCE values that are requested over
non-key data. These aggregates are specified using the INCLUDE keyword on the CREATE
ENCODED VECTOR INDEX request.
These included aggregates are maintained in real time as rows are inserted, updated, or
deleted from the corresponding table. The symbol table maintains these additional aggregate
values in addition to the EVI keys for each symbol table entry. Because these results are
numeric results and finite in size, the symbol table is still a desirable compact size.
The included aggregates are over non-key columns in the table where the grouping is over
the corresponding EVI symbol table defined keys. The aggregate can be over a single column
or a derivation.
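As a hedged sketch of the syntax, an EVI with an INCLUDE aggregate might be created as
follows. The SALESLIB library, the SALES table, its REGION key, and the AMOUNT column
are illustrative names, not from the original text.

CREATE ENCODED VECTOR INDEX saleslib.sales_evi
    ON saleslib.sales (region)
    INCLUDE (SUM(amount));

A grouping query such as SELECT region, SUM(amount) FROM saleslib.sales GROUP BY
region could then be satisfied from the symbol table alone.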
The optimizer attempts to match the columns that are used for the selection against the
leading keys of the EVI index. It then rewrites the selection into a series of ranges that can be
used to probe directly into the symbol table. Only those symbol table pages from the series of
ranges are paged into main memory. The resulting symbol table entries that are generated by
the probe operation can then be further processed by any remaining selection against EVI
keys. This strategy provides for quick access to only the entries of the symbol table that
satisfy the selection.
This enhancement allows encoded vector indexes on the table being altered to be preserved
if the data type or other attribute of a key column of the index is not changed by the ALTER.
This function applies only during the run time of a query, and it can therefore substitute for
the Set Object Access (SETOBJACC) command, which places the table or index in memory
statically. After the query completes, the memory can be freed again, unlike with
SETOBJACC, where you must clear the object by using the *PURGE option on the Storage Pool
(POOL) parameter of the command.
Similarly, the DB2 database manager reduces the amount of storage that is occupied by a
table that does not contain any data. This reduces the storage space that is needed for
unused objects. This situation is also referred to as deflated table support.
Indexes must have the SSD attribute specified through the UNIT(*SSD) parameter on the
Create Logical File (CRTLF) or Change Logical File (CHGLF) CL commands, or by using the
UNIT SSD clause on the SQL CREATE INDEX statement. For more information, see 5.4.11,
“CHGPFM and CHGLFM UNIT support” on page 221.
The QSYS2.INDEX_ADVICE procedure also has options to return the index advice as a result
set, either in raw advice format or in condensed format. When the job ends or disconnects,
the objects in QTEMP are automatically removed.
When the procedure is called with advice_option=0, the index advice level of the target file is
determined. If the advice file originated from an IBM i 5.4 or 6.1 system, the file is altered to
match the 7.1 advice format. This alteration is a one time conversion of the advice file. After
this is established, the user can query QTEMP.CONDENSEDINDEXADVICE to condense the index
advice against the target index advice file.
If any of the OR'ed indexes are missing, the optimizer is not able to use the indexes for
implementation of the OR-based query. This relationship is surfaced within the
QSYS2/SYSIXADV index advice table within a new DEPENDENT_ADVICE_COUNT column.
This column has a data type of BIGINT, and the column value means the following:
Zero: This advised index stands on its own; there is no OR selection.
Greater than zero: Compare this column against the TIMES_ADVISED column to
understand how often this advised index has both OR and non-OR selection. Dependent
implies that it depends on other advised indexes and that all of the advised indexes must
exist for a bitmap implementation to be used.
When Index Advisor shows highly dependent advice, use the Exact Match capability from
Show Statements to find the query in the plan cache. Additional information about the exact
match capability can be found on the following website:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#/wiki/IBM%20i%20Technology%20Updates/page/Index%20Advisor%20-%20Show%20Statements%20-%20improved%20query%20identification
After it is found, use Visual Explain to discover the dependent index advice specific to that
query. Some restrictions with this support are as follows:
OR'ed predicate advice appears only if no other advice is generated.
A maximum of five predicates can be OR'ed together.
Advice is generated for files with OR'd local selection that is costed in the primary (first)
join dial when optimizing a join query.
Figure 5-12 shows the execution of the query with the advised indexes in place; no new
advice is registered.
Before this enhancement, SKIP LOCKED DATA was allowed only when the isolation level was
CS or RS.
To achieve the improved code generation, SQL procedures, functions, and trigger routines
must be re-created after you upgrade the operating system to IBM i 7.1.
This improvement applies to the following usage of the SQL SET statement:
SET v1 = v1 + <int lit>, where v1 is a SMALLINT, INT, or BIGINT
SET v1 = v1 - <int lit>, where v1 is a SMALLINT, INT, or BIGINT
The following statements will generate inline ILE C code:
SET v1 = v1 + <integer literal>
SET v1 = v1 + <bigint literal>
SET v1 = v1 + <negative integer literal>
SET v1 = <any literal> + v1
SET v1 = <any literal> +/- <any literal>
Job termination is improved to signal an SQL Cancel request to any QSQSRVR jobs that are
being used by the application. The cancellation interrupts some long-running operations,
allowing the QSQSRVR job to observe that the application is ending.
SQL Server Mode users need to apply only the PTF to receive the improved cancel handling
support.
FIELDPROC_ENCODED_COMPARISON (For more information, see 5.2.7, “FIELDPROC
support for encoding and encryption” on page 174.) Specifies the amount of optimization that
the optimizer might use when queried columns have attached field procedures.
MEMORY_POOL_PREFERENCE Specifies the preferred memory pool that database
operations use. This option does not ensure usage of the specified pool, but directs the
database to run its paging into this pool when supported by the database operation.
PSEUDO_OPEN_CHECK_HOST_VARS This parameter can be used to allow SQE to check the selectivity
of the host variable values at pseudo-open time. If the new set of
host variable values requires a different plan to perform well, SQE
reoptimizes the query. The possible values are:
*DEFAULT: The default value is *NO.
*NO: Do not check host variable selectivity at pseudo-open
time. This behavior is compatible with the previous behavior.
*OPTIMIZE: The optimizer determines when host variable
selectivity should be checked. In general, the SQE engine
monitors the query. If, after a certain number of runs, the engine
determines that there is no advantage to checking host
variable values (the selectivity is not changing enough or
selectivity changes result in the same plan), the optimizer will
stop checking for host variable selectivity changes at
pseudo-open time. Full opens do the normal plan validation.
*YES: Always check host variable selectivity at pseudo-open
time.
If the REOPTIMIZE_ACCESS_PLAN QAQQINI option is set to
*ONLY_REQUIRED, the PSEUDO_OPEN_CHECK_HOST_VARS option has no
effect.
SQL_CONCURRENT_ACCESS_RESOLUTION (For more information, see “Concurrent
access resolution” on page 178.) Specifies the concurrent access resolution to use for an
SQL query.
SQL_XML_DATA_CCSID (For more information, see “XML data type” on page 161.)
Specifies the CCSID to be used for XML columns, host variables, parameter markers, and
expressions, if not explicitly specified.
TEXT_SEARCH_DEFAULT_TIMEZONE Specifies the time zone to apply to any date or dateTime value that
is specified in an XML text search using the CONTAINS or SCORE
function. The time zone is the offset from Coordinated Universal
Time (Greenwich mean time). It is only applicable when a specific
time zone is not given for the value.
SQL_GVAR_BUILD_RULE Influences whether global variables must exist when you build SQL
procedures, functions, triggers, or run SQL precompiles. For more
information, see 5.4.42, “New QAQQINI option:
SQL_GVAR_BUILD_RULE” on page 252.
If you have many SQL routines whose names begin with the same first five characters, the
creation of the routines is slowed down by name conflicts and by rebuild attempts that
determine whether a system name has already been used.
The QGENOBJNAM data area can be used to control the system name that is generated by
DB2 for i for SQL routines. Through use of the data area, the performance of the SQL routine
creation can be greatly improved.
To be effective, the data area must be created as CHAR(10) and must be within a library that
is in the library list.
The user that creates the routine must have *USE authority to the data area.
When the PROGRAM NAME clause is used on CREATE TRIGGER to specify the system name of the
program, the data area has no effect on the operation.
In Example 5-24, MNAME123 is always used for the system name of the trigger program.
Example 5-24 Using the system name of the program in CREATE TRIGGER command
create trigger newlib/longname_trig123 after insert on newlib/longname_table123
program name mname123 begin end
Example 5-25 Automatically assigned system program names according to the value of QGENOBJNAM
create schema newlib;
cl: CRTDTAARA DTAARA(NEWLIB/QGENOBJNAM) TYPE(*CHAR) LEN(10) ;
cl: CHGDTAARA DTAARA(NEWLIB/QGENOBJNAM *ALL) VALUE('?????50000');
create procedure newlib.longname_proc123_srv () PROGRAM TYPE SUB language sql begin end;
create procedure newlib.longname_proc123_srva () PROGRAM TYPE SUB language sql begin end;
create procedure newlib.longname_proc123_srvb () PROGRAM TYPE SUB language sql begin end;
create function newlib.longname_func123() returns int language sql begin return(10); end;
create function newlib.longname_func123a() returns int language sql begin return(10); end;
create function newlib.longname_func123b() returns int language sql begin return(10); end;
PROGRAM TYPE SUB procedures perform better because ILE service programs are activated a
single time per activation group, whereas ILE programs are activated on every call. The cost
of an ILE activation is related to the procedure size, complexity, number of parameters,
number of variables, and the size of the parameters and variables.
The only functional difference to be noted when you use PROGRAM TYPE SUB is that the
QSYS2.SYSROUTINES catalog entry for the EXTERNAL_NAME column is formatted to
show an export name along with the service program name.
The default threshold for *DUMMY cursors is 150, but it can be configured to a higher
threshold through the QSQCSRTH data area.
*DUMMY cursors exist when unique SQL statements are prepared using a statement name
that is not unique. The SQL cursor name is changed to '*DUMMY' to allow the possibility of
the cursor being reused in the future.
Prepared SQL statements are maintained within a thread scoped internal data structure that
is called the Prepared Statement Area (PSA). This structure is managed by the database and
can be compressed. The initial threshold of the PSA is small and gradually grows through
use. For an application with heavy *DUMMY cursor use, you observe *DUMMY cursors being
hard closed at each PSA compression.
This type of application gains little value from the PSA compression and must endure the
performance penalty of its *DUMMY cursors being hard closed.
A new data area control is being provided for this type of user. QSQBIGPSA indicates that the
application wants to start with a large size for the PSA threshold. By using this option, the
application skips all the PSA compressions that it takes to reach a large PSA capacity. Use
this control with care, as PSA compression has value for most SQL users.
One way to determine the value of this data area for an application is to use the Database
Monitor and look for occurrences of QQRID=1000 & QQC21='HC' & QQC15='N'. To use this
control, the QSQBIGPSA data area must exist within the library list for a job when the first
SQL PREPARE statement is run. The data area merely needs to exist; it does not need to be set
to any value.
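Because only the data area's existence is checked, enabling the control can be as simple as
the following sketch. MYLIB is a placeholder for a library in the job's library list, and the
TYPE and LEN values are arbitrary because the content of the data area is not inspected.

CRTDTAARA DTAARA(MYLIB/QSQBIGPSA) TYPE(*CHAR) LEN(10)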
This operation can be a long running one. CHECK(*NO) enables the constraint without
checking. If the data is not checked when the constraint is enabled, it is the responsibility of
the user to ensure that the data in the file is valid for the constraint.
Before Version 7.1, a data area can be created to enable a constraint without checking. When
Change PF Constraint (CHGPFCST) is run, DB2 searches for a data area in QTEMP called
QDB_CHGPFCST. If the data area is found and its length is exactly nine characters and
contains the value 'UNCHECKED', DB2 enables the constraint without validation.
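A minimal sketch of that technique follows; the MYLIB library, MYFILE file, and MYCST
constraint names are placeholders. The data area must be exactly nine characters long and
contain 'UNCHECKED', as described above.

CRTDTAARA DTAARA(QTEMP/QDB_CHGPFCST) TYPE(*CHAR) LEN(9) VALUE('UNCHECKED')
CHGPFCST FILE(MYLIB/MYFILE) CST(MYCST) STATE(*ENABLED)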
Before this enhancement, a significant amount of processing was run during the cancel to
allow the Reorganize Physical File Member (RGZPFM) to be restarted later and to return as
much storage to the system as possible.
With this enhancement, the amount of processing that is run at cancel time is minimized,
allowing the Reorganize Physical File Member (RGZPFM) to be canceled in a reasonable
amount of time. The processing that is bypassed is run later when the Reorganize Physical
File Member (RGZPFM) is restarted.
Note: The *PRVRGZ value is ignored if the reorganize is continued from a previously
canceled reorganize. If *PRVRGZ is specified, ALWCANCEL(*YES) must be specified
and either KEYFILE(*RPLDLTRCD) or KEYFILE(*NONE) must be specified.
5.3.27 QJOSJRNE API option to force journal entries without sending an entry
This enhancement provides a new option to force the journal receiver without sending an
entry. If key 4 (FORCE) has a value of 2, the journal receiver is forced without sending an
entry. If option 2 is specified, then key 4 must be the only key specified and the length of the
entry data must be zero.
A force journal entry is an entry where the journal receiver is forced to auxiliary storage after
the user entry is written to it. Possible values are:
0 The journal receiver is not forced to the auxiliary storage. This value is
the default value if the key is not specified.
1 The journal receiver is forced to the auxiliary storage.
2 The journal receiver is forced to the auxiliary storage, but no journal
entry is sent. When this value is specified, key 4 can be the only key
specified and zero must be specified for the length of entry data.
Specifying any other keys or a value other than zero for the length of
entry data results in an error.
The QDBRTVSN() API now finds the short name in most cases without enqueuing a request
to the database cross-reference.
The Override with Data Base File (OVRDBF) command can be used to tune sequential
read-only and write-only applications. A specific byte count can be supplied, or the
*BUF32KB, *BUF64KB, *BUF128KB, *BUF256KB special values can be specified.
Example 5-26 shows overriding a table to use 256K blocking for sequential processing.
Example 5-26 Overriding a table to use 256K blocking for sequential processing
CALL QSYS2.OVERRIDE_TABLE('CORPDATA', 'EMP', '*BUF256KB');
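The equivalent CL override, using one of the special values, might look like the following
sketch. It assumes the EMP file from Example 5-26; verify the exact SEQONLY element
syntax against the OVRDBF command documentation.

OVRDBF FILE(EMP) SEQONLY(*YES *BUF256KB)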
Before the JTOpen 7.9 version of the Toolbox JDBC driver, the DB2 engine fetched rows in
blocks for the TYPE_FORWARD_ONLY and TYPE_SCROLL_INSENSITIVE types only when
asensitive was specified for the cursor sensitivity connection property.
This enhancement in JTOpen 7.9 allows the Toolbox JDBC driver to use block fetches with
the TYPE_SCROLL_SENSITIVE ResultSet type.
The following is a comparison of the different JDBC cursor ResultSet type settings:
TYPE_FORWARD_ONLY: Result set can be read only in the forward direction.
TYPE_SCROLL_INSENSITIVE: Defines the result set as scrollable, which allows data to be
read from the cursor in any order. The insensitive result set type indicates that recent
changes to the rows in the underlying tables should not be visible as the query is
executed. The DB2 engine often ensures the insensitive nature of the result set by making
a copy of the data before it is provided to the JDBC client. Making a copy of the data can
affect performance.
The cursor sensitivity setting of asensitive allows DB2 to choose the best performing method
when implementing the specified cursor definition. The resulting cursor implementation is
either sensitive or insensitive.
In the JTOpen 7.9 version of the toolbox JDBC driver, rows for asensitive cursors are fetched
in blocks regardless of the value that is specified for the cursor ResultSet type. This
enhancement ensures that when the cursor sensitivity setting of asensitive is specified, both
the DB2 engine and the toolbox JDBC driver can use implementations that deliver the best
performance.
JTOpen Lite: JTOpen Lite does not support scrollable cursors, so this enhancement does
not apply to JTOpen Lite applications.
// Create a scrollable, read-only statement. With JTOpen 7.9, asensitive
// cursors are fetched in blocks for TYPE_SCROLL_SENSITIVE result sets too.
Statement s = connection.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
        ResultSet.CONCUR_READ_ONLY);
Using this JTOpen enhancement, IBM i Navigator and Navigator for i performance was
improved when working with large data sets within the On Demand Performance Center:
Data is blocked when the client communicates with the IBM i host.
Ordering of data occurs on the host instead of on the client.
Object lists within a schema are also improved.
The new and existing fields contain the total number of times the specific operation occurred
within the job during the Collection Services time interval.
The call in Example 5-30 finds indexes that were created by ACT_ON_INDEX_ADVICE and
that are at least 7 days old, and drops any such index that was used fewer than 500 times by
the query engine.
5.3.33 Improved SQE statistics for INSERT, UPDATE, and DELETE statements
The SQL Query Engine statistics processing now includes a proactive response to data
changes as they happen to a database file, rather than just when the file is queried by SQE.
The query engine checks for stale statistics during file inserts, updates, or deletes, including
INSERT, UPDATE, or DELETE SQL statements.
When stale statistics are detected, a background statistics refresh is initiated, and statistics
are refreshed before subsequent query processing, avoiding performance degradation that
might occur because of stale statistics being used during query optimization.
This improvement is most beneficial in batched data change environments, such as a data
warehouse, where many data change operations occur at one time and are followed by the
execution of performance critical SQL queries.
These counts are also zeroed by the Change Object Description (CHGOBJD) command, but that
command requires an exclusive lock. This procedure does not require an exclusive lock.
The procedure writes information that is related to any index processed into an SQL global
temporary table.
The following query displays the results of the last call to the procedure:
select * from session.SQL_index_reset;
In Example 5-31, a call is made to zero the statistics for all indexes over a single table,
followed by a call to zero the statistics for all indexes over any table whose name starts with
CAT, using the wildcard %.
-- Zero the statistics for all indexes over any table in schema STATST
-- whose name starts with the letters CAT
call QSYS2.Reset_Table_Index_Statistics('STATST', 'CAT%');
Before this enhancement, the more row locks that were acquired on a table, the slower each
additional row lock was acquired.
The Display Job (DSPJOB) command allows you to return the locks that are held by a job. If
more records are held than can be displayed, a CPF9898 message is sent that indicates the
number of record locks that are held by the job.
When a job holds more than 100,000 record locks, these commands run for a long time
before they fail. The enhancement quickly recognizes the existence of a great number of
record locks and returns the record lock count.
Starting the Investigate Performance Data action from System i Navigator or IBM Navigator
for i displays the new graphical interface for SQL Performance monitors, as shown in
Figure 5-15.
Figure 5-15 Statement Summary chart (legend: Call, Select, Update, Insert, Delete, Data Definition, and Other Statements)
To see a breakdown of I/O activity by program name, analyze an SQL performance monitor
and select the program summary. Then, look for the following new columns:
Synchronous Database Reads
Synchronous Database Writes
Asynchronous Database Reads
Asynchronous Database Writes
An advanced form of the SQL Plan cache statements filter is populated by IBM i Navigator, as
shown in Figure 5-20.
Figure 5-20 An advanced form of the SQL Plan Cache Statements filter populated by IBM i Navigator
Also, the SQL Plan Cache is improved to recognize the cases where the temporary table is
reused with an identical table format. The plan cache plan and statistics management is
improved to retain and reuse plans for temporary tables.
5.4.1 QSYS2.SYSCOLUMNS2
QSYS2.SYSCOLUMNS2 is a view that is based on a table function and that returns additional
information that is not available in SYSCOLUMNS (such as the allocated length of a
varying-length column). Because it is based on a table function, it typically returns results
faster if a specific table is specified when querying it.
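For example, the following sketch retrieves the column detail for a single table; MYLIB and
MYTABLE are placeholder names. The same filtering pattern is used against
SYSCOLUMNS2 later in this chapter.

SELECT * FROM QSYS2.SYSCOLUMNS2
    WHERE TABLE_SCHEMA = 'MYLIB' AND TABLE_NAME = 'MYTABLE'
    ORDER BY ORDINAL_POSITION;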
Example 5-32 shows the return allocation information for DB2 tables and physical files
in MJATST.
Example 5-32 Return allocation information for DB2 tables and physical files in MJATST
SELECT MAX(table_schema) AS table_schema, MAX(table_name) AS table_name,
MAX(table_partition) AS table_partition,
SUM(CASE WHEN unit_type = 1 THEN unit_space_used ELSE null END) AS ssd_space,
SUM(CASE WHEN unit_type = 0 THEN unit_space_used ELSE null END) AS non_ssd_space
FROM qsys2.syspartitiondisk a
WHERE system_table_schema = 'MJATST'
GROUP BY a.table_schema, a.table_name, table_partition
ORDER BY 1,2,3;
Example 5-33 shows the return allocation information for DB2 indexes (keyed files, constraint,
and SQL indexes) in MJATST.
Example 5-33 Return allocation information for DB2 indexes (keyed files, constraint, and SQL indexes) in MJATST
SELECT index_schema, index_name, index_member, index_type,
SUM(CASE unit_type WHEN 1 THEN unit_space_used ELSE 0 END)/COUNT(*) AS ssd_space,
SUM(CASE unit_type WHEN 0 THEN unit_space_used ELSE 0 END)/COUNT(*) AS nonssd_space
FROM qsys2.syspartitionindexdisk b
WHERE system_table_schema = 'MJATST'
GROUP BY index_schema, index_name, index_member, index_type;
Where:
library-name is a character or graphic string expression that identifies the name of a
library. It can be either a long or short library name.
object-type-list is a character or graphic string expression that contains one or more
system object types separated by either a blank or a comma. The object types can include
or exclude the leading * character. For example, either FILE or *FILE can be specified.
The result of the function is a table that contains a row for each object with the format shown
in Table 5-4. All the columns are nullable.
DAYS_USED_COUNT INTEGER Number of days an object has been used on the system.
LAST_RESET_TIMESTAMP TIMESTAMP Date when the days used count was last reset to zero.
IASP_NUMBER SMALLINT Auxiliary storage pool (ASP) where storage is allocated for the
object.
This field applies only if the SQL statement is dynamic (QQC12= 'D').
If the system trigger runs the SIGNAL statement and sends an escape message to its caller,
the SQL INSERT, UPDATE, or DELETE statement fails with MSGSQL0438 (SQLCODE=-438) instead
of MSGSQL0443.
The SQLSTATE, MSG, and other values within the SQL diagnostics area or SQLCA contain the
values that are passed into the SIGNAL statement.
The website contains recommendations for native trigger programs. Here is an example:
The SIGNAL SQL statement provides the SQL linkage between the native trigger and the
application that causes the trigger to be fired by using SQL.
For more information, see the IBM i 7.1 Knowledge Center at the following websites:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=/sqlp/rbafyrecursivequeries.htm
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=/db2/rbafzhierquery.htm
SELECT LEVEL,
CAST(SPACE((LEVEL - 1) * 4) || '/' || DEPTNAME AS VARCHAR(40)) AS DEPTNAME
FROM DEPARTMENT
START WITH DEPTNO = 'A00'
CONNECT BY NOCYCLE PRIOR DEPTNO = ADMRDEPT
LEVEL DEPTNAME
1 /SPIFFY COMPUTER SERVICE DIV.
2 /SUPPORT SERVICES
3 /BRANCH OFFICE J2
3 /BRANCH OFFICE I2
3 /BRANCH OFFICE H2
3 /BRANCH OFFICE G2
3 /BRANCH OFFICE F2
3 /SOFTWARE SUPPORT
3 /OPERATIONS
2 /DEVELOPMENT CENTER
3 /ADMINISTRATION SYSTEMS
3 /MANUFACTURING SYSTEMS
2 /INFORMATION CENTER
2 /PLANNING
2 /SPIFFY COMPUTER SERVICE DIV.
Figure 5-23 Result of hierarchical query
The LAND, LOR, XOR, and TRANSLATE scalar functions were enhanced by removing similar
restrictions.
Example 5-37 shows the usage of PROGRAM NAME in the CREATE TRIGGER definition. If
a program name is not specified, then the system generates one, such as TR1_U00001 or
TR1_U00002.
5.4.9 Debug step supported for SQL procedures, functions, and triggers
SQL procedures, functions, and triggers that are created with SET OPTION DBGVIEW = *SOURCE
can be debugged by using the following tools:
The Start Debug (STRDBG) command
The IBM i Navigator System Debugger
When an SQL procedure, function, or trigger is built for debug, two debug views can be used:
SQL Object Processor Root View (default)
Underlying ILE C listing view
Before this enhancement, when the Step function (F10=Step when using STRDBG, or
F11=Step Over when using System Debugger) was used within either of the IBM i debuggers
at the SQL view debug level, the Step action applied to the underlying ILE C listing view. It
normally took many steps at the SQL debug view level to get to the next statement, making
the SQL debug view difficult to use.
After this enhancement is installed, the Step action applies at the SQL Statement view level.
This enhancement makes it much easier to debug SQL procedures, functions, and triggers.
For more information, see the IBM i 7.1 SQL CLI documentation at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=/cli/rzadp
whatsnew.htm
If you are using logical replication, you need the PTFs on both the target and the source
systems.
After *SSD has been specified as the preferred storage media, the file data is asynchronously
moved to the SSD.
This enhancement applies to the following SQL and IBM i command interfaces:
ALTER TABLE STORE123.EMPLOYEE ALTER UNIT SSD
CHGPF FILE(STORE123/EMPLOYEE) UNIT(*SSD)
CHGLF FILE(STORE123/XEMP2) UNIT(*SSD)
It is the intention of IBM to add content dynamically to SYSTOOLS, either on base releases or
through PTFs for field releases. A preferred practice for customers who are interested in such
tools is to periodically review the contents of SYSTOOLS.
For more information, see the IBM i 7.1 Knowledge Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=/rzajq/rza
jqsystools.htm
The DB2 for i HTTP functions are defined in the SYSTOOLS schema (home of DB2 for i
supplied tools and examples) and are not covered by IBM Software Maintenance and
Support. These functions are ready for use and provide a fast start to building your own
applications.
Example 5-38 shows the use of the DB2 for i HTTP functions to consume information from a
blog. The following steps show how to consume information from a URL; in this example,
information from the DB2 for i blog is consumed.
1. Build a utility function to manage the content time stamp:
CREATE OR REPLACE FUNCTION QGPL.RFC339_DATE_FORMAT(in_time TIMESTAMP)
RETURNS VARCHAR(26)
LANGUAGE SQL
RETURN CAST(DATE(in_time) AS CHAR(10)) || 'T' || CHAR(TIME(in_time), JIS)
2. Use XML features on DB2 for i 7.1 to query the blog content and return the blog posts for
the last 6 months. (order the rows by reader responses). See Example 5-38.
Example 5-38 Using DB2 for i HTTP functions to consume information from a blog
-- Blog Posts for the last 6 months, order by reader responses
SELECT published, updated, author, title, responses, url, author_bio,
html_content, url_atom
FROM
XMLTABLE(
XMLNAMESPACES(DEFAULT 'http://www.w3.org/2005/Atom',
'http://purl.org/syndication/thread/1.0' AS "thr"),
'feed/entry'
PASSING XMLPARSE(DOCUMENT
SYSTOOLS.HTTPGETBLOB(
-- URL --
'http://db2fori.blogspot.com/feeds/posts/default?published-min=' ||
SYSTOOLS.URLENCODE(QGPL.RFC339_DATE_FORMAT(CURRENT_TIMESTAMP - 6 MONTHS), 'UTF-8')
||
'&published-max=' || SYSTOOLS.URLENCODE(QGPL.RFC339_DATE_FORMAT(CURRENT_TIMESTAMP
+ 1 DAYS) , 'UTF-8') ,
-- header --
'<httpHeader> <header name="Accept" value="application/atom+xml"/> </httpHeader>'
) )
COLUMNS
published TIMESTAMP PATH 'published',
updated TIMESTAMP PATH 'updated',
author VARCHAR(15) CCSID 1208 PATH 'author/name',
title VARCHAR(100) CCSID 1208 PATH 'link[@rel="alternate" and
@type="text/html"]/@title',
responses INTEGER PATH 'thr:total',
author_bio VARCHAR(4096) CCSID 1208 PATH 'author/uri',
-- The original example also defines the url, html_content, and url_atom
-- columns; the paths below are a reconstruction and may differ from it.
url VARCHAR(256) CCSID 1208 PATH 'link[@rel="alternate" and
@type="text/html"]/@href',
html_content CLOB(1M) CCSID 1208 PATH 'content',
url_atom VARCHAR(256) CCSID 1208 PATH 'link[@rel="self"]/@href'
) AS blog_posts
ORDER BY responses DESC;
For more information about DB2 for i HTTP functions, see the following websites:
Accessing web services: Using IBM DB2 for i HTTP UDFs and UDTFs:
https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sys_wp_
access_web_service_db2_i_udf
Accessing HTTP and RESTful services from DB2: Introducing the REST user-defined
functions for DB2:
http://www.ibm.com/developerworks/data/library/techarticle/dm-1105httprestdb2/
Note: This support does not cover when TELNET is used to form the connection.
Table 5-6 describes the columns in the ENV_SYS_INFO view. The schema is SYSIBMADM.
For more information about the QpzListPTF API, see the List Program Temporary Fixes
(QpzListPTF) API topic in the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fapis%2Fqpzlstfx.htm
Table 5-7 describes the columns in the PTF_INFO view. The schema is QSYS2.
PTF_STATUS_TIMESTAMP (STATTIME), TIMESTAMP, nullable: The date and time that the
PTF status was last changed. Contains the null value when the status date and time is not
available.
PTF_CREATION_TIMESTAMP (CRTTIME), TIMESTAMP, nullable: The date and time that
the PTF was created. Contains the null value when the creation date and time cannot be
determined.
Example 5-39 and Example 5-40 provide some examples of how the PTF_INFO view can be
used. Example 5-39 shows an example when PTFs are impacted by the next IPL.
Example 5-39 Discovering which PTFs are impacted by the next IPL
SELECT PTF_IDENTIFIER, PTF_IPL_ACTION, A.*
FROM QSYS2.PTF_INFO A
WHERE PTF_IPL_ACTION <> 'NONE'
Example 5-40 shows an example when the PTFs are loaded but not applied.
Example 5-40 Discovering which PTFs are loaded but not applied
SELECT PTF_IDENTIFIER, PTF_IPL_REQUIRED, A.*
FROM QSYS2.PTF_INFO A
WHERE PTF_LOADED_STATUS = 'LOADED'
ORDER BY PTF_PRODUCT_ID
For example, the Technology Refresh (TR) level on your system can be determined by using
the view definition that is shown in Example 5-41.
To start the GET_JOB_INFO table function, the caller must have *JOBCTL user special
authority or QIBM_DB_SQLADM or QIBM_DB_SYSMON function usage authority.
The result of the GET_JOB_INFO function is a table that contains a single row with the format
shown in Table 5-8. All the columns are nullable.
V_ACTIVE_JOB_STATUS CHAR(4): To understand the values that are returned in this field,
see “Active job status” in the Work Management API Attribute Descriptions at:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fapis%2FWMAttrDesc.htm
V_RUN_PRIORITY INTEGER: The highest run priority allowed for any thread within this job.
V_CPU_USED BIGINT: The amount of CPU time (in milliseconds) that has been used by this
job.
V_AUX_IO_REQUESTED BIGINT: The number of auxiliary I/O requests run by the job
across all routing steps. This includes both database and nondatabase paging. This is an
unsigned BINARY(8) value.
For more information about the BASE_TABLE function, see the SQL Reference
documentation:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fdb2%2Frbafzsca
basetable.htm
The BASE_TABLE function returns the object names and schema names of the object found
for an alias.
Example 5-42 shows how to determine the base objects for all aliases within the schemas
that have “MJATST” somewhere in the schema name.
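A hedged sketch of such a query follows. It assumes that BASE_TABLE accepts the alias's
system schema name and system object name, in that order, and that alias rows in
QSYS2.SYSTABLES carry TABLE_TYPE = 'A'; verify both assumptions against the SQL
Reference before relying on them.

SELECT A.TABLE_SCHEMA, A.TABLE_NAME, B.*
    FROM QSYS2.SYSTABLES A,
         TABLE(QSYS2.BASE_TABLE(A.SYSTEM_TABLE_SCHEMA, A.SYSTEM_TABLE_NAME)) B
    WHERE A.TABLE_SCHEMA LIKE '%MJATST%'
      AND A.TABLE_TYPE = 'A';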
Figure 5-32 shows the output for the number of partition keys.
Where:
Job_Name is a qualified job name or a special value of '*' to indicate the current job.
Library_Name is an optional library name for the procedure output.
Table_Name is an optional table name for the procedure output.
-- populate QGPL.SQLCSR1 table with open SQL cursors for a target job
call qsys2.DUMP_SQL_CURSORS('724695/QUSER/QZDASOINIT', '', '', 1);
An environment variable can be used by the customer to direct DB2 for i to avoid canceling
RLA access operations. Upon the first cancel request for a specific job, the environment
variable QIBM_SQL_NO_RLA_CANCEL is accessed. If the environment variable exists, the
cancel request is not honored when RLA is the only database work ongoing within the initial
thread at the time the cancel request is received.
The environment variable is the SQL Cancel operational switch. The variable can be created
at the job or system level. Creating it once at the system level affects how SQL Cancels are
processed for all jobs.
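For example, creating the variable once at the system level might look like the following,
mirroring the ADDENVVAR examples elsewhere in this chapter:

ADDENVVAR ENVVAR(QIBM_SQL_NO_RLA_CANCEL) LEVEL(*SYS)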
QSYS2.CANCEL_SQL procedure
The IBM supplied procedure, QSYS2.CANCEL_SQL(), can be called to request the cancellation
of an SQL statement for a target job.
SQL Cancel support provides an alternative to end job immediate when you deal with an
orphaned or runaway process. End job immediate is like a hammer, where SQL Cancel is
more like a tap on the shoulder. Before this improvement, the SQL Cancel support was only
available for ODBC, JDBC, and SQL CLI applications. The QSYS2.CANCEL_SQL() procedure
extends the SQL Cancel support to all application and interactive SQL environments.
When an SQL Cancel is requested, an asynchronous request is sent to the target job. If the
job is processing an interruptible, long-running system operation, analysis is done within the
job to determine whether it is safe to cancel the statement. When it is determined that it is
safe to cancel the statement, an SQL0952 escape message is sent, causing the statement to
end.
Procedure definition
The QSYS2.CANCEL_SQL procedure is defined as follows:
CREATE PROCEDURE QSYS2.CANCEL_SQL (
IN VARCHAR(28) )
LANGUAGE PLI
SPECIFIC QSYS2.CANCEL_SQL
NOT DETERMINISTIC
MODIFIES SQL DATA
CALLED ON NULL INPUT
EXTERNAL NAME 'QSYS/QSQSSUDF(CANCEL_SQL)'
PARAMETER STYLE SQL ;
Authorization
The QSYS2.CANCEL_SQL procedure requires that the authorization ID associated with the
statement have *JOBCTL special authority.
Description
The procedure has a single input parameter, that is, the qualified job name of the job that
should be canceled. The job name must be uppercase. If that job is running an interruptible
SQL statement or query, the statement is canceled. The application most likely receives an
SQLCODE = SQL0952 (-952) message. In some cases, the failure that is returned might be
SQL0901 or the SQL0952 might contain an incorrect reason code.
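A usage sketch follows; the qualified job name is a placeholder and, as noted above, must be
uppercase:

CALL QSYS2.CANCEL_SQL('724695/QUSER/QZDASOINIT');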
This procedure takes advantage of the same cancel technology that is used by the other SQL
cancel interfaces:
System i Navigator's Run SQL Scripts: Cancel Request button
SQL Call Level Interface (CLI): SQLCancel() API
JDBC method: Native Statement.cancel() and toolbox
com.ibm.as400.access.AS400JDBCStatement.cancel()
Extended Dynamic Remote SQL (EDRS): Cancel EDRS Request (QxdaCancelEDRS)
API
QSYS2.CANCEL_SQL() procedure
If the cancel request occurs during the act of committing or rolling back a commitment-control
transaction, the request is ignored.
Failures
The procedure fails with a descriptive SQL0443 failure if the target job is not found.
The procedure fails with SQL0443 and SQL0552 if the caller does not have *JOBCTL user
special authority.
If the target application is using transaction management, the SQL statement is running under
the umbrella of a transaction save point level. When those same long running INSERT, UPDATE,
or DELETE SQL statements are canceled, the changes that are made before cancellation are
rolled back.
In both cases, the application receives back control with an indication that the SQL statement
failed. It is up to the application to determine the next action.
Useful tool
The QSYS2.CANCEL_SQL() procedure provides a useful tool to database administrators for
IBM i systems. After you have the latest DB Group PTF installed, you can start calling this
procedure to stop long-running or expensive SQL statements.
QSYS2.FIND_AND_CANCEL_QSQSRVR_SQL procedure
The QSYS2.FIND_AND_CANCEL_QSQSRVR_SQL() procedure combines the
QSYS2.FIND_QSQSRVR_JOBS and QSYS2.CANCEL_SQL() procedures: given a target application
job, it derives the set of QSQSRVR jobs that have active SQL activity on its behalf. Each job
found is made a target of an SQL Cancel request.
How is this procedure useful? When you have an important application instance (job) that
uses QSQSRVR jobs, it can be difficult to determine the “total system impact” of the
application. How many SQL Server Mode jobs are in use at that moment? Is this application
responsible for a QSQSRVR job that is consuming many processor cycles or holding onto
object locks? The FIND_QSQSRVR_JOBS() procedure provides some of these answers by tying
together the application and its SQL Server Mode job usage.
Procedure definition
QSYS2.FIND_QSQSRVR_JOBS is defined as follows:
CREATE PROCEDURE QSYS2.FIND_QSQSRVR_JOBS( JOB_NAME VARCHAR(28) )
NOT DETERMINISTIC
MODIFIES SQL DATA
CALLED ON NULL INPUT
DYNAMIC RESULT SETS 2
SPECIFIC FINDSRVR
EXTERNAL NAME 'QSYS/QSQSSUDF(FINDSRVR)'
LANGUAGE C PARAMETER STYLE SQL;
Authorization
On IBM i 6.1, to start QSYS2.FIND_QSQSRVR_JOBS, you need *JOBCTL special authority.
Usage
The procedure can be called from any environment. The input parameter is the application
qualified job name. When called from within System i Navigator's Run SQL Scripts, two
results sets are displayed. When called from Start SQL Interactive Session (STRSQL) or
elsewhere, you must query the temporary tables to see the data, as shown in Example 5-47.
The change affects only SQL triggers fired through native database I/O operations.
To enable the new function, an environment variable must exist before any SQL statements
are run within the client job. An easy way to deploy the environment variable is to define it at
the system level as follows:
ADDENVVAR ENVVAR(QIBM_DB2_MIXED_SERVER_MODE) LEVEL(*SYS)
The SQL triggers must not use statement level isolation level support to run statements using
commitment control.
The SQL triggers must not directly or indirectly use Java/JDBC or CLI.
If the client job is multi-threaded and triggers are fired in parallel over different threads, the
mixed-mode server mode solution serializes the execution of the triggers. Only one trigger is
allowed to run at a time.
The solution does not apply to native triggers, such as Add Physical File Trigger (ADDPFTRG),
built over programs that use SQL. The solution does not include SQL triggers that call
procedures, fire user-defined functions, or cause nested triggers to run.
The disable constraints indicator controls whether constraints that are added or changed as a
result of replaying a CT, AC, or GC journal entry should be automatically disabled. The
disable constraint indicator does not apply to unique constraints. It has two settings:
0 Do not disable constraints.
1 Disable constraints.
There are other functions to help you identify the cause of the failure:
SELECT * FROM qsys2.syscolumns WHERE TABLE_SCHEMA = 'QRECOVERY' and
TABLE_NAME = 'QSQ901S' ORDER BY ORDINAL_POSITION;
SELECT * FROM qsys2.syscolumns2 WHERE TABLE_SCHEMA = 'QRECOVERY'
and TABLE_NAME = 'QSQ901S' ORDER BY ORDINAL_POSITION;
These functions help you get information about the contents of the QRECOVERY.QSQ901S
table.
The records in the QRECOVERY.QSQ901S table likely show the internal failures inside
DB2 for i. Use the data from this table when you report a problem to IBM, which helps with
searching for PTFs for DB2 for i problems.
The SQL0901 logging file can be found in a different library when you use IASPs:
QRCYnnnnn/QSQ901S *FILE, where nnnnn is the iASP number.
To retrieve the library name, specify the SQL schema name for the input long object name
and blank for the input library name. The first 10 bytes of the output qualified object name
contains the short library name and the second 10 bytes are QSYS.
Example 5-48 creates a schema and shows how to use SQL to retrieve the long name from a
short name, and the short name from a long name, by querying the
QSYS2.OBJECT_STATISTICS catalog.
Using the SYSSCHEMAS view touches every library object. The following queries are
identical:
SELECT OBJLONGNAME FROM TABLE(QSYS2.OBJECT_STATISTICS('QSYS ','LIB
')) AS A WHERE OBJNAME LIKE 'CATH%';
SELECT SCHEMA_NAME FROM QSYS2.SYSSCHEMAS WHERE
SYSTEM_SCHEMA_NAME LIKE 'CATH%';
The SQL0440 message that is shown in Example 5-49 is generated when you run the
previous CREATE PROCEDURE statements.
The XMLTABLE built-in table function can be used to retrieve the contents of an XML document
as a result set that can be referenced in SQL queries.
The addition of XMLTABLE support to DB2 for i users makes it easier for data centers to
balance and extract value from a hybrid data model where XML data and relational data
coexist.
SELECT X.*
FROM emp, XMLTABLE ('$d/dept/employee' passing doc as "d"
COLUMNS empID INTEGER PATH '@id',
firstname VARCHAR(20) PATH 'name/first',
lastname VARCHAR(25) PATH 'name/last') AS X
The output in Figure 5-34 shows an XML document along with a sample SQL to produce the
output in table format.
Figure 5-34 Query the total cost of all items that are purchased on each receipt
In other SQL interfaces, an SQL statement is limited to 2 MB in length. The limit on this
command is 5000 bytes.
The command has many parameters that are similar to the ones used with the RUNSQLSTM
command. RUNSQL runs SQL statements in the invoker's activation group. If RUNSQL is included
in a compiled CL program, the activation group of the program is used.
Two examples of using RUNSQL on the command line are shown in Example 5-51. The
example also shows how RUNSQL can be used within a CL program.
/* In a CL program, use the Receive File (RCVF) command to read the results of the
query */
RUNSQL SQL('CREATE TABLE QTEMP.WorkTable1 AS
(SELECT * FROM qsys2.systables WHERE table_schema = ''QSYS2'') WITH DATA')
COMMIT(*NONE) NAMING(*SQL)
Example 5-52 shows how RUNSQL can be used within a CL program. Use the RCVF command if
you must read the results of the query.
Because the native JDBC driver does not typically use a network connection, the
Connection.getNetworkTimeout() and Connection.setNetworkTimeout() methods are not
implemented.
With this enhancement, the QSQPRCED() SQLP0410 format has been extended to allow any
or all of the client special registers to be passed. The new fields are optional and are allowed
to vary from one QSQPRCED() call to the next. The values are not bound into the Extended
Dynamic SQL package (*SQLPKG). Each client special register value is character data type,
varying in length up to a maximum of 255.
...
/* CHAR[] @C5A*/
/*char Client_Info_Userid[];*/
/* CHAR[] @C9A*/
/*char Client_Info_Wrkstnname[];*/
/* CHAR[] @C9A*/
/*char Client_Info_Applname[];*/
/* CHAR[] @C9A*/
/*char Client_Info_Programid_[];*/
/* CHAR[] @C9A*/
/*char Client_Info_Acctstr[];*/
/* CHAR[] @C9A*/
}Qsq_SQLP0410_t;
These values are used in SQL performance monitors, SQL details for jobs, Visual Explain,
and elsewhere within the OnDemand Performance Center.
The Start Database Monitor (STRDBMON) command pre-filters can be used to target STRQMQRY
command usage.
Example 5-53 and Example 5-54 show examples of using the QCMDEXC procedure.
Example 5-53 Using SQL naming, adding a library to the library list
CALL QSYS2.QCMDEXC('ADDLIBLE PRODLIB2');
Example 5-54 Using SYSTEM naming, adding a library to the library list using an expression
DECLARE V_LIBRARY_NAME VARCHAR(10);
SET V_LIBRARY_NAME = 'PRODLIB2';
CALL QSYS2/QCMDEXC('ADDLIBLE ' CONCAT V_LIBRARY_NAME);
The usage of the ORDER BY(*ARRIVAL) parameter and ORDER BY(character-value) examples
are shown in Example 5-55, where the Stream File is copied and ordered according to the
ORDER BY parameter value.
In addition to the ORDER BY parameter, you can use the following parameters:
FETCH FIRST n ROWS
OPTIMIZE FOR n ROWS
FOR UPDATE
FOR READ ONLY
WITH <isolation-level>
SKIP LOCKED DATA
USE CURRENTLY COMMITTED
WAIT FOR OUTCOME
Now, when you use system naming, both the slash (/) and the dot (.) can be used for object
qualification. This change makes it easier to adapt to system naming because the SQL
statement text does not need to be updated.
This enhancement makes it easier to use system naming. Object references can vary and
SQL UDFs can now be library qualified with a “.” when you use system naming.
NAMING(*SYS) can be used with (/) and (.), as shown in Example 5-56. However, if the
example is used with NAMING(*SQL), it fails.
5.4.39 Direct control of system names for tables, views, and indexes
The FOR SYSTEM NAME clause directly defines the system name for these objects, eliminating
the need to run a RENAME after the object is created to replace the system generated name.
The name provided in the FOR SYSTEM NAME clause must be a valid system name and cannot
be qualified. The first name that is provided for the object cannot be a valid system name.
The optional FOR SYSTEM NAME clause has been added to the following SQL statements:
CREATE TABLE
CREATE VIEW
CREATE INDEX
DECLARE GLOBAL TEMPORARY TABLE
Use the FOR SYSTEM NAME clause to achieve direct control over table, view, and index system
names, making it simpler to manage the database. This support eliminates the need to use
the RENAME SQL statement or the Rename Object (RNMOBJ) command after object creation.
Additionally, the Generate SQL / QSQGNDDL() interface uses this enhancement to produce
SQL DDL scripts that produce identical object names.
When QSQGNDDL() is called using the System_Name_Option = '1', whenever the table,
view, or index objects has a system name that differs from the SQL name, the FOR SYSTEM
NAME clause is generated. For IBM i Navigator users, you control the System Name Option by
selecting the System names for objects option as shown in Figure 5-35.
Example 5-57 COMP_12_11 *FILE object created instead of COMPA00001, COMPA00002, and so on
CREATE OR REPLACE VIEW
PRODLIB/COMPARE_YEARS_2012_AND_2011
FOR SYSTEM NAME COMP_12_11
AS SELECT …
Example 5-58 Generated table with system name of SALES, instead of generated name CUSTO00001
CREATE TABLE CUSTOMER_SALES FOR SYSTEM NAME SALES (CUSTNO BIGINT…
Before this change, triggers and functions were allowed to only reference global variables.
SQLCODE = -20430 and SQLSTATE = '428GX' were returned to a trigger or function that
attempted to modify a global variable.
Global variables are modified as shown in Example 5-59, which shows the modification of a
global variable.
A multiple event trigger is a trigger that can handle INSERT, UPDATE, and DELETE
triggering events within a single SQL trigger program. The ability to handle more than one
event in a single program simplifies management of triggers. In the body of the trigger, the
INSERTING, UPDATING, and DELETING predicates can be used to distinguish between the
events that cause the trigger to fire. These predicates can be specified in control statements
(like IF) or within any SQL statement that accepts a predicate (like SELECT or UPDATE).
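The following is a minimal sketch of such a trigger. The MYLIB library, the ACCOUNTS and
AUDITLOG tables, and their columns are illustrative names, not from the original text.

CREATE TRIGGER mylib.accounts_audit
    AFTER INSERT OR UPDATE OR DELETE ON mylib.accounts
    REFERENCING NEW ROW AS n OLD ROW AS o
    FOR EACH ROW MODE DB2ROW
BEGIN
    -- The event predicates select the action for the firing event
    IF INSERTING THEN
        INSERT INTO mylib.auditlog VALUES ('I', n.acctno, CURRENT TIMESTAMP);
    ELSEIF UPDATING THEN
        INSERT INTO mylib.auditlog VALUES ('U', n.acctno, CURRENT TIMESTAMP);
    ELSEIF DELETING THEN
        INSERT INTO mylib.auditlog VALUES ('D', o.acctno, CURRENT TIMESTAMP);
    END IF;
END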
For more information about multiple events supported in a single SQL trigger, see the
following resources:
SQL Programming Guide
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fsqlp%2Frbaf
ymultisql.htm
CREATE TRIGGER SQL statement
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fdb2%2Frbafz
hctrigger.htm
Trigger Event Predicates
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fdb2%2Frbafz
trigeventpred.htm
The SQL in Example 5-61 produces the output that is shown in Figure 5-36, which clearly
shows the privileges by GRANTEE over the selected schema and table.
DB2 for Linux, UNIX, and Windows, IBM DB2 Universal Driver for SQLJ and JDBC, ODBC,
CLI, IBM DB2 Connect™, and other application requesters rely upon SQL package
(*SQLPKG) objects within the NULLID collection.
Before you use these clients to access data on IBM i, you must create IBM i SQL packages
for these application programs.
Before you exit the method, return the connection to its default naming behavior. Failure to
do so might cause unexpected behavior in other Java stored procedures and Java
user-defined functions.
Example 5-63 shows an example of how system naming is enabled in a Java stored
procedure.
Example 5-63 How system naming can be enabled in a Java stored procedure
---------------------------------------------------------------------------------
Parameter style DB2GENERAL:
---------------------------------------------------------------------------------
DB2Connection connection = (DB2Connection) getConnection();
connection.setUseSystemNaming(true);
....
.... do work using the connection
....
connection.setUseSystemNaming(false);
---------------------------------------------------------------------------------
Parameter style JAVA:
---------------------------------------------------------------------------------
DB2Connection connection = (DB2Connection)
DriverManager.getConnection("jdbc:default:connection");
connection.setUseSystemNaming(true);
....
.... do work using the connection
....
connection.setUseSystemNaming(false);
default-clause:
DEFAULT { NULL | constant | special-register | global-variable | ( expression ) }
The following commands (and their API counterparts) were changed to keep the catalogs in
sync with the executable object for procedures and functions:
Create Duplicate Object (CRTDUPOBJ): The routine catalog information is duplicated and the
SYSROUTINE EXTERNAL_NAME column points to the newly duplicated executable
object.
Copy Library (CPYLIB): The routine catalog information is duplicated and the
SYSROUTINE EXTERNAL_NAME column points to the newly duplicated executable
object.
Rename Object (RNMOBJ): The routine catalog information is modified so that the
SYSROUTINE EXTERNAL_NAME column points to the renamed executable object.
Move Object (MOVOBJ): The routine catalog information is modified so that the SYSROUTINE
EXTERNAL_NAME column points to the moved executable object.
This coverage extends to Librarian APIs and other operations that are built upon these commands.
The changed behavior can be partially disabled by adding an environment variable. If this
environment variable exists, Move Object and Rename Object operations do not update the
catalogs. The environment variable has no effect on the CPYLIB and CRTDUPOBJ commands.
Example 5-65 Setting the environment variable to partially disable the function
ADDENVVAR ENVVAR(QIBM_SQL_NO_CATALOG_UPDATE) LEVEL(*SYS)
SQE is enhanced to make Grouping Set queries aware of EVI INCLUDE as an optimization
possibility.
Defining an EVI with INCLUDE is shown in Example 5-66, which also shows example SQL
to query the EVI.
Fast Index-Only access is possible for CUBE(), ROLLUP(), and GROUPING SETS().
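As an illustration only (the SALES table and its columns are assumptions), an EVI with
INCLUDE that can satisfy such grouping queries with index-only access might look like this:
CREATE ENCODED VECTOR INDEX SALES_REGION_EVI
  ON SALES (REGION)
  INCLUDE (SUM(AMOUNT))

-- A grouping query that the optimizer can satisfy from the EVI aggregates
SELECT REGION, SUM(AMOUNT)
  FROM SALES
  GROUP BY REGION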
5.4.52 Navigator: System Name column added to show related and all objects
When you use System i Navigator to observe the objects that are related to a table, only the
SQL names of those objects appear. The same situation is true when you use the All Objects
view under the Schemas folder.
The System i Navigator 7.1 client is enhanced to include the “System Name” column. This
information is useful when long names are used and the system name is not obvious, as
shown in Figure 5-40.
This detail is helpful to fully identify long running index builds for partitioned indexes and
constraints. See Figure 5-41.
This enhancement enables the query statistics engine to detect data skew in the partitioned
table if accurate column statistics are available for the partitions. Allowing column level
statistics to be automatically collected is the default behavior and a preferred practice.
This enhancement might improve the performance of joins when multiple partitions are
queried. An example of multiple partition joins is shown in Example 5-67.
An example of how to configure your client to show these new columns is shown in
Figure 5-42.
Figure 5-42 Configure your client to show the new columns for tables under the Schemas folder
You can explicitly increase the size of the SQL Plan Cache to allow more plans to be saved in
the plan cache. This action can improve performance for customers that have many unique
queries.
During an IPL, the SQL Plan Cache is deleted and re-created. Before this enhancement,
when the plan cache was re-created, it was re-created with the default size of 512 MB, even if
a different size had been set before the IPL.
After the latest DB2 Group PTFs are installed, you must change the plan cache size one
more time (even if it is changed to the same size as its current size) for the size to be
persistently saved.
The CHANGE_PLAN_CACHE_SIZE procedure can be used to change the size of the plan cache.
The procedure accepts a single input parameter: the desired SQL Plan Cache size in
megabytes. If the value that is passed in is zero, the plan cache is reset to its default size. To
use the procedure, run the command that is shown in Example 5-68.
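As a sketch of such a call (the 1024 MB value is only an illustration):
CALL QSYS2.CHANGE_PLAN_CACHE_SIZE(1024)  -- set the plan cache to 1 GB
CALL QSYS2.CHANGE_PLAN_CACHE_SIZE(0)     -- reset the plan cache to its default size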
It is also possible to get information about Plan Cache properties by using the procedure shown
in Example 5-69.
Figure 5-43 SQL plan cache properties and job scoped plans
Note: If the plan cache threshold is explicitly set by the user, autosizing is disabled and
is indicated as such with the keyword *DISABLED.
Slowest runs information. For each plan in the plan cache, the database retains
information for up to three of the slowest runs of the plan. This value is now externalized
through the properties and can be adjusted higher or lower.
Plan cache activity thresholds. This section shows the highest point for various metrics
tracked for either plan cache activity or query activity. These thresholds can be reset (to
zero) to restart the threshold tracking. Each threshold has both the high point value and
the time stamp when that high point occurred.
Default values. Default values now show as *DEFAULT or *AUTO, clarifying whether the
plan cache threshold is system managed or has been overridden by the user.
Temporary object storage information. Besides storing the SQL query plans, the plan
cache is also used to cache runtime objects so that they can be used across jobs. These
runtime objects provide both the executable code for queries and the storage for runtime
objects such as hash tables and sorted results. When one job is finished using a runtime
object, it is placed in the cache so that another job can pick it up and use it. Two properties
are provided which show both the number of these runtime objects cached and the total
size of all runtime objects cached in the plan cache.
Before this enhancement, if a job was canceled while it was opening a delayed maintenance
index, the entire index was invalidated and had to be rebuilt from scratch. On large indexes,
this rebuild can be lengthy. This enhancement ensures that, in this case, the cancel does not
invalidate the entire index.
If you query the UNIT_TYPE field, you can identify information about installed SSD media, as
shown in Example 5-70, which shows the relevant SQL to query information for all disks or
just for the SSDs.
Example 5-70 Usage of the QSYS2/SYSDISKSTAT catalog for analysis of disk usage
Query information for all disks.
SELECT * FROM QSYS2.SYSDISKSTAT
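Query information for just the SSDs. A sketch, assuming that a UNIT_TYPE value of 1
identifies an SSD:
SELECT * FROM QSYS2.SYSDISKSTAT WHERE UNIT_TYPE = 1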
Here are the columns from the monitor that include this information:
QQI8 (SQLCODE)
QQC81 (SQLSTATE)
This enhancement enables the use of the STRDBMON FTRSQLCODE() pre-filter as a technique to
isolate application fetch-time failures or warnings.
The monitor support is enhanced to return result rows information for other statements.
Here are some considerations about the number of result rows that are returned for QQI2:
SQL DML statements (INSERT, UPDATE, DELETE, or MERGE) show the total number of rows
that are changed.
CREATE TABLE AS and DECLARE GLOBAL TEMPORARY TABLE with the WITH DATA parameter
show the total number of rows that are inserted into the table.
Any other query-based statement shows the estimated number of rows for the resulting
query.
All remaining statements show either -1 or 0.
Figure 5-45 Two new result sets returned by QUSRJOBI() displayed in SQL details for jobs
Example 5-71 shows an example of how to run the filter by client program pre-filter.
Example 5-71 Examples of using STRDBMON with the filter by client to identify QUERY/400 usage
IBM i 6.1 Example:
STRDBMON OUTFILE(LIBAARON/QRY400mon)
JOB(*ALL)
COMMENT('FTRCLTPGM(RUNQRY)')
The SQL shown in Example 5-72 places the temporary table on the SSD. The existence of
this table can be confirmed by using the SQL in Example 5-73. The actual preferences for the
table can be identified by the SQL in Example 5-74.
The QSYS2.PARTITION_DISKS table can be queried to determine which media the queried
table is on (Example 5-73).
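As a minimal sketch of the technique (the table and column names are assumptions), the
UNIT SSD clause states the media preference at creation time, and the placement can then
be checked through QSYS2.PARTITION_DISKS:
CREATE TABLE PRODLIB.ORDERS_WORK (ORDNO INTEGER) UNIT SSD

-- Check the media placement; the available columns depend on the view definition
SELECT * FROM QSYS2.PARTITION_DISKS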
Table 5-10 shows the new column definitions for the MTIs.
LAST_MTI_USED_FOR_STATS TIMESTAMP The time stamp that represents the last time
that this specific MTI was used by the
optimizer to obtain statistics for a query.
When the DUMP_PLAN_CACHE procedure is called to create an SQL Plan Cache snapshot, a
procedure interface is needed to later remove the snapshot file object and any entries for the
file object from the System i Navigator tracking table.
SQL Plan Cache snapshots can be maintained by using the CALL statement, as shown in
Example 5-75.
Example 5-75 Removing the SQL Plan Cache snapshot and Performance Monitor file
CALL QSYS2.DUMP_PLAN_CACHE('CACHELIB','NOV2011')
CALL QSYS2.REMOVE_PERFORMANCE_MONITOR('CACHELIB','MAY2010')
The QQI1 column contains this reason code when QQRID=1000 and QQC21='DL'.
The QQI1 values are documented in the IBM i Database Performance and Query
Optimization document:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/rzajq/rzajq.pdf
Automatic tracking of important system limits is a new health touchpoint on IBM i. The system
instrumentation for automated tracking focuses on a subset of the system limits. As those
limits are reached, tracking information is registered in a DB2 for i system table called
QSYS2/SYSLIMTBL. A view called QSYS2/SYSLIMITS is built over the SYSLIMTBL physical
file and provides a wealth of contextual information about the rows in the table.
Example 5-76 provides an example of examining active jobs over time and determining how
close you might be coming to the maximum active jobs limit.
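A minimal sketch of such a query follows; the LIMIT_ID value of 19000 for the maximum
number of jobs is an assumption to verify against the SYSLIMITS documentation:
SELECT LAST_CHANGE_TIMESTAMP, CURRENT_VALUE
  FROM QSYS2.SYSLIMITS
  WHERE LIMIT_ID = 19000          -- assumed: maximum number of jobs
  ORDER BY LAST_CHANGE_TIMESTAMP DESC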
LAST_CHANGE_TIMESTAMP LASTCHG TIMESTAMP Timestamp when this row was inserted into the
QSYS2/SYSLIMTBL table.
USER_NAME CURUSER VARCHAR(10) The name of the user in effect when the instance
of System Limits detail was logged.
CURRENT_VALUE CURVAL BIGINT The current value of the System Limits detail.
SYSTEM_SCHEMA_NAME SYS_NAME VARCHAR(10) The library name for this instance of System
Limits detail, otherwise this is set to NULL.
SYSTEM_OBJECT_NAME SYS_ONAME VARCHAR(30) The object name for this instance of System
Limits detail, otherwise this is set to NULL.
SYSTEM_TABLE_MEMBER SYS_MNAME VARCHAR(10) The member name for an object limit specific to
database members, otherwise this is set to
NULL.
OBJECT_TYPE OBJTYPE VARCHAR(7) This is the IBM i object type when an object
name has been logged under the
SYSTEM_SCHEMA_NAME and
SYSTEM_OBJECT_NAME columns. When no
object name is specified, this column is set to
NULL.
PTF Services
Security Services
QSYS2.SET_COLUMN_ATTRIBUTE() Procedure 2.4.2, “Database Monitor and Plan Cache variable values
masking” on page 22
TCP/IP Services
Storage Services
Object Services
Journal Services
Application Services
QSYS2.SYSTEM_VALUE_INFO view
The SYSTEM_VALUE_INFO view returns the names of system values and their values. The
list of system values can be found in the Retrieve System Values (QWCRSVAL) API. For
more information about the QWCRSVAL API, see the following website:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fapis%2Fqwcrsva
l.htm
You must have *ALLOBJ or *AUDIT special authority to retrieve the values for QAUDCTL,
QAUDENDACN, QAUDFRCLVL, QAUDLVL, QAUDLVL2, and QCRTOBJAUD. The current
value column contains ‘*NOTAVL’ or -1 if accessed by an unauthorized user.
Table 5-13 describes the columns in the view. The schema is QSYS2.
The following statement examines the system values that are related to maximums:
SELECT * FROM SYSTEM_VALUE_INFO
WHERE SYSTEM_VALUE_NAME LIKE '%MAX%'
QSYS2.USER_STORAGE view
The USER_STORAGE view contains details about storage by user profile. The user storage
consumption detail is determined by using the Retrieve User Information (QSYRUSRI) API.
For more information about the QSYRUSRI API, see the following website:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fapis%2Fqsyrusr
i.htm
Table 5-14 describes the columns in the view. The schema is QSYS2.
The following example shows determining how much storage user SCOTTF has consumed:
SELECT * FROM QSYS2/USER_STORAGE
WHERE USER_NAME = 'SCOTTF'
QSYS2/DISPLAY_JOURNAL is a new table function that allows the user to view entries in a
journal by running a query. The table function has many input parameters that can be used
to return only the journal entries that are of interest, which gives the best performance.
For more information about the special values, see the Retrieve Journal Entries
(QjoRetrieveJournalEntries) API in the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fapis%2FQJORJRNE.htm
Unlike many other UDTFs in QSYS2, this one has no DB2 for i provided view.
Example 5-78 shows an example of querying a data journal using filtering criteria to find
changes made by SUPERUSER against the PRODDATA/SALES table.
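A sketch of such a call follows; the journal name QSQJRN, the parameter order, and the
empty-string defaults are assumptions based on published QSYS2.Display_Journal examples:
SELECT * FROM TABLE (QSYS2.Display_Journal(
  'PRODDATA', 'QSQJRN',             -- journal library and name (QSQJRN assumed)
  '', '',                           -- starting receiver library and name
  CAST(NULL AS TIMESTAMP),          -- starting timestamp
  CAST(NULL AS DECIMAL(21,0)),      -- starting sequence number
  '',                               -- journal codes
  '',                               -- journal entry types
  'PRODDATA', 'SALES', '*FILE', '', -- object library, name, type, and member
  'SUPERUSER',                      -- user
  '',                               -- job
  ''                                -- program
)) AS X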
Following are enhancements to improve the ability to mine the security audit journal
(QAUDJRN):
New default columns displayed
New columns to identify the object
Search capability for object names based on names in the Entry Specific Data Object type
Generic library name
Generic file name
Search capability for IFS names (or any other column) available in the Additional filters box
Figure 5-51 Using IBM i Navigator to view security audit journal (QAUDJRN) data
Figure 5-53 Journal data dynamic filtering and improved journal data segregation
On the Journal Viewer window, click Columns to see the columns available for display and to
be able to add new columns. See Figure 5-54.
An advanced drill-down using the new look of IBM Navigator for i is shown in Figure 5-55.
Figure 5-55 Enhanced drill-down using the new look of IBM Navigator for i
Deliver reports in formats such as PDF, spreadsheet, or other PC file formats and automate
report distribution through an email distribution list.
The web services allow web applications to authenticate users, view domains and folders,
determine report parameters, run DB2 Web Query reports, and more. Simplify the
programming effort by using the application extension, now part of the SDK. This extension
can eliminate the need for programming to the web services and allow you to create a URL
interface to report execution that you can embed in an existing or new application.
When you develop using the SDK, the DB2 Web Query BASE product is required and the
Developer Workbench feature is recommended. Deployment (runtime) environments require
the BASE product and the Runtime User Enablement feature of DB2 Web Query.
A text search collection describes one or more sets of system objects that have their
associated text data indexed and searched. For example, a collection might contain an object
set of all spool files in output queue QUSRSYS/QEZJOBLOG, or an object set for all stream
files in the /home/alice/text_data directory.
The text search collection referred to in this documentation should not be confused with a
DB2 schema (sometimes also referred to as a collection), or a Lucene collection (part of the
internal structure of a DB2 text search index).
When a text search collection is created, several DB2 objects are created on the system:
SQL schema with the same name as the collection
Catalogs for tracking the collection’s configuration
Catalogs for tracking the objects that are indexed
SQL Stored procedures to administer and search the collection
A DB2 text search index for indexing the associated text
Administration of the collection is provided with stored procedures, most of which are created
in the schema.
5.7.1 OmniFind for IBM i: Searching Multiple Member source physical files
The OmniFind Text Search Server for DB2 for i product (5733-OMF) for IBM i 7.1 is enhanced
to include more SQL programmable interfaces that extend its support beyond traditional DB2
tables.
Multiple Member source physical files are added one at a time to the OmniFind collection.
The members from source physical file are retrieved and treated as separate objects.
During the OmniFind update processing, new, changed, or removed members are recognized
and processed appropriately.
Two forms of the ADD_SRCPF_OBJECT_SET procedure are available:
<collection>.ADD_SRCPF_OBJECT_SET (IN SRCPF_LIB VARCHAR(10) CCSID 1208, IN
SRCPF_NAME VARCHAR(10) CCSID 1208, OUT SETID INTEGER)
<collection>.ADD_SRCPF_OBJECT_SET (IN SRCPF_LIB VARCHAR(10) CCSID 1208, IN
SRCPF_NAME VARCHAR(10) CCSID 1208)
After an object set is added, call the collection's UPDATE procedure to index it, as sketched below.
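A minimal usage sketch; the collection name MYCOL, the library, and the source file name
are assumptions, and the UPDATE procedure call reflects the usual collection workflow:
CALL MYCOL.ADD_SRCPF_OBJECT_SET('MYLIB', 'QRPGLESRC', ?)  -- returns the set identifier
CALL MYCOL.UPDATE()                                       -- index the newly added members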
For more information about this topic and the OmniFind for i product, see the topic “Searching
Spool Files and IFS Stream Files” at developerWorks at:
https://www.ibm.com/developerworks/ibmi/library/i-omnifind/omnifind.html
The implementation described here provides a set of scalar functions and table functions
that integrate WebSphere MQ with DB2.
Scalar functions
The MQREAD function returns a message in a VARCHAR variable from a specified WebSphere
MQ location, which is specified by receive-service, using the policy that is defined in
service-policy, starting at the beginning of the queue but without removing the message from
the queue. If no messages are available to be returned, a null value is returned.
Example 5-79 reads the first message with a correlation ID that matches 1234 from the head
of the queue that is specified by the MYSERVICE service using the MYPOLICY policy.
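A sketch of that call, assuming the functions reside in the DB2MQ schema:
SELECT DB2MQ.MQREAD('MYSERVICE', 'MYPOLICY', '1234')  -- read, but do not remove, the message
  FROM SYSIBM.SYSDUMMY1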
The MQREADCLOB function returns a message in a CLOB variable from a specified WebSphere
MQ location, which is specified by receive-service, using the policy that is defined in
service-policy. Like MQREAD, this operation does not remove the message from the queue.
Example 5-80 reads the first message with a correlation ID that matches 1234 from the head
of the queue that is specified by the MYSERVICE service using the MYPOLICY policy.
The MQRECEIVE function returns a message in a VARCHAR variable from a specified WebSphere
MQ location, which is specified by receive-service, using the policy that is defined in
service-policy. This operation removes the message from the queue. If a correlation-id is
specified, the first message with a matching correlation identifier is returned. If a correlation-id
is not specified, the message at the beginning of queue is returned. If no messages are
available to be returned, a null value is returned.
Example 5-81 receives the first message with a correlation-id that matches 1234 from the
head of the queue that is specified by the MYSERVICE service using the MYPOLICY policy.
Example 5-82 receives the first message with a correlation-id that matches 1234 from the
head of the queue that is specified by the MYSERVICE service using the MYPOLICY policy.
For all of the previously mentioned scalar functions, if the receive-service is not specified or
the null value is used, DB2.DEFAULT.SERVICE is used.
The MQSEND function sends the data in a VARCHAR or CLOB variable msg-data to the WebSphere
MQ location specified by send-service, using the policy that is defined in service-policy. An
optional user-defined message correlation identifier can be specified by correlation-id. The
return value is 1 if successful, or 0 if not successful. If the send-service is not specified or the
null value is used, the DB2.DEFAULT.SERVICE is used.
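A minimal sketch of sending a message with a correlation identifier; the DB2MQ schema and
the message text are assumptions:
SELECT DB2MQ.MQSEND('MYSERVICE', 'MYPOLICY', 'A test message', '1234')  -- returns 1 on success
  FROM SYSIBM.SYSDUMMY1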
On all of these functions, you can specify a correlation-id (correl-id) expression. The value of
the expression specifies the correlation identifier that is associated with this message. A
correlation identifier is often specified in request-and-reply scenarios to associate requests
with replies. The first message with a matching correlation identifier is returned.
Example 5-83 reads the head of the queue that is specified by the default service
(DB2.DEFAULT.SERVICE) using the default policy (DB2.DEFAULT.POLICY). Only messages
with a CORRELID of 1234 are returned. All columns are returned.
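A sketch of that query, assuming the MQREADALL table function resides in the DB2MQ schema:
SELECT T.* FROM TABLE (DB2MQ.MQREADALL()) AS T  -- default service and policy
  WHERE T.CORRELID = '1234'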
The MQREADALLCLOB function returns a table that contains the messages and message
metadata in CLOB variables from the WebSphere MQ location that is specified by
receive-service, using the policy that is defined in service-policy. This operation does not
remove the messages from the queue. If num-rows is specified, a maximum of num-rows
messages is returned. If num-rows is not specified, all available messages are returned.
Example 5-84 receives the first 10 messages from the head of the queue that is specified by
the default service (DB2.DEFAULT.SERVICE), using the default policy
(DB2.DEFAULT.POLICY). All columns are returned.
The MQRECEIVEALL function returns a table that contains the messages and message
metadata in VARCHAR variables from the WebSphere MQ location that is specified by
receive-service, using the policy that is defined in service-policy. This operation removes the
messages from the queue. If a correlation-id is specified, only those messages with a
matching correlation identifier are returned. If a correlation-id is not specified, all available
messages are returned. If num-rows is specified, a maximum of num-rows messages is
returned. If num-rows is not specified, all available messages are returned.
Example 5-85 receives all the messages from the head of the queue that is specified by the
service MYSERVICE, using the default policy (DB2.DEFAULT.POLICY). Only the MSG and
CORRELID columns are returned.
The MQRECEIVEALLCLOB function returns a table that contains the messages and message
metadata in CLOB variables from the WebSphere MQ location that is specified by
receive-service, using the policy that is defined in service-policy. This operation removes the
messages from the queue. If a correlation-id is specified, only those messages with a
matching correlation identifier are returned. If correlation-id is not specified, all available
messages are returned. If num-rows is specified, a maximum of num-rows messages is
returned. If num-rows is not specified, all available messages are returned.
For all of the previously mentioned table functions, if the receive-service is not specified or
the null value is used, DB2.DEFAULT.SERVICE is used.
The msg-data parameter on the MQSEND function is in the job CCSID. If a string is passed for
msg-data, it is converted to the job CCSID. For example, if a string is passed for msg-data
that has a CCSID 1200, it is converted to the job CCSID before the message data is passed
to WebSphere MQ. If the string is defined to be bit data or the CCSID of the string is the
CCSID of the job, no conversion occurs.
WebSphere MQ does not run CCSID conversions of the message data when MQSEND is run.
The message data that is passed from DB2 is sent unchanged along with a CCSID that
informs the receiver of the message and how to interpret the message data. The CCSID that
is sent depends on the value that is specified for the CODEDCHARSETID of the service that
is used on the MQSEND function. The default for CODEDCHARSETID is -3, which indicates that
the CCSID passed is the job default CCSID. If a value other than -3 is used for
CODEDCHARSETID, the invoker must ensure that the message data passed to MQSEND is not
converted to the job CCSID by DB2, and that the string is encoded in that specified CCSID.
If the specified service has a value for CODEDCHARSETID of -3, DB2 instructs WebSphere
MQ to convert any message that is read or received into the job CCSID. If a value other than
-3 is used for CODEDCHARSETID, DB2 instructs WebSphere MQ to convert any message
that is read or received into that CCSID. Specifying something other than -3 for
CODEDCHARSETID in a service that is used to read or receive messages is not a preferred
practice because the msg-data return parameter and MSG result column are defined by DB2
in job default CCSID.
SET PATH = *LIBL can be used to reset the path for system naming use. This support is
implemented using a special package flow that is begun at run time.
For more information, see the Call level interface (CLI) driver enhancements topic in the DB2
Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.db2.luw.wn.doc
/doc/c0055321.html
Programming Notes: The sqlsetenvattr() API is called before a connection is made, and
the sqlsetconnectattr() API can be called before or after a connection is made.
Example 5-88 shows an example of finding employees with less than two years of tenure.
Example 5-88 Finding employees with less than two years of tenure
sqlsetconnectattr 1 SQL_ATTR_DATE_FMT SQL_IBMi_FMT_EUR
sqlsetconnectattr 1 SQL_ATTR_DATE_SEP SQL_SEP_PERIOD
sqlexecdirect 1 "SELECT EMPNO FROM CORPDATA.EMP
WHERE HIREDATE > '09.23.2013' - 2 YEARS
ORDER BY LASTNAME, FIRSTNME, MIDINIT" -3
Important: Do not confuse the term performance tools with the licensed product
5770-PT1 Performance Tools.
With these functions, you can set up practices for monitoring and managing your system
performance to ensure that your IT infrastructure is aligned with the changing demands of
your business.
This chapter describes how Collection Services and the analysis tooling have changed.
Requirement: To take advantage of all the Performance Tools enhancements that are
described in this chapter, the system must have the latest levels of PTFs installed.
You can use data from all of these collectors and combine it to allow for an in-depth analysis
of jobs and processes and how they use system resources.
Operating system functionality: All of the functions that allow the configuration of data
collection, to start and end data collection, and to manage the collection objects, are part
of the operating system.
Figure 6-2 shows an example of how you can configure the Custom collection profile.
IBM i 7.1 TR6 and PTF SI47870 are required to collect these new data.
The following metrics are added to the job performance data *JOBMI category of
Collection Services:
SQL clock time (total time in SQ and below) per thread (microseconds)
SQL unscaled CPU per thread (microseconds)
SQL scaled CPU per thread (microseconds)
SQL synchronous database reads per thread
SQL synchronous nondatabase reads per thread
SQL synchronous database writes per thread
SQL synchronous nondatabase writes per thread
SQL asynchronous database reads per thread
SQL asynchronous nondatabase reads per thread
SQL asynchronous database writes per thread
SQL asynchronous nondatabase writes per thread
Number of high-level SQL statements per thread
See the SI47594 PTF Cover letter special instructions at the following website:
http://www-912.ibm.com/systems/electronic/support/a_dir/as4ptf.nsf/ALLPTFS/SI47594
The DSCAT field indicates whether the disk unit has some special characteristics that might
require a special interpretation of its performance data. Each bit in this field has an
independent meaning:
X'00' = No special category applies.
X'01' = This disk unit is in external storage media. This can also be determined by
examining the device type and model for this disk unit.
X'02' = Data on this disk unit is encrypted.
X'04' = This is a virtual disk unit. This can also be determined by examining the device
type and model for this disk unit.
Collection Services supports data for AMS using the file QAPMSHRMP.
AMS was later enhanced to support deduplication of active memory. Collection Services
supports deduplication metrics in QAPMSHRMP using the following existing reserved fields:
SMFIELD1: Partition logical memory deduplicated. The amount of the partition's logical
memory (in bytes) mapped to a smaller set of physical pages in the shared memory pool
because it was identical to other pages in the shared memory pool.
SMFIELD4: Pool physical memory deduplicated. The amount of physical memory (in
bytes) within the shared memory pool that logical pages of memory from the partitions
sharing the pool have been mapped to because of deduplication.
MPFIELD1: Unscaled deduplication time. The amount of processing time, in
microseconds, spent deduplicating logical partition memory within the shared memory
pool.
MPFIELD2: Scaled deduplication time. The amount of scaled processing time, in
microseconds, spent deduplicating logical partition memory within the shared memory
pool.
For more information about DMPMEMINF, see the Dump Main Memory Information
(DMPMEMINF) topic in the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fcl%2Fdmpmeminf
.htm
The new and existing fields contain the total number of times the specific operation occurred
within the job during the Collection Services time interval:
New field: JBNUS
The number of native database (non-SQL) files and SQL cursors that have been fully
opened. Subtracting the value within field JBLBO from JBNUS yields the number of
non-SQL full opens.
Collection Services support for Ethernet link aggregation was added in 7.1 PTF SI43661.
With this PTF applied, the protocol files contain one record per interval for each port that is
associated with a line. Therefore, multiple records for the same line occur each interval if
Ethernet link aggregation is used. Each record reports data unique to activity on that port.
For more information about Ethernet link aggregation, see 9.9, “Ethernet link aggregation” on
page 438.
Data is collected and the CRTPFRDTA function exports the new data to a new file called
QAPMBUSINT.
Support for P7IOC data was built on top of the 12X support. P7IOC support was added to 7.1
in PTF SI43661 and has the following parts:
Data for the internal bus. These data were added to QAPMBUSINT with a new value for
bus type (BUTYPE field) to identify these records.
Hardware data are available for PCI buses that are attached to a P7IOC. These new PCI
data are provided in the QAPMBUS file.
Workload groups
Collection Services added the support to report system-wide usage data for workload groups
and thread-level data to help understand performance issues that are related to workload
grouping. This support was added in 7.1 PTF SI39804.
The *JOBMI data category and QAPMJOBMI file support more metrics that identify the
group that a thread was associated with at sample time, along with how much time that thread
was not able to run due to workload grouping constraints. Descriptions of the QAPMJOBMI
fields JBFLDR2 and JBFLDR3 can be found in the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frzahx%2Frzahxq
apmjobmi.htm
TLBIE metrics
Collection Services added support to capture statistics for the TLBIE instructions that are
frequently needed by IBM support to help investigate performance issues. Support was
added in PTFs MF56871 and SI49418. After the PTFs are applied, file QAPMSYSINT
contains additional data in record type 3. Field descriptions for this file are not available in the
IBM i 7.1 Knowledge Center, but are described in “QAPMSYSINT” on page 315.
Performance data is a set of information about the operation of a system (or network of
systems) that can be used to understand response time and throughput. You can use
performance data to adjust programs, system attributes, and operations. These adjustments
can improve response times and throughputs. Adjustments can also help you to predict the
effects of certain changes to the system, operation, or program.
The following sections discuss some of the Collection Services data files.
For more information about the Collection Services data files, see the IBM i 7.1 Knowledge
Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frzahx%2Frzahxp
erfdatafiles1a.htm
QAPMBUS
This database file contains data for external system buses.
Support for a particular bus and what metrics are supported for that bus depends on the type
of bus, how that bus connects to the system, and whether it is assigned to the partition.
Historically, records were produced for all PCI buses even though data was instrumented
only for bus usage within the collecting partition. Now, data is captured only for those buses
that have activity within the collecting partition.
For newer technologies, the hardware might provide more instrumentation. Hardware metrics
represent bus usage by all partitions. The collecting partition must be authorized to obtain
these data (see the “Allow performance information collection” option within the HMC partition
configuration). If the collecting partition is authorized, buses that support hardware metrics
are reported independent of partition assignments.
INTNUM Interval number: The nth sample database interval that is based on the start time in the
Create Performance Data (CRTPFRDTA) command.
DATETIME Interval date (mmddyy) and time (hhmmss): The date and time of the sample interval.
INTSEC Elapsed interval seconds: The number of seconds since the last sample interval.
BUIOPB System bus number: Bus numbering begins with one. Before V5R4, bus numbering
began with zero.
BUNRDR Reserved.
BUTIMO Reserved.
BUBNAS Reserved.
BUCAT Bus category. This field indicates if this bus record has some special characteristics,
which might require a special interpretation of its performance data. Each bit in this field
has an independent meaning:
X'00' = No special category applies.
X'01' = This bus is attached to an I/O hub.
(Note: The following might be used depending on what happens with switches.)
X’02’= This record represents a switch. The data that is reported is the sum of all
buses under the switch. The bus number that is reported is the first bus under the
switch.
BUHUB Hub number. If this bus is associated with an I/O hub, this number is the number of that
hub. (Note: An I/O hub can be embedded in the backplane.)
BUMAXRATE Maximum byte rate. When available from hardware, this rate is the estimated maximum
rate that data might be both sent and received in bytes per second through the hardware
port.
BUCBSND Command bytes sent. When available from hardware, this number is the number of
command bytes sent through the hardware port.
BUDBSBD Data bytes sent. When available from hardware, this number is the number of data bytes
sent through the hardware port.
BUCBRCV Command bytes received. When available from hardware, this number is the number of
command bytes received through the hardware port.
BUDBRCV Data bytes received. When available from hardware, this number is the number of data
bytes received through the hardware port.
The metrics that are supported depend on the instrumentation within the hardware chips.
Support for a particular bus depends on both the type of bus and the chip family.
There might be one or more records for each interval for a reported bus. The number of
records and the metrics that are supported depend on both the bus type and chip type.
These metrics are instrumented in the hardware and represent bus usage by all partitions.
The collecting partition must be authorized to obtain these data (see the “Allow performance
information collection” option within the HMC partition configuration).
INTNUM Interval number: The nth sample database interval based on the start time specified in
the Create Performance Data (CRTPFRDTA) command.
DTETIM Interval date and time. The date and time of the sample interval.
INTSEC Elapsed interval seconds: The number of seconds since the last sample interval.
BUNBR Bus number. The hardware assigned number that is associated with the bus or hub.
BUDFMT Bus data format. This field is provided to help you understand what data are instrumented
by the hardware components of the bus if there are future differences.
BUATTR1 Bus attribute 1. The meaning of this field depends on the bus type. One row is present
for each bus type (BUTYPE) field:
Type 4: Port identifier. One record is present for each supported port.
– 0 = even port
– 1 = odd port
Type 6: Category.
– 0 = Topside port
BUMAXRATE Maximum byte rate. The estimated maximum rate that data can be both sent and
received in bytes per second.
BUDATA1 The meaning of this field depends on the type (BUTYPE) field:
Type 4: Reserved.
Type 6: Command bytes sent.
BUDATA2 The meaning of this field depends on the type (BUTYPE) field:
Type 4: Reserved.
Type 6: Command bytes received.
QAPMDISK
You find new entries in this table that detail, per path, the total read and write operations and
worldwide node names for external disks. Table 6-4 shows the added columns.
DSPTROP The path total read operations reports the number of read requests that are received by
internal machine functions, which is not the same as the device read operations reported
in the DSDROP field.
DSPTWOP The path total write operations reports the number of write requests that are received by
internal machine functions, which is not the same as the device write operations that are
reported in the DSDWOP field.
DSWWNN The worldwide node name is a unique identifier that represents the external storage
subsystem that the disk belongs to. This value is null for non-external disks.
QAPMDISKRB
Up to release 6.1, the QAPMDISK table contained a detailed set of data about the
performance of the disk unit. This design was kept, but complemented with a new table
(QAPMDISKRB) that contains only the disk operations per interval. At the same time, it
increases the number of bucket definition boundaries reported from 6 to 11, separates the
read and write operations in different counters, and reports the bucket definition boundaries in
microseconds instead of in milliseconds. These changes apply to all disks, internal or
external. Each entry in the QAPMDISKRB table contains the number of I/O operations, the
response time, and the service time. The associated disk response time boundaries (in
microseconds) are reported in the QAPMCONF file in GKEY fields G1–GA; there is no
interface to change them.
You can find the breakouts for those buckets in Table 6-5. Both QAPMDISK and
QAPMDISKRB tables carry the same columns for each row (interval number and device
resource name), so they can be joined for analysis.
Table 6-5 Boundaries per bucket in the QAPMDISKRB and QAPMDISK tables
Bucket QAPMDISKRB (microseconds)  Bucket QAPMDISK (milliseconds)
1      0 - 15
2      15 - 250                   1      >0 - 1
3      250 - 1,000
4      1,000 - 4,000
5      4,000 - 8,000              2      2 - 16
6      8,000 - 16,000
7      16,000 - 64,000            3      16 - 64
8      64,000 - 256,000           4      64 - 256
9      256,000 - 500,000
                                  5      256 - 1,024
10     500,000 - 1,024,000
11     > 1,024,000                6      > 1,024
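A sketch of such a join follows; DSARM is assumed here to be the shared device resource
name column, so verify the actual column name against the file layouts:
SELECT A.INTNUM, A.DSARM, B.*
  FROM QAPMDISK A
  JOIN QAPMDISKRB B
    ON A.INTNUM = B.INTNUM   -- same collection interval
   AND A.DSARM  = B.DSARM    -- same device resource name (column name assumed)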
QAPMETHP
This database file includes the physical port Ethernet protocol statistics for active Ethernet
line descriptions that are associated with an Ethernet port on a Single Root I/O Virtualization
(SR-IOV) adapter (Table 6-6).
Physical port data is reported only if the collecting partition has been authorized to obtain it.
This authorization is a partition configuration attribute set on the Hardware Management
Console (HMC).
There is one record per interval per port. Port resource name can be used to uniquely
associate records across intervals and to join with the records that contain the virtual port
Ethernet protocol statistics in the QAPMETH file.
INTNUM Interval number: The nth sample database interval based on the start time specified in
the Create Performance Data (CRTPFRDTA) command.
DATETIME Interval date (yymmdd) and time (hhmmss): The date and time of the sample interval.
INTSEC Elapsed interval seconds: The number of seconds since the last sample interval.
ETMEXR More than 16 retries: Frame unsuccessfully transmitted due to excessive retries.
ETMOWC Out of window collisions: Collision occurred after slot time of channel elapsed.
ETMALE Alignment error: Inbound frame contained non-integer number of bytes and a CRC error.
ETMCRL Carrier loss: Carrier input to the chipset on the IO adapters is false during transmission.
ETMDIF Discarded inbound frames: Receiver discarded frame due to lack of AIF entries.
ETMROV Receive overruns: Receiver has lost all or part of an incoming frame due to buffer
shortage.
ETMMEE Memory error: The chipset on the IO adapters is the bus master and did not receive ready
signal within 25.6 microseconds of asserting the address on the DAL** lines.
ETMSQE Signal quality error: Signal indicating the transmit is successfully complete did not arrive
within 2 microseconds of successful transmission.
ETMM1R More than one retry to transmit: Frame required more than one retry for successful
transmission.
ETM1R Exactly one retry to transmit: Frame required one retry for successful transmission.
ETMDCN Deferred conditions: The chipset on the IO adapters deferred transmission due to busy
channel.
ETMBRV Total MAC bytes received ok: This contains a count of bytes in frames that are
successfully received. It includes bytes from received multicast and broadcast frames.
This number includes everything from destination address up to but excluding FCS.
Source address, destination address, length or type, and pad are included.
ETMBTR Total MAC bytes transmitted ok: Total number of bytes transmitted successfully. This
number includes everything from destination address up to but excluding FCS. Source
address, destination address, length or type, and pad are included.
ETMUPF Unsupported protocol frames: Number of frames that were discarded because they
specified an unsupported protocol. This count is included in the discarded inbound
frames counter.
QAPMJOBMI
The QAPMJOBMI table now has information about lock counts on a thread basis, providing
details about which locks are held (seizes, process scoped locks, thread scoped locks,
process scoped database record locks, and thread scoped database record locks held). It
also holds information about the resource affinity status changes of a thread or process.
Cache memory access: On Power Systems, all of the processor cores on any chip can
access any of the cache memory in the entire system. The management of the relationship
between the processor or “node” where a task runs and the “nodal” location where that
task finds its data is called Memory Resource Affinity.
Collection Services now includes physical I/O breakdown by SQL activity. These metrics are
included in the *JOBMI collection category and reported by Collection Services in the
QAPMJOBMI file. An updated template file for QAPMJOBMI, which contains the additional
fields, is included in QSYS. This template file is used only when you create the QAPMJOBMI
file in libraries where it does not exist. This enhancement enables improved isolation of
native database versus SQL database performance.
JBNFHN The identifier of a resource affinity domain that this software thread or task is associated
with. A thread or task is associated with the resource affinity domain at create time, but
the operating system can decide to move it to another resource affinity domain later.
JBNFLVL The resource affinity level specifies the relative strength of the binding between a thread
and the internal machine resources with which it has affinity (processors and main
storage). The strength is expressed as:
X’00’ = Processor normal, main storage normal
X’01’ = Processor normal, main storage high
X‘10’ = Processor high, main storage normal
X’11’ = Processor high, main storage high
X’03’ = Processor normal, main storage none
X’20’ = Processor low, main storage normal
JBNFGRP The identifier of a resources affinity group or resource affinity domain. This identifier
specifies how threads or tasks are related to other threads or tasks in their use of internal
machine processing resources, or how they are related to specific resource affinity
domains.
JBNFHNC The amount of processor time that is used by the thread on the resource affinity domain
that this thread is associated with. The time is reported in internal model-independent
units. This time is called the Local Dispatch Time.
JBNFFNC The amount of processor time that is used by the thread on resource affinity domains
other than the one that this thread is associated with, but within the same group. The time
is reported in internal model-independent units. This time is called the Non-Local Dispatch
Time.
JBNFHNP The number of 4-K page frames that are allocated for this thread during this interval from
the resource affinity domain that this thread is associated with. These frames are called
Local page frames.
JBNFFNP The number of 4-K page frames that are allocated for this thread during this interval from
resource affinity domains other than the one that this thread is associated with, but within
the same group. These frames are called Non-local page frames.
JBTNAME Identifies the name of secondary thread at sample time. The field is blank for primary
threads, tasks, and unnamed secondary threads.
JBSLTCNT If the short lifespan entry count is greater than zero, the entry does not represent a
particular task or secondary thread. Instead, it is a special record that is used to report
data that is accumulated for tasks and threads whose lifespan was shorter than the
reporting threshold that was in effect when the collection started. Short lifespan tasks are
reported for the processor node that they were associated with and short lifespan
secondary threads are reported for the job to which they belong.
JBSACPU The accumulated job scaled processor time that is charged (in microseconds). The
accumulated scaled interval processor time that is charged for all threads of the job since
the job started. This field is provided for primary threads only.
JBINDCPU The amount of unscaled processor time (in µs) that represents the work that is done
solely within this thread without regard for how server task work is charged.
JBSINDCPU Thread scaled processor time that is used (in microseconds). The amount of scaled
processor (in µs) time that represents the work that is done solely within this thread
without regard for how server task work is charged.
JBCPUWC The elapsed processor time (in µs) that a task runs.
JBVPDLY The elapsed delay time (in microseconds) caused by virtualization while a task was
running. The virtual processor delay time includes virtual processor thread wait event
time, virtual processor thread wait ready time, and virtual processor thread dispatch
latency.
JBSEIZECNT The number of seizes held by this thread at the time that the data was sampled.
JBPSLCKCNT The number of process scoped locks that are held by this thread at the time that the data
was sampled.
JBTSLCKCNT The number of thread scoped locks that are held by this thread at the time that the data
was sampled.
JBTSRCDLCK The number of thread scoped database record locks held by this thread at the time that
the data was sampled.
JBNFOGDT The amount of processor time that is used by the thread in a resource affinity group other
than the one that this thread is associated with. The time is reported in internal
model-dependent units.
JBNFOGMA The number of 4-K page frames that are allocated for this thread during this interval from
a resource affinity group other than the one that this thread is associated with.
JBFLDR2 Workload capping group delay time (in microseconds). The amount of time that this
thread could not be dispatched because of workload capping.
JBFLDR3 Workload capping group. The identifier for the workload capping group that this thread
belonged to at the time these data were sampled. A value of zero is reported when no
group was assigned.
JBSQLCLK SQL clock time. The amount of clock time (in microseconds) this thread has spent
running work that was done on behalf of an SQL operation.
JBSQLCPU Thread unscaled SQL CPU time used. The amount of unscaled processor time (in
microseconds) this thread has used running work that was done on behalf of an SQL
operation.
JBSQLSCPU Thread scaled SQL processor time used. The amount of scaled processor time (in
microseconds) that this thread used running work that was done on behalf of an SQL
operation.
JBSQLDBR SQL synchronous database reads. The total number of physical synchronous database
read operations that are done on behalf of an SQL operation.
JBSQLNDBR SQL synchronous nondatabase reads. The total number of physical synchronous
non-database read operations that are done on behalf of an SQL operation.
JBSQLDBW SQL synchronous database writes. The total number of physical synchronous database
write operations that are done on behalf of an SQL operation.
JBSQLNDBW SQL synchronous nondatabase writes. The total number of physical synchronous
non-database write operations that are done on behalf of an SQL operation.
JBSQLADBR SQL asynchronous database reads. The total number of physical asynchronous
database read operations that are done on behalf of an SQL operation.
JBSQLANDBR SQL asynchronous nondatabase reads. The total number of physical asynchronous
non-database read operations that are done on behalf of an SQL operation.
JBSQLADBW SQL asynchronous database writes. The total number of physical asynchronous
database write operations that are done on behalf of an SQL operation.
JBSQLANDBW SQL asynchronous nondatabase writes. The total number of physical asynchronous
non-database write operations that are done on behalf of an SQL operation.
JBHSQLSTMT Number of high-level SQL statements. The number of high-level SQL statements that run
during the Collection Services time interval. This count includes only initial invocation of
the independent SQL statements. It does not include dependent SQL statements started
from within another SQL statement. This count also includes initial invocation of
independent SQL statements that failed to run successfully.
QAPMJOBSR
This file contains data for jobs that run save or restore operations. It contains one record per
job for each operation type that is run.
If you click Collection Services → Collection Services Database Files, and then select
QAPMJOBSR in Performance Data Investigator (PDI), you see an overview of the data that
looks like Figure 6-4.
QAPMSHRMP
The QAPMSHRMP table reports shared memory pool data (referred to as Active Memory
Sharing in PowerVM). This data is generated only when a partition is defined to use a shared
memory pool. Data is reported both for the partition's use of the pool and for pool metrics that
are the sum of activity that is caused by all partitions using the pool. You must have a POWER6
system and firmware level xx340_075 or later for this data to be available. Table 6-8 shows
the data that is kept in this table, in addition to the interval number (INTNUM), date and time
(DTETIM), and elapsed interval seconds (INTSEC) columns.
SMPOOLID Shared memory pool identifier. The identifier of the shared memory pool that this partition
is using.
SMWEIGHT Memory weight. Indicates the variable memory capacity weight that is assigned to the
partition. Valid values are hex 0 - 255. The larger the value, the less likely this partition is
to lose memory.
SMREALUSE Physical real memory used. The amount of shared physical real memory, in bytes, that
was being used by the partition memory at the sample time.
SMACCDLY Real memory access delays. The number of partition processor waits that occurred
because of page faults on logical real memory.
SMACCWAIT Real memory access wait time. The amount of time, in milliseconds, that partition
processors waited for real memory page faults to be satisfied.
SMENTIOC Entitled memory capacity for I/O. The amount of memory, in bytes, assigned to the
partition for usage by I/O requests.
SMMINIOC Minimum entitled memory capacity for I/O. The minimum amount of entitled memory, in
bytes, needed to function with the current I/O configuration.
SMOPTIOC Optimal entitled memory capacity for I/O. The amount of entitled memory, in bytes, that
allows the current I/O configuration to function without any I/O memory mapping delays.
SMIOCUSE Current I/O memory capacity in use. The amount of I/O memory, in bytes, currently
mapped by I/O requests.
SMIOCMAX Maximum I/O memory capacity used. The maximum amount of I/O memory, in bytes, that
was mapped by I/O requests since the partition last had an IPL or the value was reset by
an explicit request.
SMIOMDLY I/O memory mapping delays. The cumulative number of delays that occurred because
insufficient entitled memory was available to map an I/O request since the partition last
underwent an IPL.
MPACCDLY Pool real memory access delays. The number of virtual partition memory page faults
within the shared memory pool for all partitions.
MPACCWAIT Pool real memory access wait time. The amount of time, in milliseconds, that all partitions
processors spent waiting for page faults to be satisfied within the shared memory pool.
MPPHYMEM Pool physical memory. The total amount of physical memory, in bytes, assigned to the
shared memory pool.
MPLOGMEM Pool logical memory. The summation, in bytes, of the logical real memory of all active
partitions that are served by the shared memory pool.
MPENTIOC Pool entitled I/O memory. The summation, in bytes, of the I/O entitlement of all active
partitions that are served by the shared memory pool.
MPIOCUSE Pool entitled I/O memory in use. The summation, in bytes, of I/O memory that is mapped
by I/O requests from all active partitions that are served by the shared memory pool.
QAPMSYSTEM
The QAPMSYSTEM file reports system-wide performance data. In IBM i 7.1, columns are
added, as shown in Table 6-9.
INTNUM Interval number. The nth sample database interval that is based on the start time that is specified
in the Create Performance Data (CRTPFRDTA) command.
DATETIME Interval date and time. The date and time of the sample interval.
INTSEC Elapsed interval seconds. The number of seconds since the last sample interval.
SWGNAME Group Name. The name that is assigned to the workload group when it is allocated by License
Management.
SWPRCASN Processors assigned. The maximum number of processors that can be used concurrently by all
threads of all processes that are associated with the workload group. This value is the value that
is associated with the group at the time data was sampled.
SWPRCAVL Processor time available (in microseconds). The amount of processor time that this group had
available to it based on the number of processors that are assigned to the group over time.
SWPRCUSE Processor unscaled time used (in microseconds). The amount of unscaled processor time that is
used within the threads that are assigned to this group. This value does not include the time
that is charged to a thread by server tasks.
SWSPRCUSE Processor scaled time that is used (in microseconds). The amount of scaled processor time that
is used within threads that are assigned to this group. This value does not include the time that is
charged to a thread by server tasks.
SWDELAY Dispatch latency time. The amount of time that ready-to-run threads could not be dispatched
because of the group's maximum concurrent processor limit.
SWPRCADD Processes added. The number of process instances that became associated with this group
during the interval.
SWPRCRMV Processes removed. The number of process instances that were disassociated from this group
during the interval.
QAPMTAPE
The QAPMTAPE table contains the tape device data that is collected in the Removable
storage (*RMVSTG) collection category. It contains one record per interval per tape device
that is connected to the system. Besides the data about the interval, it contains the columns
in Table 6-11.
TPWREQ Time spent waiting for a request from the client (in milliseconds)
TPWRESP Time spent waiting for a response from the drive (in milliseconds)
QAPMXSTGD
In IBM i 7.1, the QAPMXSTGD table was added with performance data of external storage
systems (DS8000 and DS6000 storage servers). These data can be analyzed with iDoctor -
Collection Services Investigator. The table contains mainly volume and LUN-oriented
statistics, and advanced log sense statistics can also be obtained from those storage
servers. The *EXTSTG collection category is shipped disabled. For more information, see
the Memo to Users and APAR SE41825 for PTF information at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzaq9/rzaq9.pdf
http://www-912.ibm.com/n_dir/nas4apar.NSF/c79815e083182fec862564c00079d117/810d72
fc51f14ed48625770c004b9964?OpenDocument
For more information, see Chapter 8, “Storage and solid-state drives” on page 373.
QAPMSYSINT
This database file contains data for IBM support use when investigating performance on IBM
POWER7 Systems™. With PTFs MF56871 and SI49418, support was added to view metrics
for the TLBIE instruction on Power Systems. The data in this file varies based on the record
type. Record type 3 contains the two types of TLBIE records that are needed to present all the
data. The two types are distinguished by SIDID field values 1 and 2.
Table 6-12 shows the contents of the QAPMSYSINT table for record type 3, SIDID 1.
Table 6-12 Contents of the QAPMSYSINT table for record type 3, SIDID 1
Column name Description
INTNUM Interval number. The nth sample database interval that is based on the start time in the Create
Performance Data (CRTPFRDTA) command.
DATETIME Interval date (mmddyy) and time (hhmmss). The date and time of the sample interval.
SITYPE Record type. Always 3 for the values shown in this table.
SIDID Internal record identifier. Always 1 for the values shown in this table.
SIDATA02 The number of ticks of the time base special purpose register spent processing TLBIEs this interval.
On POWER7, there are 512 ticks in a microsecond. Divide the value in this record by 512 to get the
total time spent processing TLBIEs in microseconds.
Note: SIDATA02 / SIDATA01 = “Average TLBIE time”
SIDATA06 Total TLBIEs with a duration of at least 10 and less than 1,000 microseconds.
SIDATA08 Total TLBIEs with a duration of at least 10 and less than 100 milliseconds.
SIDATA10 Average time that is spent processing TLBIEs (in ticks) in the last 10 milliseconds. Divide by 512 to
get this value in microseconds.
SIDATA11 Average time that is spent processing TLBIEs (in ticks) in the last 100 milliseconds. Divide by 512 to
get this value in microseconds.
SIDATA12 Average time that is spent processing TLBIEs (in ticks) in the last 1 second. Divide by 512 to get this
value in microseconds.
SIDATA13 Average time that is spent processing TLBIEs (in ticks) in the last 10 seconds. Divide by 512 to get
this value in microseconds.
SIDATA14 Average time that is spent processing TLBIEs (in ticks) in the last 100 seconds. Divide by 512 to get
this value in microseconds.
SIDATA15 Average time that is spent processing TLBIEs (in ticks) in the last 1000 seconds. Divide by 512 to get
this value in microseconds.
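Combining the SIDATA02 note with the 512 ticks-per-microsecond conversion (and assuming, as
the note implies, that SIDATA01 is the TLBIE count for the interval), the average TLBIE time
can be written as:

\[ \text{average TLBIE time}\;(\mu\text{s}) = \frac{\text{SIDATA02}}{512 \times \text{SIDATA01}} \]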
Table 6-13 shows the contents of the QAPMSYSINT table for record type 3, SIDID 2.
Table 6-13 Contents of the QAPMSYSINT table for record type 3, SIDID 2
Column name Description
INTNUM Interval number. The nth sample database interval that is based on the start time in the Create
Performance Data (CRTPFRDTA) command.
DATETIME Interval date (mmddyy) and time (hhmmss). The date and time of the sample interval.
INTSEC Elapsed interval seconds. The number of seconds since the last sample interval.
SITYPE Record type. Always 3 for the values that are shown in this table.
SIDID Internal record identifier. Always 2 for the values that are shown in this table.
SIDATA10 Maximum paced spin wait time. Divide by 512 to get this value in microseconds.
SIDATA11 Total paced spin wait time. Divide by 512 to get this value in microseconds.
Note: SIDATA11 / SIDATA12 = “Average paced spin wait time”
SIDATA13 Maximum paced time. Divide by 512 to get this value in microseconds.
SIDATA14 Total paced time. Divide by 512 to get this value in microseconds.
Important: Major enhancements have been made to IBM Navigator for i and the
Performance Data Investigator, so ensure that you have the latest group PTFs installed for
the following groups:
HTTP Server group, PTF SF99368
Java group, PTF SF99572
Database group, PTF SF99701
Performance Tools group, PTF SF99145
For a full overview of all the functions of the Performance Investigator and Collection
management interface, see Chapter 17, “IBM Navigator for i 7.1” on page 667.
For more information about IBM iDoctor for IBM i, see Appendix A, “IBM i Doctor for IBM i” on
page 861.
Chapter 7. Virtualization
This chapter describes the following topics:
PowerVM enhancements
More OS level combinations of server and client logical partitions
Hardware Management Console virtual device information
IBM i hosting IBM i - iVirtualization
Virtual Partition Manager enhancements
Partition suspend and resume
HEA Daughter cards
10 Gb FCoE PCIe Dual Port Adapter
Live Partition Mobility
When a shared memory partition needs more memory than the current amount of unused
memory in the shared memory pool, the hypervisor stores a portion of the memory that
belongs to the shared memory partition in an auxiliary storage space that is known as a
paging space device. Access to the paging space device is provided by a Virtual I/O Server
(VIOS) logical partition that is known as the paging service partition. When the operating
system of a shared memory partition accesses data that is in a paging space device, the
hypervisor directs the paging service partition to retrieve the data from the paging space
device. The partition then writes it to the shared memory pool so that the operating system
can access the data.
(Figure: two paging service partitions and four shared memory partitions on one server; the
hypervisor uses the paging service partitions to access the paging space devices.)
The PowerVM Active Memory Sharing technology is available with the PowerVM Enterprise
Edition hardware feature, which also includes the license for the VIOS software.
Paging service partitions must be VIOS. Logical partitions that provide virtual I/O resources to
other logical partitions can be VIOS or IBM i. They must be dedicated memory partitions, but
their client partitions can be shared memory partitions.
Important: Logical partitions that have dedicated physical resources cannot be shared
memory partitions.
(Figure: two paging service partitions and four shared memory partitions on one server; the
hypervisor manages the shared memory pool.)
For IBM i client partitions where the disk storage is virtualized using VIOS partitions and
storage area network (SAN) Disk Storage, NPIV and multipath I/O support is available with
IBM i 6.1.1 or later. For more information about NPIV, see 7.1.3, “PowerVM Virtualization and
I/O enhanced with NPIV” on page 326. For multipath I/O for IBM i client partitions, see 8.3.2,
“Multipathing for virtual I/O” on page 396.
Requirement: When you use redundant paging service partitions, common paging space
devices must be on SAN Disk Storage to enable symmetrical access from both paging
service partitions.
Solid-state disk usage: A solid-state disk (SSD) on VIOS can be used as a shared
memory pool paging space device. For more information, see 8.4, “SSD storage
management enhancements” on page 407.
For more detailed information about AMS, see IBM PowerVM Virtualization Active Memory
Sharing, REDP-4470.
A significant enhancement for IBM i 7.1 includes connectivity to the IBM Storwize Family and
SAN Volume Controller as shown in Figure 7-4.
Figure 7-4 New enhancements available for IBM i connectivity to Storwize and SAN Volume Controller
For more information about storage area networks and IBM i, see Chapter 8, “Storage and
solid-state drives” on page 373.
Support availability: NPIV support has been expanded in IBM i 7.1. For more information
about NPIV, see 7.1.3, “PowerVM Virtualization and I/O enhanced with NPIV” on page 326.
For an overview of IBM i System Storage solutions, see IBM i Virtualization and Open Storage
read-me first, found at:
http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf
For more information about IBM i and supported connectivity methods to various types of IBM
external storage, including available SAN Storage solutions for Power Systems and IBM i, see
the System Storage Interoperation Center at:
http://www-03.ibm.com/systems/support/storage/config/ssic/
Figure 7-6 Comparing PowerVM storage virtualization with VSCSI and NPIV
(Figure: the hypervisor connects a server virtual Fibre Channel adapter to a physical Fibre
Channel adapter, providing access through the storage area network to physical storage 1, 2,
and 3.)
Figure 7-7 VIOS VFC server adapter and IBM i VFC client adapter
Two unique worldwide port names (WWPNs) are generated for the VFC client adapter and are
available on the SAN so that storage can be mapped to them as with any other FC ports. The
following issues must be considered when you use NPIV:
Use one VFC client adapter per physical port per partition to avoid a single point of
failure.
A maximum of 64 active VFC client adapters are permitted per physical port. This number
can be less because of other VIOS resource constraints.
There can be only 32,000 unique WWPN pairs per system platform.
– Removing an adapter does not reclaim WWPNs. They can be manually reclaimed through
the CLI (mksyscfg, chhwres, and so on) or through the “virtual_fc_adapters” attribute.
– If the capacity is exhausted, you must purchase an activation code for more capacity.
Important: Only one of the two WWPN ports is used (port 0). The second WWPN
port is not used.
Figure 7-8 SAN resources as seen by IBM i client partitions when you use NPIV
The 6B25-001 shows a single port (0). The worldwide port name is how the SAN recognizes
the Virtual IOA, as shown in Figure 7-9.
LTO6 support in IBM i 7.1 requires PTFs MF55886 and MF55967, and if you use BRMS,
SI47039 or its superseding PTF.
Note: The devices cannot be directly attached. They must be attached through an
NPIV-capable switch. Plan for a performance degradation of about 10% or more for
devices that are attached using NPIV compared to the same devices attached in a native
IOP-less configuration.
7.1.4 Expanded HBA and switch support for NPIV on Power Blades
Power Blades running PowerVM VIOS 2.2.0 with IBM i 7.1 partitions support the QLogic 8 Gb
Blade HBAs to attach DS8100, DS8300, and DS8700 storage systems through NPIV. This
support allows easy migration from existing DS8100, DS8300, and DS8700 storage to a
blade environment. Full PowerHA support is also available with virtual Fibre Channel and the
DS8100, DS8300, and DS8700, which includes Metro Mirroring, Global Mirroring, FlashCopy,
and LUN level switching.
For compatibility information, consult the Storage Systems Interoperability Center at:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
For more information about N_Port ID Virtualization (NPIV) for IBM i, see the IBM i
Virtualization and Open Storage read-me first topic, found at:
http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf
For more information about SAN Storage solutions for Power Systems and IBM i, see the
System Storage Interoperation Center at:
http://www-03.ibm.com/systems/support/storage/config/ssic/
(Figure: an IBM i client partition on a POWER6 server with mirrored IASP and SYSBAS disk
units; client VFC adapters connect through server VFC adapters in two redundant VIOS
partitions, which own the physical FC connections.)
For more information about Redundant VIOS partitions, see the IBM i Virtualization and Open
Storage read-me first topic, found at:
http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf
Prerequisites
A shared storage pool requires the following prerequisites:
POWER6 (and later) based servers (including blades).
PowerVM Standard Edition or PowerVM Enterprise Edition.
Virtual I/O Server requirements:
– Version 2.2.0.11, Fix Pack 24, Service Pack 1, or later.
– Processor entitlement of at least one physical processor.
– At least 4 GB memory.
Client partition operating system requirements:
– IBM AIX 5L™ V5.3 or later.
– IBM i 6.1.1 or later with the latest PTF.
Local or DNS TCP/IP name resolution for all Virtual I/O Servers in the cluster.
Minimum storage requirements for the shared storage pool:
– One Fibre Channel attached disk that acts as a repository, with at least 20 GB of disk
space.
– At least one Fibre Channel attached disk for shared storage pool data. Each disk must
have at least 20 GB of disk space.
All physical volumes for the repository and the shared storage pool must have redundancy
at the storage level.
The Virtual I/O Server clustering model is based on Cluster Aware AIX (CAA) and RSCT
technology. The cluster for the shared storage pool is an RSCT Peer Domain cluster.
Therefore, a network connection is needed between all the Virtual I/O servers that are part of
the shared storage pool.
Each Virtual I/O Server in the cluster requires at least one physical volume for the repository
that is used by the CAA subsystem and one or more physical volumes for the storage pool.
All cluster nodes in a cluster can see all the disks. Therefore, the disks must be zoned to all
the cluster nodes that are part of the shared storage pools. All nodes can read and write to
the shared storage pool. The cluster uses a distributed lock manager to manage access to
the storage.
The Virtual I/O Servers in the cluster communicate with each other using Ethernet
connections. They share the repository disk and the disks for the storage pool through the
SAN.
The physical volumes in the shared storage pool are managed as an aggregation of physical
blocks and user data is stored in these blocks. These physical blocks are managed by a
metadata area on the physical volumes. Therefore, the physical volumes in the shared
storage pool consist of physical blocks and have a physical block address space.
The translation from a virtual block address to a physical block address is done by the Virtual
Address Translation Lookaside (VATL).
The system reserves a small amount of each physical volume in the shared storage pool to
record metadata. The remainder of the shared storage pool capacity is available for client
partition user data. Therefore, not all of the space of physical volumes in the shared storage
pool can be used for user data.
Thin provisioning
A thin-provisioned device represents a larger image than the actual physical disk space it is
using. It is not fully backed by physical storage if the blocks are not in use.
For an overview of thin provisioning of a shared storage pool, see Figure 7-13.
Figure 7-14 OS level combinations of server and client for IBM i and VIOS
An IBM i 6.1 server partition can provide virtual I/O resources to the following elements:
– IBM i 6.1 and 7.1 or later client partitions
– AIX V5.2, V5.3, and V6.1, and SLES and Red Hat Linux client partitions
– iSCSI-attached IBM System x and BladeCenter
An IBM i 7.1 server partition can provide virtual I/O resources to the following elements:
– IBM i 6.1 and 7.1 or later client partitions
– AIX V5.2, V5.3, and V6.1, and SLES and Red Hat Linux client partitions
– iSCSI attached System x and BladeCenter
A PowerVM VIOS 2.1.3 server partition can provide virtual I/O resources to the following
elements:
– IBM i 6.1 and 7.1 or later client partitions
– AIX and Linux client partitions
For more information about PowerVM, see IBM PowerVM Virtualization Introduction and
Configuration, SG24-7940.
For more information about IBM i client partitions, see the IBM i Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i/welcome
The following list describes the information that is displayed in the HMC:
Virtual Adapter. This column displays the name of each virtual server SCSI adapter.
Backing Device. This column displays the name of the storage device whose storage
capacity can be used through a virtual SCSI connection to this virtual server SCSI
adapter. This storage device is on the same logical partition as the virtual server SCSI
adapter.
Remote Partition. This column displays the partition name and partition ID (in
parentheses) of the logical partition to which each virtual server SCSI adapter is set to
connect. If this column is blank, then the virtual server SCSI adapter is set to connect to
any logical partition.
Remote Adapter. This column displays the virtual slot ID of the virtual client SCSI adapter
to which each virtual server SCSI adapter is set to connect. If this column contains none,
then the virtual server SCSI adapter is set to connect to any virtual client SCSI adapter.
Remote Backing Device. This column displays the name of the virtual disks (or logical
volumes) that display on the logical partition with the virtual client SCSI adapter when a
virtual SCSI connection exists. The logical partition with the virtual client SCSI adapter can
use these virtual disks to store information about the storage device that is owned by the
logical partition with the virtual server SCSI adapter. This column contains a value only if
the virtual server SCSI adapter is connected to a virtual client SCSI adapter.
Consideration: You can create virtual server SCSI adapters only for Virtual I/O Server
and IBM i logical partitions. This window is always blank for AIX and Linux logical
partitions.
The following list details the requirements for virtual device information:
POWER6 or later rack / tower systems
BladeCenter H
System firmware level 350_038 or later
HMC V7.3.5 or later
VIOS V2.1.2 (FP 22.1) or later
IBM i 6.1.1 or later (+latest fixes)
Example 7-1 lists the Virtual SCSI Adapter attributes in the form of a slash delimited list.
Example 7-2 lists the Virtual Fibre Channel Adapters attributes for each logical partition in the
form of a slash delimited list.
For more information about the lshwres command, go to the Hardware Knowledge Center at:
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/iphcg/ls
hwres.htm
(Figure: IBM i hosting IBM i; DS8000 and EXP24 disk units (DDxx) are provided to the client
over a virtual SCSI connection backed by NWSSTGs, DVD drives appear as OPTxx devices, and
network access (CMNxx) is provided over a virtual LAN through an IVE connection.)
IBM i hosting IBM i (iVirtualization) uses an existing function of the system firmware, or IBM
Power Hypervisor, which can create VSCSI and Ethernet adapters in a partition.
iVirtualization: iVirtualization is the term that is used to describe the hosting of IBM i
client partitions with IBM i serving as the host. The term might be used interchangeably
with the phrase “IBM i hosting IBM i” in documentation and websites. It is important to note
that iVirtualization is not the same as IBM i with its storage hosted from VIOS.
Virtual adapters are created for each partition in the Hardware Management Console (HMC)
or virtual server in the Systems Director Management Console (SDMC). VSCSI adapters are
used for storage and optical virtualization, and virtual Ethernet adapters are used for network
virtualization. POWER6 or later and IBM i 6.1 or later are required to support IBM i client
partitions.
Tip: VIOS server partitions can also virtualize a natively attached storage device to IBM i
6.1 or later client partitions. For more information, see IBM PowerVM Virtualization
Managing and Monitoring, SG24-7590.
This VSCSI adapter pair allows the client partition to send read and write I/O operations to the
host partition. More than one VSCSI pair can exist for the same client partition in this
environment.
To create Virtual SCSI adapters for the IBM i host partition and IBM i client partition, complete
the following steps:
1. Use the managing HMC to create a VSCSI server adapter on the IBM i host partition:
a. In the navigation pane, click Systems Management → Servers, and click the
managed system on which the server IBM i host partition is on.
b. Select the IBM i host partition, click Tasks, and click Dynamic Logical Partitioning →
Virtual Adapters.
c. Click Actions and click Create → SCSI Adapter.
d. Use the default VSCSI adapter number or provide your own number. Write down the
VSCSI adapter number, as you need it in a later step.
In Figure 7-17, the number 31 was provided as the Virtual SCSI adapter number. In the
Type of adapter field, select Server, and click OK.
e. Save the current configuration for the IBM i host partition so that the VSCSI adapter
continues to exist after you restart the partition.
No additional configuration is required in IBM i in the virtual client partition. In the host
partition, the minimum required IBM i setup consists of the following requirements:
One network server description (NWSD) object
One network server storage space (NWSSTG) object
The NWSD object associates a VSCSI server adapter in IBM i (which in turn is connected to
a VSCSI client adapter in the HMC/SDMC) with host storage resources. At least one NWSD
object must be created on the host for each client, although more are supported. One or more
NWSSTG objects can be linked to the NWSD, where the NWSSTG objects represent virtual
disks that are provided to the client IBM i partition. They are created from available physical
storage on the host partition. In the client, they are recognized and managed as standard
DDxx disk devices (with a different type and model). The IBM i CL commands WRKNWSSTG and
CRTNWSSTG can be used to manage or create the NWSSTG.
2. Create an NWS Storage Space in the IBM i host partition by running the Create NWS
Storage Space (CRTNWSSTG) command as shown in Figure 7-20.
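As a minimal CL sketch of the objects described above (all object names, the resource name,
and the size are hypothetical; NWSSIZE is specified in megabytes):

/* NWSD that references the VSCSI server adapter resource (CTLxx) */
CRTNWSD NWSD(CLIENT1) RSRCNAME(CTL05) TYPE(*GUEST *OPSYS) +
        ONLINE(*NO) PWRCTL(*YES)
/* Create a 100 GB storage space */
CRTNWSSTG NWSSTG(CL1DSK1) NWSSIZE(102400) FORMAT(*OPEN) +
          TEXT('Virtual disk for client partition 1')
/* Link the storage space to the NWSD */
ADDNWSSTGL NWSSTG(CL1DSK1) NWSD(CLIENT1)
/* Vary on the NWSD to present the disk to the client */
VRYCFG CFGOBJ(CLIENT1) CFGTYPE(*NWS) STATUS(*ON)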
Storage spaces for an IBM i client partition do not have to match physical disk sizes; they can
be created with any size from 160 MB to 1 TB, if there is available storage on the host. The 160 MB
minimum size is a requirement from the storage management Licensed Internal Code (LIC)
on the client partition. For an IBM i client partition, up to 16 NWSSTGs can be linked to a
single NWSD, and therefore to a single VSCSI connection. Up to 32 outstanding I/O
operations from the client to each storage space are supported for IBM i clients. Storage
spaces can be created in any existing auxiliary storage pool (ASP) on the host, including
Independent ASPs. Through the usage of NWSSTGs, any physical storage that is supported
in the IBM i host partition on a POWER6 based system can be virtualized to a client partition.
For performance reasons, you might consider creating multiple storage spaces that are
associated with multiple NWSDs. The rule of thumb is 6 - 8 storage spaces for each client
partition. This setup implies that you are also creating multiple sets of VSCSI adapter pairs
between the hosting partition and the client partition. Associate each hosting partition’s server
VSCSI adapter with a separate NWSD by referencing the VSCSI adapter’s resource name in
the NWSD, and then link storage spaces to the NWSDs. This action supplies multiple disk
arms for the client partition to use.
A virtualized optical drive on the host partition can be used for a D-mode Initial Program Load
(IPL) and installation of the client partition, as well as for installing Program Temporary Fixes
(PTFs) or upgrades to applications. If the optical drive is writable, the client partition can write
to the physical media in the drive.
Also, any optical resources shared with client partitions should be in the VARIED ON state
on the server (host) partition.
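For example, a one-line CL sketch (assuming the host optical device is named OPT01):

VRYCFG CFGOBJ(OPT01) CFGTYPE(*DEV) STATUS(*ON)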
To locate the optical device in the IBM i client partition, enter the WRKHDWRSC *STG IBM i CL
command and complete the following steps:
1. Enter option 7 to display the resource details next to each of the CMBxx resources that are
listed, as shown in Figure 7-22.
2. Look at the last digits for the location code Cxx, where xx corresponds to the virtual
adapter number, as shown in Figure 7-23.
Location: U8233.E8B.100417P-V131-C31
The optical device that is provided by the IBM i server partition is shown in the IBM i client
partition, as shown in Figure 7-25.
Figure 7-25 Virtualized optical device that is shown on IBM i client partition as type 632C-002
For more information about image catalogs, search for “Virtual optical storage” in the IBM i
Knowledge Center at:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i/welcome
By default, an NWSD makes all eligible physical TAPxx drives on the host available to the
client, where they are presented as TAPxx devices. The NWSD parameter Restricted
device resources can be used to specify which tape devices on the host a client partition
cannot access.
For more information about which tape drives are eligible for virtualization, refer to the Client
virtual devices (optical and tape) topic in the IBM i 7.1 Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzam4/rzam4clientvirtde
vices.htm
Image catalog based virtual tape devices (TAPVRTxx) cannot be virtualized to IBM i client
partitions.
Note: A particular tape resource on the server should only be presented once to the client
partition. By default, an NWSD allows all eligible optical and tape resources to be
virtualized (Restricted device resources = *NONE). Therefore, if multiple NWSDs are used
for a single client, all the ‘secondary’ NWSDs should be changed to prevent optical and
tape resources from being virtualized multiple times (Restricted device resources = *ALL).
Also, any tape resources shared with client partitions should be in the VARIED OFF state
on the server (host) partition.
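As a CL sketch of this note (CLIENT1B and TAP01 are hypothetical names; RSTDDEVRSC is the
Restricted device resources parameter):

/* Prevent a secondary NWSD from presenting optical and tape again */
CHGNWSD NWSD(CLIENT1B) RSTDDEVRSC(*ALL)
/* Keep a shared tape resource varied off on the host partition */
VRYCFG CFGOBJ(TAP01) CFGTYPE(*DEV) STATUS(*OFF)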
To be on the same VLAN, the two virtual Ethernet adapters must have the same Port Virtual
LAN ID (PVID). This type of adapter is recognized by IBM i as a communications port
(CMNxx) with a different type (268C). In the host partition, the virtual Ethernet adapter is then
associated with the physical network adapter through a routing configuration, either Ethernet
Level-2 Bridging or network address translation (NAT). This routing configuration allows the
client partition to send network packets through the VLAN and the physical adapter to the
outside LAN. The physical adapter can be any network adapter that is supported by IBM i 6.1
and later, including Integrated Virtual Ethernet (IVE) ports, also known as Host Ethernet
Adapter (HEA) ports.
For more information about Ethernet Level-2 Bridging, see 7.5.1, “Ethernet Layer-2 bridging”
on page 349. For more information about network address translation (NAT), see the
IBM i 7.1 Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzai2/rzai2nat.htm
If only the system ASP exists on the host partition, NWSSTG objects are created on the same
physical disk units as all other objects. If the host partition is running production applications
in addition to providing virtual storage to client partitions, there is disk I/O contention as both
client partitions and IBM i workloads on the host send I/O requests to those disk units. To
minimize disk I/O contention, create storage space objects in a separate ASP on the host
(Independent ASPs are supported). Performance on the clients then depends on the disk
adapter and disk configuration that is used for that ASP. If the host partition is providing virtual
storage to more than one client partition, consider using separate ASPs for the storage space
objects for each client. Weigh this preferred practice against the concern of ending up with too
few physical disk arms in each ASP to provide good performance.
Disk contention from IBM i workloads on the host partition and virtual client partitions can be
eliminated if a separate IBM i partition is used just for hosting client partitions. Another benefit
of this configuration is the fact that an application or OS problem that is stemming from a
different workload on the host cannot negatively affect client partitions. Weigh these benefits
against the following items:
The license cost that is associated with a separate IBM i partition
The maintenance time that is required for another partition, such as applying Program
Temporary Fixes (PTFs)
The ability to create well-performing physical disk configurations in both partitions that
meet the requirements of their workloads
If the host partition runs a heavy-I/O workload and the client partitions also have high disk
response requirements, consider using a separate hosting partition, unless separate ASPs on
the host are used for storage space objects. If the host partition’s workload ranges from light
to moderate regarding disk requirements and the client partitions are used mostly for
development, test or quality assurance (QA), it is acceptable to use one IBM i partition for
both tasks.
Note: Be careful when adding disk units to the client ASP configuration in order to get the
wanted level of mirrored protection across servers. The system currently does not
distinguish between client adapters that are associated with different server (host)
partitions and client adapters that are associated with a single server (host) partition.
Therefore, simply mirroring at the adapter/IOP level might result in virtual disks being
mirrored to disks from the same server (host) partition, which is not the wanted pairing.
With this enhancement to IBM i 7.1, the ability to create up to four IBM i partitions is enabled
in VPM. Client IBM i partitions, which are created with VPM, use virtual I/O to connect back to
the IBM i I/O server partition to access the physical disk and network. VPM in the IBM i I/O
server partition is used to create the virtual SCSI and virtual Ethernet adapters for the client
partitions. You can then use Network Storage Spaces (NWSSTG) and Network Storage
Descriptions (NWSD) in the IBM i I/O server partition to define the storage for the client
partitions. Tape, disk, and optical can be virtualized to the client partitions. The client IBM i
partitions can be IBM i 7.1 or IBM i 6.1 with either 6.1 or 6.1.1 machine code.
This situation puts two Ethernet adapters (one physical and one virtual) into a mode where
they can receive traffic that is not destined for their address. It selectively sends those frames
onto the other network according to the IEEE 802.1D standard (“bridging” the frames).
Frames that are transmitted by virtual Ethernet adapters on the same VLAN as the bridging
virtual Ethernet adapter can be sent to the physical network, and frames from the physical
network can be received by adapters on the virtual network.
Create an Ethernet line description for the selected virtual Ethernet resource, and set its
Bridge identifier (BRIDGE) to the same bridge name.
When both line descriptions are varied on, traffic is bridged between the two networks. Any
other partitions with virtual Ethernet adapters on the same VLAN as the new virtual Ethernet
resource are able to access the same network as the physical Ethernet resource.
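As a sketch of the bridged configuration (the line names, resource names, and bridge name are
hypothetical):

/* Physical and virtual Ethernet line descriptions that share a bridge */
CRTLINETH LIND(ETHPHYS) RSRCNAME(CMN01) BRIDGE(BRIDGE1)
CRTLINETH LIND(ETHVRT) RSRCNAME(CMN02) BRIDGE(BRIDGE1)
/* Bridging starts when both line descriptions are varied on */
VRYCFG CFGOBJ(ETHPHYS) CFGTYPE(*LIN) STATUS(*ON)
VRYCFG CFGOBJ(ETHVRT) CFGTYPE(*LIN) STATUS(*ON)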
The selected virtual Ethernet resource must be marked as allowing access to the external
network. If an incorrect virtual Ethernet resource is selected, an error is returned when you try
to vary on its line description, indicating that the selected resource cannot enable
promiscuous mode. Create a virtual Ethernet resource that can be used to access the
external network.
Remember: In IBM i V7R1, an Ethernet line description's bridge identifier is not visible
from DSPLIND. Use the CHGLINETH command and prompt to see the bridge identifier for an
Ethernet line description.
The following considerations apply for IBM i logical partitions that are enabled for suspension:
You cannot activate the logical partition with a partition profile that has a virtual SCSI
server adapter.
You cannot activate the logical partition with a partition profile that has a virtual SCSI client
adapter that is hosted by another IBM i logical partition.
You cannot dynamically add any virtual SCSI server adapter.
You cannot dynamically add any virtual SCSI client adapter that is hosted by another IBM
i logical partition.
You cannot dynamically add any physical I/O adapters.
You cannot suspend an IBM i logical partition with a varied-on NPIV-attached tape device.
All IBM i virtual disks must be backed by physical volumes.
For the latest information about prerequisites, see IBM Prerequisites at:
https://www-912.ibm.com/e_dir/eserverprereq.nsf
IBM Power Systems servers are designed to offer the highest stand-alone availability in the
industry. Enterprises must occasionally restructure their infrastructure to meet new IT
requirements. By allowing you to move your running production applications from one physical
server to another, LPM allows for nondisruptive maintenance or modification to a system
without your users noticing anything. LPM mitigates the impact on partitions and applications
that was formerly caused by the occasional need to shut down a system.
Even small IBM Power Systems servers frequently host many logical partitions. As the
number of hosted partitions increases, finding a maintenance window acceptable to all
becomes increasingly difficult. You can use LPM to move partitions around so that you can
run previously disruptive operations on the system at your convenience, rather than waiting
for a window that is the least inconvenient to your users.
(Figure: Live Partition Mobility; an IBM i client partition with its DC01 and CMN01 resources
is moved between two HMC-managed systems, with both hypervisors reaching the same LUN on a
shared storage subsystem through VIOS and common VLANs.)
LPM helps you meet increasingly stringent service-level agreements (SLAs) because you can
proactively move running partitions and applications from one server to another server.
The ability to move running partitions from one server to another server means that you can
balance workloads and resources. If a key application’s resource requirements peak
unexpectedly to a point where there is contention for server resources, you might move it to a
more powerful server or move other, less critical, partitions to different servers, and use the
freed resources to absorb the peak.
LPM can also be used as a mechanism for server consolidation because it provides an easy
path to move applications from individual, stand-alone servers to consolidation servers. If you
have partitions with workloads that have widely fluctuating resource requirements over time
(for example, with a peak workload at the end of the month or the end of the quarter), you can
use LPM to consolidate partitions to a single server during the off-peak period so that you can
turn off unused servers. Then, move the partitions to their own, adequately configured
servers just before the peak. This approach also offers energy savings by reducing the power
to run systems and the power to keep them cool during off-peak periods.
LPM is the next step in the IBM PowerVM continuum. It can be combined with other
virtualization technologies to provide a fully virtualized computing platform that offers the
degree of system and infrastructure flexibility that is required by today’s production data
centers.
3. Ensure that Resource Monitoring and Control (RMC) connections are established
between both the source and destination VIOS logical partitions and HMC.
Sign on to the HMC with the correct authority and run lspartition -dlpar, as shown in
Example 7-4, to check the RMC connection between the HMC and VIOS.
Example 7-4 Check the RMC connection between the HMC and VIOS
#commands:
lspartition -dlpar
#Results:
<#23> Partition:<5*8205-E6C*06523ER, , 172.16.26.99>
Active:<1>, OS:<AIX, 6.1, 6100-07-04-1216>, DCaps:<0x4f9f>,
CmdCaps:<0x1b, 0x1b>, PinnedMem:<799>
Example 7-5 Verify that the physical volumes on external storage are set correctly
#command (list the attributes of a physical volume):
lsdev -dev hdiskX -attr
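#Assumption, not stated in the original example: for VSCSI-backed mobility,
#the backing hdisks must not hold SCSI reservations. On VIOS, set:
chdev -dev hdiskX -attr reserve_policy=no_reserve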
4. Select the destination system, specify Destination profile name and Wait time (in min),
and then click Validate, as shown in Figure 7-31.
Figure 7-32 Validation window with readiness for Live Partition Mobility
3. Check the migration information in the Partition Migration wizard, as shown in Figure 7-34.
7. Check errors or warnings in the Partition Validation Errors/Warnings window and eliminate
any errors. This step was skipped in the example because there were no errors or
warnings.
14. The Migration status and Progress are shown in the Partition Migration Status window, as
shown in Figure 7-44.
15. When the Partition Migration Status window indicates that the migration is 100% complete,
the mobile partition is running on the destination system.
Figure 8-1 IBM i SST Work with Removing Units from Configuration panel
Serial Resource
OPT Unit ASP Number Type Model Name Status
2 1 21-DC78C 4328 072 DD006 RAID 5/Active
3 1 21-DD464 4328 072 DD004 RAID 5/Active
4 1 21-E72DE 4328 072 DD008 RAID 5/Active
5 1 21-E7A8D 4328 072 DD005 RAID 5/Active
6 1 21-E7CB9 4328 072 DD007 RAID 5/Active
7 1 21-DCA21 4328 072 DD003 RAID 5/Active
8 1 21-E7B11 4328 072 DD011 RAID 5/Active
9 1 21-DD3DA 4328 074 DD012 RAID 5/Active
10 1 21-E7046 4328 074 DD010 RAID 5/Active
4 11 1 21-E7557 4328 074 DD009 RAID 5/Active
12 2 21-E786C 4328 074 DD002 RAID 5/Active
This new disk unit removal function, as with the previously available add disk unit function,
works for both SYSBAS and independent ASPs, even if the IASP is varied on.
The remove function does not allow removal if the remaining capacity would cause the ASP
threshold to be exceeded. Media preferences for SSDs are respected by the remove function
(for example, DB2 or UDFS media preferences; for more information, see 8.4, “SSD storage
management enhancements” on page 407) and are honored if there is remaining capacity on
the corresponding media type.
Only one remove operation for one or more disk units of a single system can be started,
paused, or canceled at any time. The pause operation prevents further data allocations on the
disk units that are selected for removal, similar to the *ENDALC option in STRASPBAL.
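For comparison, the STRASPBAL analogue of the pause operation looks like this sketch
(hypothetical unit number):

STRASPBAL TYPE(*ENDALC) UNIT(12)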
Important: The disk unit remove function in System Service Tools, which supports
concurrent disk unit removal with applications by using the ASP, does not allow removal of
all the disk units from the ASP. An IPL to DST is required to delete the ASP.
For a RAID configuration, the hot spare disk unit is used as a replacement for similar or lower
capacity drives. For mirroring, the capacity requirement is more stringent. The hot spare must
be the same size or bigger (within 25 GB).
When a disk unit is configured as a hot spare, as shown in Figure 8-3, it is no longer visible as
a non-configured or configured disk unit in the System Service Tools → Work with disk
units panels. However, it still shows up in the Hardware Service Manager under the disk IOA
as a unique model 51 representing a hot spare disk unit.
The disk IOA does not control mirror protection, so when a mirror-protected disk unit fails, the
System Licensed Internal Code (SLIC) detects the failed drive and completes the
following recovery steps (not apparent to the user):
1. SLIC tells the IOA to disable the hot spare.
2. The hot spare becomes non-configured.
3. The replace configured unit function is run to replace the failed drive with the now
non-configured previous hot spare.
4. The failed drive becomes non-configured for safe physical replacement.
Note: VIOS 2.2.3.0 is required. For boot device support, FW780 is also required.
For more information about Adapter Performance Boost with VIOS, see the IBM Hardware
Announcement letter 113-171 on the following website:
http://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/1/897/EN
US113-171/index.html&lang=en&request_locale=null
PCIe2 1.8 GB Cache RAID SAS Adapter Tri-Port 6 Gb sales feature #5913
The PCIe2 1.8 GB Cache RAID SAS Adapter Tri-Port 6 Gb (#5913) is a large-cache PCIe
SAS adapter that provides high-performance capabilities for large quantities of solid-state
drives (SSD) or hard disk drives (HDD). Although this adapter is supported on IBM i in a
VIOS configuration, caution is recommended with workloads that are heavy on writes.
The new dual SAS adapter support provides adapter redundancy with an active and passive
I/O path per RAID set, or a mirrored side in a two pair (four adapters) dual SAS adapter
configuration with IBM i mirroring. Read and write disk I/O operations are sent by the system
only down the active path. The passive path is used only after controller failovers (for
example, if the active path fails). Dual SAS adapters are redundantly interconnected through
a SAS adapter-to-adapter (AA) cable that connects the top ports of the SAS adapters, and a
SAS X cable that attaches to the disk expansion drawer, as illustrated in Figure 8-4.
(Figure 8-4: dual SAS adapter with RAID, and dual SAS adapter with IBM i mirroring; each
configuration shows RAID sets 0 and 1.)
Remember: For IBM i mirroring configurations, the disk units that are attached to a dual
SAS adapter are each treated as a one-drive parity set.
For a dual SAS adapter pair, there are primary and secondary adapter roles. Only the primary
adapter can perform disk management functions (such as creating a RAID array). If the
primary adapter becomes unavailable, an automatic failover to the secondary adapter occurs,
which becomes the primary adapter. There is no fallback to the original primary adapter when
it becomes operational again. The current role of a SAS adapter (as the primary or secondary
adapter) can be seen by navigating to System Service Tools → Start a service tool →
Hardware service manager → Logical hardware resources from the panel that shows the
details for a dual SAS storage IOA. Select F14=Dual Storage IOA Configuration, as shown in
Figure 8-5.
1. Availability
2. Balance
3. Capacity
4. Performance
Figure 8-7 IBM i parity optimization selection menu
For more information about IBM i dual SAS adapter support, see the “Dual storage IOA
configurations” topic in the IBM Systems Hardware Knowledge Center at:
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/ared5/ar
ed5dualioaconfig.htm
Figure 8-8 IBM i work with encryption menu
Do not change the data encryption key for basic ASPs again until
this operation has completed. Do not stop encryption on basic
ASPs until this operation has completed.
Figure 8-9 IBM i change data encryption key confirmation panel
The previous disk response time buckets have the following fixed boundaries:
1 0 - < 1 ms
2 1 ms - < 16 ms
3 16 ms - < 64 ms
4 64 ms - < 256 ms
...
6 >= 1024 ms
The new, finer-grained buckets in QAPMDISKRB start at the microsecond level:
1 0 - < 15 us
2 15 us - < 250 us
...
11 >= 1024000 us
The Performance Data Investigator in IBM Systems Director Navigator for i and the Collection
Services Investigator in IBM iDoctor for IBM i are enhanced with new collection services disk
response time graphs for the new buckets in IBM i 7.1.
Figure 8-10 IBM Systems Director Navigator disk response time buckets graph
For more information about the new disk response time buckets in QAPMDISKRB, see the
IBM i 7.1 Knowledge Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frzahx%2
Frzahxqapmdiskrb.htm
When you start mirroring, the operating system considers the CPC node under which the
disks are, and attempts to place the two subunits of a mirror protected pair under different
CPC nodes. This action allows concurrent maintenance of a CPC node because the two
subunits of each mirrored disk unit pair are under a different CPC node. This configuration
allows at least one subunit of each mirrored disk unit pair to remain operational during the
maintenance operation.
After you install the PTF Group that contains this function, you might want to consider ending
and restarting mirroring to recalculate the mirror protected pairs. You can use an Advanced
Analysis macro named LEVELOFPROTECTION, which is accessible through SST or DST, to
verify the level of protection for each mirrored pair.
In Figure 8-13, the -UNIT parameter is chosen and disk unit 12 is entered.
The line at the bottom of the display in the box indicates the level of disk protection, which
in this case is CecNodeLevelOfProtection.
The #5887 drawer holds twice as many drives as the EXP12S I/O drawer (#5886), and
the SFF drives provide significant energy savings compared to the EXP12S 3.5-inch drives.
For more information, see IBM Hardware Announcement letter 111-065 at:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=i
Source&supplier=897&letternum=ENUS111-065
For more information, see IBM Hardware Announcement letter 113-171 at the following
website:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=i
Source&supplier=897&letternum=ENUS113-171#h2-techinfx
The tape performance statistics are stored in the structured QAPMTAPE database file,
including physical tape I/O statistics, such as the number of reads and writes, bytes read and
written, and the number of tape marks and blocks spaced. This data is tracked by the IBM i
tape code when requests are sent to the tape device driver. Currently, to review the data that
is collected in QAPMTAPE, you must use either a user-defined SQL query or a GUI, such as
the Systems Director Navigator for i with its Investigate Data function.
For more information about the structured QAPMTAPE database file performance data, see the
IBM i 7.1 Knowledge Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frzahx%2
Frzahxqapmtape.htm
8.1.14 Tape library unreadable barcode changes for IOP-less IOA attachment
Before IBM i 7.1, if tape cartridges with unreadable barcodes are found at vary on of the tape
library, each of these cartridges is loaded into a drive to read the volume ID. The volume
ID is used to generate a corresponding cartridge ID for the unreadable barcode.
This method ensures, for IBM standard labeled (VOL1) tapes, that the volume ID matches the
cartridge ID, which is a requirement for IBM i to allow write operations to a tape cartridge. The
downside of this approach is the time that is required to load and read each cartridge,
especially if the library barcode reader itself failed. Also, problems with the barcode label or
barcode reader are not made apparent to the user.
With IBM i 7.1 and IOP-less IOA attached tape libraries, if a tape cartridge with an unreadable
or missing barcode is manually added, a cartridge ID with a format of UNKXXX is fabricated,
with XXX being a sequential decimal number that starts with UNK001. If a cartridge is found
in a storage slot with an unreadable barcode, a cartridge ID is fabricated with the format of
U@XXXX, with XXXX reflecting the SCSI element address when the tape device driver
discovers an unreadable barcode in a slot.
This handling of unreadable barcodes in IBM i 7.1 reveals barcode problems and allows you
to read more quickly from tapes without barcode labels (which are then removed from the
library again), without requiring a tape drive to generate cartridge IDs.
Consideration: With the IBM i 7.1 IOP-less IOA tape library attachment, you should not
use cartridges without barcode labels if they are supposed to remain in the library. To write
or append to a standard labeled cartridge in a library, a barcode label that matches the
volume ID must be affixed to the cartridge.
8.1.15 DVD / Tape SAS External Storage Unit for Power 795 CPC Rack
The #5274 DVD / Tape SAS External Storage Unit for Power 795 CPC Rack is a 1U storage
unit that can hold HH DAT160 drives, the #5638 1.5 TB / 3.0 TB LTO-5 SAS Tape Drive, or
slimline DVD drives.
For more information, see IBM Hardware Announcement letter 111-065 found at:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=i
Source&supplier=897&letternum=ENUS111-065
For a USB attach to a Version 7.1 system, you can use either F/C #EU03 (USB) or #EU04
(USB). For a Version 6.1.1 system, you must use F/C #EU07 (SATA).
The RDX dock is available in either a 5.25-inch internal (SATA or USB) format or an external
USB format. The dock supports all RDX cartridges, which are rugged and offer a media life of
30+ years.
For more information, see IBM Hardware Announcement letter 113-006 at the following
website:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=8
97/ENUS113-006
8.1.18 VIOS support for RDX USB docking station for removable disk cartridge
Native and iVirtualization support for RDX technology has been offered since late 2012. USB
RDX technology is also supported in VIOS configurations for the same USB hardware that is
supported natively. The virtual RDX device is shown in an IBM i partition as an optical
device, so the same command set applies. This virtual support is useful for virtual client
partition backup, save/restore, installation, and so on.
Note: VIOS 2.2.3.0 is required. For boot device support, FW780 or FW770.30 is also
required.
Support covers the same devices as native-attached USB RDX (#EU03 and #EU04).
For more information about RDX removable disk drives, see the following link in the IBM
Power Systems Hardware documentation:
http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=%2Fp7hdt%2Ffc1103.htm
Although RDX is a tape replacement, when you configure or operate RDX on IBM i, think of it
as a DVD. The devices do not show up as TAP devices. Instead, look for a Removable Mass
Storage (RMS) device.
Note: IBM i 7.1 can virtualize RMS devices to other IBM i partitions. VIOS does not
virtualize RDX.
Flash drives, also referred to as memory keys or thumb drives, are small pluggable devices
that do not have removable media. The intent is to provide generic support for a USB 2.0
device (up to 32 GB in capacity) so that the USB flash vendor of choice can be used. A single
flash drive can hold a large amount of data that would otherwise have needed multiple DVDs,
and can typically access the data much faster.
On IBM i, these are “optical class” devices whose main purpose is data movement, such as
IFS copy, save/restore operations directly to and from the device, or a D-mode IPL when the
server is HMC managed. Because of the general lack of reliability of flash drives, they are not
recommended as backup devices, but they are useful for the data movement operations that
are described above.
For more information about USB attached flash drives, see the following website:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#/wiki/IBM%20i%20Te
chnology%20Updates/page/IBM%20i%20IO%20Support%20Details
When a USB flash drive is inserted into the operator panel USB port or the Flexible Service
Processor (FSP) USB ports, a storage resource type of 63BC is created. Use the WRKHDWRSC
*STG command to see the created resource as shown in Figure 8-16.
When a flash drive is initialized (INZOPT) on IBM i, it is initialized with the UDF file system. If
a flash drive is inserted that has files on it, but displays on the system as an unknown format,
it is most likely formatted with a file system that IBM i does not recognize (for example,
Microsoft NTFS).
Save/restore operations are run as with a DVD, for example SAVLIB DEV(RMS01). Because it is
an optical class device, you can also use IFS to manage data by using the command WRKLNK
OBJ('/QOPT/RDXVOL'). IBM i commands are the normal optical storage commands (INZOPT,
WRKOPTVOL, and so on).
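Putting these commands together, a minimal sketch (with a hypothetical library name) is:

/* Save a library directly to the RDX device */
SAVLIB LIB(MYLIB) DEV(RMS01)
/* Browse the volume through the IFS */
WRKLNK OBJ('/QOPT/RDXVOL')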
Although this virtualization is easily done for disk, SAN, tape, LAN, and optical devices, there
is no virtualization support for WAN or crypto cards.
The latest generation of IBM Serial-Attached SCSI (SAS) adapters is optimized for disk
operations with data aligned on 4096-byte boundaries. This optimization produced a
noticeable degradation in the performance of disk requests with data that is not aligned on
such a boundary, specifically for virtual disk units that are provided to an IBM i client partition
from a VIOS server that uses SAS hdisks as the backing storage devices. In this configuration,
the 4608-byte-aligned I/O requests that are initiated by IBM i are passed to the SAS adapter
with no change in alignment, resulting in less than optimal performance.
IBM has addressed this performance issue by enabling the VIOS to provide a 520-byte sector
format virtual disk when backed by SAS hardware that is capable of supporting that format,
and enabling the IBM i operating system to format and use these virtual disks as it does
other 520-byte sector disks. The SAS adapter then optimizes the I/O as though it were
attached directly to the IBM i partition instead of to a VIOS.
8.3.1 IBM SAN Volume Controller and IBM Storwize storage systems
IBM SAN Volume Controller and IBM Storwize V7000, IBM Storwize V3700, and IBM
Storwize V3500 storage systems are supported for both fabric and direct attached
configurations. Loadsource device support is included, as is the full use of PowerHA for i,
including Logical Unit (LUN) level switching.
For more information about PowerHA support, see the following website:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#/wiki/IBM%20PowerH
A%20SystemMirror%20for%20i/page/PowerHA%20SystemMirror%20Technology%20Updates
Support is for all models of the IBM SAN Volume Controller and IBM Storwize storage
systems that have IBM SAN Volume Controller code level 6.4.1.4, or later.
For compatibility and availability information about IBM SAN Volume Controller, see the IBM
System Storage SAN Volume Controller website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/
With IBM i 6.1.1 or later, a redundant VIOS configuration (Figure 8-18) is supported by IBM i
multipathing across two or more VIOS on the same IBM Power Systems server for protection
against VIOS outages because of VIOS updates.
Figure 8-18 IBM i multipathing with a redundant Virtual I/O Server configuration
This new IBM i multipathing support for virtual I/O eliminates the need to use IBM i mirroring
for a redundant VIOS configuration, which required duplicate storage capacity.
For further IBM i virtualization enhancements, such as Active Memory Sharing or N_port ID
virtualization support, see Chapter 7, “Virtualization” on page 319.
Figure 8-20 shows how the native attached DS5000 LUNs, created for the IBM i host, report
on an IBM i host as device type D818.
Serial Resource
Number Type Model Name Capacity Status
Y2103LQ0WGLC 433B 050 DPH001 69793 Non-configured
Y2103LQ1J064 433B 050 DPH002 69793 Non-configured
Y2103LQ1J06H 433B 050 DPH003 69793 Non-configured
Y2103LQ0P0BE 433B 050 DPH004 69793 Non-configured
Y2103LQ1HV0C 433B 050 DPH005 69793 Non-configured
Y2103LQ1J6M8 433B 050 DPH006 69793 Non-configured
Y0C44AC5B4F6 D818 099 DPH007 265333 Non-configured
Y0C14AC5A32B D818 099 DPH008 265333 Non-configured
The built-in IBM i multipathing in System Licensed Internal Code (SLIC) adheres to the
DS5000 active / passive controller concept. Under normal working conditions, I/O is driven
across only the active paths to a disk unit (to the controller that is designated as the preferred
controller for the LUN), whereas the passive paths for a disk unit are used only under DS5000
controller failover conditions. Figure 8-21 shows the active and passive path for disk units from a native
attached DS5000 after they are added to an ASP. You can access this panel by navigating to
System Service Tools → Work with disk units → Display disk configuration → Display
disk path status.
From an IBM i disk I/O performance perspective, the following preferred practices should be
followed:
To balance the workload across both DS5000 controllers, LUNs should be evenly assigned,
regarding preferred controller affinity, to controllers A and B.
The usual LUN size consideration for IBM i IOP-less Fibre Channel of about 70 GB also
applies to DS5000 native attachment.
A DS5000 segment size of 128 KB is generally a good compromise for both IBM i
transaction and save / restore workload.
For more information about the IBM System Storage DS5000 series, see the following IBM
Redbooks publications:
IBM Midrange System Storage Hardware Guide, SG24-7676
IBM System Storage DS Storage Manager Copy Services Guide, SG24-7822
For more information about IBM support statements about DS5000 Copy Services support
with IBM i native attached DS5000, see IBM i Virtualization and Open Storage read-me first,
found at:
http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf
IBM STG Lab Services developed a Copy Services Tool Kit offering, Advanced Copy Services
for PowerHA - DS5000 Edition, for DS5000 native attachment to support IASP storage-based
replication solutions with FlashCopy / VolumeCopy and Enhanced Remote Mirroring. For
more information about this Copy Services Tool Kit offering for DS5000, see IBM STG Lab
Services at:
http://www-03.ibm.com/systems/services/labservices
Figure 8-22 IBM i protection level reporting for multipath disk units
8.3.5 Library control paths for IOP-less Fibre Channel IOA tape attachment
Tape library devices that are attached to a dual-port Fibre Channel I/O adapter with IBM i 7.1
require at least one control path drive to be attached to each port. This configuration is
required because the design changed from an adapter-centric to a port-centric control path
architecture.
The tape device driver ensures that, from a user perspective, only one library resource per
Fibre Channel IOA port is presented for the same logical library, even if multiple control paths
are defined. IBM i pools these libraries so all the TAPxx resources for the library are in one
TAPMLBxx device description.
Requirement: For IBM i 7.1, a second library control path must be added, preferably
before the upgrade to IBM i 7.1, for the second port of a dual-port IOP-less Fibre Channel
IOA. Otherwise, the tape drives on the second port can become stand-alone devices
without library capability.
Before IBM i 7.1, only one control path drive was required per Fibre Channel IOA for drives in
the same logical library. Only one library resource per Fibre Channel IOA is presented for the
same logical library, even if multiple control paths are defined.
For more information about the DS8000 external storage performance data collection
requirements, see the IBM i Memo to Users 7.1 at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzaq9/rzaq9.pdf
The new QAPMXSTGD database file contains DS8000 or DS6000 external storage subsystem
performance data, including Fibre Channel link statistics and rank (RAID array) statistics. The
QAPMXSTGV database file, which was introduced in IBM i 6.1.1 and is part of the *DISK category
included in all default collection profiles, contains volume-level (that is, logical unit (LUN))
cache statistics performance data.
Both the QAPMXSTGD and QAPMXSTGV files store vendor-specific SCSI Log Sense page data in
unstructured large data fields. To retrieve this log sense data, access to at least one IBM i
LUN on the DS8000 or DS6000 storage system is required, because the data is gathered by
issuing the SCSI Log Sense command against a LUN.
The new iDoctor Collection Services Investigator functions for analyzing the external storage
performance log sense data that is stored in QAPMXSTGV (Log sense page 0x32) and
QAPMXSTGD (Log sense pages 0x33 and 0x34) are shown in Figure 8-23.
Newly available external storage cache statistics data are shown in Figure 8-24 from the
report that is generated by clicking External storage cache statistics → by time interval →
IO rates totals with cache hits. The read cache hit% information was already available from
QAPMDISK data, but the newly reported write cache hit% from QAPMXSTGV data can be
used to check for potential storage subsystem write cache overruns. These overruns are
indicated by a write cache hit% below 100%, and might warrant changes in the workload
schedule or a cache size upgrade.
For example, potential rank overuse issues can easily be visualized and analyzed by using a
ranking view of the rank IDs based on total I/O. To do so, click Rank graphs → By rank ID →
Ranks IO rates totals. Then, from this view, select one or more ranks with a high I/O rate for
a more detailed analysis by selecting Selected Ranks → Ranks IO rates from the right-click
menu, as shown in Figure 8-25.
For more information about the powerful IBM iDoctor for IBM i suite of performance tools, go
to the iDoctor website, which offers a 45-day trial version, at:
https://www-912.ibm.com/i_dir/idoctor.nsf/iDoctor.html
8.3.7 Thin provisioning for DS8700, DS8800, and VIOS shared storage pools
Thin provisioning for DS8700 and DS8800 storage servers, and for VIOS shared storage
pools, allows configurations to be set up with a small amount of real disk storage. This
storage can be increased later without changing the partition's view of the storage LUN.
Before this enhancement, the full amount of configured storage was allocated at LUN
initialization time.
The integrated hierarchical storage management functions for SSDs in IBM i, such as the
DB2 for i and UDFS media preferences or the ASP balancer enhancements for SSDs, allow
for an easy and efficient implementation of SSDs on the IBM i platform.
SSDs based on flash memory are considered a revolutionary technology for disk I/O
performance and energy efficiency compared to traditional spinning disk drives. SSD I/O
response times can be over 200 times faster than those of spinning disk drives. SSDs are
supported in IBM i 6.1.1 for internal storage and, with PTF MF47377 or later, when used in
the IBM System Storage DS8000 series with R4.3 code or later.
For more information about the benefits and usage of SSDs with IBM i, see Performance
Value of Solid State Drives using IBM i, which is available at the following website:
http://www-03.ibm.com/systems/resources/ssd_ibmi.pdf
The SSD Analyzer Tool for IBM i is a good tool for a first analysis of whether SSDs can help
improve performance for a particular IBM i system. The tool queries existing Collection
Services performance data to retrieve the average system-level and, optionally, job-level
disk read I/O response times to characterize whether the workload is a good candidate for
SSDs. It can be downloaded as an IBM i save file from the following website:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3780
For a reference about the IBM i PTF requirements for SSDs, see the IBM i Software
Knowledge Base topic “Requirements for Solid State Drives (SSD)”, which is available at the
following website (search for KBS document number 534676318):
http://www-912.ibm.com/s_dir/slkbase.nsf/slkbase
Parameter usage:
The UNIT parameter for the SQL statements is supported by IBM i 6.1 or later.
For a partitioned SQL table, the ALTER TABLE statement can be used to set a media
preference on a partition (member) level.
Figure 8-28 shows the new preferred storage unit parameter (UNIT keyword) for the CHGPF
command.
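As a minimal sketch (the library and file names are hypothetical), the media preference of an
existing physical file can be changed with CL:
CHGPF FILE(MYLIB/ORDERS) UNIT(*SSD)
For SQL tables, the equivalent preference can be expressed on the table or, as noted above,
at the partition (member) level with the ALTER TABLE statement.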
For releases before IBM i 7.1, the following PTFs are required for a dynamic move of physical
or logical database files after you change their media preference attribute. Otherwise, a save
and restore of those changed database files is required to make the media preference
change effective.
IBM i 6.1.0 PTFs MF47888, MF47892, and MF47879
IBM i 6.1.1 PTFs MF47889, MF47893, and MF47877
To query the RANDOM_READS counter for database files, use an SQL query against
QSYS2/SYSPARTITIONSTAT for physical file statistics or SYSINDEXSTAT for keyed logical
file statistics (Example 8-1), or use the System i Navigator Health Center activity tab
(Figure 8-29). Save the query results and use the View History function to compare the
results that are retrieved at the start and the end of the critical time period.
Example 8-1 SQL query for physical database file random reads
SELECT table_name, logical_reads, random_reads, sequential_reads FROM
QSYS2.SYSPARTITIONSTAT WHERE logical_reads > 0 ORDER BY random_reads DESC
In Example 8-3, the smgetstayoffssd macro is used to reset the storage allocation setting
back to the default for a specific independent ASP. For IASPs, the ASP number in hex is
required on the smsetstayoffssd macro.
For more information about the new SYSPARTITIONDISK view and function, see the topic
“IBM DB2 for i Statistical View for Solid State Drive Storage Usage Reporting” at:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105463
Typically, the ASP balancer tracing function TRCASPBAL is run over a critical I/O workload
period, such as a batch processing window, that is to be optimized for performance by using
SSDs. Afterward, the ASP balancer HSM function is started to migrate cold data off the SSDs
and hot data onto them. TRCASPBAL statistics are cumulative: users might clear the data at
the start of the week, collect the trace across the nightly batch workload window during the
week, and balance on the weekend.
Example 8-4 illustrates a typical usage of the ASP balancer tracing and migration functions by
clearing the trace statistics first, collecting new trace statistics, starting the migration, and
monitoring its completion with the CHKASPBAL command.
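That example is not reproduced here, but a minimal sketch of such a sequence, assuming
ASP 1 and no time limit, might look like the following commands:
TRCASPBAL SET(*CLEAR) ASP(1)
TRCASPBAL SET(*ON) ASP(1) TIMLMT(*NOMAX)
(run the critical workload, then end tracing)
TRCASPBAL SET(*OFF) ASP(1)
STRASPBAL TYPE(*HSM) ASP(1) TIMLMT(*NOMAX)
CHKASPBAL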
The initial ASP balancer implementation, which accounted only for extent read I/O counts, is
enhanced with a more efficient migration algorithm in the weighted ASP balancer version and
with more functions for SSD media management.
With IBM i 6.1 plus superseding PTF MF49399, IBM i 6.1.1 plus superseding PTF MF48544,
and the IBM i 7.1 base code, the ASP balancer's decision about moving hot or cold data to
and from SSDs is based on a weighted disk read I/O count for the 1 MB auxiliary storage
segments to be moved. Not only is the number of read I/O accesses to a segment counted,
as before, but its read service time is also considered in the migration decision.
This weighted ASP balancer enhancement, which accounts for read service times, provides
more efficient data media placement. For example, frequently accessed data that is served
mainly from read cache hits is no longer prioritized for migration to SSDs, because such data
cannot benefit from being placed on SSDs.
Figure 8-31 IBM i ASP balancer migration priority
Also, the STRASPBAL command syntax has changed in IBM i 7.1 to include a new subtype
parameter that, for the *HSM balance type, now allows data migration between up to three
storage tiers. Tiered storage is the assignment of different categories of data to different types
of storage media to reduce total storage cost. You can have the following types of data
migration:
With subtype *SSD, you can have data migration between SSDs and high performance
HDDs.
With subtype *HDD, you can have data migration between high performance HDDs and
low performance (compressed) HDDs.
Unless an ASP has disk units from all three storage tiers, the default subtype *CALC can be
used.
Data migration with the *HSM balance type is run in two phases, with cold data moved off
from SSDs first, and then hot data moved to SSDs.
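For example, a sketch of starting a tiered HSM balance (the ASP number is hypothetical, and
the SUBTYPE keyword spelling follows the description above):
STRASPBAL TYPE(*HSM) SUBTYPE(*CALC) ASP(2) TIMLMT(*NOMAX)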
For earlier releases, this media preference sweeper function is available through the following
SST Advanced Analysis interface macros, in IBM i 6.1.1 through PTF MF49299 and in IBM i
6.1.0 through PTF MF49371:
movemediapreference asp_num priority [L M H] (The default is low.)
This macro moves data that is marked with a media preference attribute to the SSDs and
non-media preference data off the SSDs.
movemediapreferencetossd asp_num priority [L M H] (The default is low.)
This macro moves data that is marked with a media preference attribute to the SSDs.
movemediapreferenceoffssd asp_num priority [L M H] (The default is low.)
This macro moves data that does not have the media preference attribute off the SSDs.
movemediapreferencestatus asp_num
This macro reports the status of the sweeping.
movemediapreferencestop asp_num
This macro ends the sweeping.
The ASP number in the asp_num variable must be specified in hex format.
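For example (a sketch; the IASP number is hypothetical), to sweep media preference data for
IASP 145 (X'91') at medium priority, the macro invocation would be:
movemediapreferencetossd 91 M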
One scenario for using the media preference sweeper function is after disk units are added to
an ASP with the add and balance option, which does not respect the media preference.
Another is when disk units are removed from the configuration because of media type
capacity constraints within an ASP. After the capacity constraints are resolved, the sweeper
function can be used to correct these media preference issues.
For the ASP balancer SSD enhancements, run TRCASPBAL for the period of the critical
workload, such as a batch window that is to be optimized by using SSDs. The provided CL
script might be an alternative if no specific time frame can be identified for optimization.
The following considerations apply when you specify a storage media preference for a UDFS
(a command sketch follows this list):
Specifying a media preference does not ensure that storage for objects is allocated from
the preferred storage media.
The preferred storage media attribute of a UDFS cannot be changed.
All objects in a particular UDFS have the same preferred storage media.
You can display or retrieve only the storage media preference of a user-defined file
system, not the individual objects within a file system.
Objects that are copied or restored into a UDFS are assigned the preferred storage media
of the UDFS, regardless of the original object's preferred storage media.
When you restore a new UDFS to a system, the original storage media preference of the
UDFS is retained.
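As a minimal sketch (the UDFS path is hypothetical, and the UNIT keyword is an assumption
based on the media preference parameters described earlier), a UDFS with an SSD
preference might be created as follows:
CRTUDFS UDFS('/dev/QASP01/ssddata.udfs') UNIT(*SSD)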
This option is supported on Power 710, 720, 730, 740, 750, 755, 770, 780, and 795 models.
For more information, see IBM Hardware Announcement letter 111-132 at:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=i
Source&supplier=897&letternum=ENUS111-132
The Disk Sanitizer is accessed through a macro interface from either the Dedicated Service
Tools (DST) menu or the System Service Tools (SST) menu. To access the Disk Sanitizer,
complete the following steps:
1. From DST or SST, select 'Start a service tool'.
2. Select 'Display/Alter/Dump'.
3. Select 1 - 'Display/Alter storage'.
4. Select 2 - 'Licensed Internal Code (LIC) data'.
5. Select 14 - 'Advanced Analysis' (you must scroll down to see this option).
6. Select the Disk Sanitizer macro option.
7. Press the Enter key twice and a help panel is displayed, as shown in Figure 8-34.
The sanitizing SSD units function is nearly identical to sanitizing HDD units from a user
interface perspective.
IBM i 7.1 extends this support by adding IPv6 for the following applications:
DHCP Server
DHCP Client
SNMP
SMTP
PPP
The ISC-based server has several advantages. In addition to supporting IPv4, it also supports
IPv6 and DHCP server failover. The DHCP server attributes can be set to run either an IPv4
or IPv6 server or both. There is no GUI support for managing the ISC DHCP server
configuration files or for monitoring leases, such as with the old DHCP server. Therefore, by
default, the old DHCP server is used.
If you want to use the ISC DHCP server, you must add the QIBM_ISC_DHCP environment
variable, as described in “Using the ISC DHCP IPv6 server on IBM i” on page 423. Then, stop
your DHCP server by running the ENDTCPSVR command (if it is running) and start the ISC
DHCP server with the STRTCPSVR command. The IBM i activation code attempts to migrate the
old configuration file to the new ISC configuration file the first time that DHCP-related code is
run (through CHGDHCPA or STRTCPSVR). The old configuration file is left unchanged after the
migration. Any changes that are made to the old configuration file are not moved to the new
one after the initial migration. The new configuration file might require editing to operate
properly. The current leases file is also migrated to the ISC leases file. The migration is just a
way to get started with the new server. Certain functions that are provided by the old server
are not available with the ISC server, so you must weigh the benefits and differences between
these two servers and choose which one is best for your environment.
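A minimal CL sketch of switching servers (the environment variable value shown is an
assumption; see the referenced section for the documented value):
ADDENVVAR ENVVAR(QIBM_ISC_DHCP) VALUE('Y') LEVEL(*SYS)
ENDTCPSVR SERVER(*DHCP)
STRTCPSVR SERVER(*DHCP)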
Access to the ISC DHCP server: A graphical interface is not provided for managing
the ISC DHCP server and monitoring the leases that it manages. All associated
configuration files must be edited manually.
There are several considerations to make when migrating from the existing IBM i DHCP
server to the ISC DHCP server. For example, IBM Navigator for i does not provide an
interface for configuring the ISC DHCP server in IBM i 7.1. To configure the ISC DHCP IPv6
server, edit the /QIBM/UserData/OS400/DHCP/ETC/DHCPD6.CONF configuration files manually.
Example 9-1 shows an example.
subnet6 1ffe:31::/64 {
default-lease-time 120;
max-lease-time 86400;
range6 1ffe:31::d0:ca1 1ffe:31::d0:cef;
}
Copy this configuration into /QIBM/UserData/OS400/DHCP/ETC/DHCPD6.CONF. Make sure that
at least one line on your system is enabled for IPv6 and configured with an IPv6 address (for
example, 1ffe:31::d0:ccc) so that the server can listen on that line description's address
and the subnet6 statement is not ignored.
Two more files might need to be configured depending on your configuration requirements:
/QIBM/UserData/OS400/DHCP/ETC/DHCRELAY6.CONF
/QIBM/UserData/OS400/DHCP/ETC/DHCPD6.LEASES
The system tries to acquire only IPv6 addresses through DHCPv6 if an IPv6 router on the link
tells the system (by turning on the 'M' bit in the Router Advertisement flags) to use the
managed configuration to obtain IP addresses. The DHCPv6 client sends multicast
messages to find a DHCPv6 server and to request IPv6 address assignment. The DHCPv6
server sends a reply with the addresses assigned. IP addresses obtained from the DHCPv6
server have a preferred and valid lifetime, just like stateless auto configured addresses.
Before the preferred lifetime expires, the DHCPv6 client renews the addresses. When the
*IP6SAC interface is ended, any DHCP addresses are released.
If the Privacy Extension parameter is enabled on the *IP6SAC interface, the client also
requests temporary addresses from the DHCPv6 server. The request for temporary
addresses is sent separately from the request for non-temporary addresses. Temporary
addresses are never renewed; when the preferred lifetime is about to be reached, the client
requests new temporary addresses. The old temporary addresses remain until either their
valid lifetime is reached or the *IP6SAC interface is ended. The preferred and valid lifetimes
of DHCP temporary addresses are limited by the IPv6 temporary address valid and preferred
lifetimes that are configured through CHGTCPA.
To identify itself to the DHCPv6 server, the client uses a DHCP Unique Identifier (DUID). This
DUID is generated automatically from a MAC address on the system and a time stamp, and is
saved by the TCP/IP configuration. This identifier is a system-wide identifier; the same DUID
is used by DHCP on all lines. To identify separate lines, the DHCP message also contains an
identity association identifier (IAID), which is a unique value for each separate line (generated
and saved by the TCP/IP configuration). The current DUID can be viewed by using the
CHGTCPA command. The value cannot be changed by the user, but the user can force
generation of a new DUID if necessary, by using the *GEN option.
As with the DHCPv4 client, more configuration information can be obtained from the DHCPv6
server beyond just addresses. The DHCPv6 client supports the DNS Server List and Domain
Search List options, and adds received DNS servers and domains to the configuration while
the client is active.
Support added: IBM i 6.1 added DHCPv4 client support for IPv4 with PTF SI31800.
Supported functionality: The SNMP agent is still able to receive and handle packets
and requests from older versions of SNMP v1 even after you change the SNMP
attributes to specify ALWSNMPV3(*YES).
3. Check the engine identifier that is supplied by the SNMP Agent after it is started for the
first time after ALWSNMPV3(*YES) is set.
In most cases, this engine identifier does not need to be changed. If the generated engine
ID must be changed, do so by running CHGSNMPA. However, there are caveats. The engine
identifier is created using a vendor-specific formula and incorporates the IP address of the
agent. Any engine identifier that is consistent with the snmpEngineID definition in RFC
3411 and that is also unique within the administrative domain can be specified.
For example, the identifier 80000002010A010203 is a valid engine ID for an IBM i agent
with an IP address of 10.1.2.3. The first byte, '80'X, indicates that the engine ID complies
with the architecture defined in RFC 3411. The next four bytes, '00000002'X, indicate the
private enterprise number for IBM as assigned by the Internet Assigned Numbers
Authority (IANA). The next byte, '01'X, indicates that the remaining portion of the engine ID
is an IPv4 address. The last four bytes, '0A010203'X, is the hexadecimal representation of
the IP address. The CHGSNMPA SNMPENGID('80000002010A010203') command is run to
specify the engine ID.
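Putting these steps together, a minimal sketch (using the example engine identifier from
above) might be:
CHGSNMPA ALWSNMPV3(*YES)
STRTCPSVR SERVER(*SNMP)
(the agent generates an initial engine identifier at first start)
ENDTCPSVR SERVER(*SNMP)
CHGSNMPA SNMPENGID('80000002010A010203')
STRTCPSVR SERVER(*SNMP)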
Important: Another new SNMPv3 parameter, SNMPENGB, was added to the CHGSNMPA
command, and is the SNMP engine boots counter. Do not manually change this
parameter unless you must reset it to a value of zero. This parameter indicates the
number of times that the SNMP engine (agent) was started. Each time the STRTCPSVR
*SNMP command is successfully run, this value increments automatically. Changing the
SNMPENGB parameter when the agent is active can cause SNMPv3 authentication
failures.
Support of functionality: The IBM i 7.1 SNMP manager APIs snmpGet, snmpSet, and
snmpGetnext do not support SNMPv3, so a non-native manager such as a PC-based
manager must be used. There are a number of these managers available for download,
including both no-cost and for-purchase options.
9.1.5 SMTP
IPv6 support was added in IBM i 7.1. Currently, there is no IPv6 standard for Real-time
Blackhole Lists (RBLs); the RBL works only for IPv4 addresses. SMTP uses the getaddrinfo()
API to look up email DNS records. They are looked up first as IPv6 and then as IPv4, which
differs from what RFC 3974 recommends. Parts of the DNS resolver were also corrected in
IBM i 7.1.
Before IBM i 7.1, the MAILROUTER feature could, in some instances, forward all mail to the
mail router even if the email address could be resolved. In IBM i 7.1, MAILROUTER correctly
forwards to the mail router only when the email address does not resolve.
The FWDMAILHUB feature, added in IBM i 6.1, allows the forwarding of email to a single
address. FWDMAILHUB always forwards the email and does not attempt a resolve.
MAILROUTER supports only A and AAAA records, whereas FWDMAILHUB supports MX,
CNAME, AAAA, and A records.
The IBM i 2-port communications adapter (#2893/#2894 with CCIN 576C) remains available
to enable bisync support for those clients who still use this older protocol and to support MES
orders for POWER5/POWER6 servers.
Communication ports are not virtualized by IBM i. IBM i 7.1 does not virtualize the adapter for
other IBM i partitions, nor does VIOS virtualize async ports for other partitions.
Figure 9-1 highlights the configuration changes that are required to enable IPv6 for a
connection profile.
Feature availability: PPP configuration enhancements for IPv6 are only available by using
IBM Navigator for i. They are not available through the PC-based client System i Navigator.
Figure 9-2 shows an example network: a remote worker (192.168.1.11) connects through
PPP to a System i with PPP services (192.168.1.1) on network 192.168.1.0, which also
includes two LAN-attached PCs (192.168.1.6 and 192.168.1.5).
If you want your remote workers to use IPv6 to access the company network, you must
enable IPv6 in the connection profile. You do not need to assign a specific IPv6 address.
However, if you want the remote workers to have more than the default link-local IPv6 address
assigned, you must either configure an IPv6 address prefix or set the appropriate options if a
DHCPv6 server is available in the company network.
Click Network → Remote Access Servers → Receiver Connection Profiles, and in the
right pane, click Action → New Profile.
Select Protocol type, Connection type, and Link configuration for the new profile, and
click OK, as shown in Figure 9-3.
Figure 9-4 PPP - Create Receiver Connection profile window with the IPv6 option
To advertise an address prefix of 2001:DBA::, a default route, and that a DHCPv6 server in
your network can provide IP addresses, configure a global IPv6 address in the connection
profile as follows (see Figure 9-5 on page 431):
1. Select Enable IPv6.
2. Specify a global IPv6 address for Fixed local IP address. This address must be compatible
with the DHCPv6 server configuration for distributing IPv6 addresses. For this example,
select None.
3. Click Generate for the Interface identifier field.
4. Select Yes for the Allow remote system to access other networks (IP forwarding)
check box.
5. Set the Address prefix to 2001:DBA::.
6. Select Advertise IPv6 default route.
7. Select Advertise DHCPv6 and Managed address configuration.
8. Click OK to complete the profile.
Software updates that enable FastCGI PHP Processing in IBM i 6.1 were also included in the
HTTP Group PTF package for January 2010.
The required components and PTF information for 6.1 are shown in 9.2.1, “IBM i 6.1 required
components” on page 431.
The PORT parameter of the TELNET command prompt was moved to a new location in the
parameter string, and a new parameter, Secure Connection (SSL), was added to the
command format. If the environment variable was set up for a secure connection, or the
SSL(*YES) parameter is selected, the target port number defaults to 992.
If you want all Telnet client users on your system to use SSL, set
QIBM_TELNET_CLIENT_SSL as a system-level environment variable.
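A sketch of both approaches (the remote system name is hypothetical, and the environment
variable value is an assumption):
ADDENVVAR ENVVAR(QIBM_TELNET_CLIENT_SSL) VALUE('Y') LEVEL(*SYS)
TELNET RMTSYS(TARGETSYS) SSL(*YES)
With SSL(*YES), the target port number defaults to 992, as described above.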
Encryption is provided by using either SSL or Transport Layer Security (TLS) based on
negotiation between the Telnet client and the server.
The TELNET client must be assigned an appropriate certificate in the Digital Certificate
Manager (DCM) or the connection fails. See Figure 9-6.
9.4 System SSL support for transport layer security version 1.2
IBM i 7.1 Secure Sockets Layer (SSL) now supports the latest industry standards of Transport
Layer Security version 1.2 (TLSv1.2) and Transport Layer Security version 1.1 (TLSv1.1)
protocols. The TLSv1.2 protocol uses SHA2 hashing algorithms. System SSL also supports
the Online Certificate Status Protocol (OCSP) during the certificate validation process. OCSP
is used for checking the revocation status of end entity certificates.
Digital Certificate Manager (DCM) options on the Application Definition configuration panels
allow many of the core IBM networking applications (Telnet, FTP, and so on) to use these new
protocols and to enable OCSP. Applications using a system SSL programming interface or
the Global Secure Toolkit (GSKit) system SSL programming interface can switch to the new
protocols by making changes to the code and recompiling.
New TLSv1.1 and TLSv1.2 support is enabled by changing the QSSLPCL system value;
applications must then be configured in DCM to use specific versions of TLS and cipher
suites.
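For example, a minimal sketch of enabling the newer protocols system-wide (the exact value
list depends on which protocols you want to allow):
CHGSYSVAL SYSVAL(QSSLPCL) VALUE('*TLSV1.2 *TLSV1.1 *TLSV1')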
Also, see the DCM Application Definitions topic in the IBM i Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frzain%2Frzainappdefs.htm
QIBM_QSO_ACCEPT: Enables a custom exit program to allow or deny incoming connections,
based on the restrictions that are set by the program.
QIBM_QSO_CONNECT: Enables a custom exit program to allow or deny outgoing connections,
based on the restrictions that are set by the program.
QIBM_QSO_LISTEN: Enables a custom exit program to allow or deny a socket the ability to
listen for connections, based on the restrictions that are set by the program.
The program in Example 9-3 rejects all incoming connections to the Telnet server that come
from a particular remote IP address between the hours of 12 a.m. and 4 a.m. The program
determines whether the incoming connection is accepted by the socket API that accepts
connections or is rejected.
Example 9-3 Socket program example using the QIBM_QSO_ACCEPT user exit
/******************************************************************/
/* System i - Sample User Exit Program for QIBM_QSO_ACCEPT        */
/*                                                                */
/* Exit Point Name : QIBM_QSO_ACCEPT                              */
/*                                                                */
/* Description : The following ILE C language program             */
/* will reject all incoming connections to                        */
/* the telnet server (port 23) coming from                        */
/* the remote IP address of '1.2.3.4' between                     */
/* the hours of 12 A.M. and 4 A.M.                                */
/******************************************************************/
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/socket.h>
#include <netinet/in.h>
/****************************************************************/
/* Sketch of the ACPT0100 format. Only the fields that this     */
/* sample uses are shown; see the QIBM_QSO_ACCEPT exit point    */
/* documentation for the complete format definition.            */
/****************************************************************/
typedef struct {
    int Local_Incoming_Address_Length;
    union {
        struct sockaddr_in  sinstruct;
        struct sockaddr_in6 sin6struct;
    } Local_Incoming_Address;
    int Remote_Address_Length;
    union {
        struct sockaddr_in  sinstruct;
        struct sockaddr_in6 sin6struct;
    } Remote_Address;
} Qso_ACPT0100_Format;
/****************************************************************/
/* Return 1 if the local time is between 12 A.M. and 4 A.M.     */
/****************************************************************/
static int IsTimeBetweenMidnightAnd4AM(void)
{
    time_t now = time(NULL);
    struct tm *local = localtime(&now);
    return (local->tm_hour < 4);
}
int main(int argc, char *argv[])
{
    struct in_addr addr;
    Qso_ACPT0100_Format input;
    char return_code;
    /* Initialize the address to compare to 1.2.3.4. IBM i is   */
    /* big-endian, so this constant is in network byte order.   */
    addr.s_addr = 0x01020304;
    /* By default allow the connection. */
    return_code = '0';
    /* Copy the format parameter to local storage. */
    memcpy(&input, (Qso_ACPT0100_Format *) argv[1],
           sizeof(Qso_ACPT0100_Format));
    /* If the local port is the telnet server */
    if((input.Local_Incoming_Address_Length == sizeof(struct sockaddr_in) &&
        input.Local_Incoming_Address.sinstruct.sin_port == 23) ||
       (input.Local_Incoming_Address_Length == sizeof(struct sockaddr_in6) &&
        input.Local_Incoming_Address.sin6struct.sin6_port == 23))
    {
        /* And the incoming connection is from 1.2.3.4 */
        if(input.Remote_Address_Length == sizeof(struct sockaddr_in) &&
           (memcmp(&input.Remote_Address.sinstruct.sin_addr,
                   &addr, sizeof(struct in_addr)) == 0))
        {
            /* And the time is between 12 A.M. and 4 A.M.: reject. */
            if(IsTimeBetweenMidnightAnd4AM())
                return_code = '1';
        }
    }
    /* The second argument returns the accept ('0') / reject ('1') decision. */
    *argv[2] = return_code;
    return 0;
}
Important: By using the example that is shown in Example 9-3 on page 434, you agree to
the terms of the code license and disclaimer information that is available at:
https://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=/rzahg/legalnoticesSW.htm
Data policy
Note the following authentication items for the data policy:
AES-XCBC-MAC
HMAC-SHA-256
For more information and configuration details, see the Virtual Private Networking topic,
available at:
https://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=/rzaja/rzajagetstart.htm
This website is updated as PTFs are made available for more applications or protocols. As of
this writing, the following list identifies IBM i 6.1 applications and protocols that support
IPv6:
IBM Online Help and Eclipse Knowledge Center (IBMHELP) - PTF SI31014
INETD - PTF SI29701
SNTP - PTF SI30112
TFTP - PTF SI30868
LPD - PTF SI31015
Remote Journal - PTF SI31713
Remote Journal - PTF MF44589
IPP printer driver - PTF SI31910
LPR and Remote output queues - PTF SI31363
Enterprise Extender 1 (MSCP) - PTF MF44318
Enterprise Extender 2 (HPR) - PTF MF44355
Enterprise Extender 3 (HPR) - PTF MF44356
Enterprise Extender 4 (DC) - PTF SI31250
Enterprise Extender 5 (SW) - PTF SI31223
Enterprise Extender 6 (Comm Trace) - PTF SI30790
Management Central - PTF SI31888
Management Central - PTF SI31892
Management Central - PTF SI32720
Management Central - PTF SI32721
The IBM i 7.1 Knowledge Center topic “Migrating from IBM AnyNet to Enterprise Extender”
provides detailed migration considerations and requirements, and is available at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=/rzajt/rzajtanytoee.htm
With the Ethernet link aggregation function available in IBM i 7.1, up to eight Ethernet links
can be bound together in a single-line description.
Figure 9-8 Creating an Ethernet line description with Link Aggregation Control Protocol
1. The Create Line Desc (Ethernet) (CRTLINETH) and Change Line Desc (Ethernet)
(CHGLINETH) commands are used to manage Ethernet line descriptions, including
aggregate line descriptions (indicated by Resource name (RSRCNAME) *AGG). For an
aggregate line description, the Aggregate policy (AGGPCY) has two elements:
– Standard, which controls negotiation with the link partner, usually a switch
– Policy type, which controls which Ethernet port is used to send each outgoing frame
Figure 9-9 Aggregate line description: ports CMN14, CMN17, CMN08, and CMN11 configured in an aggregate through AGGRSCL ("ETHLINE" is a user-chosen line name, not a special value)
In the example, four links and IBM i communication resources (CMN14, CMN17, CMN08, and
CMN11) are aggregated together with one line description named ETHLINE.
The command that is shown in Figure 9-10 creates the line description for the aggregated
configuration.
CRTLINETH LIND(ETHLINE)
RSRCNAME(*AGG)
AGGPCY(*ETHCHL *RNDRBN)
AGGRSCL(CMN14 CMN17 CMN08 CMN11)
LINESPEED(1G)
DUPLEX(*FULL)
TEXT('Four link aggregated line')
Figure 9-10 Example CRTLINETH command for four aggregated links
For more information about configuring Ethernet resources and link aggregation, see the
IBM i Knowledge Center at the following website. For Ethernet requirements, see the
hardware requirements section.
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i/welcome
One answer to this dilemma is the new Ethernet Layer-2 bridging function in IBM i 7.1.
Although similar in concept to the Shared Ethernet Adapter (SEA) support provided by a
Power Systems Virtual I/O Server (VIOS) partition, this IBM i function enables a single
physical LAN connection to be shared by multiple logical partitions on a physical system
without using Virtual I/O Server (VIOS).
With IBM i 7.1, an IBM i partition can bridge a physical Ethernet port to the virtual LAN. This
function reduces costs in the following ways:
Sharing an Ethernet port means fewer Ethernet cards on the server.
Fewer ports are needed at the network switch and fewer cables are required.
There might be reduced administration costs because there are fewer physical resources
to manage.
Complexity might be reduced because no Virtual I/O Server partition is needed to manage
the port sharing.
(Figure: one physical Ethernet connection is shared through a Layer-2 bridge from the
system's internal virtual Ethernet; virtual Ethernet line descriptions (LINDs) connect LPAR 2
and LPAR 3 to the internal virtual Ethernet.)
Tip: Use the selected Ethernet resources only for Layer-2 bridging and not for IBM i
TCP/IP configuration, as there is a significant increase in processor usage for any host
traffic that uses bridged resources.
Figure 9-14 Virtual Partition Manager with virtual Ethernet ID1 activated
2. On the IBM i partition with the physical adapter, create two Ethernet line descriptions:
a. Create a line description for the Ethernet link (physical communications resource
CMN09) connected to the physical network, as shown in Figure 9-15.
b. Create a line description for the new virtual Ethernet adapter (virtual resource
CMN14), as shown in Figure 9-16.
The resource name for a virtual adapter is found by selecting a CMNnn resource with
type of 268C. Communications resources can be displayed through the Work with
Hardware Resources (WRKHDWRSC) command by specifying the TYPE(*CMN) parameter.
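A hedged sketch of the two line descriptions (the line names and bridge identifier are
hypothetical, and the BRIDGE keyword is an assumption based on the bridging support
described here):
CRTLINETH LIND(PHYSLINE) RSRCNAME(CMN09) BRIDGE(BRIDGE1)
CRTLINETH LIND(VRTLINE) RSRCNAME(CMN14) BRIDGE(BRIDGE1)
Specifying the same bridge identifier on both line descriptions associates the physical and
virtual Ethernet links with the same Layer-2 bridge.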
For more information about configuring Ethernet resources and Layer-2 bridging, see the
IBM i Knowledge Center at the following website address. For Ethernet requirements, see the
hardware requirements section.
http://www.ibm.com/systems/i/infocenter/
For IBM i 7.1 and IBM i 6.1, the most current versions are listed along with their respective
PTFs in Table 9-2.
Choosing and implementing a printing and presentation solution requires you to be familiar
with both your organization's requirements and resources, and the capabilities that are
provided by IBM i.
IBM i has both Basic Printing and Advanced Function Presentation (AFP). AFP is an
architecture-based system of hardware and software for creating, formatting, viewing,
retrieving, printing, and distributing information using a wide variety of printer and display
devices. AFP is the original, integrated data stream on IBM i for generating fully composed
pages of data.
The following list offers a high-level overview of the IBM i printing process:
1. The printing process starts when an application program runs. The application program
creates output data. The output data is based on the application program and information
that is contained in the printer file.
2. If print spooling is selected, the output data is placed in a spooled file and the spooled file
is placed in an output queue. If direct printing is selected, the output data is sent directly to
the printer.
3. The destination of the output data is based on values that are stored in several printing
elements, such as job description, user profile, workstation description, printer file, and
system values. Output queues are used to manage spooled files.
4. Spooled files in output queues can be used in the following ways:
– Printed
– Kept as records
– Used as input to other applications
– Transferred to other output queues
– Sent as email
– Used to create PDF files
5. The printer writer program interacts between the output queue and the printer and can be
used to convert the printer data stream.
6. The printer writer program included in IBM i supports various printer data streams. IBM
Print Services Facility™ for IBM i additionally provides support for the Advanced Function
Presentation (AFP) Intelligent Printer Data Stream (IPDS).
Each printer must have a printer device description. The printer device description
contains a configuration description of the printer. Printers can be attached by using
various attachment methods.
7. You can use a remote writer to route spooled files from an output queue on your system to
another system.
(Figure: the application produces output data, which is placed in a spooled file; the print
writer converts the spooled file into a data stream for the printer that is defined by the device
description.)
For more information about the IBM i 6.1 print enhancements, see the IBM i 6.1 Technical
Overview, SG24-7713.
Because this function is implemented by PTF, there is no online or prompter help for the new
parameters and the new *TOSTMF value.
Figure 10-2 Copy spooled file command prompt - TOFILE parameter with *TOSTMF value
PTFs and licensed programs: This function requires PTF SI43471 for IBM i 7.1 and the
5770TS1 IBM Transform Services for i and Transforms – AFP to PDF Transform licensed
program.
Spooled file security is enhanced through the addition of a spooled file security exit point. This
exit point can be used with a spooled file security exit program to allow more granular access
to individual spooled files based on the operation to be run.
This exit point complements the existing DSPDTA output queue attribute, which governs
access to spooled files through the Copy Spooled File (CPYSPLF), Display Spooled File
(DSPSPLF), and Send Network Spooled File (SNDNETSPLF) commands. If DSPDTA(*YES) is
specified when the output queue is created, any user with *USE authority to the output queue
is allowed to copy, display, send, or move spooled files.
If the user is authorized to control the file by one of the ways that are already listed, using
DSPDTA(*NO) when you create the output queue does not restrict the user from displaying,
copying, or sending the file. DSPDTA authority is checked only if the user is not otherwise
authorized to the file. All of the previous access methods override DSPDTA(*NO).
More details about the exit program format names, formats, and parameters are available in
the IBM i 7.1 Knowledge Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp
An example of creating an exit program and using the QIBM_SP_SECURITY exit point is
available in IBM Software Technical Document 560810071 - “QIBM_QSP_SECURITY Exit
Point: Let's See How it Works” at:
http://www-912.ibm.com/s_dir/SLKBase.nsf/1ac66549a21402188625680b0002037e/4dce2d7df8415e9c862577230076acdd?OpenDocument
PDF encryption: PDF encryption is a feature of the Infoprint Server license program.
There is a new PDFENCRYPT value for the user-defined data (USRDFNDTA) parameter, which is
used to specify whether to encrypt an output PDF stream file or spooled file and whether to
send it as email. There are several ways to specify the USRDFNDTA parameter with the
PDFENCRYPT value:
It can be specified for an existing spool file using the Change Spooled File Attributes
(CHGSPLFA) command.
It can be specified in a printer file using the Create Printer File (CRTPRTF) command.
It can be specified by using the Override Printer File (OVRPRTF) command.
This parameter is specified within a PSF configuration object that is either created through the
Create PSF Configuration (CRTPSFCFG) command or specified by using the Change PSF
Configuration (CHGPSFCFG) command.
If you specify a value for PDDTAAUT that is not supported, PSF issues PQT0038 with reason
code 5 and ends. Message PQT0038 is: Printer writer ended because of an error.
Reason code 5 is: Value not recognized.
This parameter is specified within a PSF configuration object that is either created through the
Create PSF Configuration (CRTPSFCFG) command or specified by using the Change PSF
Configuration (CHGPSFCFG) command.
Example 10-1 Command to create a data area to force a PSFTRACE spool file
CRTDTAARA DTAARA(library/printer_device_name)
TYPE(*CHAR)
LEN(40)
AUT(*ALL)
VALUE(X'E6E6D7C4E3D9C8D98000000000000800000000000000000032000000000000000000000000000000')
The data area must be created before you start the printer writer, must be created in the
QGPL library, and the name must match the printer device description name. When the
PSFTRACE file is no longer needed, delete the data area with the Delete Data Area (DLTDTAARA)
command.
For more information about PSFTRACE and interpreting the data within it, see the
“Troubleshooting Mapping Problems” section of the Advanced Function Presentation PDF at
the IBM i 7.1 Knowledge Center, which can be found at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzau6/rzau6.pdf
The following error codes were added to the PQT4151 (Incorrect data was returned by
mapping program) error message to support new function:
43: Value for PDF Email Comma Delimiter must be '0' or X'00' when SNDDST is the
mail server.
44: Encryption of stream file or spooled file requested but encryption settings
not specified.
45: Value for Encrypt PDF stream file must be '0' or '1'.
46: Value for Encrypt PDF spooled file must be '0' or '1'.
In IBM i 7.1, the user can generate PDF output from a spooled file even after the job that
produced it has ended, provided that the spooled file was generated and closed before the
job ended.
This capability is covered in more detail in 10.8, “Host Print Transform enhancements” on
page 471.
A CPD6DF0 diagnostic message, bar code data did not print correctly due to errors,
is logged if invalid data or parameters are specified.
The transform continues to revert to the PDF standard font references if font resources are
not available in font libraries and the library list. The text is limited to ANSI characters.
Improved globalization
Eastern European languages require embedded fonts to display all the characters.
Non-Latin1 character identifiers (CHRIDs) are now automatically mapped to the appropriate
AFP font resources.
Where possible, font attributes such as font size, bold fonts, italic fonts, and so on, are honored.
Font mapping can be customized through a workstation customization (WSCST) object.
For these languages and character sets, the following products might be required:
5648-B45 AFP Font Collection for i V3.1
5648-E77 InfoPrint Fonts for Multi-platform
Both the “View as PDF” and “Export as PDF to client” desktop tasks use Transform Services
for the AFPDS and SCS conversions to PDF.
Requirement: The “Export as PDF” to IFS, to an output queue, and to an email require the
5722IP1 Infoprint Server for iSeries licensed program.
Figure 10-4 shows the navigation to access the Printer Output function. In the IBM i
Management list, the first arrow points to the Basic Operations link. When that link is
selected, the Basic Operations menu opens.
Figure 10-4 Going to the printer output list in IBM Navigator for i
Figure 10-5 Printer Output list with menu in IBM Navigator for i
Select a file and right-click, or click the Actions menu, and select View as PDF.
When the View PDF task is selected, you see the output as a PDF, as shown in Figure 10-6.
Figure 10-6 PDF displayed from View PDF in IBM Navigator for i
Requirement: For the latter three options, the Infoprint Server licensed program
(5722-IP1) is required. Users can use the native IBM Transform Services for i
(5770-TS1) licensed program to export to the IFS, but they must map a network drive to
the IFS and then select the first option.
The option to use the Infoprint Server licensed program to convert spooled files to
PDF remains.
The navigation to a specific printer output file is identical to what is shown in Figure 10-4 on
page 463. Select a file and right click, or click the Actions menu. The menu opens, as shown
in Figure 10-7.
Figure 10-7 Printer output list with Export PDF options shown
Select Export as. A menu with PDF options opens, as shown in Figure 10-7. Click PDF
using Transform Services or PDF using Infoprint Server. In the next menu that opens,
click the appropriate export option.
The arrow points to Store in Stream File, which is consistent with saving the output in the
IFS.
3. Click Next to have the wizard request a printer, as shown in Figure 10-10.
Because the system has no active printers capable of PDF conversion, the printer
selection is disabled and Create new printer is automatically selected.
The arrow in Figure 10-13 points to an important function that minimizes the PDF size.
Transform Services embeds fonts in the PDF to preserve text appearance and content,
which increases the size of the PDF file. This option directs Transform Services not to
embed the fonts.
7. Click Next. Another advanced parameters window (Figure 10-14) opens. Accept the
defaults and click Next.
8. Create and secure the directories in the IFS, according to the following rules (a command
sketch follows this list):
– The directories must exist.
– The QSPLJOB user (or *PUBLIC) must have *RWX (read / write / execute) authority to
the root (/) directory.
– The QSPLJOB user must have a minimum of *X (execute) authority to the directories in
the path.
– The QSPLJOB user must have *RWX (read / write / execute) authority to the directory
where the files are stored.
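A minimal CL sketch of creating and authorizing such a directory (the path is hypothetical):
MKDIR DIR('/pdfout')
CHGAUT OBJ('/pdfout') USER(QSPLJOB) DTAAUT(*RWX)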
9. Click Next to continue, and click Finish in the confirmation window to print.
Support was added for the AFP to PDF Transform (option 1) of the IBM Transform Services
for i (5770-TS1) licensed program when you view printer spool output as a PDF document.
Output can be viewed in a browser or placed in the IBM i integrated file system (IFS).
For more information, see the “IBM i Access for Web” topic in the IBM i 7.1 Knowledge
Center, or the IBM i Access for web PDF at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzamm/rzamm.pdf
Change in requirements: System i Navigator and IBM i Access for Web previously
required the 5722-IP1 IBM Infoprint Server for iSeries product to view output as PDF. This
option is still usable for users that have the software, but it is no longer required.
When the API is started with a WSCST object with the CTXFORM attribute, the job reads the
input data stream from the spooled file that is specified in the API. Transform Services is
called to generate the PDF output from the input spooled file data. Transform Services returns
the PDF output in the output buffer that is provided on the API. For more information, see the
API documentation in the IBM i 7.1 Knowledge Center at:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frzahg%2Ficmain.htm
10.9 References
The following references have more information about IBM i printing:
IBM i Printing Basic Printing, found at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzalu/rzalu.pdf
IBM i Printing Advanced Function Presentation, found at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzau6/rzau6.pdf
IBM Advanced Function Printing Utilities for iSeries: User's Guide, found at:
http://publib.boulder.ibm.com/infocenter/iseries/v6r1m0/topic/books_web/s5445349.pdf
IBM i Files and File Systems Spooled Files, found at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzata/rzata.pdf
System i Programming - DDS for Printer Files, found at:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzakd/rzakd.pdf
iSeries Guide to Output, found at:
http://publib.boulder.ibm.com/infocenter/iseries/v6r1m0/topic/rzalu/s5445319.pdf
InfoPrint AFP Font Collection, found at:
http://www-03.ibm.com/systems/i/software/print/afpfonthome_m_ww.html
Figure 11-1 Attaching servers to IBM i by using iSCSI
Within the IBM i Integrated Server Environment, you are limited to 1 Gb connectivity if you use
physical iSCSI Target HBAs.
With the new software target solution, you can now use dedicated Ethernet ports with 1 Gb or
10 Gb connectivity. It is now possible to intermix hardware and software target adapter
environments.
However, if you are using an iSCSI software initiator in combination with an iSCSI software
target, you have full 10 Gb connectivity.
(Figure: comparison of NWSH configurations. With a physical target iSCSI HBA, the NWSH
object names the HBA resource and defines a local SCSI interface and a local LAN interface,
each with a unique IP address and subnet mask. With the iSCSI software target, the NWSH
specifies resource name *VRT, the local LAN interface is set to *IPSCSI, and the Ethernet
NIC is defined by an Ethernet line description (LIND) with a TCP/IP interface internet
address.)
The physical iSCSI HBA is replaced by an Ethernet NIC, which is defined in the Ethernet line
description.
Within the TCP/IP interface, configure the IP address for this Ethernet NIC.
This same IP address is also used in the local SCSI interface parameter for the NWSH
configuration object.
11.2.1 CRTDEVNWSH CL command interface
To define an iSCSI software target (Figure 11-4), you must specify *VRT for the resource
parameter of the Create Device Description for a Network Server Host
(CRTDEVNWSH) command.
For the LCLIFC parameter, specify the *IPSCSI option, which indicates that the local LAN
interface IP address is the same as the local SCSI interface IP address.
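As a sketch (the device name and addresses are hypothetical, and the exact LCLIFC element
order is an assumption; prompt the command for the documented elements):
CRTDEVNWSH DEVD(NWSH01) RSRCNAME(*VRT)
           LCLIFC('10.1.1.10' '255.255.255.0' *IPSCSI)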
In the GUI, you can specify Virtual for the Hardware resource parameter to create the
network server host adapter device description for the iSCSI software target.
Within IBM Navigator for i, it is possible to create a TCP/IP interface and a corresponding line
description when you create an NWSH configuration object. You can do this task by clicking
New, as shown in Figure 11-6.
Before IBM i 7.1, this function was provided by IBM Director (5722-DR1), which is no longer
used for this purpose.
For the specific OS versions that are supported by each IBM i release, see the “Integrated
Server Operating System (Server OS) Versions” section of the IBM i iSCSI Solution Guide at:
http://www.ibm.com/systems/i/advantages/integratedserver/iscsi/solution_guide.html
Figure 11-7 shows the new *ESX Server operating system Network server type for the
CRTNWSD command.
For VMware ESXi embedded servers, the system drive (the first drive) is no longer required.
Requirement: VMware ESX servers that were installed on prior IBM i releases must be
changed to the new NWSD type after you install IBM i 7.1.
An integrated Windows server can serve as the management server for any number of
integrated VMware ESX servers in the same IBM i logical partition. At least one integrated
Windows server is required in each IBM i logical partition that hosts integrated VMware
ESX servers.
11.4.3 SWA storage spaces for VMware ESX servers
With IBM i 7.1, save while active (SWA) support is provided for integrated VMware ESX
servers. Storage spaces for VMware ESX servers can be saved from IBM i while the ESX
server is active. This setup allows a concurrent save of ESX data without requiring the ESX
server to be shut down or applications ended.
For more information, see the IBM i iSCSI Solution Guide, found at:
http://www.ibm.com/systems/i/advantages/integratedserver/iscsi/solution_guide.html
See Figure 11-8 for an example of the window that shows the new OS support when creating
a server in IBM i Navigator.
For the equivalent 5250 interface using the Install Integrated Server (INSINTSVR) command,
see Figure 11-9 on page 483.
For the specific OS versions that are supported by each IBM i release, see the Integrated
Server Operating System (Server OS) Versions section of the IBM i iSCSI Solution Guide,
which is located at the following website:
http://www.ibm.com/systems/i/advantages/integratedserver/iscsi/solution_guide.html
Figure 11-9 Install Integrated Server (INSINTSVR) command
In addition, these worksheets are enhanced to allow them to be completed and saved as
softcopies.
The instructions for filling out these worksheets are in the IBM i iSCSI Solution Guide PDF.
Both PDFs are available at:
http://www.ibm.com/systems/i/advantages/integratedserver/iscsi/solution_guide.html
The instructions and worksheets were previously part of the iSCSI Network Planning Guide
topic in the Knowledge Center.
11.7 IBM Navigator for i
The IBM Navigator for i web GUI is now the preferred user interface for managing integrated
servers. Therefore, most integrated server management tasks are documented using the web
GUI.
GUI tasks: The System i Navigator GUI that runs on a client workstation is still available in
IBM i 7.1 and works for many tasks. However, the new GUI tasks that are listed in the
following paragraphs and support for IBM i 7.1 enhancements are not available in the
System i Navigator GUI.
New GUI tasks are available within the IBM Navigator for i web GUI, and are described in the
following sections:
Create Server task
Clone Integrated Windows Server task
Delete Server task
Launch Web Console
Figure 11-10 Create Server option in the IBM Navigator for i web GUI
To create a server in an IBM i Integrated Server environment, use IBM Navigator for i for a
walk-through of a server installation.
Tip: Review the “Server installation roadmap and checklist” chapter of the IBM i iSCSI
Solution Guide before you use this wizard. This guide can be found at:
http://www-03.ibm.com/systems/resources/systems_power_ibmi_iscsi_solution_guide.pdf
11.7.2 Clone Integrated Windows Server task
The New Based On (cloning) task, which is shown in Figure 11-12, creates an
iSCSI-attached integrated Windows server that is based on one that was previously installed.
Figure 11-12 New Based On...(cloning) option in the IBM Navigator for i web GUI
The server cloning process is provided for integrated servers that are running supported
Windows Server editions. The cloning process requires that you prepare the base server for
cloning before you use the cloning task. Additional configuration is required after the server is
cloned.
Review Chapter 5, “Server cloning roadmap and checklist” of the IBM i iSCSI Solution Guide
PDF before you use this wizard. It can be found at:
http://www.ibm.com/systems/i/advantages/integratedserver/iscsi/solution_guide.html
11.7.3 Delete Server task
This new task deletes an integrated server configuration, as shown in Figure 11-14.
Figure 11-14 Delete Server option in the IBM Navigator for i web GUI
This option is only available when the server is not active or starting.
Figure 11-15 Starting the web console from the IBM Navigator for i web GUI
11.7.5 Simplified Windows File Level Backup (FLB) from IBM i
The IBM Navigator for i web GUI is enhanced to simplify the task of selecting which Windows
share names under the IBM i /QNTC/servername directory can be saved from IBM i. The web
GUI now provides a File Level Backup tab on the integrated server properties page, as shown
in Figure 11-16.
Figure 11-16 File Level Backup tab on the integrated server properties
This new tab provides a way to select share names to enable for backup. It eliminates the
need to manually add a member to the QAZLCSAVL file in QUSRSYS and then manually edit
the file member to list the share names to enable for backup.
For more information about this topic, see 7.7.2, “Enabling Windows share names for file level
backup from IBM i” in the IBM i iSCSI Solution Guide PDF. This guide can be found at:
http://www.ibm.com/systems/i/advantages/integratedserver/iscsi/solution_guide.html
Figure 11-17 INSINTSVR command
For more information about this topic, see the IBM i 7.1 Knowledge Center:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/cl/insintsvr.htm
Figure 11-18 DLTINTSVR command
11.9 IBM i changed CL commands
The following IBM i control language (CL) commands are changed for integrated servers:
Install Windows Server (INSWNTSVR) CL command
Create NWS Configuration (CRTNWSCFG) and Change NWS Configuration (CHGNWSCFG) CL
commands
In IBM i 7.1, iSCSI-attached integrated servers no longer support the multicast discovery
method for the remote server service processor. Instead, unicast discovery of the remote
server service processor must be used. Existing network server configurations of type
*SRVPRC that have Enable Unicast (ENBUNICAST) configured to *NO must use the Change
NWS Configuration (CHGNWSCFG) command to specify either the Service Processor Name
(SPNAME) or Service Processor IP Address (SPINTNETA) parameter.
iSCSI-attached network server descriptions cannot vary on until the network server
configurations of type *SRVPRC with ENBUNICAST configured to *NO are changed.
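For example, a minimal sketch for an existing service processor configuration (the configuration name SP01 and the IP address are hypothetical):
CHGNWSCFG NWSCFG(SP01) TYPE(*SRVPRC) ENBUNICAST(*YES) SPINTNETA('10.1.1.10')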
Tip: There are no alternatives available for Linux server installs. For VMware ESX server
installs, use the Create Server web GUI task or the INSINTSVR command.
This guide provides the information that you need to plan for and install an IBM BladeCenter
blade or System x server that is attached to IBM i through an iSCSI network. This guide
contains the following information:
iSCSI solution support matrixes: See the capabilities that the solution provides, which IBM
BladeCenter and System x server models and operating systems are supported, and
much more.
Concepts: Learn about how the solution works.
Server installation roadmap and checklist: Required information to install a server that is
integrated with IBM i.
Server cloning roadmap and checklist: Required information to clone a Windows server
that is integrated with IBM i.
BladeCenter and System x configuration: iSCSI configuration tasks for BladeCenter blade
and System x servers.
Additional topics: Other topics that are related to the iSCSI solution.
Most of the technical content that was previously on this website (for example, the iSCSI
Install Read Me First web page) was moved to the IBM i iSCSI Solution Guide or to
developerWorks. For more information, see the previous sections.
IBM Navigator for i includes a number of welcome pages that enable the user to find the
action that they want to perform quickly. Most functions found in IBM Navigator for i are also
found in IBM Systems Director, which handles multiple IBM i systems and non- IBM i system
platforms.
In IBM i 6.1, the AJS function in IBM Systems Director for i5/OS was limited to viewing.
Viewing options include the following elements:
Activity logs for the system, for a scheduled job, and for a specific execution of a job
Configured jobs, their properties, and their status
Configured groups, their properties, and their status
Scheduled jobs and their status
The major limitation of the IBM i 6.1 function was that it was view-only: it could not add,
change, hold, or remove scheduled jobs.
The new IBM i 7.1 AJS functions in the IBM Navigator for i interface now include most of the
same functions that are found in the System i Navigator Windows client, with the advantage
that the interfaces are web-based, not client-based. Additionally, as IBM i 7.1 has matured
since release, more improvements to navigation and function have been implemented.
The rest of this section walks through the AJS web pages and describes the new functions
during the walkthrough.
Figure 12-4 Selecting Navigate Resources and the All Systems resource group
Figure 12-5 Selecting an IBM i system from the All Systems group list
Figure 12-6 Selecting Work Management from the Actions drop-down menu
When you click Work Management, the remaining functions and steps are similar in look
and feel to those in IBM Navigator for i.
The AJS menu contains items of system-wide scope, rather than of individual job scope. This
menu is the location where the job scheduler functions themselves are configured and
maintained. Deeper within various sections, more specific and granular definitions of jobs and
actions are performed. In the following sections, the Advanced Job Scheduler menu actions
are reviewed.
The AJS properties are divided into six tabs in the left pane as shown in Figure 12-7. The
following sections describe each tab.
General tab
The General tab that is shown in Figure 12-7 is used to view and set the general properties of
the job scheduler. You can specify the following options:
How long to retain activity
How long the log remains
The period during which scheduled jobs are not allowed to run
The working days that jobs are allowed to process
The notification command that sends a notification whenever a job completes successfully
or fails
Clicking New in Figure 12-8 opens the New Schedules window that is shown in Figure 12-9.
A schedule is a specification of days on which a job can run. No times are listed. With
scheduled jobs and group jobs, you can select a schedule rather than make one when you
create the jobs.
Clicking Add opens the New Data Library window that is shown in Figure 12-11.
From the New Data Library window, you can accomplish the following tasks:
Create a job scheduler data library.
Specify a job scheduler monitor.
Specify a monitor job’s name.
Start the job scheduler monitor automatically.
A system can have multiple job scheduler data libraries, and each library can have a monitor
job that runs simultaneously. You can switch from one job scheduler data library that is
running a production environment to another that is running a test environment.
This capability is covered in more detail in 12.3.2, “Multiple Scheduling Environments
function” on page 532.
Users tab
The Users tab (Figure 12-13) enables maintenance of a list of job scheduler users that are
associated with a job scheduler data library. The Add button adds users, the Properties
button changes the properties of a user, and the Remove button removes a user.
Suppose that there is a system with multiple applications and each application’s personnel
are not allowed to access the scheduled jobs of other applications. The system administrator
can set up each application with their own job scheduler data library.
The system administrator uses the Users tab function to assign each application’s personnel
to their own job scheduler data library. Because a user can access only one job scheduler
data library, the administrator effectively locks the users to their own application’s job
scheduler while locking access to the others. Other security considerations should also be
implemented, but this is an additional layer of protection and segregation available within AJS.
Scheduling calendars and holiday calendars use different windows because they have
different parameters.
Figure 12-16 Advanced Job Scheduler Properties window with a stopped default instance
Figure 12-18 Advanced Job Scheduler Properties window with a running default instance
3. Ensure that the radio button to the left of the scheduler you want to start is selected, and
click Stop Scheduler on the right side of the window.
4. After clicking Stop Scheduler, the confirmation window shown in Figure 12-19 is
displayed. Clicking Yes stops the monitor. Clicking No returns you to the window shown in
Figure 12-18.
The Scheduled Jobs menu is shown in Figure 12-20. The menu offers the following options
with the row of icons that are located near the top:
Refresh: Click the blue swirling arrows to refresh the view of the data.
Export: Click the icon with a small grid and a green swooping arrow to export the list data
in HTML format.
Configure options: Click the icon with a small grid and a picture of sliders to choose which
fields are displayed on the view of the scheduled jobs.
Actions: Click Actions to see these additional options:
– New
– Reset scheduled jobs
– Properties
– Refresh
– Advanced Filter
– Export
– Configure options
Note: The options that are available when you click Actions differ based on whether a job,
multiple jobs, or no jobs are selected. The preceding list of available actions applies when
no jobs are selected.
You can select multiple jobs by selecting their check boxes. There are options that enable
selection of all jobs. For now, review what you can do to a specific job.
To illustrate the many tasks you can accomplish in a specific job, see the menu for the
MYJOB01 job that is shown in Figure 12-21. Although it is beyond the scope of this book to
describe each of these actions in detail, a brief description of each action follows:
Job Dependencies: This action enables the display and updating of job dependencies,
including predecessors and successors, and whether all or just one dependency must be
met.
Activity: This action enables listing of the job activity (history) for a specific job scheduler
entry. A specific job can be selected from the job activity, which has another menu of
actions.
The Scheduled Jobs properties and display windows are described in 12.2.9, “Adding a
scheduled job” on page 517.
Basic sorting: Each column can be sorted by clicking the particular section of the header
row. Each click toggles between ascending and descending sort.
Quick filter: Entering data in the field in the upper right of the scheduled jobs table
immediately subsets the listing. The text is searched in all visible columns. No further
action is necessary other than entering text for this basic filter function. To remove the
filter, either delete the text or click the small X that appears after text has been entered in
the field.
Advanced filter: Clicking the arrows icon in the upper-right portion of the scheduled jobs
table, or to the left of the “no filter applied” text, produces a smaller window showing filtering
options. You can add multiple levels of filtering based on your needs.
Clear filter: If a filter is active, a link with the words “clear filter” is visible at the top of the
listing in the place of “no filter applied”. Click the Clear Filter text to perform the action.
Resizing columns
Columns can be resized by dragging the column separator in the table column header line.
If you want a selected column to not appear in the table, click to clear the corresponding
check box. Alternatively, to have it appear in the table, ensure that the check box is selected.
To select all, click the icon at the upper left of the pane. To clear all entries, click the icon
without the check picture.
Clicking OK puts the column changes into effect and the table is displayed again.
To return the table back to its original format, close the page and reopen it.
3. In the Command Properties window (Figure 12-26), specify the command. In this
example, RTVDSKINF is entered into the Command box and there are no messages to
monitor, so those fields are left blank. To check the RTVDSKINF command parameters, click
Prompt.
5. Check the advanced parameters by clicking Advanced. The window refreshes with
another RTVDSKINF command parameter displayed with a default value, as shown in
Figure 12-28.
8. The last thing of note about command prompting is the command help. Figure 12-31
shows the Help menu.
10. Now that the correct command is specified, click OK to close the Help window. Click OK
on the RTVDSKINF command prompt display to accept the RTVDSKINF command. Click OK
on the Command Properties window to accept the command properties for RTVDSKINF
usage. The General window of the New Scheduled Job function opens.
12. Figure 12-34 shows the three commands in the order you want. Now that the command
order is correct, click OK.
To make this occur, complete the following steps as indicated in the Schedule window:
1. Click Weekly for the frequency.
2. Click Sunday for the details.
3. In the Times to run section, enter the time (6:00 AM) and click Add.
In the upper-right corner of the window, in addition to basic scheduling by time, you can
specify to repeat a run periodically during a time range. You can also apply a schedule name,
which uses a previously created schedule.
The Dates to run section has a set of radio buttons to specify the frequency at which the job is
to run. The Details section varies depending on the selected Frequency button. In this
example, the options were selected to run the scheduled job at 6:00 AM weekly and on
Sundays.
If you have other changes to make to the jobs, use the selections along the left side of the
pane. If you click OK on this window, you are brought back to the main listing of all scheduled
jobs. If you inadvertently clicked OK, select the new job in the list of scheduled jobs and use
the Properties action to come back to this view.
In the Batch Information window, the fields have drop-down options and, in most cases,
include a blank field to enter the correct values manually. Parameters such as Job description
and User have Browse buttons that open a list window from which a selection can be made.
If you have other changes to make to the jobs, use the selections along the left side of the
pane. If you click OK on this window, you are brought back to the main listing of all scheduled
jobs. If you inadvertently clicked OK, select the new job in the list of scheduled jobs and use
the Properties action to come back to this view.
There are five notification options in the Notification window. Each option has a check box for
enabling or disabling the notification.
If you have other changes to make to the jobs, use the selections along the left side of the
pane. If you click OK on this window, you are brought back to the main listing of all scheduled
jobs. If you inadvertently clicked OK, select the new job in the list of scheduled jobs and use
the Properties action to come back to this view.
The same sets of options are available to you when changing properties of an existing
scheduled job.
The window that appears is similar in format and function to the Scheduled Jobs window
shown in Figure 12-24 on page 517, but contains different options for the listed items. The
filtering and sorting functions are the same. Figure 12-42 shows an example of the Scheduled
Jobs Activity window.
Figure 12-42 Options available from the Scheduled Job Activity window
Activity Log: Opens a view of the AJS log with all entries subsetted for this job only.
Job Log: Opens a new window with a job log for this job, if it exists.
Hold, release, and end: Available for jobs that are currently active. Holds, releases, or
ends the selected jobs.
Distribute reports: This option gives the ability to distribute spooled files created by the job
to a distribution list.
Status: Available only for active jobs, this shows the status of the scheduled job.
The window that appears has the same sorting and filtering functions as the other AJS tables.
These functions have been described in previous sections of this chapter.
The output that is provided on this window, as shown in Figure 12-44, is the same as that
produced by the Display Log for Job Scheduler (DSPLOGJS) command, but the web interface
can filter and sort it in ways that the character-based interface cannot.
Figure 12-44 View of the activity log along with a selected entry and its available action
Figure 12-45 Output from viewing properties of an entry in the activity log
The window in the background shows the New Recipient menu and its General window, as
accessed through IBM Navigator for i. The window in the foreground shows the New
Recipient Path window and the path parameter. The path value shows the use of substitution
variables. This function is also available through IBM i Access client, but is not available
through the character-based interface.
Figure 12-47 Starting an additional job scheduler environment by using the CHGDLJS command
You can also specify which users can use which job scheduler environment by issuing the Set
Data Library (SETDLJS) command, as shown in Figure 12-48.
Figure 12-48 Setting user MSRJJ’s access to a job scheduler data library
The equivalent multiple scheduling environments function is also available through the
System i Access (GUI) and IBM Navigator for i (web) interfaces.
Predefined schedules
Jobs in a group with a sequence number greater than 1 (jobs other than the first one in a
group) can now use a predefined schedule. This option is helpful when you have a group of
jobs where you want a subset of the jobs in the group to run on another schedule than the
rest. For example, if you have a group of jobs that run on a daily schedule, but one in the
group must run on Fridays only, a schedule can be used for the Friday job that overrules the
daily schedule of the group. The schedule can also be a holiday calendar. This function adds
flexibility for configuring which jobs in a group run on different days and dates without
breaking up the group.
The Start Group using JS (STRGRPJS) command was changed to add “Based on” parameters,
as shown in Figure 12-49.
If *FRI (Friday) is specified for the “Based on day of week” parameter, the group jobs, other
than the first one, run as though the day were a Friday. Jobs in the group that list *FRI as a
day to run and jobs in the group that use a predefined schedule specifying to run on *FRI run
even if the STRGRPJS command was issued on a Wednesday.
The “Based on date” parameter works similarly. If the specified date is December 1, 2009, the
job scheduler determines which of the jobs in the group can run on that date and runs them
when the STRGRPJS command is run.
The new “Based on” function is also found in the System i Access (GUI) and IBM Systems
Director Navigator for i (web) interfaces.
The Submit Job using Job Scheduler (SBMJOBJS) command in Figure 12-50 has a new
“Submit time offset” parameter.
Figure 12-50 Submit Job using Job Scheduler with new Submit time offset parameter
12.4 References
For more information about the topics that are covered in this chapter, see the following IBM i
7.1 Knowledge Center topics:
Advanced Job Scheduler
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzaks/rzaksajsmanage
.htm?cp=ssw_ibm_i_71%2F5-2-4-4-10&lang=en
System i Navigator
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzahg/rzahginav.htm?
lang=en
Any existing CL programs that use these commands might need to be modified. Any web
interfaces that use the search support must be modified so that they no longer use this
removed option.
All Application Server service programs that implement the HTTP plug-ins might need to be
updated before you start the HTTP servers on IBM i 7.1.
For more information related to WebSphere Application Server in IBM i 7.1, see 14.1, “IBM
Integrated Web Services for i” on page 548.
In IBM i 7.1, the LoadModule directives that are used by external HTTP servers that are
associated with WebSphere Application Server Version 6.1 and WebSphere Application
Server Version 7 changed.
For HTTP servers that are associated with WebSphere Application Server Version 6.1 or
Version 7, the LoadModule directive must be changed to match the following format:
LoadModule was_ap20_module /QSYS.LIB/<product_library>.LIB/QSVTAP22.SRVPGM
Where <product_library> is the product library for the Application Server installation.
The product library for each WebSphere Application Server installation on your system
contains the program and service program objects for the installed product:
For WebSphere Application Server V6.1:
– The product library name for Version 6.1 is QWAS61x (where x is A, B, C, and so on).
– The product library for the first WebSphere Application Server V6.1 product that is
installed on the system is QWAS61A.
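For example, the directive for the first V6.1 installation on the system would be:
LoadModule was_ap20_module /QSYS.LIB/QWAS61A.LIB/QSVTAP22.SRVPGM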
The LoadModule directive can be modified from the IBM Web Administration for i window.
The following example assumes that a WebSphere Application Server 7 server instance
WAS70TOMV was created on IBM i 6.1:
1. Start the IBM Systems Director Navigator for i and click the IBM i Tasks Page link on the
Welcome window, as shown in Figure 13-1.
The IBM Web Administration for i window opens, as shown in Figure 13-3.
WebSpherePluginConfig /QIBM/UserData/WebSphere/AppServer/V7/Base/profiles/WAS70TOMV/config/cells/MERCURE_WAS70TOMV/nodes/MERCURE.BE.IBM.COM-node/servers/IHS_WEB_TOMV/plugin-cfg.xml
LoadModule was_ap20_module /QSYS.LIB/QHTTPSVR.LIB/QSVT2070.SRVPGM
# HTTP server (powered by Apache) configuration
DocumentRoot /www/web_tomv/htdocs
ServerRoot /www/web_tomv
Options -ExecCGI -FollowSymLinks -SymLinksIfOwnerMatch -Includes -IncludesNoExec -Indexes -MultiViews
Listen *:10000
LogFormat "%h %T %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{Cookie}n \"%r\" %t" cookie
LogFormat "%{User-agent}i" agent
LogFormat "%{Referer}i -> %U" referer
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog logs/access_log combined
SetEnvIf "User-Agent" "Mozilla/2" nokeepalive
SetEnvIf "User-Agent" "JDK/1\.0" force-response-1.0
SetEnvIf "User-Agent" "Java/1\.0" force-response-1.0
SetEnvIf "User-Agent" "RealPlayer 4\.0" force-response-1.0
SetEnvIf "User-Agent" "MSIE 4\.0b2;" nokeepalive
SetEnvIf "User-Agent" "MSIE 4\.0b2;" force-response-1.0
<Directory />
Order Deny,Allow
Deny From all
</Directory>
<Directory /www/web_tomv/htdocs>
Order Allow,Deny
Allow From all
</Directory>
Figure 13-4 WEB_TOMV HTTP server configuration file
instance.name=WAS70TOMV
instance.type=appserver
instance.creating.product=BASE
instance.use.j9=false
instance.j9.path=$(j9path)
instance.j9.version=classic
default.server.name=WAS70TOMV
was.install.library=QWAS7A
was.install.path=/QIBM/ProdData/WebSphere/AppServer/V7/Base
Figure 13-5 was.install.library property within the .instance.properties file
5. Update the LoadModule directive by changing QHTTPSVR to QWAS7A and change QSVT2070 to
QSVTAP22, as shown in Figure 13-6.
WebSpherePluginConfig /QIBM/UserData/WebSphere/AppServer/V7/Base/profiles/WAS70TOMV/config/cells/MERCURE_WAS70TOMV/nodes/MERCURE.BE.IBM.COM-node/servers/IHS_WEB_TOMV/plugin-cfg.xml
LoadModule was_ap20_module /QSYS.LIB/QWAS7A.LIB/QSVTAP22.SRVPGM
# HTTP server (powered by Apache) configuration
DocumentRoot /www/web_tomv/htdocs
ServerRoot /www/web_tomv
Options -ExecCGI -FollowSymLinks -SymLinksIfOwnerMatch -Includes -IncludesNoExec -Indexes -MultiViews
Listen *:10000
LogFormat "%h %T %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\""
combined
LogFormat "%{Cookie}n \"%r\" %t" cookie
LogFormat "%{User-agent}i" agent
LogFormat "%{Referer}i -> %U" referer
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog logs/access_log combined
SetEnvIf "User-Agent" "Mozilla/2" nokeepalive
SetEnvIf "User-Agent" "JDK/1\.0" force-response-1.0
SetEnvIf "User-Agent" "Java/1\.0" force-response-1.0
SetEnvIf "User-Agent" "RealPlayer 4\.0" force-response-1.0
SetEnvIf "User-Agent" "MSIE 4\.0b2;" nokeepalive
SetEnvIf "User-Agent" "MSIE 4\.0b2;" force-response-1.0
<Directory />
Order Deny,Allow
Deny From all
</Directory>
<Directory /www/web_tomv/htdocs>
Order Allow,Deny
Allow From all
</Directory>
Figure 13-6 LoadModule Directive
Figure 13-7 IBM Web Administration for i - Applying changes to the configuration file
7. A message indicates that the configuration was successfully changed. Do not restart the
server now. Click OK. You can now start the upgrade from IBM i 6.1 to IBM i 7.1. After
the upgrade, the HTTP server can be successfully started on IBM i 7.1.
Note: The preceding example shows a manual update of the IBM i 6.1 httpd.conf
configuration file to support the WebSphere Application Server plug-in path change in
IBM i 7.1. PTF SI44746 is now available, which performs this update automatically when
you start the HTTP server.
Before IBM i 7.1, PowerHA supported only IPv4 addresses. PowerHA for i on IBM i 7.1 now
fully supports IPv6 addresses (including all HA-related APIs, commands, and GUIs), so HA
IPv6 support was added to HTTP Server for i on IBM i 7.1. You can use IPv6 addresses to
configure all web servers in the cluster and access your web applications that run in the
highly available web server environment.
Network requirement: The IPv6 network among the clients and the cluster must already be
set up and available for access. Every client must be able to ping the clustered IPv6
address.
HTTP Server for i with HA IPv6 support has the following requirements:
Software requirements:
– 5770-SS1 option 41 HA Switchable Resources
– 5770-DG1 *BASE HTTP Server for i
– 5770-HAS *BASE IBM PowerHA for i Standard Edition
– 5770-HAS option 1 IBM PowerHA for i Enterprise Edition
Required PTFs: Current Group PTF for 5770DG1 SF99368 (minimum level 10)
13.5 IBM HTTP Server for i support for TLSv1.1 and TLSv1.2
IBM HTTP Server for i (5770-DG1) now supports Transport Layer Security Protocol (TLS)
v1.1 and v1.2.
For details about configuring TLSv1.1 and TLSv1.2 see 9.4, “System SSL support for
transport layer security version 1.2” on page 433. You can also refer to the following Secure
Sockets Layer (SSL) topic in the IBM i 7.1 Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzain/rzainoverview.htm
?lang=en
The QSSLPCL system value must be changed to include *TLSV1.2 if you want to use this
version of TLS.
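For example, a minimal sketch that uses the Change System Value (CHGSYSVAL) command; the exact list of protocols to enable depends on your environment:
CHGSYSVAL SYSVAL(QSSLPCL) VALUE('*TLSV1.2 *TLSV1.1 *TLSV1')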
For configuring SSL and TLS, use the following directives in the HTTP Server configuration
file (httpd.conf):
SSLVersion
SSLProxyVersion
SSLCipherSpec
SSLCipherBan
SSLCipherRequire
SSLProxyCipherSpec
Table 13-1 List of TLS ciphers now supported for IBM i SSL-based applications
HEX   Short Name   Key Size   Long Name                             SSL V3   TLS v1.0   TLS v1.1   TLS v1.2
0x01  31           0          TLS_RSA_WITH_NULL_MD5                 x        x          x          x
0x02  32           0          TLS_RSA_WITH_NULL_SHA                 x        x          x          x
0x03  33           40         TLS_RSA_EXPORT_WITH_RC4_40_MD5        x        x
0x06  36           40         TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5    x        x
0x09  39           56         TLS_RSA_WITH_DES_CBC_SHA              x        x          x
For example, the HTTP Server configuration file (httpd.conf) can include the following
directives:
SSLVersion TLSV1.2
SSLCipherSpec TLS_RSA_WITH_AES_256_CBC_SHA256
The latest enhancement for TLSv1.1 and TLSv1.2 added the last three ciphers:
TLS_RSA_WITH_NULL_SHA256
TLS_RSA_WITH_AES_128_CBC_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA256
For more information about the IBM HTTP Server for i, see the IBM i 7.1 Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzaie/rzaiemain.htm?lan
g=en
For clients using IBM WebSphere Application Server on IBM i, installing the latest fix packs
provides the corrected HTTP server plug-in. For more information, see the IBM i Technology
Updates, Web integration on i wiki on IBM developerWorks:
http://www.ibm.com/developerworks/ibmi/techupdates/web
The IBM i operating system integrates software technologies that support the externalization
of an ILE program object as a web service and the consumption of a web service by an ILE
program object. These technologies are the integrated web services server and the
integrated web services client for ILE.
The following sections describe enhancements that were made to the integrated web
services support on IBM i.
For the latest news about Integrated Web Services for i support, go to:
http://www.ibm.com/Systems/i/software/iws/
In the past, the tools supported only the generation of C and C++ code, and you had to
manually generate the program or service program that contained the generated code. The
tools were enhanced to generate RPG code in addition to automatically creating the service
program that contains the generated code.
To use this support, ensure that you have the latest PTFs or the latest replacement PTFs. At
the time of publication, the latest PTFs are SI44364 and SI44363.
More information can be found in the Web Services Client for ILE Programming Guide at:
http://www.ibm.com/Systems/i/software/iws/documentation.html
14.1.2 Accessing Web Services using DB2 for i UDFs and UDTFs
It is now possible to access Web Services using user-defined functions (UDF) and
user-defined table functions (UDTF) from DB2 for i. See the Accessing web services: Using
IBM DB2 for i HTTP UDFs and UDTFs article at the following website:
http://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sys_wp_acce
ss_web_service_db2_i_udf
Note: IBM WebSphere Application Server Express V6.0 is not included because it is not
supported and is not operational on IBM i 7.1.
Figure 14-3 History of the development of WebSphere Application Server for i with added functionality
14.2.4 Installation
Installing IBM WebSphere Application Server V6.1 on IBM i 7.1 requires a refreshed
installation version of the product. The refreshed installation version of IBM WebSphere
Application Server V6.1 is based on fix level 6.1.0.29. For ordering information, go to:
http://www.ibm.com/systems/i/software/websphere
For IBM WebSphere Application Server V7.0, apply Fix Pack 7 (7.0.0.7) or later after you
complete the installation.
To use the installation management support, your IBM HTTP Server for i (5770-DG1) must be
at the following minimum Group PTF levels:
For IBM i 7.1: SF99368 level 16
Ensure that your user profile has at least the *ALLOBJ, *IOSYSCFG, and *SECADM special
authorities.
Verify that your HTTP Admin server is active by running WRKACTJOB SBS(QHTTPSVR). The
displayed jobs should look as shown in Figure 14-4.
Figure 14-4 QHTTPSVR subsystem
In this window, you can install, update, uninstall, and review the fix level of WebSphere
Application Server V8.0 and later.
You can use the Install button to install a new WebSphere Application Server product on your
system. If the Installation Manager tool is not on your system, the wizard helps you install or
update it to the required level.
3. Specify the location of your WebSphere installation files, as shown in Figure 14-8. The
product packages can be on your local system or a remote server. To access the
packages on the local system, enter the location of the packages into the Location field or
click Browse to find the location. If accessing the packages’ location requires
authentication, select Specify authentication to access the install location and
complete the User and Password fields. Click Next.
5. The next two steps (5 and 6) are just verification steps that are not described here. In the
final window, click Finish to start the installation, as shown in Figure 14-10.
The product and all profiles that are based on it are deleted from the system.
To see the installed fix levels, select the installation and click View Fix, as shown in
Figure 14-14.
You can also delete fixes through the View Fix function. This topic is not described in this
book.
Before you upgrade to IBM i 7.1, ensure that all WebSphere Application Server installations
meet the minimum required fix levels. The version identifier is contained in the
<app_server_root>/properties/version/WAS.product file, where <app_server_root> is the
root directory of the IBM WebSphere Application Server installation. The version is also
displayed on the IBM Web Administration GUI. It is listed on the introduction page under the
Properties link. For WebSphere Application Server V6.1, apply Fix Pack 29 (6.1.0.29) or later
if needed. For WebSphere Application Server V7.0, apply Fix Pack 7 (7.0.0.7) or later if
needed.
Tip: Switch any WebSphere servers that are running classic Java to J9 before the OS
upgrade. You can accomplish this task by running the following command:
/qibm/proddata/websphere/appserver/<version>/<edition>/bin/enablejvm -jvm std32
When you upgrade to IBM i 7.1, enable WebSphere Application Server to use the IBM
Technology for Java virtual machine. The classic Java virtual machine is not available on
IBM i 7.1, so an installation that uses classic Java is not operational until it is enabled to use
the IBM Technology for Java virtual machine.
After you upgrade to IBM i 7.1, if you upgraded from V5R4 or earlier, update the WebSphere
Application Server service programs for IBM i 7.1. To update the programs, complete the
following steps:
1. Start the Qshell interpreter.
2. Change the directory to <app_server_root>/bin.
3. Run export OSVER=V7R1M0 to export the OSVER environment variable to the
Qshell environment.
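A minimal Qshell sketch of these steps, using the V7.0 installation root from the earlier example:
STRQSH
cd /QIBM/ProdData/WebSphere/AppServer/V7/Base/bin
export OSVER=V7R1M0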
Considerations
Note the following items:
Do not use the same response files that are used with WebSphere Application Server V7
or earlier to install or uninstall Version 8 and later. Use response files that are based on
Installation Manager to install, update, or uninstall Version 8 and later.
The Installation Manager GUI is not available on IBM i; all interaction with Installation
Manager on IBM i is done through the command line or response files.
For more information about Installation Manager, go to the IBM Installation Manager Version
1.4 Knowledge Center at:
http://publib.boulder.ibm.com/infocenter/install/v1r4/index.jsp
IBM Web Administration for i has several wizards that guide you through a series of advanced
steps to accomplish a task. With a few clicks, you can have a web server or web application
server running in no time.
Enhancements were made to IBM Web Administration for i to include a web log monitor and
permissions.
Users can set rules for what Web Log Monitor inspects. If a match is found, a notification is
sent to one of the following destinations:
The *QSYSOPR system message queue
One or more email addresses
Both the *QSYSOPR system message queue and email addresses
Web Log Monitor ensures that important messages are not lost or missed.
Remember: Web Log Monitor inspects the log files only while Web Administration for i is
started. The minimum supported OS level is IBM i 6.1 with PTF group SF99115 level 12 or later.
8. Specify the log file to monitor. Click Browse to select the log file (Figure 14-21). Only log
files that you are authorized to use are shown in the browser window. Select the log file
and click Next.
You can use keywords to filter the content of the specified log file. To specify more than
one keyword, use a comma or semicolon to separate each keyword. Three modes are
provided:
– Match any keyword
For example, if the monitored file contains a line such as “JVMDUMP0061 Processing
Dump Event gpf.detail- Please wait” and the keywords that are specified here are
Dump, Failed, and Error, then the line is considered a match.
– Match all keywords
For example, if the monitored file contains a line such as “JVMDUMP0061 Processing
Dump Event gpf.detail- Please wait” and the keywords that are specified here are
Dump, Event, and “ Wait” (with a blank in front of Wait), this line is not considered a
match, because the white space or blank character in front of Wait is treated as part of
the keyword. If the specified keywords are Dump, Event, and Wait, this line is
considered a match, as all three specified keywords are in the line.
– Keyword A+any string+Keyword B
For example, if the monitored file contains a line such as “JVMDUMP0061 Processing
Dump Event gpf.detail- Please wait” and the keywords that are specified here are
Dump and detail, then this line is considered a match.
To disable Web Log Monitor, click Disable Web Log Monitor on the Web Log Monitor
introduction window.
Requirement: Only users who have Developer or higher authority can configure Web Log
Monitor.
14.3.2 Permissions
By default, only users with *IOSYSCFG and *ALLOBJ special authority can manage and
create web-related servers on the system through IBM Web Administration for i. To get to the
Permission tab, go to the IBM i Task window and click IBM Web Administration for i →
Advanced → Permissions.
A permission is the ability to perform an operation on a server. The ability for a user to
perform operations on a server is determined by the role they are assigned for the server. The
Web Administration for i roles are defined with the permissions listed in Table 14-1.
Permission                       Administrator   Developer   Operator
Start/Stop server                x               x           x
Delete server                    x               x
Install/Remove applications (a)  x               x
Start/Stop applications          x               x           x
Create Server (b)                x
a. Web services that are deployed with integrated web services servers.
b. An administrator that grants permissions to a user profile must explicitly grant the create-server permission.
A new feature, group profile support, adds the ability to grant or revoke permissions for a
group of users all at the same time. Otherwise, these users must be granted or revoked
permissions separately, which is time consuming and error prone. When this feature is used
and a user has one or more supplemental groups, the permissions of the individual profile
are combined with those of its groups. The cumulative, highest permissions apply when the
user needs appropriate permissions to perform an operation through the Web Administration
interface.
The group profile support applies only to IBM i 6.1 and later.
Users who are granted permission to servers can be given a different role for each server.
When a user is granted permission to create new servers, any server that they create is
automatically updated to give them developer permission to that newly created server.
Modifying permissions
The Modify Permissions wizard allows an administrator to modify permissions for a specified
server or user. The Modify Permissions wizard guides the administrator through this process.
The Modify Permissions wizard considers all aspects of a selected server. If a selected
application server is associated with an HTTP Server, the Modify Permissions wizard
considers this situation and recommends that permissions are specified correctly for that
entire web environment. Either add or remove the permissions for all servers within that
specified web environment. This action ensures that the specified user can either
successfully manage the server based on the granted permissions or no longer successfully
manage the server.
Removing permissions
The Remove Permissions wizard provides an administrator with the ability to remove the
permissions for a specified server or user. Removing permissions removes the ability of the
specified user to work with and manage a server within the IBM Web Administration for i
interface.
The Remove Permissions wizard considers all aspects of a selected server. If a selected
application server is associated with an HTTP Server, the Remove Permissions wizard takes
this situation into account and also removes the permissions for all servers within that
specified web environment. This action ensures that the specified user no longer successfully
manages the server.
If you are using WebSphere Application Server V6 and upgrading to IBM i 7.1, you must
update the web performance profile. The classic Java virtual machine is not available for
IBM i 7.1. If your WebSphere Application Server installation is enabled to use classic Java, it
is not operational until it is enabled to use IBM Technology for Java virtual machine. For more
information, see 15.7.1, “Enabling IBM Technology for Java virtual machine” on page 599.
For more information about ILE Compilers, see 16.3, “IBM Rational Development Studio for i”
on page 636.
For more information about these commands, see the following website:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frbam6%2
Frbam6whatsnew.htm
The Retrieve CL Source (RTVCLSRC) command can be used to retrieve control language (CL)
source statements from an Integrated Language Environment (ILE) module. The module must
be created with the Create CL Module (CRTCLMOD) command or the Create Bound CL Program
(CRTBNDCL) command with *YES specified for the ALWRTVSRC parameter. The module that
contains the CL source to be retrieved can be a module (*MODULE) object or a module within
an ILE program (*PGM) or service program (*SRVPGM) object.
The following command retrieves the CL source from module MOD1 in the ILE program
MYCLPGM:
RTVCLSRC PGM(MYCLPGM) MODULE(MOD1) SRCFILE(MYLIB/QCLSRC)
The retrieved CL source is stored in member MOD1 of the source physical file QCLSRC in
library MYLIB. The default value for the ALWRTVSRC parameter is *YES.
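For example, a minimal sketch of creating the module and program used in the preceding command (library, source file, and member names are illustrative):
CRTCLMOD MODULE(MYLIB/MOD1) SRCFILE(MYLIB/QCLSRC) ALWRTVSRC(*YES)
CRTPGM PGM(MYLIB/MYCLPGM) MODULE(MYLIB/MOD1)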
The Declare CL Variable (DCL) command supports a value of 8 for the LEN parameter for
signed integer (*INT) and unsigned integer (*UINT) variables if the CL source is compiled by
using the CRTCLMOD or CRTBNDCL commands. This capability is useful when you call API
programs and API procedures that define 8-byte integer fields in input or output structures.
Important: LEN(8) can be specified only if the CL source is compiled with the Create CL
Module (CRTCLMOD) or the Create Bound CL Program (CRTBNDCL) command.
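For example, a minimal sketch with illustrative variable names:
DCL VAR(&CNT64) TYPE(*INT) LEN(8)
DCL VAR(&SIZE64) TYPE(*UINT) LEN(8)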
CL source programs can contain DO and SELECT commands that are nested several levels
deep. For example, between a DO command and the corresponding ENDDO command there can
be a DOFOR command and another ENDDO command. The CL compiler supports up to 25 levels
of nesting for DO commands and SELECT commands.
You can specify OPTION(*DOSLTLVL) on the Create CL Program (CRTCLPGM), CRTCLMOD, or
CRTBNDCL commands.
This compiler option adds a column to the compiler listing, which shows the nesting levels for
the following elements:
Do (DO)
Do For (DOFOR)
Do Until (DOUNTIL)
Do While (DOWHILE)
Select (SELECT)
If you do not want to see this nesting level information, you can specify *NODOSLTLVL for the
OPTION parameter.
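For example, a minimal sketch that compiles with the nesting-level column in the listing (object and source names are illustrative):
CRTBNDCL PGM(MYLIB/NESTPGM) SRCFILE(MYLIB/QCLSRC) OPTION(*DOSLTLVL)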
In IBM i 7.1, a new keyword parameter, Debug encryption key (DBGENCKEY), was added to the
CRTCLMOD and CRTBNDCL commands. Specifying an encryption key value for the DBGENCKEY
parameter and *LIST for the DBGVIEW parameter causes the debug listing data to be encrypted
before it is stored with the module (*MODULE) or ILE program (*PGM) object. To see the listing
view during debugging, you must provide the same encryption key value.
When you start the debug session, you are prompted for the encryption key value. If the same
value is not specified for the debug session that was specified when the CL module was
created, no listing view is shown.
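For example, a minimal sketch (the encryption key value and object names are illustrative):
CRTBNDCL PGM(MYLIB/MYCLPGM) SRCFILE(MYLIB/QCLSRC) DBGVIEW(*LIST) DBGENCKEY('MYKEY123')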
The CL source to be embedded can be in another member of the same source file that is
identified on the Source file (SRCFILE) parameter of the CL compiler commands, or in another
source file. The CL compiler commands are CRTCLPGM, CRTCLMOD, and CRTBNDCL.
You can run the RTVCLSRC command later to retrieve either the original CL source (which
contains just the INCLUDE commands) or the expanded CL source (which contains the
embedded CL source commands).
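For example, a minimal sketch of an INCLUDE command inside CL source, pulling in another member (the member name is illustrative):
INCLUDE SRCMBR(COMMONDCL)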
PHP applications are easily integrated with data in IBM DB2 for i, RPG, COBOL, and other
business applications that are running on IBM i.
PHP is used for content management, customer relationship management, database access,
e-commerce, forums, blogs, wikis, and other web-based applications.
Zend and IBM worked together to deliver Zend Solutions for IBM i, a complete PHP
development and production environment solution for the IBM i platform.
Zend Server Community Edition for IBM i comes with IBM i 6.1 and 7.1.
Tip: Always use the distribution downloaded from the Zend website, which ensures that
you have the latest version of the Zend Server Community Edition.
Figure 15-1 shows the Zend application development and deployment architecture for IBM i.
Figure 15-1 Zend PHP Application development and deployment architecture for IBM i
The following sections summarize features and enhancements of Zend products for IBM i.
Zend Server CE for IBM i is a lightweight version of Zend Server, and replaces Zend Core. It
offers the following features:
Preinstalled on IBM i 6.1 and IBM i 7.1 starting April 2010.
Includes extensions and a toolkit that provides the following functions:
– Enables PHP application to easily access DB2 for i data
– Takes advantage of RPG and COBOL applications in IBM i
– Support for Program call, Procedure call, Data Area, Data Queue, Message Queue,
Commands, and System values
Zend Server replaces Zend Platform. It offers all the features that are provided in Zend
Server CE for IBM i and the following additional features:
High performance and scalability to provide customers with an improved web experience
and response time.
Delivers application uptime and reliability through enhanced PHP monitoring and
immediate problem resolution.
Includes the Java Bridge for integrating PHP application with Java application.
Includes 5250 bridge for integrating 5250 applications with PHP applications. The 5250
bridge allows running interactive IBM i based applications from a web browser.
Cloud-connected mobile deployment
Zend Framework 2 and Zend Server Gateway
PHP 5.4 and PHP 5.3 are supported; PHP 5.2 is not supported
Side-by-side migration from Zend Server 5.6 to 6.0
Improved user interface with role-based access
Global monitoring rules by application
Shared Memory Toolkit option improves speed by more than 30%
The following website shows a comparison between the features that are offered in Zend
Server CE for IBM i and Zend Server for IBM i:
http://www.zend.com/en/products/server/editions-ibm-i
Zend Studio for IBM i includes the following features and enhancements:
Enhanced to work with the integration toolkit provided with Zend Server and Zend Server
CE for IBM i
Includes a comprehensive set of editing, debugging, analysis, optimization, database, and
testing tools
Toolkit support for easy integration with earlier IBM i applications and data
Customizable and context-sensitive templates for IBM i Toolkit functions
Zend DBi provides support for open source-based applications. The application issues MySQL
data commands against the Zend DBi or MySQL database. The storage engine translates the
commands and then passes them to DB2 for i. With this solution, there is only one database to
manage, back up, and protect. See Figure 15-2.
Figure 15-2 Zend DBi and MySQL using DB2 Storage engine
For more information about Zend products for IBM i, see the following websites:
Zend and IBM i:
http://www-03.ibm.com/systems/i/software/php/index.html
Zend Products for IBM i:
http://www.zend.com/en/solutions/modernize-ibm-i
Zend Studio:
http://www.zend.com/en/products/studio/
Zend DBi:
http://www.zend.com/en/solutions/modernize-ibm-i/ibm-i-product/dbi
XMLSERVICE can be accessed from PHP, Java, Ruby, RPG, Groovy, or other languages that
can work with XML.
All the information, including several demonstrations, can be found at the following website:
http://www.youngiprofessionals.com/wiki/XMLSERVICE
For more information about XML or open source programming languages, see:
http://www.iprodeveloper.com/article/associate/unleash-your-ibm-i-with-xmlservice-
65781
XMLSERVICE can be downloaded from the Young IBM i Professionals website at:
http://www.youngiprofessionals.com/wiki/XMLSERVICE
After you have the XMLSERVICE package, upload it to your IBM i system by running the
commands that are shown in Example 15-1.
The commands that are shown in Example 15-2 finish the installation process.
CCSIDs: If you are running an IBM i system with CCSID 65535 (*HEX), you must set the
Apache web server to a valid CCSID (such as 37 or 284) by adding these directives:
DefaultFsCCSID 37
CGIJobCCSID 37
Figure 15-4 HTML file with XML tags to call an IBM i program
User ID and password: Depending on your security setup, you might need to change the
session parameters “uid” and “pwd” to a valid user ID and password.
To start the program through the web browser, enter the IBM i URL address or host name and
the previous HTML file name:
http://HostName/yourhtmlfilename.html
For more information about XMLSERVICE commands, parameters, and reserved words, see:
http://www.youngiprofessionals.com/wiki/XMLSERVICE
Only specific releases of these products are supported on IBM i 7.1. Before you upgrade to
IBM i 7.1, check the most current details about the product releases supported at the
following website:
http://www-03.ibm.com/systems/resources/systems_power_ibmi_lotus_releasesupport.pd
f
This support includes the following native APIs and a service program to create archive files:
QZIPZIP API
QZIPUNZIP API
QZIPUTIL Service program
Parameters
Table 15-1 shows the list of QZIPZIP API parameters.
CompressedFileName (Input): The name of the compressed archive file. This file is created
by the API. The path name must be in the Qlg_Path_Name_T structure.
formatName (Input): The format name to pass the user's options for compressing a file or a
directory to an archive file. For more information, see “ZIP00100 format description” on
page 596.
zipOptions (Input): This pointer passes in the user's options to the QZIPZIP API in
ZIP00100 format.
While the API reads a file before compressing it, the file is locked in shared read-only mode.
The API releases the lock on the file after it reads the file. If the file that is to be
compressed is locked, an error message is sent and further compression is stopped.
In the ZIP00100 format, the field at offset 16 (hex 10) is a CHAR(512) comment.
CompressedFileName (Input): The name of the archive file that is to be extracted. The path
name must be in the Qlg_Path_Name_T structure.
dirToPlaceDecompFiles (Input): The directory in which to place the contents of the archive
file. The path name must be in the Qlg_Path_Name_T structure.
formatName (Input): The format name to pass the user's options for extracting an archive
file. See “UNZIP100 format description” for a description of this format.
unzipOptions (Input): A pointer that passes in the user's options to the QZIPUNZIP API in
UNZIP100 format.
While this API reads a compressed archive file to extract it, the file is locked in shared
read-only mode. The API releases the lock on the file after it reads the file completely. If
the file that is to be extracted is locked, an error message is sent and further extracting is
stopped.
Java Developer Kit 1.4, 5.0, and 6 (5761-JV1 options 6, 7, and 10), which are referred to as
classic Java, are no longer supported in IBM i 7.1 and were replaced by IBM Technology for
Java. If your applications still use classic Java, you must upgrade to IBM Technology for
Java, but before you do, be aware of the following information (also see Table 15-5):
The classic Java virtual machine (JVM) is a 64-bit virtual machine. Migrating to the 32-bit
IBM Technology for Java (the default JVM) limits the Java object heap to no more than
3 GB and supports approximately 1000 threads. If you require more than 1000 threads or a
Java object heap larger than 3 GB, use the 64-bit version of IBM Technology for Java.
If you have ILE programs that use Java Native Interface functions, you must compile these
programs with teraspace storage enabled.
Adopted authority for Java program is not supported by IBM Technology for Java
virtual machine.
PASE for i now enforces stack execution disable protection.
You must install the latest Group PTF for Java SF99572.
Table 15-5 Classic Java levels and the suggested IBM Technology for Java replacement
Current product (classic Java)   Option   Replacement (IBM Technology for Java)   Option
For more information about Java for IBM i 7.1, see the IBM i 7.1 Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzahg/rzahgjava.htm
Supported options: Licensed program product 5761-JV1 options 6, 7, and 10 are the
only options that are not supported in IBM i 7.1.
To install the IBM Technology for Java virtual machine option, complete the following steps:
1. Enter the Go Licensed Program (GO LICPGM) command and select Option 10.
2. If you do not see the program listed, then complete the following steps:
a. Enter the GO LICPGM command on the command line.
b. Select Option 11 (Install licensed program).
c. Choose option 1 (Install) for licensed program 5761-JV1 *BASE and select the option
that you want to install.
3. Load the latest Java PTF group.
4. Set the JAVA_HOME environment variable to the home directory of the Java Development
Kit (JDK) that you want to use. For example, enter one of the following commands from the
command line to select the correct JDK. The full list of supported JDKs is in Table 15-6.
– ADDENVVAR ENVVAR(JAVA_HOME) VALUE('/QOpenSys/QIBM/ProdData/JavaVM/jdk14/64bit')
– ADDENVVAR ENVVAR(JAVA_HOME) VALUE('/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit')
– ADDENVVAR ENVVAR(JAVA_HOME) VALUE('/QOpenSys/QIBM/ProdData/JavaVM/jdk50/64bit')
– ADDENVVAR ENVVAR(JAVA_HOME) VALUE('/QOpenSys/QIBM/ProdData/JavaVM/jdk60/32bit')
– ADDENVVAR ENVVAR(JAVA_HOME) VALUE('/QOpenSys/QIBM/ProdData/JavaVM/jdk60/64bit')
– Similar commands apply for the other supported JDKs; see Table 15-6.
Suggestion: Upgrade to IBM Technology for Java before you migrate to IBM i 7.1.
Supported options of 5761-JV1 IBM Developer Kit for Java product are listed in Table 15-6.
Table 15-6 shows that a single 5761-JV1 option (such as options 11, 12, 14, and 15) can hold
two different JDKs, such as Java 6.0 and 6.2.6 in options 11 and 12, or Java 7.0 and 7.1 in
options 14 and 15.
Table 15-6 Supported options of 5761-JV1 IBM Developer Kit for Java product
Option JDK JAVA_HOME
For complete and regularly updated table see the following website:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20i%20T
echnology%20Updates/page/Java%20products%20and%20options%20on%20IBM%20i
This page also points to the Common Vulnerabilities and Exposures (CVE) web pages.
You can use the IBM Toolbox for Java JDBC driver to use JDBC API interfaces to issue
Structured Query Language (SQL) statements to and process results from databases on
the server.
The following sections describe the enhancements that are done to IBM Toolbox for Java
JDBC support for IBM i 7.1.
This enhancement allows JDBC clients easy access to IBM i 7.1 XML support.
This enhancement brings IBM Toolbox for Java into alignment with IBM i native JDBC support
in addition to JDBC drivers on other platforms.
To provide compatibility with earlier versions of the metadata functionality, a new connection
property, “metadata source”, can be used to force IBM Toolbox for Java JDBC to use the old
method of retrieval of database metadata.
Under the new currently committed semantics, if currently committed is enabled, then only
committed data is returned, as was the case previously, but now readers do not wait for
writers to release row locks. Instead, the data that is returned to readers is based on the
currently committed version, that is, data before the start of the write operation.
This feature also implements a way to direct the database manager to wait for the outcome
when it encounters data that is being updated.
IBM Toolbox for Java JDBC adds support for arrays as IN, OUT, and INOUT parameters to
stored procedures. However, ResultSets that are returned from stored procedures or other
queries that contain arrays are not supported.
The Lite version of the Java toolbox for IBM i provides a new, smaller-footprint toolkit for
accessing IBM i native objects from Java running on mobile devices. Table 15-7 shows the
differences between the two Java libraries.
Table 15-7   Comparison of IBM Toolbox for Java and the Lite version

IBM Toolbox for Java:
Database - JDBC (SQL) and record-level access (DDM)
Program calls (RPG, COBOL, Service Programs, and so on)
Commands
Data Queues
Data Areas
Print/Spool resources
User spaces
System values
System status

Lite version of the Java toolbox:
Database - JDBC (SQL) and record-level access (DDM)
Program calls (RPG, COBOL, Service Programs, and so on)
Commands
Data Queues
ARE provides a GUI that you can use to collect and verify a customized set of information,
system settings, and attributes about the following items:
Applications
An IBM i system
A runtime environment
ARE collects the needed information and builds it into a template. This template can then be
used in verifying the application and its environment, on the same system where the template
was built, or any other IBM i system.
When you run a template against an IBM i system, the system is verified against the
information that is stored in the template, and the results are documented in a set of reports.
These reports give a clear and concise view of what changed, or what is different.
Templates can be updated easily to include new information and settings from the ARE GUI.
The deployment template is used as an input to the ARE Core. ARE Core uses the
deployment template as the basis for comparison for the attributes and state of the
deployment on the system that is being verified.
The following sections describe some of the possible attributes and values that can be
collected by ARE:
Files and directories
Software requirements
Network
System Environment
Advanced
This plug-in can also verify that a symbolic link is truly a symbolic link and not a real file. This
plug-in is useful to detect cases where a symbolic link was replaced by an actual copy of the
file that it is supposed to be a link to.
The plug-in allows specific files and directories from the IFS to be collected and packaged into
a single archive file. You can also use it to gather native IBM i objects and libraries by saving
these native objects and libraries into SAVF objects that can then be included in the specified
archive file.
Software requirements
You can customize the template to verify various software requirements and prerequisites:
IBM i Products
PTFs
IBM i products
You can use this feature to select specific IBM i products from the list of all IBM i products.
The selected products are added to the list of products to verify in the template.
When a problem is found during product verification, the problem is added to the IBM i
product verification section of the report. The severity of the problem, which determines how
the problem is recorded in the report, can be customized for each product verified.
To select an IBM i product that is not installed on the system, click the Filter button and select
the Show all products supported by IBM i option, as shown in Figure 15-8.
PTFs
The PTFs plug-in verifies the status of individual and group PTFs on an IBM i system. It also
lists all group PTF levels on the system, which is equivalent to the information displayed by
the WRKPTFGRP CL command. The specific PTFs and group PTFs that are verified are fully
customizable. This plug-in can verify that a group PTF level meets both a minimum and
recommended level.
The primary purpose of this plug-in is to attempt to verify that the system's network
configuration is such that applications that require frequent access to the network, such as
web applications, can do so in a fast, reliable, and repeatable manner.
Ports on the system can be restricted by running CFGTCP and selecting option 4 - Work with
TCP/IP port restrictions. If an application requires a port that is already in use or is restricted,
the application might fail to start or not work correctly.
System Environment
You can use this feature to verify various system environment settings, including network
attributes, environment variables, and user IDs:
System Values and Network Attributes
Environment Variables
User IDs
Scripts and Commands
SQL Query Verifier
There is also the option to list, but not check, a value in the report, which is a useful
mechanism to automate the collection of system configuration information.
Custom plug-ins
You can use this feature to manage custom plug-ins that are included in the deployment
template. Custom plug-ins are Java classes that can augment the verification that is
performed by a deployment template. Anyone can write a custom plug-in, and after the
plug-in is created, it can be added to any deployment template.
A custom plug-in plugs into the IBM Application Runtime Expert for i environment, and is run
along with the other plug-ins selected and customized using the deployment template editor.
Plug-in Configuration
Several advanced plug-in features can be edited through the Plug-in Configuration window.
Additionally, any plug-in that can be used by a template, including plug-ins that are not
configured elsewhere in the GUI, can be configured and added to a template using the
Plug-in Configuration window.
The console is a web user interface that enables a system, or group of systems, to be verified
using a deployment template that was created or imported by using the deployment template
editor.
Requirement: A valid, enabled user profile name and password for the target system
must be provided. The user profile must have *ALLOBJ special authority on the target
system because the verification of the target system might involve the inspection of
many different files, authorities, PTFs, user profiles, and other items.
Verifying systems
Verification can be done for all systems that are specified in the console panel, even if they
have different templates. User IDs and passwords for individual systems can be set up
directly in the console window, or you can have one general user ID and password for all
systems. Such a user ID must have *ALLOBJ authority on all systems where verification
runs. This can be set up by clicking the Runtime properties button. The Console Runtime
Properties window (Figure 15-11) is then shown, where you can specify a general user ID.
After the system verification is complete, a Complete icon is shown in the status column for
that system and a brief summary of its verification is shown in the result column. If the
console failed to perform the verification on a target system, a red icon is shown, followed by
a brief description of why the verification could not be performed, as shown in
Figure 15-13.
More details about the failure can be found by clicking the View log link in the result column.
All three ARE reports (summary, detailed, and XML) are available through the link in the result
column. You can also download all the reports in an archive file by clicking the Download
Archive link.
Summary report
The summary report contains a summary of all problems that are detected during the
verification of a system. Each row in the summary table contains the results for a specific
plug-in, such as the “Authority Verifier” or “System Value Verifier” plug-ins. The icon directly
before the plug-in name indicates the highest severity problem that was found by that plug-in.
The other items in each row indicate the number of attributes that are verified by the plug-in,
and the number of problems that are found at each severity level (Error, Warning, and Info).
The final item in each row, the “Fix actions” column, indicates how many of the detected
problems can be fixed directly from the console web interface.
Figure 15-14 and Figure 15-15 show examples of two summary reports.
XML report
The XML report is an XML formatted report that contains every status and problem message
reported during verification. This report is a complete record of everything that was checked
during verification and the result of each check, even if the check did not detect a problem. In
this regard, the XML report is exactly like the detailed report, except in an XML format instead
of plain text.
XML report details: XML reports include information about how to fix detected problems.
ARE Core uses an XML report as a guide for automatically fixing detected problems.
Automatic fixes
IBM Application Runtime Expert for i offers another important feature, which is the ability to
automatically fix problems that are detected by ARE.
The following list shows some of the problems that ARE can automatically fix:
Authority: This category includes ownership, primary group, authorization list, and private
authority.
User Profiles: Some, but not all, user profile attributes can be fixed.
Symbolic Links: If ARE detects that a symbolic link is missing, it can re-create the link.
Important: It is important to understand that only certain types of detected problems can
be fixed directly from the console.
Figure 15-17 shows a summary report in which the “Authority Verifier” plug-in has a fix action
available.
Clicking the Fix action link opens a new window that summarizes all of the problems that are
detected by that plug-in that can be fixed directly from the console. You can select which
problems to fix by selecting the check box that precedes each problem description, and then
clicking Fix, as shown in Figure 15-18.
Besides the console web user interface, ARE can also fix detected problems by using a script.
For more information, see the “areFix.sh” script section in the document found at:
http://www-03.ibm.com/systems/resources/systems_i_are_script_interfaces.pdf
Prerequisite products
Here is the list of required software products on IBM i 7.1:
5770SS1 option 3 - Extended Base Directory Support
5770SS1 option 12 - Host Servers
5770SS1 option 30 - Qshell
5770SS1 option 33 - PASE
5761JV1 option 11 - J2SE 6 32 bit
5770DG1 - IBM HTTP Server for i
PTF requirements
The latest Group PTF level must be installed on the system before you install ARE. For
up-to-date PTF requirements, go to:
http://www-03.ibm.com/systems/power/software/i/are/gettingstarted.html
To benefit from the latest ARE enhancements, you must install the latest PTF. For the latest
PTFs, see the ARE support that is found at:
http://www-03.ibm.com/systems/power/software/i/are/support.html
15.11 PowerRuby
PowerRuby is a freely available and commercially supported port of the Ruby language and
of Rails, a web application development framework that is written in the Ruby language.
PowerRuby includes the native DB2 driver, which means that the MySQL database is not
necessary on IBM i. It integrates with XMLSERVICE for access to IBM i
programs and objects. For more about XMLSERVICE, see 15.4, “XMLSERVICE for IBM i” on
page 590.
The current version of IBM Rational Developer for i product is V9.0. This version is built on
Eclipse V4.2.2.
Note: This edition of IBM Rational Developer for i replaces the IBM Power
Tools feature of the former IBM Rational Developer for Power Software V8.x.
Note: The RPG and COBOL + Modernization Tools, EGL edition does not include the
HATS toolkit as the former IBM Rational Developer for i for SOA Construction 8.x product did.
However, the HATS toolkit 9.0 can be downloaded at no additional charge from this
website:
http://www.ibm.com/developerworks/downloads/ws/whats
When used in combination with the IBM i compilers (see 16.3, “IBM Rational
Development Studio for i” on page 636) and Rational Team Concert (see 16.2, “IBM Rational Team Concert” on page 628),
IBM Rational Developer for i provides a comprehensive application development
environment, including compilers, development tools, and collaborative application lifecycle
management.
The following sections focus on Rational Developer for i features for the IBM i platform:
RPG and COBOL development tools
Modernization tools
Java tools
EGL tools and IBM Rational Business Developer V9.0
Rational Team Concert client integration for IBM i
Version 9.0 fix packs
Migration to Rational Developer for i v9.0
Upgrades to Rational Developer for i v9.0
Remote System Explorer allows effective management and organization of IBM i through
these features:
Remote Connection to IBM i server
Manage IBM i objects
Manage library lists
Manage jobs
Manage commands and shells
Manage user actions
Manage objects in “Object Table View”
Editing, compiling, and debugging applications
IBM i Projects
IBM i Projects, which is shown in Figure 16-2 on page 621, allows for disconnected
development. A network connection is required only when code updates or a build are
needed, or when you must view remote resources for a project.
In disconnected mode, you work on files locally and upload them to the server after you
finish. Working in disconnected mode, you can still check source code for syntax and
semantic errors, and connect only to submit a compile when you are ready to create the
program object.
Application Diagram
Application Diagram provides a graphical view of the different resources in an IBM i native
application and their relationships to each other.
There are two different diagrams that you can look at in the Application Diagram view:
Source Call Diagram
This diagram takes ILE RPG, ILE COBOL, and CL sources as input and displays a call
graph that shows the subroutine and procedure calls.
Program Structure Diagram
This diagram takes program and service program objects as input and displays the
binding relationships between them and the modules that are bound to each program and
service program.
Report Designer
You can use the Report Designer to graphically design and modify the content of DDS printer
files. The Report Designer window provides an integrated palette for easy access to design
items.
You can use the Report Designer to group individual records and see how this group of
records will appear on the printed page. In addition, you can specify default data for each
output field, and specify which indicators are on or off.
Integrated i Debugger
You can use Integrated i Debugger to debug an application that is running on an IBM i
system. It provides an interactive graphical interface that makes it easy to debug and test your
IBM i programs.
The Web Services wizard works in the context of a web project and allows for the creation,
deployment, and testing of web services, the generation of a proxy, and the publication of
web services to a Universal Description, Discovery, and Integration (UDDI) registry.
Note: The IBM i Web Services and Java Tools are part of Rational Developer for i 9.0 -
RPG and COBOL + Modernization Tools, Java edition. Web Services wizards can be found
in Modernization Tools in both Java and EGL editions.
All Java tools are optimized for WebSphere Application Server runtimes.
Developers can now deliver web, Web 2.0, and mobile applications and services without
having to master Java and SOA programming. This allows them to create, test, and debug
EGL applications.
The following are the main features of IBM Rational Business Developer (and also of the EGL tools):
EGL transformation
– Transforms EGL source into Java, JavaScript, or COBOL code that is optimized for
deployment to application hosting environments that include Java EE servers and
traditional transactional systems.
– Streamlines development by using a single, high-level language for complete
development of the business application.
– Generates different languages for a single application, such as JavaScript for an
application user interface and Java or COBOL for the application back end.
– Increases productivity and reduces the technology learning curve to improve business
agility and competitiveness.
Simplified service creation
– Simplifies service creation, concealing the technical complexity of SOA. Multiplatform
deployment deploys applications and services on many platforms either as web
services or natively.
– Provides built-in service constructs and a facility for service generation, allowing
business-oriented developers to create SOA applications without extensive training.
– Creates EGL services and automates generation of web services from EGL services.
– Supports development and deployment of services to IBM WebSphere Application
Server on multiple platforms.
– Allows developers to work within the familiar Eclipse-based environment using their
existing development skills.
UML support
– Supports Unified Modeling Language (UML) to EGL transformations, allowing complex
applications to be modeled graphically.
– UML supports a model-driven approach that streamlines the creation of Java and Web
2.0 applications and services.
– UML supports the implementation of EGL services.
– UML supports full Create, Read, Update, Delete applications without the need for
manual coding.
Extensible platform
– Integrates with several IBM products to extend support for IBM i and expand software
lifecycle functionality.
– Extends existing IT assets and provides the extensibility, scalability, and productivity
features of an Eclipse-based platform.
– Integrates with IBM Rational Developer for i for SOA Construction and IBM Rational
Software Architect.
When Rational Developer for i is used in combination with IBM Rational Development Studio
for i compilers and Rational Team Concert, it provides a comprehensive application
development environment, including compilers, development tools, and collaborative
application lifecycle management.
For more information about Rational Team Concert and its integration with Rational Developer
for i, see 16.2, “IBM Rational Team Concert” on page 628.
Requirement: The IBM Rational Team Concert V4.0.3 client product must be installed
before you install the client integration.
For information about the latest product fix packs, see the Rational Developer for Power
Systems Software Support website (Downloads tab):
http://www-947.ibm.com/support/entry/portal/Overview/Software/Rational/Rational_De
veloper_for_i/
Upgrade to v9.0 can be done from the Rational Developer for Power Software products.
Upgrades are supported from Version 7.5.x, Version 7.6.x, Version 8.0.x, and Version 8.5.x.
Migration of projects from earlier releases is not supported because of the additional features
that were added.
IBM Rational Developer for Power Software, RPG and COBOL Tools Edition is replaced by
IBM Rational Developer for i v9.0, RPG and COBOL Tools.
IBM Rational Developer for Power Software, Power Tools is replaced by IBM Rational
Developer for i v9.0, RPG and COBOL + Modernization Tools, Java Edition.
IBM Rational Developer for i for SOA Construction is replaced by IBM Rational Developer
for i v9.0, RPG and COBOL + Modernization Tools, EGL Edition.
IBM Rational HATS Toolkit is not included in any of the editions of IBM Rational Developer for
i v9.0. It is available at no additional charge to download from the following website:
http://www.ibm.com/developerworks/downloads/ws/whats/
A full HATS runtime license must be purchased to be able to run applications that are
modernized by IBM Rational Host Access Transformation Toolkit v9.0. This has not changed
from older versions of Rational Developer tools.
Other features of IBM Rational Developer for Power Software (for example COBOL, C/C++,
Power Tools for AIX, or Linux) can be upgraded to IBM Rational Developer for AIX and Linux
v9.0.
Rational Team Concert is the core of the Rational CLM solution. Rational Team Concert is a
team collaboration tool that supports cross-platform development and features native hosting
of the Jazz Team Server. Rational Team Concert includes an integrated set of collaborative
software delivery lifecycle tools for development, including source control, change
management, and build and process management.
Rational Team Concert is offered for several platforms including IBM i, AIX, z/OS, Red Hat
(RHEL) and SUSE (SLES) Linux distributions, and Windows 2003 and 2008 Server.
Implementations can have different limitations on different platforms. Such limitations are
described in the Rational System Requirements webpage. For example, for latest 4.0.3 and
4.0.4 versions of Rational Team Concert, see the following website:
https://jazz.net/wiki/bin/view/Deployment/CLMSystemRequirements403
[Figure: Jazz platform architecture. Eclipse clients (Jazz client extensions), web clients
(Web 2.0), Microsoft .NET clients (Visual Studio), and Rational desktop clients (for example,
Rational Software Architect) work against the Jazz services layer, which provides
collaboration, best practices, administration (users, projects, and process), presentation
(mashups), storage, discovery, and query services, and which can be extended by Business
Partner extensions and your own extensions.]
Using Rational Team Concert and Rational Developer for i together, software development
teams can develop IBM i applications using the tools that are provided by Rational Developer
for i and the planning, team collaboration, build, source control management, defect tracking,
and deployment tools that are provided by Rational Team Concert.
Using the IBM i Projects perspective available with Rational Developer for i, Rational Team
Concert and Rational Developer for i work together so that you can share and modify files that
are managed by Jazz based source control, in addition to files on the remote IBM i system.
Remember: You cannot install the Rational Team Concert client integration for IBM i
feature if the Rational Team Concert client is not yet installed; the feature is not
available until it is.
Figure 16-7 Installing Rational Team Concert Support for RDi v9.0
To make your IBM i Project available to other team members, complete the following steps:
1. From the menu for your i Project, click Team → Share Project, as shown in Figure 16-8.
Consideration: If your IBM i Project contains any file types or folders that are ignored
by Eclipse, the wizard prompts you to review them.
The IBM i Project is now added to a component in a repository workspace. Changes that you
make in the i Project are copied to the repository workspace when you check them in.
For version 3, there are the 3.0.0 and 3.0.1 releases, and for version 4 there are the 4.0.1,
4.0.2, 4.0.3, and 4.0.4 releases. Each release provides new functions and interoperability
for the Rational Team Concert product.
The support is mainly based on the version of the Eclipse environment, the Rational Team
Concert client support, and the version of the Rational development tools. Table 16-2 can be
used as an initial rough reference of interoperability.
Table 16-2 General interoperability table for Rational Team Concert and RDi/RDP products versions
Rational Team Concert Supported Eclipse Supported version of
Version Environments Rational Development tools
Attention: Because Rational Team Concert provides a rich set of functions and the Rational
development products are generally complex, it is necessary to check the actual
requirements for the individual Rational Team Concert and Rational development products.
There are many other factors, such as the Eclipse version, the JVM used, and server and
client requirements. It is usually necessary to verify the compatibility of all functions.
16.2.4 General links for more information about Rational Team Concert
For more information about Rational Team Concert, see the following websites:
Projects on jazz.net
http://jazz.net/projects/rational-team-concert/
Wiki that includes tutorials and articles on various topics
https://jazz.net/wiki/bin/view/Main/RTCHome
End to end tutorial
https://jazz.net/wiki/bin/view/Main/RTCpHome
ALM on IBM developerWorks
https://www.ibm.com/developerworks/rational/community/alm.html
• Feature 1 – ADTS: stabilized (PDM, SEU, SDA, and RLU)
• Feature 2 – OPM compilers: stabilized (RPG/400, COBOL/400, and the S/36 and S/38
compilers)
• Feature 3 – ILE compilers: continued investment (RPG IV, ILE COBOL, ILE C, and ILE C++)
Figure 16-12 Rebranding the WebSphere Development Studio product
Application Development ToolSet (ADTS) and Original Program Model (OPM) Compilers
were previously stabilized. However, Rational Development Studio for i V7.1 does include
enhancements for the ILE RPG and ILE COBOL compilers.
In Version 7.1, the ILE compilers (RPG, COBOL, CL, C, and C++) and precompilers have a
new parameter that you can use to encrypt your debug views. You can send code that can be
debugged and know that your code is not exposed.
With the DBGENCKEY compiler option, you can specify the encryption key that is used to encrypt
the program source that is embedded in debug views. The debugger requires the user to
supply the same encryption key before the debug view can be displayed.
To use this capability to protect source code and at the same time allow the usage of the
debug view, complete the following steps:
1. Encrypt the debug view so that it is visible only to someone who knows the
encryption key by running the following command:
CRTBNDRPG MYPGM DBGENCKEY('my secret code')
2. Then, either run STRDBG MYPGM DBGENCKEY('my secret code'), or run STRDBG MYPGM and wait
to be prompted for the encryption key.
Important: For customers using the Source Entry Utility (SEU) to edit ILE RPG source, the
syntax checkers do not recognize any features that were added after V6R1. All new and
subsequent updates to the ILE RPG IV language will be made only in the Rational Developer
for i product. They will not be available in the ADTS SEU editor.
Figure 16-13 shows an example ILE RPG program before the latest enhancements and
updates.
Figure 16-13 OPM IBM RPG/400® (RPG III) from OS/400 V2R3
Figure 16-14 ILE RPG Program example as it looked after V5R3 ILE RPG updates
Figure 16-15 ILE RPG program example as it looks after latest V7.1 ILE RPG enhancements
For more information about RPG, see the following RPG Café website:
https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-00
00-0000-000000002284
D emp             ds
D  id                            7p 0
D  type                         10a
D  value                       100a
 /free
   xml-into emp %xml('emp.xml' : 'datasubf=value doc=file');
   // emp.id = 13573
   // emp.type = 'regular'
   // emp.value = 'John Smith'
An array can be sorted in ascending order by using SORTA(A) and descending order by using
SORTA(D). The array cannot be a sequenced array (ASCEND or DESCEND keyword), as shown in
Example 16-3.
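For illustration, a minimal sketch with a hypothetical standalone array:

D values          S             10i 0 dim(5)
 /free
   // values must not be defined with the ASCEND or DESCEND keyword
   sorta(a) values;   // sort ascending
   sorta(d) values;   // sort descending
 /end-free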
The %LEN function can be used with a new optional second parameter *MAX to obtain the
maximum number of characters for a varying-length character, UCS-2, or Graphic field.
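For illustration, a minimal sketch with a hypothetical varying-length field:

D name            S            100a   varying
D curLen          S             10i 0
D maxLen          S             10i 0
 /free
   name = 'John Smith';
   curLen = %len(name);          // current length: 10
   maxLen = %len(name : *max);   // declared maximum: 100
 /end-free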
The %PARMNUM built-in function returns a parameter’s position in the parameter list, as
shown in Example 16-5.
Soft-coding the parameter’s number makes the code easier to read and maintain.
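As an additional hedged sketch of the typical idiom, %PARMNUM is paired with %PARMS so
that an optional parameter is tested by name rather than by a hard-coded position (the
procedure and parameter names here are hypothetical):

D myProc          PI
D  name                        10a   const
D  id                           9p 0 options(*nopass)
 /free
   // Test the optional parameter by name instead of a hard-coded number
   if %parms >= %parmnum(id);
     // the id parameter was passed by the caller
   endif;
 /end-free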
D custDs e ds ALIAS
D QUALIFIED EXTNAME(custFile)
/free
custDs.customer_name = 'John Smith';
custDs.customer_address = '123 Mockingbird Lane';
custDs.id = 12345;
Here are some programs and procedures that do not require a prototype:
An exit program, or the command-processing program for a command
A program or procedure that is never intended to be called from RPG
A procedure that is not exported from the module
P sayHello        b
D sayHello        PI
D  name                        10a   const
 /free
   dsply ('Hello ' + name);
 /end-free
P sayHello        e
In Example 16-8, there is only a “makeTitle” procedure with a UCS-2 parameter and a return
value. If the passed parameter is alpha or DBCS, it is converted to UCS-2 on the call. The
procedure works with the UCS-2 parameter and returns a UCS-2 value. This returned value
can then be converted on assignment to alpha or DBCS, if necessary.
When a procedure is prototyped to return a large value, especially a large varying value, the
performance for calling the procedure can be improved by defining the procedure with this
keyword.
The impact on performance because of the RTNPARM keyword varies from having a small
negative impact to having a large positive impact. There can be a small negative impact when
the prototyped return value is relatively small, such as an integer, or a small data structure.
There is improvement when the prototyped return value is a larger value, such as a
32767-byte data structure. The performance improvement is most apparent when the
prototyped return value is a large varying length string, and the actual returned value is
relatively small. For example, the prototype defines the return value as a 1 million byte varying
length character string, and the value 'abc' is returned.
Using RTNPARM for a procedure prototype can also reduce the amount of automatic storage
that is required for other procedures that contain calls to that procedure. For example, if
procedure MYCALLER contains a call to procedure MYPROC that returns a large value,
procedure MYCALLER requires more automatic storage (even if MYCALLER does not call
procedure MYPROC at run time). In certain cases, procedure MYCALLER cannot compile
because of excessive automatic storage requirements; in other cases, MYCALLER is not able
to be called because the total automatic storage on the call stack exceeds the maximum.
Using RTNPARM avoids this problem with additional automatic storage.
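As a hedged sketch, RTNPARM is coded as a keyword on the prototype (the names here are
hypothetical), and the caller still codes an ordinary call:

D getValue        PR       1000000a   varying rtnparm
D  key                         10a   const
D result          S        1000000a   varying
 /free
   // Internally, the compiler passes the large return value as a
   // hidden parameter instead of copying it on the call stack
   result = getValue('abc');
 /end-free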
Using SEU: For customers using SEU to edit ILE COBOL source, the syntax checkers do
not recognize any features that were added after V6R1.
The COMP-5 type is a true integer. This type is a native binary data type that is supported by
the USAGE clause. COMP-5 data items are represented in storage as binary data, and can
contain values up to the capacity of the native binary representation (2, 4, or 8 bytes).
When numeric data is moved or stored into a COMP-5 item, truncation occurs at the binary
field size rather than at the COBOL picture size limit. When a COMP-5 item is referenced, the
full binary field size is used in the operation.
COMP-5 is supported by COBOL on IBM System z®. This support enhances portability to or
from COBOL on other IBM platforms and operating systems.
Table 16-3 shows the equivalent SQL data types for the COBOL COMP-5 data type.
Encrypted debug view (see 16.3.1, “Source code protection option” on page 636).
Allows programmers to include a debug view with their application that is only visible with
an encryption key.
OPTIMIZE(*NEVER) and NEVEROPTIMIZE reduce the size of the generated code by preventing
the COBOL compiler from generating the information that is necessary to optimize the program.
Additionally, the activation group parameter ACTGRP on the CRTBNDCBL command now has a
new default option value, *STGMDL. When you specify STGMDL(*TERASPACE), the program is
activated in the QILETS activation group. For all other storage models, the program is
activated in the QILE activation group when it is called.
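For illustration, the following command is a hedged sketch (the program and source file
names are hypothetical):

/* With ACTGRP(*STGMDL) and STGMDL(*TERASPACE), the program runs in QILETS */
CRTBNDCBL PGM(MYLIB/MYPGM) SRCFILE(MYLIB/QCBLLESRC) +
          STGMDL(*TERASPACE) ACTGRP(*STGMDL)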
Example 16-10 shows the usage of a PROCESS statement with XML GENERATE.
Predefined macros
These macros can be grouped either for C or for C++. Most of them are new, and others were
modified.
C macros
Here are the predefined macros for C:
__C99_CPLUSCMT indicates support for C++ style comments. It can be defined when the
LANGLVL(*EXTENDED) compiler option is in effect.
__IBMC__ indicates the version of the C compiler. It returns an integer of the form VRM,
where V represents the version, R the release, and M the modification level. For example,
using the IBM i 7.1 compiler with the TGTRLS(*CURRENT) compiler option, __IBMC__
returns the integer value 710.
__ILEC400__ indicates that the ILE C compiler is being used.
__ILEC400_TGTVRM__ is functionally equivalent to the __OS400_TGTVRM__ macro.
__SIZE_TYPE__ indicates the underlying type of size_t on the current platform. For IBM i,
it is unsigned int.
C++ macros
Here are the predefined macros for C++:
__BOOL__ indicates that the bool keyword is accepted.
__cplusplus98__interface__ can be defined when the LANGLVL(*ANSI) compiler option is
specified.
__C99_COMPOUND_LITERAL indicates support for compound literals and can be
defined when the LANGLVL(*EXTENDED) compiler option is in effect.
__C99_FUNC__ indicates support for the __func__ predefined identifier and can be
defined when the LANGLVL(*EXTENDED) compiler option is in effect.
C and C++
Here are the predefined macros for C and C++:
__BASE_FILE__ indicates the fully qualified name of the primary source file.
__IBM_DFP__ indicates support for decimal floating-point types and can be defined when
the LANGLVL(*EXTENDED) compiler option is in effect.
__IBM_INCLUDE_NEXT indicates support for the #include_next preprocessing directive.
__IBM_TYPEOF__ indicates support for the __typeof__ or typeof keyword. This macro is
always defined for C. For C++, it is defined when the LANGLVL(*EXTENDED) compiler option
is in effect.
__IFS_IO__ can be defined when the SYSIFCOPT(*IFSIO) or SYSIFCOPT(*IFS64IO)
compiler option is specified.
Pragmas
The #pragma preprocessor directive allows each compiler to implement compiler-specific
features that can be turned on and off with the #pragma statement. The do_not_instantiate
and namemanglingrule pragmas are included in IBM i 7.1.
do_not_instantiate
The #pragma do_not_instantiate directive suppresses the instantiation of a specified entity. It
is typically used to suppress the instantiation of an entity for which a specific definition is
supplied. If you are handling template instantiations manually (that is, compiler options
TEMPLATE(*NONE) and TMPLREG(*NONE) are in effect), and the specified template instantiation
exists in another compilation unit, using #pragma do_not_instantiate ensures that you do not
get multiple symbol definitions during the link step.
namemanglingrule
Name mangling or name decoration is a technique that is used to solve various problems that
are caused by the need to resolve unique names for programming entities. You can use it to
encode additional metadata information in the name of a function, structure, class, or another
data type to pass more semantic information from the compilers to linkers. Most of the time,
you need it when the language allows entities to be named with the same identifier if they
occupy another namespace, which is typically defined by a module, class, or explicit
namespace directive.
The #pragma namemanglingrule directive provides fine-grained control over the name
mangling scheme in effect for selected portions of source code, specifically regarding the
mangling of cv-qualifiers in function parameters. You can use it to control whether top-level
cv-qualifiers are mangled in function parameters or whether intermediate-level cv-qualifiers
are considered when the compiler compares repeated function parameters for equivalence.
In general, decimal floating-point operations are emulated with binary fixed-point integers.
Decimal numbers are traditionally held in a binary-coded decimal (BCD) format. Although
BCD provides sufficient accuracy for decimal calculation, it imposes a heavy cost in
performance because it is usually implemented in software.
IBM POWER6 and POWER7 processor-based systems provide hardware support for decimal
floating-point arithmetic. The POWER microprocessor core includes a decimal floating-point
unit that provides acceleration for decimal floating-point arithmetic.
ILE C++ compiler enhancements that are available with Rational Development Studio for i
V7.1 include decimal floating-point support. The support has the following features:
Allows floating-point computations to be performed by using decimal arithmetic (base 10).
Avoids potential rounding errors when you convert binary floating-point data to or from
human-readable formats.
Conforms to the decimal formats and arithmetic that is described in the IEEE 754-2008
Standard for Floating-Point Arithmetic.
Adds support to the ILE C++ compiler, which is based on Draft Technical Report 24732
submitted to the ISO/IEC JTC1/SC22/WG14 Programming Language C committee.
New data types:
– _Decimal32, 4 bytes, 7 digits precision, and -95/+96 exponent
– _Decimal64, 8 bytes, 16 digits precision, and -383/+384 exponent
– _Decimal128, 16 bytes, 34 digits precision, and -6143/+6144 exponent
Provides conversions to / from C++ built-in data types, such as integers and binary
floating-point types
Includes the DECFLTRND option for the C++ compiler commands (CRTCPPMOD and CRTBNDCPP)
to control compile-time decimal floating-point rounding mode.
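As a minimal ILE C sketch (the variable names are hypothetical; the DD literal suffix for
_Decimal64 values comes from the Draft Technical Report 24732 mentioned above):

#include <stdio.h>

int main(void)
{
    /* _Decimal64 provides 16 decimal digits of precision. 0.10 is
       exact in decimal arithmetic, unlike in binary floating point. */
    _Decimal64 price = 0.10DD;
    _Decimal64 total = price * 3.0DD;   /* exactly 0.30 */

    /* Convert to double only for printing; the computation stays decimal */
    printf("total = %f\n", (double)total);
    return 0;
}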
Open Access opens RPG’s file I/O capabilities, allowing anyone to write innovative I/O
handlers to access other devices and resources, such as:
Browsers
Mobile devices
Cloud-computing resources
Web services
External databases
XML files
Spreadsheets
Open Access is the linkage between parts 1 and 2. Licensed program 5733-OAR is required
to use Open Access at run time.
[Figure: Rational Open Access: RPG Edition overview. An RPG application performs ordinary
file I/O (for example, WRITE operations against a file that is defined with a handler); the Open
Access handler passes each operation to its target program or resource, such as 5250
screens, JSPs, mobile devices, or other servers.]
Open Access does not provide handlers. A handler can be customer-developed, or it can be
provided by a third party, such as an ISV. The following list details the characteristics of a
handler:
A handler is a program or a procedure in a service program.
A handler can be a generic handler that can handle any file of that device type, or it can be
a handler that is specifically written to handle a particular “file”.
A handler is not required to support every operation that RPG allows for that type of file. If
the handler does not support a specific operation, then the RPG programmer must not
code that operation. For example, for a PRINTER file, if the handler does not support the
Force-End-Of-Data operation, then the RPG programmer does not code an FEOD
operation for the file.
Handler First
The handler is written before the application is written. For example, the RPG programmer
wants to use a web service that returns information for a specific set of criteria, where:
The handler provider creates a keyed database file that matches the web service.
The handler provider can tell the RPG programmer what I/O operations that the handler
supports.
The RPG programmer codes the RPG program, using the file as an externally described
keyed DISK file, with the HANDLER keyword to identify the Open Access handler (see the
sketch after this list).
The handler uses externally described data structures that are defined from the same file.
This type of handler can be written by the same RPG programmer who uses the Open
Access file, or it can be provided by an outside expert.
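As a hedged sketch of the HANDLER keyword, the file declaration might look like the
following (CITYWTHR is the file from this scenario; the service program and procedure
names are hypothetical):

FCITYWTHR  IF   E           K DISK    HANDLER('MYLIB/WTHRSRV(getCityWeather)')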
Where:
1. A city name is input as a key.
2. An RPG keyed DISK “file” is used as the interface to the web service.
3. The handler provider chooses to handle OPEN, CLOSE, and CHAIN.
The handler service program uses PF CITYWTHR to define the records and key information, as
shown in Example 16-12.
...
if info.rpgOperation = QrnOperation_CHAIN;
   // . . . call the web service using the key ... (not shown here) . . .
   if an error occurred . . .;
      info.rpgStatus = 1299;
   else;
      // set the data values from the info returned by the web service
      data.precip = . . .;
      data.temp = . . .;
   endif;
endif;
To encourage the adoption of the technology of Rational Open Access: RPG Edition and
enhance the value of the IBM i platform and the RPG language, IBM removed the additional
license entitlement requirement for Rational Open Access: RPG Edition for customers of IBM
i and IBM Rational Development Studio for i. IBM i 6.1 and 7.1 program temporary fixes
(PTFs) are available to enable Open Access: RPG Edition applications to run in the absence
of a Rational Open Access runtime license, and to move the technology of Open Access:
RPG Edition into Rational Development Studio for i. Here are descriptions of the relevant
PTFs:
A PTF for the RPG runtime function that checks for the 5733-OAR license: With this PTF,
the function does not check for the 5733-OAR license, but instead returns the value that
indicates that the license is valid.
A PTF for the RPG compiler: With this PTF, the compiler does not generate a call to the
RPG runtime license-checking function as part of the generated code for opening an Open
Access file.
A PTF to make the QOAR library part of the Rational Development Studio for i product: If
you do not have 5733-OAR, you must apply this PTF to obtain the include files that are
used for developing handlers. If you have 5733-OAR, it is not necessary to apply this PTF.
However, any further updates or fixes to the include files are available only through more
PTFs for the compiler product 57xx-WDS. 5733-OAR must be removed from the system
before the PTF is applied.
As of February 1, 2012, running an ILE RPG program using an Open Access file is available,
at no additional charge, to everyone who is licensed to use IBM i 6.1 and 7.1. Also, compiling
an ILE RPG program using an Open Access file is available, at no additional charge, to
everyone who is licensed to use the ILE compilers feature of either IBM Rational
Development Studio for i 7.1 or IBM WebSphere Development Studio for i 6.1.
For the announcement letter and more PTF information, see the RPG Cafe at:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/We131
16a562db_467e_bcd4_882013aec57a/page/Open%20Access%20announcement
Such a tool, called ARCAD-Transformer RPG, is now available from ARCAD Software. This
tool can convert RPG IV code to free format, including calculation specifications (C) and
declaration specifications (H, F, D, P).
The ARCAD-Transformer tool is delivered and maintained by ARCAD software. The product
is also sold by IBM as part of the ARCAD Pack for Rational under the name ARCAD
Converter.
Prerequisites for this tool are IBM Rational Developer for Power Software V8.5 or IBM
Rational Developer for i V9.0. ARCAD-Transformer RPG works as an Eclipse plug-in for
these Rational tools. For more information, see the ARCAD website:
http://www.arcadsoftware.com/
ARCAD Pack for Rational provides modern development enhancements to Rational Team
Concert and Rational Developer for i. Designed specifically to complement these Rational
products, ARCAD Pack provides a modern collaborative development environment,
supporting both agile and traditional methods. ARCAD Pack for Rational uses deep
dependency analysis to provide audit, impact analysis, intelligent build, and deployment
capabilities. It also allows RPG code to be converted to the latest free format RPG
specifications.
ARCAD Pack for Rational helps IBM i development teams deliver high-quality software faster
based on six main components:
ARCAD-Observer provides in-depth application analysis and visualization.
– Provides deep dependency analysis and powerful change impact analysis coupled
with graphical presentation and reporting, giving analysts and developers a much
faster understanding of applications.
– Automates program and application documentation production.
– Synchronizes dependency information with code that is in Rational Team Concert.
– Integrates with Rational Developer for i for making dependency information available to
developers when and where they need it.
ARCAD-Builder supports complex integration builds of composite IBM i applications.
– Automate 100 percent of the build process for any type of IBM i component.
– Manage and save data, and automatically restore into a new file structure.
– Automate the recompilation sequencing of dependent components.
– Manage all compilation specifics, such as SQL and ILE compilations.
– Use ARCAD-Builder with Rational Team Concert.
ARCAD-Deliver automates and synchronizes deployment across multiple platforms with
automatic rollback on error.
– Deploys any type of files to any number of servers that host UNIX, AIX, Linux,
Windows, and IBM i operating systems.
– Coordinates deployment of all platform components in a single transfer.
– Allows return to the previous release at any stage during the implementation, using
automatic rollback.
– Provides validated integration with Rational Team Concert.
ARCAD-Audit provides IBM i code audit and restructuring.
– Analyzes libraries and source code to identify relationships between components (such
as programs and files), databases, and work fields.
– Identifies application components that are no longer used.
– Provides tools for rapid clean-up of applications: compare, archive, compile, and delete
obsolete components with full traceability and security.
– Cleans applications before loading Rational Team Concert.
The ARCAD Software web page contains information about the ARCAD Pack for Rational
and ARCAD tools included in this product:
http://www.arcadsoftware.com/products-scm-rational
Figure 16-18 Application Management Toolset for i includes modified components of ADTS
The version of SEU that is included in Application Management Toolset for i supports only
editing of the CL source. It does not support editing of source members that are written in
other languages, such as RPG, COBOL, C, C++, or DDS. Like SEU, this editor provides
language-sensitive features, such as syntax checking and prompting for CL source members.
Application Management Toolset for i supports the operating system member types CL,
CLLE, CLP, TXT, and CMD in the EDTCLU command (which works like STRSEU).
The version of PDM that is included in Application Management Toolset for i can be used to
browse, move, filter, and manipulate objects of any type. However, it enables only software
development options (such as Edit and Compile) for CL objects.
Application Management Toolset for i supports the following functions from PDM:
All the menu functions of STRPDM (new command STRAMT)
All the functions of WRKLIBPDM (new command WRKLIBAMT)
All the functions of WRKOBJPDM (new command WRKOBJAMT), including FNDSTRPDM (new
command FNDSTRAMT), except for:
– No option 18 to call DFU.
– No option 34 to call ISDB.
– No option 54 to call CMPPFM.
All the functions of WRKMBRPDM (new command WRKMBRAMT), including FNDSTRPDM (new
command FNDSTRAMT), with the following exceptions:
– Option 2 (Edit) uses the new command EDTCLU, which supports only the CL, CLLE,
CLP, TXT, and CMD member types.
– No option 17 to call SDA.
– No option 19 to call RLU.
None of the other components from ADTS are included with Rational Application
Management Toolset for i.
Like WebSphere Development Studio for i and Rational Development Studio for i, ongoing
maintenance and support costs for Application Management Toolset for i are included in the
IBM i system Software Maintenance agreement (SWMA).
As a prerequisite for development activity, one of the following Rational products is needed:
Rational Developer for i V9.0
Rational Business Developer V9.0
Rational Application Developer for WebSphere Software V9.0
Rational Software Architect for WebSphere Software V9.0
Rational Developer for AIX and Linux V9.0
Note: Any of the WebSphere, WebLogic, and Apache Geronimo technologies can run on
any platform where they are supported (IBM i is not mandatory). These servers then
connect by using the Telnet or secure Telnet protocol to the IBM i server where the original
5250 application runs.
HATS V9.0 can create two types of clients: the first is web-based and runs in a web browser;
the second is a client/server based client that runs in the Lotus Notes, Lotus Expeditor, or
Eclipse SDK environments.
For detailed system requirements and possible limitations, see the following website:
http://www-01.ibm.com/support/docview.wss?uid=swg27011794
Note: Rational HATS for Multiplatforms can create applications using IBM i 5250, IBM
Mainframe 3270, and UNIX VT based screens (VT is only for capturing data, not screens).
Rational HATS for 5250 can use only IBM i 5250 screens.
Rational HATS for Multiplatforms and Rational HATS for 5250 Applications on
Multiplatforms can use any supported HTTP server on any supported Java application
server (see 16.7.1, “HATS general description” on page 663). Rational HATS for 5250
Applications can only have a Java application server runtime on IBM i.
The Rational HATS product provides both the HATS Toolkit (a Windows Eclipse-based plug-in
for the development of HATS applications) and a HATS run time. The HATS Toolkit can be
downloaded at no extra cost from the Rational HATS product web page.
Rational HATS allows you to reuse your existing assets in the following innovative ways:
Terminal applications
– Transforms the user interface of your 3270 and 5250 green-screen applications.
– Allows tunable default rendering of all non-customized panels of the application.
– Transforms specific panels using panel customizations and transformations.
– Transforms terminal application components, such as function keys and menu items,
into intuitive links, buttons, or text with images.
Web services
– Extends 3270, 5250, and VT application core business logic as web services or
JavaBeans.
– Captures panel flows, inputs, and outputs with a wizard-based macro recorder. Allows
you to edit what is captured with the Visual Macro Editor, and creates integration
objects and web services from the screen flows.
– Reuses terminal application business logic in new business processes and
applications.
For more information about the IBM Rational HATS product, see the following websites:
Rational HATS product web page:
http://www-03.ibm.com/software/products/us/en/rhats
IBM Rational Host Access Transformation Services (HATS) V9.0 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/hatshelp/v90/index.jsp
Note: IBM Navigator for i is the strategic application for IBM i administration tasks. New
administration tasks are supported only by the web application and will no longer be added
to the System i Navigator Windows client application.
Cluster Resource Services plug-in: The Cluster Resource Services plug-in for
System i Navigator from the High Availability Switchable Resources licensed program
(IBM i option 41) was removed in IBM i 7.1.
The actual PTFs are all packaged and delivered as part of the HTTP PTF group. In addition, a
number of other PTF groups are required to ensure that all parts of the IBM Navigator for i
interface function properly:
SF99368 level 18, HTTP Server group
SF99145 level 4, Performance Tools group
SF99701 level 22, Database Group PTF
SF99572 level 12, Java Group PTF
For the most current list of required PTFs for IBM Navigator for i, see the following website:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20i%20T
echnology%20Updates/page/IBM%20Navigator%20for%20i
Internet configuration tasks: Using IBM Navigator for i to perform Internet configuration
tasks using the Internet Configurations option requires additional installed IBM i licensed
programs and PTFs. This is a similar requirement to IBM i Access for Web. For more
information, see 18.5.1, “Requirements for using IBM i Access for Web” on page 825.
After you sign on, the Welcome window opens, as shown in Figure 17-3.
The containers open quickly, and you can reach endpoints with fewer clicks. It is also possible
to dynamically update the content by clicking the Reload icon as shown in Figure 17-4.
Previously, the left navigation was a static list with only core functionality enabled. Now the
interface has dynamic navigation, making the left frame a true navigation area.
Note: Do not press Enter when you use the Quick filter option, as the window updates
automatically when you enter the filtering argument. If you press Enter, you discard the
search argument and do not get the correct output.
Figure 17-7 5250 emulation using the System option in IBM i management
IBM Navigator for i can manage a target IBM i 6.1 or 7.1 system or partition. The options that
are available on that target partition or system can vary depending on the IBM i release that is
on that target system or partition.
Figure 17-9 Setting the target system to manage another system or partition
The example in Figure 17-9 shows IBM Navigator for i on an IBM i 7.1 system, with the
target system selection changed to another IBM i 7.1 system.
With the Set Target System feature, the IBM Navigator for i management server runs in one
place. One single browser can be used to manage multiple environments and management is
extended to previous IBM i environments.
Other database enhancements are discussed in various other locations in this book:
5.3.38, “Enhanced analyze program summary detail” on page 211
5.4.53, “Navigator – Improved Index Build information” on page 260
5.5.18, “IBM i Navigator improved ability to mine journals” on page 279
5.7.2, “Navigator for i - Omnifind Collection Management” on page 286
5.2.11, “New generate SQL option for modernization” on page 187
QSYS2.SYSLIMITS in 5.5.15, “Tracking important system limits” on page 270
Previously, a system security officer needed to grant the *JOBCTL user special authority so
that database analysts and database administrators could use the database tools. Because
the *JOBCTL authority allows a user to change many system critical settings that are
unrelated to database activity, it was not an easy decision for security officers to grant this
authority. In certain cases, *JOBCTL was not granted to database analysts, thus prohibiting
the usage of the full set of database tools.
In IBM i 7.1, the security officer can authorize access to the database analysis tools, and the
SQL Plan Cache. DB2 for i takes advantage of the function usage capability available in the
operating system.
A function usage group that is called QIBM_DB was created. In IBM i 7.1, there are four
function IDs in the QIBM_DB group:
QIBM_DB_SQLADM (IBM i Database Administrator tasks)
QIBM_DB_SYSMON (IBM i Database Information tasks)
QIBM_DB_DDMDRDA (DDM and DRDA Application Server Access)
QIBM_DB_ZDA (Toolbox Application Server Access)
The security officer can grant authorities by using either of the following methods:
Granting *JOBCTL special authority
Authorizing a user or group to the IBM i Database Administrator Function through
Application Administration in IBM Navigator for i
Tip: You can use the Change Function Usage (CHGFCNUSG) command, with a function ID of
QIBM_DB_SQLADM, to change the list of users that are allowed to perform database
administration operations. The function usage controls which groups or specific users are
allowed or denied authority. The CHGFCNUSG command also has a parameter
(ALLOBJAUT(*USED)) that can be used to grant function usage authority to any user who has
*ALLOBJ user special authority.
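For illustration, the following commands are a hedged sketch (the DBADMIN user profile is
hypothetical):

CHGFCNUSG FCNID(QIBM_DB_SQLADM) USER(DBADMIN) USAGE(*ALLOWED)
CHGFCNUSG FCNID(QIBM_DB_SQLADM) ALLOBJAUT(*USED)

The first command allows the DBADMIN user profile to perform database administration
operations; the second grants the function usage to any user who has *ALLOBJ special
authority.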
The access customization configuration for the database administration operations can also
be performed in a similar way for database information-related operations. The same is true
for the DDM and DRDA Application Server Access and the Toolbox Application Server
Access.
Figure 17-14 Start SQL Performance Monitor from SQL details for a job
The indexes list also includes columns for SSD (Media Preference) and Keep In Memory
values, as shown in Figure 17-17.
Figure 17-17 Index columns for SSD and Keep In Memory values
For more information related to the DB2 Media Preference, see 8.4.1, “DB2 media
preference” on page 407.
Tip: You might need to add the columns so that they show up, as they are not displayed by
default. Select the Columns option in the Actions menu to add those columns.
Sequential reads: The sequential reads are available on the next page of the Activity tab.
For more information related to this support, see 5.2.1, “XML support” on page 161.
With IBM i 7.1, support was added for listing and working with XSRs. However, there is no
support to create an XSR using this IBM Navigator for i interface.
For more information related to global variables, see 5.2.4, “Creating and using global
variables” on page 169.
For more information related to array support, see 5.2.5, “Support for arrays in procedures”
on page 170.
For more information related to FIELDPROC support, see 5.2.7, “FIELDPROC support for
encoding and encryption” on page 174.
New menu option: There is also a new menu option called “New based on...”, which you
can use to build a similar procedure from an existing one.
List support: This list support is also available for Tables, Indexes, Aliases, Views,
Functions, Triggers, Index Advice, Condensed Index Advice, SQL Plan Cache Snapshots,
SQL Plan Cache Event Monitors, Schemas, Database Transactions, and Global
Transactions.
Work with Job: From within this same interface, you can get to the corresponding job by
clicking the Work with Job button.
The Total Elapsed Time field shows the time of all the history entries plus the current entry.
For example, if you start reorganizing a table, you see an entry in the history section for the
current run. It is updated and shows in green.
If you choose to suspend that instance of the reorganization and resume it later, you see a
row in the history section for the previous instance, and a new row for this instance.
The Total elapsed time value then includes both the previous instance of the reorganization,
plus this current instance of the reorganization. The history applies to only the history of the
reorganization for one instance of the reorganization of this table. It does not show the prior
history of completed reorganizations of this table.
From within this same interface, you can now further drill down into the corresponding job by
clicking the Work with Job button.
Improved functionality: In releases before IBM i 7.1, to see whether a table was being
reorganized, you had to find the table and select the option to reorganize it. The Table
Reorganizations option in the Database Maintenance category is now available, which is
an easier way to accomplish the same task.
Figure 17-34 shows the definition of a new long schema name within an IBM i 7.1 database.
Figure 17-35 shows the support that enables the management of OmniFind text search
servers and indexes.
The support for OmniFind text search in DB2 adds simpler access to non-structured data that
is often stored in XML format.
For more information, see the OmniFind Text Search Server for DB2 for i topic in the IBM i 7.1
Knowledge Center:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frzash%2
Frzashkickoff.htm
For more information related to journal management capabilities and enhancements, see 5.3,
“Performance and query optimization” on page 189.
For more information related to Integrated Server Administration, see Chapter 11, “Integration
with IBM BladeCenter and IBM System x” on page 475.
The Create Server wizard starts. This function is described in more detail in 11.7.1, “Create
Server task” on page 484.
Starting with IBM i 7.1, several iSCSI configuration functions were simplified. For more
information about these functions, see 11.7, “IBM Navigator for i” on page 484.
You can use the Export as PDF function to save the contents of a printer output file to the
following destinations:
Your client desktop or file system
An output queue
The Integrated File System (IFS)
An attachment to an email
Requirement: For the last three options listed, the Infoprint Server licensed program
(5722-IP1) is required. Users can use the native IBM Transform Services for i
(5770-TS1) licensed program to export to the IFS, but they must map a network drive to
the IFS and then select the first option, as shown in Figure 17-50 on page 709.
Downloading and uploading files. One of the features that many clients like about the
client Navigator is the ability to easily move files from your PC or network drives to your
IBM i file system. IBM Navigator for i now has two new functions: you can now download
or upload files to your IBM i.
Right-click the file of choice, or you can highlight multiple files and right-click, to display the
action list. Select Download and the download interface is shown with all the files that you
have selected as shown in Figure 17-53. When you click the Download button, these
selected files are downloaded to your PC. You can then determine where you want them to
be located.
A temporary file system is supported. The temporary file system can be created by
creating a user-defined file system (UDFS) and specifying .TMPUDFS as the extension
instead of the default .UDFS in the UDFS name field, as shown in Figure 17-55.
For more information about temporary user-defined file system support, see 19.3,
“Temporary user-defined file systems” on page 831.
For more information related to networking enhancements in IBM i 7.1, see Chapter 9,
“Networking enhancements” on page 421.
Tip: You must follow these procedures before you can perform any disk management tasks
using IBM Navigator for i or System i Navigator. For more information, see the
Requirements for disk management topic in the IBM i 7.1 Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzaly/rzalyplanning.
htm?lang=en
For more information related to asynchronous transmission delivery for geographic mirroring,
see 4.1.6, “Asynchronous geographic mirroring” on page 135.
This function is similar to the Work with Disk Unit Recovery function to rebuild device parity
disk unit data. The rebuild function is now part of the web navigator interface. Figure 17-63
illustrates rebuilding data on a failed disk unit in a parity set.
The Virtual Partition Manager supports environments with a hosting IBM i partition and up to
four client Linux or IBM i partitions. The hosting IBM i partition must own and manage all of
the I/O resources. Each client logical partition uses virtual I/O to access disk, tape, DVD, and
Ethernet resources that are owned by the hosting partition.
HMC consideration: You cannot use Virtual Partition Manager on an IBM i server that is
configured using an HMC. This means that you must disconnect your IBM i server from the
HMC before you can use the Virtual Partition Manager. You can only use Virtual Partition
Manager on the hosting IBM i partition. PowerVM Standard or Enterprise edition is
required to support four client partitions.
IBM Navigator for i provides a graphical interface for Virtual Partition Manager. This function
requires IBM i 7.1 Technology Refresh 6 and PTFs SI48848 and SI49568.
2. Follow the Create Partition wizard instructions, which create an IBM i or Linux partition as
shown in Figure 17-67.
The Create Partition function associates virtual Ethernet connections between client
partitions and the hosting partition. This association is done by specifying the same
VLAN ID for virtual Ethernet adapters in both the client and hosting partitions.
IBM Navigator for i also creates and associates the required network server description
(NWSD) and network storage space (NWSSTG) objects in the hosting partition. There are
several options for implementing virtual Ethernet. For more information, see the following
publications:
– Virtual Partition Manager A Guide to Planning and Implementation, REDP-4013,
Chapter 6 “Establishing Network Connectivity for Linux Partitions” shows the proxy
ARP method.
– Creating IBM i Client Partitions Using Virtual Partition Manager, REDP-4806, section
entitled “Ethernet Layer-2 Bridging” shows how to set up layer-2 bridging.
3. The last step of creating a logical partition is to install an operating system on that
logical partition.
In the Tape Devices menu, you have the following options available:
Stand-Alone Devices offers the following possibilities:
– Make (un)available
– Look into the properties
– Upgrade the firmware
Tape Image Catalogs offers the following possibilities:
– Add or list volumes
– Look at the properties
Tape Libraries offers the following possibilities:
– Make (un)available
– Look into the properties
Create Image Catalog
Create Virtual Device
Figure 17-69 shows the IBM Navigator for i interface that is used to work with
performance-related tasks within IBM i 7.1.
Several enhancements were made to the Performance Data Investigator (PDI), which can be
accessed by selecting the Investigate Data task, shown in Figure 17-70.
Main system resources and components (such as processor, DASD, and memory) and
communications are analyzed. The results are displayed graphically. The main source of data
for analysis is the Collection Services performance data files.
The new content package, which deals with the general health of your partition, is shown in
Figure 17-71.
You can use this perspective to quickly determine the percentage of intervals that exceeded
the various defined thresholds for CPU, Disk, Memory Pools, and Response Time.
From the System Resources Health Indicators perspective, you can open the following new
perspectives:
CPU Health Indicators
Disk Health Indicators
Memory Pools Health Indicators
Response Time Health Indicators
From the CPU Health Indicators perspective, you can open the following perspectives:
CPU Utilization and Waits Overview
CPU Utilization Overview
Interactive Capacity CPU Utilization
From the Disk Health Indicators perspective, you can open the following perspectives:
Resource Utilization Overview
Disk Overview by Disk Pools
Disk Details by Disk Pools
From the Memory Pools Health Indicators perspective, you can open the following
perspectives:
Resource Utilization Overview
Page Faults Overview
From this perspective, you can open the 5250 Display Transactions Overview perspective.
Figure 17-79 shows how to modify the disk health indicators thresholds by specifying the
current threshold values as 10 for the Warning field and 20 for the Action field for the Average
Disk Response Time.
Figure 17-80 Threshold that is reached for average disk response time
All the individual thresholds can be added, removed, and tailored to your own specifications.
To turn on this option, from the HMC, complete the following steps:
1. Select Systems Management → Servers.
2. Click your IBM i system.
3. Select the partition profile.
4. Click Properties.
5. Click the Hardware tab.
6. Click Processors.
7. Select the Allow performance information collection check box.
8. Click OK.
Required program: IBM i 7.1 5770-PT1 licensed program Performance Tools - Manager
Feature - Option 1 must be installed to use this interface.
Database I/O
For the Physical I/O perspective, the following detailed views are available:
Physical Database I/O Overview
Physical Database I/O by Job or Task
Physical Database I/O by Thread or Task
Physical Database by Generic Job or Task
Physical Database by Job User Profile
Physical Database by Job Current User Profile
Physical Database by Subsystem
Physical Database by Server Type
This new breakdown makes it much easier to see what changed.
Select these collections with the SQL Performance Data perspectives to view several
high-level charts for a specific SQL Performance Monitor, SQL Plan Cache Snapshot, or SQL
Plan Cache Event Monitor.
Report definitions provide predefined sets of perspectives to create for a specific
collection. A report definition can be selected along with the collection that you are
interested in to generate a .pdf or a .zip file of the perspectives, as shown in
Figure 17-104. This definition makes it easy to export multiple charts or tables for a
collection at one time.
Figure 17-105 Exporting an image, comma delimited file, or a tab delimited file
The IBM Systems Workload Estimator is a web-based sizing tool for IBM Power Systems,
System i, System p®, and System x, and is available at the following website:
http://www.ibm.com/systems/support/tools/estimator
Use this tool to size a new system, to size an upgrade to an existing system, or to size a
consolidation of several systems.
Important: You must select bars of the same metric (the color / pattern must match) or
each selection clears the previous selections.
Note: The Size Next Upgrade action to start Workload Estimator (WLE) is now updated
so that the metrics for Disk Read IO (bytes) and Write IO (bytes) calculations give more
accurate information.
This perspective gives you an easy visual context of the (run time) length for jobs and the
concurrency of workload on the system. Select a thread or task to view its detailed run and
wait contributions.
Figure 17-111 shows the Waits by Job or Task. You can select the following actions:
Waits for One Job or Task
All Waits by Thread or Task
Figure 17-112 Memory Pool Sizes and Fault Rates, view one: (Pools 001-004)
Figure 17-113 Memory Metrics for One Pool: Memory metrics overview for one pool
In the Ethernet Lines Overview chart, you can click Communications Overview for One
Ethernet Line from the Actions menu as shown in Figure 17-118.
For more information related to Ethernet Link Aggregation, see 9.9, “Ethernet link
aggregation” on page 438.
This chart shows an overview of workload group dispatch latency. It shows the total delay
time for each workload group, which is the amount of time that threads that were ready to run
could not be dispatched because of the group's maximum concurrent processor limit.
The Display Latency Totals by Workload Group view is shown in Figure 17-119.
From these views, you can select a Workload Group and drill down to either of the following
views:
Dispatch Latency Totals by Thread for one Workload Group.
Dispatch Latency for One Workload Group.
The workload capping function prevents a subsystem from using more than the capacity of
the number of cores that are allocated to its workload capping group. Nothing ensures that
the threads of a capped subsystem receive the full throughput that is allowed. You can
analyze the data to determine whether the workload capping group's processor limit is
causing wait time for the threads to complete. Adding more processor cores does not ensure
that the workload group avoids other bottlenecks, for example, when the system is busy with
other uncapped subsystem threads.
The purpose of the PDI perspectives is to show the statistics of the Workload Groups as
collected by Collection Services.
For more information about Workload Groups, see 19.7, “IBM i workload groups” on page 838
and the IBM i developerWorks Wiki - IBM i Technology Updates - IBM i workload groups at:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/IBM%2
0i%20Technology%20Updates/page/IBM%20i%20workload%20groups
The charts contain general information for active Java virtual machines (JVMs).
Figure 17-121 shows the IBM Technology for Java Memory Overview perspective, which shows
the average heap sizes and memory over the entire time of the collection, broken down by
job. You can use this perspective to discover which jobs are using the memory so that you
can look at them in more depth afterward.
Figure 17-123 IBM Technology for Java - Memory for one job
Figure 17-126 Disk I/O Rates Overview with Cache Statistics - detailed
For more information about these enhancements, see “QAPMDISKRB” on page 306.
2. Select the Enable Design Mode option and click OK. The options that are chosen here
are persistent across connections for the user.
4. In the New Package window, specify a name, and click OK. The package is created and
displayed.
6. Specify a name and a description for the perspective and click Add to add a view (Table or
Chart) with a Data series and a corresponding Threshold.
Figure 17-133 Customized package and perspectives added to the main perspective tree list
After you modify the view, click OK to save the view with the changed information.
Formatting enhancements were made to the Modify SQL window, making the text easier to
read and understand. The Modify SQL action is available for charts and tables.
You can use the Metric Finder function to display perspectives that are based on a specific
metric. This function is useful when you know what type of information you are looking for but
do not know where to start, or where certain metrics are included.
Tip: You can specify a filter to limit the metric names included in the menu. The filter
facilitates your search for one specific metric without knowing its exact name. After you
enter a filter, click Apply Filter to update the metric name list. After a perspective is
selected, you can display it by clicking Display.
When you use the search function, the tree format of perspectives is replaced with the
search results. The Search button is replaced by a List button that you can use to revert
to the normal window afterward.
The following PEX Profile perspectives provide functions that are similar to what Profile Data
Trace Visualizer (PDTV) offers:
Profile by Procedure
Profile by Component
Hierarchical Trace Profile
Job/Thread List
For more information about these buckets, see “QAPMDISKRB” on page 306.
Use Active Energy Manager V4.2 (AEM) with IBM Systems Director 6.1 to set the power
savings value for IBM Power Systems servers running IBM i 7.1. This power savings value is
then used to achieve a balance between the power consumption and the performance of the
server.
Figure 17-143 shows the CPU rate (Scaled CPU: Nominal CPU Ratio) for a specific period.
Remember: When you click Investigate Job Wait Data, the chart that is rendered
includes only data that pertains to that specific job. If you click Investigate Jobs Wait Data,
the chart that is rendered includes all jobs that were active during that collection.
Figure 17-148 Improved integration with disk status and system status
By default, the name of the viewed perspective is shown as the title of the perspective
window.
You might not want this information to be displayed. If so, see the View menu that has the
Show Context check box, as shown in Figure 17-150. If you clear this check box, this
information is hidden. This choice is preserved across sessions.
The new menu at the top of the perspective has the same actions available as the one at the
bottom. However, it is available without needing to scroll down, because the actions at the
bottom are sometimes rendered off-window. As such, it improves the accessibility of the
options. From within that menu bar, you can go back to previous perspectives by clicking
the corresponding item in the History Data.
When you click Show Holder, the holding job or task information is displayed.
Selected items on charts: On certain charts (for example, Waits for one job or task), you
see which job, task, or thread was selected for drill down.
For more information that is related to Advanced Job Scheduler enhancements, see
Chapter 12, “IBM Advanced Job Scheduler for i enhancements” on page 497.
For more information related to Backup Recovery Media Services enhancements, see
Chapter 3, “Backup and recovery” on page 45.
Figure 18-1 Main ACS web interface window including information about the ACS version
ACS uses the same IBM i host servers as the other IBM i Access Family products and
requires the same IBM i Access Family license (5761-XW1 or 5770-XW1) to use the 5250
emulation and Data Transfer features.
ACS also provides two optional platform-specific packages that include middleware for using
and developing client applications:
ACS - Windows Application Package for Windows operating systems, which includes
these features:
– Connectivity to IBM DB2 for i using open database connectivity (ODBC), .NET, and OLE
DB
– Programming Toolkit for accessing IBM i system objects
– Support for Transport Layer Security (TLS) / Secure Sockets Layer (SSL) connections
– Advanced function printing (AFP) printer driver
ACS - Linux Application Package for Linux operating systems, which includes these
features:
– Connectivity to DB2 for i using ODBC
– Full support for 64-bit ODBC data types
– TCP/IP connectivity
The General Availability (GA) version of ACS is available to customers with an IBM i software
maintenance contract. It can be downloaded from the Entitled Software Support (ESS)
website under 5761-SS1 (feature codes 5817, 5818, 5819) or 5770-SS1 (feature codes 5817,
5818, or 5819). Using the hide/show option on the ESS website allows you to download just
the specific parts that you need.
The IBM Entitled Software Support (ESS) is available at the following website:
http://www-304.ibm.com/servers/eserver/ess/index.wss
A technology preview of ACS is available for an evaluation period of 120 days, at the following
website:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=swg-ia
18.1.2 Prerequisites
This section discusses ACS prerequisites.
Workstation prerequisites
ACS runs on most operating systems that support Java V6.0 or later, including various
versions of Linux, Mac OS, and Windows.
One of the ways to check the version of Java that is installed on your system is to open a
command prompt and run the following command:
java -version
Detailed instructions for running ACS are in the GettingStarted.txt file included in the archive
file of the product.
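As a minimal sketch (assuming the archive contains the main acsbundle.jar file, as
described in GettingStarted.txt), ACS can be started from the same command prompt:

java -jar acsbundle.jar

A specific function can also be started directly. For example, the following command
(mysystem is a hypothetical host name) opens a 5250 emulation session:

java -jar acsbundle.jar /PLUGIN=5250 /SYSTEM=mysystem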
Session Manager
The Session Manager supports these capabilities:
Start a saved session.
Create a display or printer session.
Create a multiple session start batch file from existing saved sessions.
Convert Macro (beta version).
Full migration support for .ws, .bch, .kmp, and .pmp files from IBM i Access for Windows.
Import colors from IBM i Access for Windows.
Figure 18-5 shows the 5250 Session Manager option to import a profile.
Figure 18-7 illustrates ACS settings for configuring the SSL feature that is used by 5250
sessions.
Figure 18-9 shows data transfer with output device set to display.
The latest updates for ACS features are available at the following website:
http://www-03.ibm.com/systems/power/software/i/access/solutions.html
For more information about configuring ACS for mobile devices, see the following website:
http://www.ibm.com/developerworks/ibmi/library/i-access_client_solutions/
The following are the .NET Data Provider enhancements for IBM i Access for Windows for
IBM i 7.1:
128-byte schema names
Support for the IBM i XML data type
Connection property to configure Concurrent Access Resolution
Support for multi-row UPDATE, DELETE, and MERGE statements
Support for Visual Studio 2008
Online help through Visual Studio
For more information, see Personal Communications for Windows Administrator's Guide and
Reference, available at the following website:
http://pic.dhe.ibm.com/infocenter/pcomhelp/v6r0/index.jsp?topic=%2Fcom.ibm.pcomm.d
oc%2Fbooks%2Fhtml%2Fadmin_guide.htm
For more information, go to the IBM i Planning website at the following address:
http://www-947.ibm.com/systems/support/i/planning/upgrade/v6r1/planstmts.html
For more details about new Navigator for i features, see Chapter 17, “IBM Navigator for i 7.1”
on page 667.
The following IBM i functions are available only in the Windows client interface (System i
Navigator) and not in IBM Navigator for i:
Management Central functions
Database functions that involve graphics and charts:
– SQL Scripts
– Visual Explain
– Database Navigator
– SQL Assist
AFP Manager capabilities:
– Resources
– PSF configurations
– Font Mapping tables
You can use the following access path through the IBM i 7.1 Knowledge Center to get to the
prerequisites: IBM i 7.1 Knowledge Center → Connecting to your system → IBM i
Access → IBM i Access for Web → Planning → Prerequisites for installing IBM i
Access for Web.
Refer to the service pack PTFs page for PTF requirements, available at the following website:
http://www-03.ibm.com/systems/power/software/i/access/web_sp.html
For more information about IBM i Access for Web, go to the following website:
http://www.ibm.com/systems/i/software/access/web
For IBM i Access for Web in a web application server environment, a new preference that is
called Use AFP to PDF Transform is available to control whether the administrator or user
can use the AFP to PDF Transform. The possible values are Yes or No (the default is Yes). To
see this new preference, click Customize → Preferences → View all preferences and look
under the Print category. The usage of this preference is similar to the usage of the “Use
Infoprint Server if installed” preference in versions earlier than IBM i 7.1.
For IBM i Access for Web in a Portal environment, the JSR168 Print portlets are enhanced to
use the AFP to PDF Transform. The PDF Output Settings (available in edit mode) have two
new options:
Use Infoprint Server, if installed
The possible values are “Yes” and “No” (the default value is “Yes”).
Use AFP to PDF Transform, if installed
The possible values are “Yes” and “No” (the default value is “Yes”).
These options control the usage of Infoprint Server and the AFP to PDF Transform. They are
similar to the policies and preferences that are used with the servlets. The IBM specification
portlets are not enhanced.
To use the AFP to PDF Transform, click Print → Printer output and select the View PDF
icon, as shown in Figure 18-17.
For more information about IBM i Access for Web AFP to PDF Transform, go to the IBM i 7.1
Knowledge Center web page:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzamm/rzammhprint.htm
The DVD installation media is consolidated for IBM i 7.1 into three sets of multiple language
version media that support a total of 51 globalizations.
With IBM i 7.1, there is no offline installation version of the IBM i Knowledge Center that is
included on physical media. The IBM i 7.1 Knowledge Center is available online:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/welcome.html
The following changes to the licensed product (LPP) structure are implemented in IBM i 7.1:
DHCP moved from the base OS to option 31 “Domain Name System”.
The Clusters GUI was withdrawn from option 41 “HA Switchable Resources” and is
available with PowerHA for i (5770-HAS).
IBM HTTP Server i (DG1) option 1 “Triggered Cache Manager” was removed.
IBM Toolbox for Java (JC1) moved to 5770-SS1 option 3 “Extended Base Directory
Support”.
IBM Developer Kit for Java (JV1) options 6 (JDK 1.4) and 7 (JDK 5.0) are no longer
supported. J2SE 6.0 32 bit is the default JVM in IBM i 7.1.
Extended Integrated Server Support for IBM i (5761-LSV) is no longer supported. Option
29 “Integrated Server Support” is available as a replacement.
IBM System i Access for Wireless (5722-XP1) was withdrawn. The IBM Systems Director
family provides similar systems management functionality.
IBM Secure Perspective for System i (5733-PS1 and 5724-PS1) was withdrawn, although
it continues to be available as a custom service offering only.
The IBM WebSphere Application Server (5733-W61 and 5733-W70) minimum required
levels for IBM i 7.1 are 6.1.0.29 and 7.0.0.7.
For more information about LPP changes, see the IBM i Memo to Users at:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzahg/rzahgmtu.htm
Before you plan an IBM i release upgrade, see the IBM i upgrade planning website, which
provides planning statements about IBM i product changes or replacements:
http://www-947.ibm.com/systems/support/i/planning/upgrade/v6r1/planstmts.html
For temporary UDFSs, the system allocates only temporary storage. These temporary files
and directories are automatically deleted after an IPL, unmount, or reclaim storage operation.
Although regular (that is, permanent) UDFSs can be created in any ASP or IASP, the
temporary UDFSs are supported in the system ASP only.
Normally, the /tmp IFS directory contains permanent objects that are not cleared when the
system is restarted. To have /tmp on IBM i behave more like other platforms, a temporary
UDFS can be mounted over /tmp so that it is cleared at system restarts. The files in a
temporary UDFS should not contain critical data because the file system is not persistent.
The CRTUDFS command and IBM Navigator for i are enhanced to support the creation of
temporary UDFSs through a new naming convention. Although names for permanent UDFSs
must end with .udfs, the names for the new temporary UDFSs adhere to the naming
convention of /dev/QASP01/newname.tmpudfs, as shown in Figure 19-1.
Figure 19-1 IBM i CRTUDFS command for creating temporary file systems
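For example, the following commands form a minimal sketch (mytmp is a hypothetical UDFS
name) that creates a temporary UDFS in the system ASP and mounts it over /tmp so that the
directory is cleared at the next IPL:

CRTUDFS UDFS('/dev/QASP01/mytmp.tmpudfs') TEXT('Temporary UDFS cleared at IPL')
ADDMFS TYPE(*UDFS) MFS('/dev/QASP01/mytmp.tmpudfs') MNTOVRDIR('/tmp')

The MOUNT command is an alias for the Add Mounted File System (ADDMFS) command.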
For more information about temporary user-defined file systems, see the IBM i 7.1 Knowledge
Center at the following web address:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/ifs/rzaaxudftempudfs
.htm
The function can also be used to end a trace when a watched event occurs. Watch
parameters exist for the following trace CL commands (see the example after this list):
Start Trace (STRTRC)
Start Communications Trace (STRCMNTRC)
Trace Internal (TRCINT)
Trace TCP/IP Application (TRCTCPAPP)
Trace Connection (TRCCNN)
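For example, the following command is a minimal sketch (TRCDSK is a hypothetical session ID)
that starts a trace of the current job and ends it automatically when message CPF0907 is
sent to the QSYSOPR message queue:

STRTRC SSNID(TRCDSK) JOB(*) WCHMSG((CPF0907)) WCHMSGQ((*SYSOPR))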
The Watch for Event function was initially available for the trace commands in V5R3M0 of
IBM i. Watches were generalized in the following release, in V5R4M0, so that the watches
were no longer tied only to trace commands. The Start Watch (STRWCH) and Start Watch API
(QSCSWCH) commands were created for the generalized support. Additionally, the Work with
Watch (WRKWCH) command was created to view watches, and the End Watch (ENDWCH) and End
Watch API (QSCEWCH) commands were created to end watches. Support to watch for
messages and LIC log entries was added in V5R4M0. Support to watch for PAL entries was
added in V6R1M0.
The IBM i Knowledge Center contains exit program information to describe all the parameters
that are passed to a watch or trace exit program.
Figure 19-2 shows the Watch for Message keyword with the STRWCH command for IBM i 7.1,
with the new message type, relational operator, and severity code fields near the bottom
portion of the window.
Figure 19-2 Start Watch command
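The following commands are a minimal sketch of the generalized support (STGWATCH is a
hypothetical session ID and MYLIB/MYWCHPGM a hypothetical watch exit program). The watch
calls the exit program when message CPF0907 arrives on the QSYSOPR message queue, and the
second command ends the watch:

STRWCH SSNID(STGWATCH) WCHPGM(MYLIB/MYWCHPGM) WCHMSG((CPF0907)) WCHMSGQ((*SYSOPR))
ENDWCH SSNID(STGWATCH)

Use the WRKWCH command to view active watch sessions.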
19.5.7 Enhanced password policy to use global date and time for initialization
When the Password Policy function is first turned on, the initialization of password policy
attributes introduces a new password policy entry attribute, ibm-pwdPolicyStartTime, that
is added to the cn=pwdPolicy entry. This attribute is generated by the server when the
administrator sends a request to turn on the Password Policy function, and the current time
is put into the attribute. The attribute is optional and cannot be deleted by a client
request. It cannot be modified by a client request, except by administrators with
administrative control, and it can be replaced by a master server-generated request. The
value of this attribute changes each time the Password Policy function is turned off and on
by an administrator.
Your total PTF installation time is shorter because none of the system jobs start during the
first IPL when the partition restarts. However, you have a longer IPL time because the system
is doing the work you previously did interactively, that is, the second GO PTF to set all PTFs
for delayed applies.
When all PTFs are set for delayed apply, you see the IPL requested by PTF processing
status message at the bottom of the panel and then the partition restarts to apply the delayed
LIC PTFs. The next time that you reach the “PTF Processing” IPL step, you see the usual
“Applying PTFs” step and the IPL continues.
To take advantage of this new function, you must have the following PTF (PTF management
code) temporarily applied before you run your PTF installation:
Version 7.1: SI43585 in HIPER PTF group SF99709 level 30 or higher
Version 6.1: SI43939 in HIPER PTF group SF99609 level 94 or higher
For Version 7.1, if an IPL is required for a technology refresh PTF, the new function supports
installing only from a virtual optical device or *SERVICE (PTFs downloaded electronically to
save files). If you are installing from a physical optical device, you still must run the additional
IPL and second GO PTF manually. So, if you received your PTFs on physical DVDs, create an
image catalog from the DVDs and use the new support.
A workload is defined as a job, subsystem, or product that is running on the IBM i system. The
user or system administrator can define a workload group, assigning a specified number of
processing cores to that group. The workload group is then assigned to a job or subsystem.
After the assignment is done, the workload is limited to the defined number of processing
cores. The system enforces this processing core assignment, ensuring that a job or all the
jobs that are running (and threads) under the subsystem are not allowed to run on more
processing cores than are designated.
For example, on a system with eight processing cores, Application #1, Application #2, and
IBM i can each use all eight cores by default. With workload groups defined, Application #1
can be limited to three cores and Application #2 to six cores, while IBM i continues to use
all eight cores.
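A minimal CL sketch of such a configuration follows (APP1GRP is a hypothetical group name
and MYLIB/APP1SBS a hypothetical subsystem description; verify the workload groups commands
and parameters on your release):

ADDWLCGRP WLCGRP(APP1GRP) PRCLMT(3)
CHGSBSD SBSD(MYLIB/APP1SBS) WLCGRP(APP1GRP)

The first command creates a workload group that is limited to three processing cores, and
the second command assigns the jobs that run in the subsystem to that group.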
This new capability can help users get better control of the workloads on their systems along
with ensuring products are using only a designated number of processor cores. Software
vendors can take advantage of the workload group support as a new virtualization
technology. A workload can be virtualized and licensed within a virtualized system. Product
entitlements can be specified based on the usage of the product instead of the total processor
cores of the LPAR.
Customers who want to take advantage of the enhanced licensing controls must register the
specified products with the native IBM i License Management tool, which facilitates both
the registration and the management of the enforcement of the workload groups.
To learn more about the workload groups support, see the IBM i 7.1 Knowledge Center:
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzaks/rzaksworkloadc
apping.htm
For details and current information about Sub-capacity Licensing terms, go to these websites:
http://www.ibm.com/software/lotus/passportadvantage/subcaplicensing.html
http://www.ibm.com/software/tivoli/products/license-metric-tool
http://www.ibm.com/software/lotus/passportadvantage/pvu_licensing_for_customers.ht
ml
To see the details, see WebSphere MQ and Workload Groups Final, found at the following
web address:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/form/anonymous/api/libra
ry/beb2d3aa-565f-41f2-b8ed-55a791b93f4f/document/d0e23be6-8d9d-4739-9f8c-fbfced730
30f/attachment/b141097b-30b0-4c2a-8603-6f069736b9d0/media/WebSphere%20MQ%20and%20W
orkload%20Groups%20Final.pdf
The following two models of the POWER processor-based Flex Compute Node support IBM i.
IBM Flex System® p260 Compute Node Model 7895-22X
IBM Flex System p460 Compute Node Model 7895-42X
The following releases of IBM i are supported on the POWER processor-based Flex
Compute Node.
IBM i 6.1 Resave RS610-10 with Licensed Internal Code 6.1.1 Resave RS-611-H or later
IBM i 7.1 with Technology Refresh 4 or later
You can also run IBM i on the POWER processor-based Compute Node in the IBM
PureFlex® System. When you order IBM PureFlex System, a dedicated management
appliance, which is called Flex System Manager (FSM), is bundled with the IBM PureFlex
System. FSM is used for managing and operating an IBM PureFlex System, including
hardware, firmware, virtualization environment, and operating systems environment on a
POWER processor-based Compute Node.
This chapter also covers new features and enhancements that are available in IBM i 7.1 that
relate to installing, upgrading, and distributing software, and to maintenance options for
IBM i, including changes in licensed program (LPP) support when you upgrade to IBM i 7.1.
The following topics are covered:
Planning to upgrade to IBM i 7.1
Upgrading from i5/OS 5.4 to IBM i 7.1 considerations
Media delivery changes
IBM i network upgrade
More considerations for upgrading to IBM i 7.1
Performance improvement to LIC PTF application
When you are planning upgrades, consider software, hardware, and the strategy for
connecting a console to your system or logical partition.
For detailed information, review the document System to IBM i maps available at the following
website:
http://www-01.ibm.com/support/docview.wss?uid=ssm1platformibmi
Figure 20-1 IBM i 7.1 hardware model support (a matrix of POWER7, POWER6/6+ (520, 550, 560,
570, 595), POWER5/5+ (515, 520, 525, 550, 570, 595), and earlier (800, 810, 825, 870, 890,
270, 820, 830, 840) server models against the IBM i releases that each model supports)
Requirement: IBM i 6.1 is required to upgrade to IBM i 7.1 for a POWER6+™ 550 system.
For enterprise clients, IBM i 7.1 is now supported on the 16-way through 256-way POWER7
795. IBM i supports up to 32 cores in a single partition. You can contact IBM Lab Services
about an offering to grow beyond 32 cores in a single partition.
To map between the SLIC Resave level and the IBM i Resave level, see Table 20-1.
For more information about Resave, see 1.3.1, “What a Resave is” on page 6.
Table 20-1 IBM i 7.1 Resave history (with 7.1.0 machine code)
Resave release date: 05/31/2013
Description: Provides IBM i native attach support for IBM SAN Volume Controller, IBM
Storwize V7000, and IBM Storwize V3700.
5770-999 Resave level / marker PTF: RS-710-H / RE13106
5770-SS1 Resave level / marker PTF: RS 710-10 / AP11067 (*BASE), RS00106 (Option 0003)
Beginning with the IBM i 7.1 September 2010 GA, IBM i introduced a new code delivery
mechanism referred to as a Technology Refresh. A Technology Refresh is a collection of
operating system software that is developed together, packaged together, tested together,
and delivered together as a PTF Group for IBM i 7.1.
For more information about this new delivery method, see the following website:
http://www-03.ibm.com/systems/power/software/i/tech-refresh/
Option 0003: The resave for Option 0003 was done to reduce installation time from IBM
media.
If you are upgrading from i5/OS 5.4 to IBM i 7.1, the same object conversion considerations
apply as though the target release were IBM i 6.1. Read the “Program conversion” section in
PSP SF98026 - i5/OS Memo To Users, V6R1, available at this web address:
https://www-912.ibm.com/s_dir/sline003.nsf/3a8f58452f9800bc862562900059e09e/1ba9ea
e00b72a0ea8625772e00713fd0?OpenDocument
The program conversions refresh programs to take advantage of the latest system
enhancements. Program conversion includes the conversion of programs in libraries and in
directories. When you upgrade from IBM i 5.4, allow more time to analyze your system and
adjust your programs for conversions. The length of time that is required to run the
analysis varies based on the individual system environment. Program conversion can also
affect vendor software. Contact these vendors as part of the upgrade planning, as they
might need to verify that their applications support IBM i 6.1 or 7.1.
The ANZOBJCVN command was introduced for i5/OS 5.3 and i5/OS 5.4 to help object
conversion planning for upgrades to IBM i 6.1. This command can also be used for upgrades
to IBM i 7.1. The command is available through a set of PTFs. Information APAR II14306
provides a brief description of ANZOBJCVN and PTF requirements. For IBM i 5.4, PTF SI39402
adds the option to specify a target release of V7R1M0. To review Information APAR II14306,
go to the following web page:
http://www-01.ibm.com/support/docview.wss?uid=nas23af47a966c4df94586257306003c6868
For complete preparation and planning details, see IBM i Program Conversion: Getting Ready
for 6.1 and Beyond, REDP-4293.
For more reference materials related to conversion, see Integrated file system conversions
(V5R4 to IBM i 7.1 upgrade) in the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frzahc%2Frzahcp
rogramconversions.htm
By default, conversion occurs during the upgrade, which can add a significant amount of time.
ANZOBJCVN can help by identifying the number of spool files and providing an estimate for
conversion time, helping you determine your best options. Spooled files that are restored to
the IBM i 7.1 release are automatically converted. The time for the spooled file conversion
process can be reduced by saving and deleting the spooled files before you upgrade from
IBM i 5.4 and then restoring them after you have IBM i 7.1 installed.
More options are available for managing the spool file conversion after the upgrade. Detailed
options and instructions are available in the IBM i 7.1 Knowledge Center section Spooled file
conversions (V5R4 to IBM i 7.1 upgrade) at:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/rzahc/rzahcspooledfileconve
rsions.htm
Important action: Only newly created spooled files or spooled files that were converted
can be seen and used after the upgrade. Until the conversion is complete, unconverted
spooled files appear not to exist. If a data area is used to direct the conversion, delete the
data area after the conversion occurs.
Before you upgrade from IBM i 5.4, review Information APAR II14306 and IBM i Program
Conversion: Getting Ready for 6.1 and Beyond, REDP-4293. These resources help you
analyze your system and help identify objects that are going to be affected by the Unicode
conversion. You can then decide whether you want to change the names of the affected
objects before you upgrade or allow the automatic conversion to occur.
The conversion of the directories automatically begins shortly after IBM i 7.1 is installed. This
conversion runs in the background during normal operations and does not significantly affect
your system activity.
The IBM i 7.1 Knowledge Center section Integrated file system conversions (V5R4 to IBM i
7.1 upgrade) section has detailed options and instructions:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/rzahc/ifsconv.htm
The IBM i 7.1 Knowledge Center has detailed options and instructions in the section IBM
Backup Recovery and Media Services for i conversions (V5R4 to IBM i 7.1 upgrade) at:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/rzahc/br1conv.htm
For more information, see the section Media labels and their contents in the IBM i 7.1
Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=/rzahc/rzahcswsme
dialabel.htm
Additionally, a new API called Fill Image Catalog (QVOIFIMG) was made available for
IBM i 5.4, IBM i 6.1, and IBM i 7.1 through PTFs. This API makes it easier to use image
catalogs when you work with images that are downloaded through the ESD process.
Information APAR II14482 Required PTFS for Upgrading to V7R1MX includes the specific
PTF numbers for each of these releases. To review Information APAR II14482, go to the
following website:
http://www-01.ibm.com/support/docview.wss?uid=nas200630d41e1453ee6862575ab003c6e30
Before this enhancement, you needed physical media or virtual media locally on the system
that was being upgraded. Using virtual media previously required using FTP to download the
virtual images manually across the network to each system to be installed.
The Network File System (NFS) system is the repository for the virtual images, and can be
any NFS system that can meet the basic requirements. On the IBM i client system, this new
function takes advantage of the 632B-003 virtual optical device that supports virtual image
files on a remote system in a network. An image directory identifies a network path on the
central system that contains the virtual image files that are prepared for use with a target
system.
The steps to prepare for a IBM i network upgrade are available in the IBM i 7.1 Knowledge
Center in the section Preparing to upgrade or replace software with virtual optical storage
using the Network File System:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Frzahc%2Frzahcp
reparingtoupgradevirtoptnfs.htm
For complete details about IBM i network upgrade, review IBM i Network Install using Network
File System available at the following website:
ftp://ftp.software.ibm.com/systems/support/power/i/nfs_optical_upgrade.pdf
Figure: IBM i network installation overview. October 2009: DVD upgrade to IBM i 6.1.1 over
an existing i 6.1 system. April 2010: upgrade to IBM i 7.1 from i 6.1 or i 6.1.1 through
electronic software distribution. November 2010: install IBM i 6.1.1 or i 7.1 into a new
partition. An NFS server stores IBM or custom images that a POWER6 or POWER7 partition
running IBM i 6.1 or later (with a CRTDEVOPT *SRVLAN virtual optical device) uses for
install, restore, SAVSYS, SAVOBJ, and SAVLICPGM operations.
The 632B-003 optical device is created by using the Create Device Description Optical
(CRTDEVOPT) command:
CRTDEVOPT DEVD(virtual_device_name)
RSRCNAME(*VRT)
LCLINTNETA(*SRVLAN)
RMTINTNETA('X.X.XXX.XXX')
NETIMGDIR('/catalog_directory')
Parameter definitions:
RMTINTNETA is the remote IP address of the NFS server where this virtual optical device
looks for virtual image files.
The NETIMGDIR parameter specifies the network path on the NFS server that contains
the virtual image files that were prepared for use with this device.
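After the device description is created, a minimal usage sketch is to vary on the device so
that it can reach the image files in the network path:

VRYCFG CFGOBJ(virtual_device_name) CFGTYPE(*DEV) STATUS(*ON)

The device then presents the virtual image files in the NETIMGDIR directory as though they
were mounted optical volumes.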
Determine whether you must configure a service tools server. The type of system and
configuration determines what type of setup might be required. If a LAN console is
configured, no further setup is required. For more information about configuring the Service
Tools Server, see the following website:
ftp://ftp.software.ibm.com/systems/support/power/i/nfs_optical_upgrade.pdf
POWER6 does not support any IOPs in the central processor complex. Therefore, any
IOP-based interface, such as Twinax, must be placed in an HSL-attached I/O drawer, and an
HMC is required to tag the console location.
Support: Operations Console Direct attached and Twinax console are not supported on
any POWER7 processor-based server. IBM i console options on POWER7 consist of
either Operations Console LAN attached or HMC managed console.
For more information about changing consoles, see the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/iphca/chgconsol
es.htm
The new enhancements on IBM i 7.1 for console support are as follows:
Auto-create service tools device IDs remote panel (RCP) privilege
By default, IBM i 7.1 sets the default value of the Auto-create service tools device IDs
remote panel (RCP) privilege to be revoked. To view or change this default, go to Work
with Service Tools Security Data–Option 12.
Console takeover / recovery status panel
The new default for IBM i 7.1, after you enter a Service Tool user ID and password, is that
the Console Take Over Status panel is skipped and the previously displayed console panel
is displayed. To view or change this default, go to Work with Service Tools Security
Data–Option 13.
Console takeover F18
In IBM i 7.1, you can take over a console type or console device type by using the PF key
18. This key allows temporary switching of the console type from HMC console to LAN
console without changing the tagging or resetting the operations console session. To view
or change this default, go to Work with Service Tools Security Data–Option 14.
Connecting LAN operations console for uninitialized Load Source (LS)
For a Manufacturing Default Configuration (MDC) system that does not have a preinstalled
image and is not HMC managed, the console type must be set to LAN by using the console
service functions. For more information about changing consoles, see the
“Changing consoles” topic in the IBM Systems Hardware Knowledge Center.
Additionally, if a LAN console uses the embedded Ethernet ports, then the Enable
Ethernet embedded port (E1) function must be set through the console service functions.
Figure 20-4 Work with Service Tools Security Data
If you plan to change the primary language during the upgrade or installation, set the
preferred installation language by using the QINSTLNG API. This new API was introduced
with IBM i 7.1.
For details about using this API, see the Set Install National Language Version (NLV)
(QINSTLNG) API topic in the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fapis%2Fqinstln
g.htm
Before you upgrade to IBM i 7.1, ensure that the node has the appropriate cluster version.
Clusters support only a one-version difference. If all the nodes in the cluster are at the same
release, upgrade to the new release before you change the cluster version. This upgrade
ensures that all functions associated with the new release are available.
For detailed actions for an upgrade to a new release, see Scenario: Upgrading operating
system in a high-availability environment at the IBM i 7.1 Knowledge Center:
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=/rzaig/rzaigscenu
pgrade.htm
To verify and change the cluster version for a node, complete the following steps:
1. In a web browser, enter http://mysystem:2001, where mysystem is the host name of the
system.
2. Log in to the system with your user profile and password.
3. Click Cluster Resource Services on the IBM Systems Director Navigator for i5/OS
window.
4. On the Cluster Resource Services window, select the Display Cluster Properties task.
5. On the Cluster Properties window, click the General tab.
6. Verify the cluster version setting or change the version to the wanted setting.
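As an alternative to the GUI steps, a minimal CL sketch (MYCLU is a hypothetical cluster
name) displays the cluster version and, after all nodes are upgraded, adjusts the current
cluster version up one level:

DSPCLUINF CLUSTER(MYCLU)
CHGCLUVER CLUSTER(MYCLU) CLUVER(*UP1VER)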
For the latest PTF group for Java, see the following website:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20i%20T
echnology%20Updates/page/PTF%20groups%20and%20latest%20SR%2C%20FP
VMware ESX on iSCSI-attached BladeCenter and System x servers is supported by IBM i
7.1 5770-SS1 Option 29 (Integrated Server Support).
Suggested replacement
Use IBM i support for VMware ESX on an iSCSI-attached BladeCenter or System x server to
host Linux servers. However, save-while-active, file-level backup, and virtual optical and
tape are not supported.
Linux running in IBM i hosted partitions continues to be supported in IBM i 7.1, but
save-while-active and file-level backups are not supported.
This enhancement is part of the IBM i 7.1 TR 6 PTF Group. If you have a configuration that
you believe would see a potential performance benefit and you already have the IBM i 7.1
TR 5 PTF Group on your partition, you see that benefit when you apply the IBM i 7.1 TR 6
PTF Group. If you do not already have the IBM i 7.1 TR 5 PTF Group on your system, you
experience the benefit with the subsequent PTF applies: IBM i 7.1 PTF MF56423 and IBM i 6.1
PTF MF45484.
The iDoctor GUI now requires the Visual Studio 2012 Update 1 or later redistributable
package and .NET 4.0 or later. More information about these requirements can be found on
the following website:
https://www-912.ibm.com/i_dir/idoctor.nsf/downloadsClient.html
My Connections View
My Connections View, which is shown in Figure A-1 on page 863, provides the following
enhancements:
Added columns to show access code expiration dates, missing PTFs, ASP group name,
and relational database name (if the connection uses an independent ASP).
New menu options added to Check Expiration Dates or Check PTFs against the wanted
partitions. Check PTFs includes checking for the latest Performance Group PTF levels.
Added menus to Load and Remove all iDoctor Stored Procedures.
Added Uninstall iDoctor option.
Added an option to Edit an existing connection.
Deleted obsolete analysis files for each component.
When you sign on to a system, iDoctor uses the configured sign-on setting defined in System
i Navigator (you can access this setting by clicking Properties and clicking the Connection
tab for a system). You can use options such as Use Windows user name and password to
avoid needing to sign on through iDoctor if your Windows password matches the user name
and password of the System i to which you are connecting. iDoctor also uses the System i
Access for Windows password cache to avoid prompting for a password unless needed. If
you still want to be prompted for a password every time you start iDoctor, set the Prompt
every time option within System i Navigator.
Support was added to view collections that are stored in libraries that are created in
Independent ASPs. Use the Add connection menu or Edit menu from the My Connections
View to specify the appropriate ASP group name and relational DB name, as shown in
Figure A-2. These values cause the QZRCSRVS and QZDASOINIT jobs that are created by
iDoctor to recognize data that is stored in the Independent ASP.
You can also create connections of type HMC, AIX, or VIOS. Doing so enables appropriate
options for each.
On the main window, which is shown in Figure A-4, the clock icon can now be used from any
component to set the preferred time range interval size. The clock icon now has the following
additional time grouping options: one-tenth-second, five-second, fifteen-second, five-minute,
four-hour, eight-hour, twelve-hour, and twenty-four-hour. The small groupings are useful in
PEX Analyzer and the large groupings are useful in Collection Services. More checking was
added to the GUI to ensure that only relevant time grouping options are shown for the current
data.
You can save a URL for libraries, collections, call stacks, and so on, in iDoctor using the Copy
URL menu option or button. The URL can then be pasted into a web browser or saved for
future use. The URL shown in Example A-1 opens library COMMON within Job Watcher on
system Idoc610.
Example A-1 Opening the COMMON library within Job Watcher on system Idoc610
idoctor:///viewinfo1[type=CFolderLib,sys=Idoc610,lib=COMMON,comp=JW]
Added a Super collections folder that you can use to work with the super collections that
exist on the system. These collections contain a folder for each collection type that is
collected within the super collection.
You can use the Saved collections folder to work with any save files that are found on the
system that contain iDoctor collections that were saved previously using the iDoctor GUI.
The iDoctor components now contain two new folders that show the ASPs and disk units that
are configured on the current system. You can use the ASPs folder to drill down to view the
disk units within an ASP. The Disk Units folder provides WRKDSKSTS type of statistics with
updates provided with each refresh (it also includes total I/O and total sizes), as shown in
Figure A-6.
Right-click the Disk Units or ASP folder and click Reset Statistics to restart the collection of
disk statistics. You can also use the Select fields menu option when you right-click the folder
to rearrange fields or add more fields. The status bar of the main window shows the times for
first disk statistics snapshot, and the last one.
Similarly, you find an Active Jobs (see Figure A-4 on page 864) folder on the same window,
which provides WRKACTJOB-like function from the iDoctor client, as shown in Figure A-7.
You can also sort by a statistic and refresh to keep tabs on the top processor users, and so
on. There is also a filter option to filter the list by name, user, number, current user, or
minimum processor percentage. Click the Select fields menu option when you right-click the
folder to rearrange fields or add more fields. Expanding a job shows the threads and the
thread statistics available for each job. You can start Job Watcher or PEX Analyzer collections
or add Job Watcher / PEX definitions using the selected jobs within the Active jobs folder. You
can also end the selected jobs or view job logs.
Collection options
The Collection menu now contains an Analyses menu for all components. Choosing an option
under this menu runs a program that creates SQL tables that are needed for further analysis.
In most cases, more reports become available after the analysis completes and the collection
is refreshed (by pressing F5).
The Summarize menu option in CSI and Job Watcher moved to Analyses - Run Collection
Summary. Choosing this option now displays a window that you can use to filter the collection
data by job name, job user, job number, current user, subsystem, or time range. Filtered
results can be viewed under the SQL tables folder. If you do not filter the data, the
summarized results are accessible by using the graphing options that are provided under the
collection.
The Create Job Summary menu option in CSI and Job Watcher moved to Analyses - Run
Create Job Summary.
There is a new iDoctor Report Generator for all collections (Figure A-8 on page 868). To
access it, right-click a collection and click Generate Reports. The default web browser is
opened to show the HTML report after the reports are captured to JPG files. As reports are
running, you can switch to other windows, but before screen captures are taken, the data
viewer must be moved to the front of all windows. This action happens automatically, but
might look strange the first time you use it.
With the Save option (see Figure A-9), you can save multiple collections or monitors. After
you use this option, the new Saved collections folder shows a record that identifies the save
file, which you can use to restore the data or distribute it.
In all components that support copying a collection, you can now specify the collection name
in the target library. You can use this function to copy a collection to a new name in the same
library.
Data Viewer
The Data Viewer toolbar has a toggle button that shows all the idle waits (including all
buckets) for wait bucket jobs and larger grouping graphs in CSI and Job Watcher. Click the
button once to see the idle waits, and click it again to see only the interesting waits.
Previously, the idle waits were not shown for job or higher groupings. Figure A-10 shows an
example.
There is a new menu, Choose Database Members, in the SQL editor that clears the current
member selections and opens the member selection window.
In the record quick view, you can see the table alias name before the column name if it is
known.
You can now click Save on the Data Viewer toolbar to save the current graph and legend as a
JPG image.
You can click File → Create Shortcut to save a Data Viewer graph or table as a shortcut file
(*.idr). The file can be sent to other users or saved on the PC to revisit the report later.
The interval grouping option on the Clock icon now has a five-minute and four-hour time
interval specification.
Side-by-side comparisons
You can use the side-by-side comparisons to sync up the scrolling and Y-axis scaling of any
two graphs (or tables) in a Data Viewer.
When two or more graphs or tables exist in a Data Viewer, the buttons are ready for use. See
Figure A-11.
The rest of the folders are described in “Main window” on page 864.
Figure A-15 gives an example of a Historical Summary Collection Overview Time Signature
graph over 12 days of data.
Use the one-hour grouping option when you create the Historical Summary.
Historical summaries provide a complete set of graphs similar to the graphs provided under
normal collections. A full set of “average day” and “average week” graphs are also supplied.
More information about historical summaries can be found at:
http://public.dhe.ibm.com/services/us/igsc/idoctor/iDoctorSep2011.pdf
Managing collections
On the same drop-down menu that is shown in Figure A-16 on page 873, you see that a
Copy function was added. You can also use the Copy Performance Data (CPYPFRDTA)
command to obtain the same result. The Delete function now uses the Delete Performance
Data (DLTPFRDTA) command.
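For example, the same operations can be performed from a command line. The member and
library names in this sketch are placeholders, and the parameter keywords shown should be
verified by prompting the commands with F4:

CPYPFRDTA FROMMBR(Q123456789) FROMLIB(QPFRDATA) TOMBR(MYCOPY) TOLIB(QPFRDATA)
DLTPFRDTA MBR(MYCOPY) LIB(QPFRDATA)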
The import data to WLE (IBM Systems Workload Estimator) option is accessible from a CSI
graph if a time range is selected. The numbers that are provided to WLE are based on the
selected time period.
A search function, similar to the one in Job Watcher, is now available in this window. You can
use it to generate a report that shows job records that are based on a specific job name, user,
number, subsystem, pool ID, or current user profile. From these reports, you can drill down
into the graphs for the wanted job over time. You can also search over multiple collections at
one time by selecting the wanted collections in the list side of the CSI component view and
then using the Search menu. After the search results are shown, you can drill down into the
same set of collections for the wanted job or thread.
You can create graphs over multiple collections at one time in the same library. Select the
wanted collections from the list side of the CSI component view, then right-click and choose
the graph of interest. Click Yes when you are prompted about whether the graph should be
created for all selected collections. From then on, any drill down that you do on rankings and
the single object over time graphs applies to this same set of collections.
The current list of situations and the default minimum thresholds are shown in the following
list:
Interactive feature use high: 100%
Write cache overruns: 20%
High disk response times: 15 ms
High faulting in the machine pool: 10 faults per second
High disk storage consumed: 90%
Jobs ineligible to run: Three instances for a job per interval
Significant changes in pool size: 25% change from one interval to the next
After you run the analysis, a new IASP bandwidth estimations folder is available that contains
the generated table and a subfolder with graphs.
The IASP bandwidth estimate table represents the statistics that are generated by the
analysis. The statistics that are generated include the number of blocks, writes, database
write percentage, and various bandwidth estimates (all in megabits per second).
The IASP bandwidth overview graph displays the database writes for IASPs and the full
system writes.
The IASP bandwidth overview graph with lag also shows how much bandwidth lag there
would be, based on the parameter estimation values that were given when the analysis was
created.
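As a rough illustration of the arithmetic involved (this is not the exact formula that the analysis
uses), a workload that averages 1,000 writes per second at 8 KB per write moves about
8 MB of data per second, which corresponds to roughly 64 megabits per second of estimated
bandwidth.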
The information from the hypervisor for systems that are running release 6.1 or higher can
now be viewed in the new System Graphs HMC folder (see Figure A-24).
The graphs shown vary depending on the data available. The QAPMLPARH file is required
for the CPU graphs and, in IBM i 7.1, the QAPMSYSINT file is required for the TLBIE graphs.
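If these graphs are missing, you can confirm that the required files and members exist in the
collection library. A minimal CL sketch, with placeholder library and member names:

CHKOBJ OBJ(MYPERFLIB/QAPMLPARH) OBJTYPE(*FILE) MBR(Q123456789)
CHKOBJ OBJ(MYPERFLIB/QAPMSYSINT) OBJTYPE(*FILE) MBR(Q123456789)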
LPAR cycles per instruction and CPU time: Same as the LPAR CPU time graph, but
includes CPI on Y2.
LPAR instructions per second and CPU time: Same as the LPAR CPU time graph, but
includes IPS on Y2.
LPAR entitled CPU time: This graph breaks down the CPU time by entitled time versus
uncapped time in excess of entitled capacity.
The accompanying figure charts the average CPU utilization of each LPAR on the physical
system, with one series per partition.
The Rankings folder contains the following graphs that rank the LPARs in various ways:
LPAR CPU time
LPAR cycles per instruction and CPU time
LPAR instructions per second and CPU time
LPAR advanced CPU time
LPAR memory allocated
LPAR donated processor time
Physical processor utilization
LPAR dedicated processor utilization
Job counts
The Job counts graph plots the number of system tasks, the number of processes (primary
threads), and the number of secondary threads as bars on the primary Y-axis, with the
average partition CPU utilization, maximum partition CPU utilization, and average interactive
feature utilization shown as lines on the secondary Y-axis.
In the memory pool graphs, you can right-click the wanted pool and time range to drill down
and see the jobs within that pool in various ways.
There is also support for graphing multiple collections at the same time to compare the
evolution in memory use. You can either select multiple collections, right-click, and select the
wanted memory pool graph, or use the Historical Summary analysis to graph multiple
collections more easily.
Two more reports show the same disk configuration data, where one is a flat table and the
other is a tree. The tree provides counts and percentages of the units, IOAs, IOPs, and ASPs
within each prior level grouping. To access these reports, right-click and select Collection →
Disk configuration. The window that is shown in Figure A-37 opens.
A report called Capacity (in GBs) by ASP with paths is also provided.
The seven summarized graphs provide drill downs into seven ranking graphs for the wanted
time period.
J9 JVM graphs
A set of J9 JVM graphs was added for 6.1 and higher. This includes overview graphs
(all JVMs combined), rankings (by thread), and selected thread over time. The graphs
included at each level are:
J9 JVM heap sizes (includes allocated, heap in use, malloc, JIT, and internal sizes)
J9 JVM allocated heap size
J9 JVM heap in use size
J9 JVM malloc memory size
J9 JVM internal and JIT memory sizes
Enhanced graphs
Changes were applied to a number of graphing capabilities that support Version 6.1 and later:
The communication graphs folder shows the following information:
– Average IOP usage
– Maximum IOP usage
– SSL authentications
Under Disk graphs, new graphs named I/O size, Ethernet rates, and Buffer
overruns/underruns are now available.
The collection overview wait graphs now show batch and interactive processor usage on
the second Y axis.
The wait bucket counts are added to the overview graphs for a single thread/job.
The IP address family and formatted IP address are added to the end of the job search
report.
Flattened type graphs provide the capability to customize graph labels, hide values on the
Y1/Y2 axis, change scaling, and use the graph definition interface to change the fields that
are used, scaling, colors, and sizes.
Starting with Version 7.1, seizes and locks graphs were added over the 7.1 lock count fields in
the QAPMJOBMI file. Run the Collection Summary analysis and then access the graphs by
clicking Wait graphs and looking under the Seizes and locks folder.
Job Watcher
The folders available in the Job Watcher component changed. Instead of showing libraries
that contain Job Watcher data, new folders are available, as shown in Figure A-40:
Libraries containing Job Watcher database file collections (filterable).
A definitions folder provides a list of Job Watcher definitions on the system.
The rest of the folders are covered in 6.4, “IBM iDoctor for IBM i” on page 318.
Monitors
In the window to start a Job Watcher (or Disk Watcher) monitor, you can specify the maximum
collection size (in megabytes) for each collection that is running in the monitor.
The next set of changes applies to the following monitor commands: STRJWMON, STRPAMON, and
STRDWMON. These options can also be found in the GUI when you start a monitor.
The Collection Overlap (OVRLAP) parameter is no longer used. The monitor now detects that a
new collection started before the previous one ended.
The Collection Duration (MAXSIZE) parameter can now be specified in minutes with a decimal
point (for example, 59.5 minutes).
Within the list of collections, the status field indicates which files are not yet created.
Situational analysis
This option can be found by clicking Collection → Wait graphs. It has new situations:
Concurrent write not enabled
Journal caching not properly used
Jobs ineligible to run
Long sync write response times
Fixed allocated length setting on a VARCHAR or LOB type column is defaulted to 0 or is
set too small
Contention on DB in use table possibly because of a high number of opens and closes
High number of creates and deletes by multiple jobs where all of the objects are owned by
the same user profile
A wait object, holder, and a SQL client job are added to the flyover (if one exists).
Figure A-43 SQL client job drill-down options on the Interval Details - Wait Buckets window
Status views
The Remote SQL Statement Status view was updated with new options:
Remove/Cancel Selected cancels or removes running SQL statements.
Copy Selected SQL Statement(s) to Clipboard.
The Remote Command Status view was updated with new options:
Remove/Cancel Selected cancels or removes running commands.
Copy Selected Commands to Clipboard.
Add Command defines more commands to run in the view.
You can find JVM statistics on the Java virtual machine interval details tab and J9 call stacks
on the Call Stack tab. The J9 Java entries are embedded within the regular Job Watcher call
stacks. J9 Java call stack entries are not usable with the call stack reports.
There is a Situations tab in the Collection Properties window (Figure A-44) that shows all
situation types known to Job Watcher and how many of each occurred in the collection.
In the Interval Details interface, a button was added to go to the primary thread from a
secondary thread.
A new Call Stack Summary analysis was added to identify the call stacks, waits, and objects
that are associated with the most frequently occurring call stacks that are found in the
collection.
Reports were added under the Detail reports menu that show the top programs causing DB
opens for the selected time period in a graph. The Detail reports - Call stack summary menu
now has the following options:
50 level call stacks
50 level call stacks with wait objects only
50 level call stacks CPU current state only
Collections
In the Start Disk Watcher Wizard (Figure A-46), you can now collect the hardware resource
file, schedule the collections, and check whether there are any PTFs. You can use another
parameter in this window to set the maximum collection size (in MB) for each collection.
The Change SQL Parameters interface now has options for changing the library and
collection currently being used to display the graph.
Monitors
Disk Watcher monitor server commands were added. These commands are similar to the Job
Watcher monitor commands and include STRDWMON, HLDDWMON, RLSDWMON, ENDDWMON, and
DLTDWMON.
Support was added in the GUI to work with or start monitors in Disk Watcher for either Job
Watcher or Disk Watcher. The same Monitors folder is also available in Job Watcher, which
you can use to work with Disk Watcher monitors from the Job Watcher component.
Definitions
New iDoctor supplied Disk Watcher definitions are available.
QFULLO
QFULL1MINO
QTRCO
QTRC1MINO
The reload IBM-supplied definitions option must be used on systems that already have
definitions so that the new definitions become visible.
Reporting
The graph titles match the naming convention that is used by the trace graphs. The word pool
was changed to disk pool, and disk unit to disk path.
PEX Analyzer
The folders available in the PEX Analyzer component were changed. Instead of showing
libraries that contain PEX Analyzer data, new folders are available, as shown in Figure A-49
on page 900:
Libraries: This folder displays libraries that contain PEX collections or libraries where
active PEX collections created with the STRPACOL command (or the Start Collection
Wizard) are running.
Active collections: You can use this folder to work with any active PEX sessions on the
system. This function is similar to the ENDPEX command that lists the active PEX sessions.
PEX objects: You can use this folder to work with the PEX *MGTCOL objects on the
system.
Definitions: You can use this folder to work with PEX definitions.
Filters: You can use this folder to work with PEX filters.
The rest of the folders are covered in “Main window” on page 864.
Definitions
The Add PEX Definition Wizard supports defining statistics counters into buckets 5 - 8.
The PEX Analyzer Add/Change PEX Definition interface supports the latest event additions
and removals at 6.1/7.1:
Program events that are removed as of 6.1+: *MIPRECALL, *MIPOSTCALL,
*JVAPRECALL, and *JVAPOSTCALL
Base event *CPUSWT added as of 6.1+
Base events that are added as of 7.1: *PRCFLDSUSPEND, *PRCFLDRESUME,
*LPARSUSPEND, and *LPARRESUME
Storage event *CHGSEGATR added as of 7.1
OS *ARMTRC event added as of 6.1
Sync event *MTXCLEANUP added as of 6.1
Because collecting DASD start events is no longer necessary for PDIO analysis, the Start
PEX Analyzer Collection (STRPACOL) command now makes sure that the *PDIO_TIME event
type always collects the *READEND, *WRTEND, *RMTWRTSTR, and
*RMTWRTEND events.
The STRPACOL command (and the Start Collection Wizard) now includes Format 2 events for
all MI user problem types (*DB_OPEN, *DB_LDIO, and so on) and the Netsize problem type.
Not collecting with Format 2 now requires you to create your own PEX definition.
In the PEX Analyzer Start Collection Wizard, when you use one of the iDoctor problem
types, the default event format value for PMCO and Taskswitch events is now Format 2.
When you create a collection, a QSTATSOPEN problem type collects DB opens into statistics
counter #1. It runs concurrently with the QSTATSOPEN filter to ensure that only the user
application program opens are counted. You can use this function to determine which
programs or procedures caused the most opens by looking at the inline counter 01. The
QSTATSOPEN problem type is a PEX definition that is created using ADDPEXDFN by the
GUI before STRPACOL is run.
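As a simplified sketch of the kind of definition involved (the parameters shown here are
illustrative assumptions, not the exact definition that the GUI builds), a statistics-mode PEX
definition can be created with the Add PEX Definition (ADDPEXDFN) command:

ADDPEXDFN DFN(QSTATSOPEN) TYPE(*STATS) JOB(*ALL)

The GUI then runs STRPACOL against a definition of this kind.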
You can divide a large PEX collection into a more manageable size by using the Split option.
The Start iDoctor Monitor Wizard supports the creation of PEX monitors into *MGTCOL
objects. There is an ENDPEX option, on the basic options window of the wizard, with three
possible values: Create DB files, Create *MGTCOL, and Suspend.
Analyses
Several changes were implemented in this menu.
Classic analyses and support for the green panel QIDRPA/G* analysis commands were
removed and replaced by the SQL-based analyses (SQL stored procedures).
The Analyses menu, found by right-clicking a collection, contains a list of all available
analyses (Figure A-50). The menu also contains the Analyze Collection option, which allows
a user to kick off several analyses at once.
The Trace details analysis is available for any PEX collection that contains trace events and
produces an SMTRMOD-like file. It handles retrieving and formatting event information from
many of the PEX files, including QAYPETIDX, QAYPEASM, QAYPESAR, QAYPEDASD,
QAYPEPGFLT, QAYPETASKI, QAYPETSKSW, QAYPEPROCI, QAYPESEGI, and QAYPEMBRKT.
The Call stacks analysis displays the most commonly occurring call stacks for each event
type collected. This analysis includes options to show the top programs causing opens and
closes. Options are also available to view the call stacks by job and event type.
The TPROF analysis now has tree table views that display the percentage of processor
hits in various ways.
The TPROF analysis folder contains more reports to support MCLI analysis if Format 4
PMCO events were collected.
PEX Analyzer has a new analysis called Hot Sectors. This SQL-based analysis is only
available if the PDIO analysis was run. It allows disk activity to be measured by portions of
the disk address of the I/O, in chunks of 1, 16, 256, or 4096 megabytes.
A Data Area analysis is available for collections that collected data area events. It provides an
SQL-based report similar to the SMTRDTAA file. A similar analysis for data queue events is
available.
A CPU Profile by Job analysis is available if PMCO events were collected. It shows the
estimated processor consumption during the collection over time and processor thread
rankings for the wanted time periods.
The MI user event analyses (LDIO and data area) now resolve the user program if Format 2
events were collected. These analyses allow for MI entry / exit events to be excluded.
A database opens analysis, similar to the database LDIO analysis, provides statistics about
the user program that is associated with the DB open events and reports 16 level call stacks
if DBOPEN Format 2 events are collected.
The new IFS analysis is equivalent to the classic version, except it also provides user
program names for either MI entry / exit or FMT 2 call stacks, depending on what is available.
There is a new Netsize analysis for 6.1 and higher PEX Analyzer, including several new
graphs with drill downs.
A save / restore analysis parses the save / restore events in the QAYPEMIUSR table into
several reports.
In the Taskswitch analysis, added graphs show what the wait bucket time signature looks like
for the wanted thread / task (also known as TDE). See Figure A-51. More drill downs and
reporting options are also provided.
The graph breaks down the run / wait time for the selected thread or task into wait buckets
such as dispatched CPU, CPU queuing, other waits, disk page faults, disk writes, and
journaling.
Figure A-51 Taskswitch run / wait time signature graph for a single job / thread / task (or TDE)
The Summarized CPU and I/O by pgm / MI instruction report contains the inline processor
percentage of total and the inline elapsed time percentage of total.
Plan Cache Analyzer
The plan cache is a repository that contains the access plans for queries that were optimized
by SQE.
A list of available components appears in the next window. Double-click the Plan Cache
Analyzer component or select Plan Cache Analyzer and click Launch to continue, as shown
in Figure A-52.
For more information about how to use Plan Cache Analyzer, see the IBM iDoctor for IBM i
documentation at:
http://public.dhe.ibm.com/services/us/igsc/idoctor/iDoctorV7R1.pdf
VIOS Investigator
VIOS Investigator combines NMON data and a VIOS to IBM i disk mapping process to help
analyze the performance of your VIOS using the power of the DB2 database on IBM i.
You can use VIOS Investigator to import one or more NMON files into the tool. The NMON
CSV files are converted and expanded into DB2 SQL tables, which are used to produce
graphs with several drill-down options.
Graphing Options:
Disk graphs (% busy, counts, sizes, rates, block sizes, service times, and response
times)
System configuration
System graphs (Processor, memory, kernel, paging statistics, and processes)
Processor graphs (Processor usage)
TOP graphs (Processor usage, paging size, character IO, memory usage, and faults for
the top processes)
If a valid disk mapping was created, then you can use the disk graphs to rank the data by disk
name, disk unit, disk path, ASP, or disk type. Without the disk mapping, only rankings by disk
name can be performed.
VIOS Investigator can also be used to analyze AIX and Linux systems using NMON data, but
the focus is primarily on VIOS analysis with an emphasis on usage by IBM i customers.
VIOS Investigator is a no additional cost tool that is offered as-is and does not require an
access code. To download VIOS Investigator, you must first accept the license agreement.
NMON
The VIOS Investigator data is created by the NMON or Topas_NMON command that is found in
AIX.
On AIX V6.1 TL02 and Virtual I/O Server (VIOS) V2.1 (or higher), NMON is installed by
default with AIX and the Topas_NMON command should be used for collecting data for use with
VIOS Investigator.
NMON is the primary and preferred tool for collecting AIX performance statistics. NMON is
similar in nature to Collection Services on IBM i. Both tools use time intervals and collect
high-level statistics for processor usage, disk, memory, and much more.
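For example, a typical recording invocation writes interval snapshots to a CSV file that VIOS
Investigator can import. The flags here are standard nmon recording options, but verify them
at your AIX or VIOS level:

nmon -f -t -s 60 -c 1440

In this sketch, -f produces spreadsheet (CSV) output in the current directory, -t includes
top-process data, -s sets the seconds between snapshots, and -c sets the number of
snapshots, so this command records 24 hours of one-minute intervals.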
Disk mappings
Disk mappings (also known as correlations) refer to the VIOS to IBM i correlations between
hDisks and IBM i disk unit numbers and disk path names that are assigned to them.
For comparison purposes with Collection Services Investigator, whenever possible, the disk
graphs in VIOS Investigator use the same colors, labels, and field names as the disk graphs
in Collection Services Investigator. However, NMON provides far fewer disk metrics than
Collection Services does (see the QAPMDISK file).
In most cases, especially if there are any known hardware/system changes, collect the disk
mapping before or immediately after you collect the NMON data on your VIOS. This action
provides more graphing options (for example, rankings by unit, path, ASP, or disk type) that
are otherwise not available.
Disk mappings are collected by a program on the IBM i side that interrogates the HMC to
acquire disk information. This information is useful during the analysis and is not available
from the NMON data alone.
A list of available components appears in the next window. Double-click the VIOS Investigator
component or select VIOS Investigator and click the Launch button to continue, as shown in
Figure A-53.
FTP connections: The FTP connections are provided through the Windows WININET
APIs, which do not support any options for FTP using SSL or other secure FTP modes.
The FTP GUI is similar to other iDoctor components and is accessed through the My
Connections view: right-click the wanted system and select Start FTP session from the
menu, as shown in Figure A-54. You can also access the option from the Connection List
View by double-clicking the system.
IBM i system: If you connect to an IBM i system, you see subfolders to work with (either
the Integrated File System (IFS) or the Libraries on the system). These options are not
present for other types of connections.
There are several options available to you when you right-click a file or folder from the FTP
GUI, including Upload, Download to PC, and Transfer to another system.
For more information about how to use the iDoctor FTP GUI, see the IBM iDoctor for IBM i
documentation at:
https://www-912.ibm.com/i_dir/idoctor.nsf/documentation.html
HMC Walker
HMC Walker is a new option, currently in beta test, that uses the HMC lslparutil data to
provide big picture views of performance across all LPARs attached to the HMC. If you want
to join the beta test program, contact:
[email protected]
HMC Walker provides views that display the configuration for the HMC and VIOS details.
Several graphs are available with drill downs into the LPARs by using the appropriate iDoctor
components (for VIOS, IBM i, and AIX, depending on the LPAR type).
Figure A-57 shows the CPU used by several physical systems over 60 days.
Figure A-58 CPU time for all LPARs across all of the physical systems
More information
For more information about the new features in iDoctor, go to:
https://www-912.ibm.com/i_dir/idoctor.nsf
Presentations are created every few months with in-depth explanations of the latest features.
You can find these presentations at:
http://www-912.ibm.com/i_dir/idoctor.nsf/downloadsDemos.html
They can also be viewed directly on the YouTube Channel (20+ videos) for IBM iDoctor at:
http://www.youtube.com/user/IBMiDoctorForIBMi
Related publications
The publications listed in this section are considered suitable for a more detailed discussion of
the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see “How to get Redbooks” on page 915. A
few of the documents referenced here might be available in softcopy only.
DS8000 Copy Services for IBM i with VIOS, REDP-4584
End to End Performance Management on IBM i, SG24-7808
Getting Started with DB2 Web Query for i, SG24-7214
IBM Power Systems HMC Implementation and Usage Guide, SG24-7491
IBM BladeCenter JS23 and JS43 Implementation Guide, SG24-7740
IBM i 6.1 Independent ASPs: A Guide to Quick Implementation of Independent ASPs,
SG24-7811
IBM i 6.1 Technical Overview, SG24-7713
IBM i and Midrange External Storage, SG24-7668
IBM i Program Conversion: Getting Ready for 6.1 and Beyond, REDP-4293
IBM Power 520 and Power 550 (POWER6) System Builder, SG24-7765
IBM Power 520 Technical Overview, REDP-4403
IBM Power 550 Technical Overview, REDP-4404
IBM Power 710 and 730 (8231-E2B) Technical Overview and Introduction, REDP-4636
IBM Power 720 and 740 (8202-E4B, 8205-E6B) Technical Overview and Introduction,
REDP-4637
IBM Power 750 and 755 (8233-E8B, 8236-E8C) Technical Overview and Introduction,
REDP-4638
IBM Power 770 and 780 (9117-MMB, 9179-MHB) Technical Overview and Introduction,
REDP-4639
IBM Power 795 (9119-FHB) Technical Overview and Introduction, REDP-4640
IBM PowerVM Virtualization Active Memory Sharing, REDP-4470
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM System i5, eServer i5, and iSeries Systems Builder IBM i5/OS Version 5 Release 4 -
January 2006, SG24-2155
IBM System i5 V5R4 Technical Overview Redbook, SG24-7271
IBM System i Security: Protecting i5/OS Data with Encryption, SG24-7399
IBM Systems Director Navigator for i, SG24-7789
Implementing IBM Systems Director 6.1, SG24-7694
Implementing PowerHA for IBM i, SG24-7405
Other publications
These publications are also relevant as further information sources:
Droms, et al., The DHCP Handbook, 2nd Edition, SAMS, 2002, ISBN 0672323273
Rational Development Studio for i ILE RPG Language Reference, SC09-2508
Rational Development Studio for i ILE RPG Programmer’s Guide, SC09-2507
Online resources
These web pages are also relevant as further information sources.
AFP Font Collection page
http://www-03.ibm.com/systems/i/software/print/afpfonthome_m_ww.html
Application Runtime Expert for IBM i page
http://www-03.ibm.com/systems/power/software/i/are
Backup Recovery & Media Services page
http://www-03.ibm.com/systems/i/support/brms/index.html
Connecting to IBM i - IBM Systems Director Navigator for i
http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topic/rzatg/rzatgdirector.pdf
IBM i Access page
http://www-03.ibm.com/systems/i/software/access/
IBM i Access for Web page
http://www-03.ibm.com/systems/i/software/access/web/
IBM Advanced Job Scheduler for i
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzaks/rzaksajsmanage.htm?cp=ssw_ibm_i_71%2F5-2-4-4-10&lang=en
IBM Advanced Job Scheduler for i page
http://www-03.ibm.com/systems/i/software/jscheduler/index.html
IBM DB2 for i
http://www-03.ibm.com/systems/i/software/db2/index.html
How to get Redbooks
At the upper right corner of the IBM Redbooks web page, there is a search window. You can
search for a specific book by book ID or by title text using that search capability.
The easiest way to download a Redbooks publication is to complete the following steps:
1. Locate the book through one of the menu or search options.
2. Select the Redbooks publication. This action opens a summary page for the book.
3. Select the Download PDF link. You then have two options:
– Right-click the link and click Save target as. This action downloads and saves
the PDF.
– Click the link. This action displays the PDF. Select the Save file icon.
Enriched database functionality and enhanced graphical environments for application
developments
Boosted efficiency with enriched virtualization and more effective utilization of system
resources
Easier deployment of new features with technology refreshes

This IBM Redbooks publication provides a technical overview of the features, functions, and
enhancements available in IBM i 7.1, including all the Technology Refresh (TR) levels from
TR1 to TR7. It provides a summary and brief explanation of the many capabilities and
functions in the operating system. It also describes many of the licensed programs and
application development tools that are associated with IBM i.

The information provided in this book is useful for clients, IBM Business Partners, and IBM
service professionals who are involved with planning, supporting, upgrading, and
implementing IBM i 7.1 solutions.