IBM i 7.4
Performance
IBM
Note
Before using this information and the product it supports, read the information in “Notices” on page
171.
This edition applies to IBM i 7.4 (product number 5770-SS1) and to all subsequent releases and modifications until
otherwise indicated in new editions. This version does not run on all reduced instruction set computer (RISC) models nor
does it run on CISC models.
This document may contain references to Licensed Internal Code. Licensed Internal Code is Machine Code and is
licensed to you under the terms of the IBM License Agreement for Machine Code.
© Copyright International Business Machines Corporation 2002, 2018.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents
Performance......................................................................................................... 1
What's new in IBM i 7.4............................................................................................................................... 1
PDF file for Performance..............................................................................................................................1
Managing system performance................................................................................................................... 2
Selecting a Performance Management Strategy................................................................................... 2
Determining when and how to expand your system............................................................................. 4
Comparing performance metrics before and after system changes.....................................................5
Tracking performance............................................................................................................................ 5
Researching a performance problem.....................................................................................................6
Identifying a performance problem................................................................................................. 6
Identifying and resolving common performance problems............................................................ 7
Collecting system performance data................................................................................................8
Collecting information about system resource utilization............................................................... 9
Collecting information about an application's performance......................................................... 10
Dumping trace data................................................................................................................... 10
Dumping memory......................................................................................................................11
The basics of IBM i Wait Accounting.............................................................................................. 11
Scenario: Improving system performance after an upgrade or migration.................................... 14
Displaying performance data............................................................................................................... 15
Tuning performance............................................................................................................................. 16
Performing basic system tuning..................................................................................................... 16
Adjusting performance automatically............................................................................................ 18
Determining when to use simultaneous multithreading............................................................... 19
Electronic business performance........................................................................................................ 19
Client performance......................................................................................................................... 20
Network performance.....................................................................................................................20
Java performance in IBM i..............................................................................................................21
IBM HTTP Server performance...................................................................................................... 21
WebSphere performance................................................................................................................22
Applications for performance management............................................................................................. 22
Performance data collectors................................................................................................................25
Collection Services......................................................................................................................... 25
How Collection Services works................................................................................................. 26
Creating database files from Collection Services data.............................................................27
Customizing data collections.................................................................................................... 28
Collection Services support for system monitoring................................................................. 30
Collection Services support for historical data.........................................................................32
Implementing user-defined categories in Collection Services................................................33
Managing collection objects......................................................................................................42
User-defined transactions........................................................................................................ 42
Finding wait statistics for a job, task, or thread........................................................................49
Understanding disk consumption by Collection Services........................................................ 50
Collecting and displaying CPU utilization for all partitions...................................................... 51
Collecting ARM performance data............................................................................................ 52
Short lifespan threads and tasks.............................................................................................. 53
IBM i Job Watcher...........................................................................................................................53
IBM i Disk Watcher..........................................................................................................................54
Performance Explorer.....................................................................................................................55
Performance Explorer concepts............................................................................................... 55
Configuring Performance Explorer............................................................................................62
Viewing and analyzing data..................................................................................................................63
IBM Navigator for i.......................................................................................................................... 63
IBM Navigator for i Performance interface...............................................................................64
IBM Navigator for i Monitors..................................................................................................... 89
IBM Performance Management for Power Systems (PM for Power Systems) - support for
IBM i........................................................................................................................................... 97
PM Agent concepts....................................................................................................................97
Configuring PM Agent................................................................................................................98
Managing PM Agent.................................................................................................................104
IBM Systems Workload Estimator............................................................................................... 106
IBM Performance Tools for i.........................................................................................................107
Performance Tools concepts.................................................................................................. 107
Installing and configuring Performance Tools........................................................................111
Performance Tools reports..................................................................................................... 112
Scenarios: Performance.......................................................................................................................... 165
Scenario: Improving system performance after an upgrade or migration....................................... 165
Scenario: System monitor..................................................................................................................166
Scenario: Message monitor................................................................................................................167
Related information for Performance..................................................................................................... 168
Notices..............................................................................................................171
Programming interface information........................................................................................................ 172
Trademarks.............................................................................................................................................. 172
Terms and conditions.............................................................................................................................. 173
Performance
Monitoring and managing your system's performance is critical to ensure you are keeping pace with the
changing demands of your business.
To respond to business changes effectively, your system must change too. Managing your system, at first
glance, might seem like just another time-consuming job. But the investment pays off soon because the
system runs more efficiently, and this is reflected in your business. It is efficient because changes are
planned and managed.
Managing performance of any system can be a complex task that requires a thorough understanding of
that system's hardware and software. IBM® i is an industry leader in the area of performance management
and has many qualities that are not found in other systems, including unparalleled performance metrics,
always-on collection services, and graphical viewing of performance data. While understanding all
the different processes that affect system performance can be challenging and resolving performance
problems requires the effective use of a large suite of tools, the functions offered by IBM i are intended to
make this job easier for users.
This topic will guide you through the tasks and tools associated with performance management.
Note: By using the following code examples, you agree to the terms of the “Code license and disclaimer
information” on page 170.
Related concepts
Work management
What's new in IBM i 7.4
Collection Services
• Remote Direct Memory Access (RDMA) metrics: The new collection category *RDMA collects
performance information about network redundancy groups (NRGs) and RDMA links. The *RDMA
category is collected when Collection Services is configured to collect the *STANDARDP collection
profile.
• Disk metrics: New disk metrics are collected into the QAPMDISK file about the disk free space map.
• Configuration data: The following new configuration information was added to the QAPMCONF file:
– Physical processor model.
– Logical processor model.
– Partition universal unique identifier (UUID).
– The current number of secondary hardware threads per processor.
– The maximum number of secondary hardware threads per processor.
Small business
A small business most likely has fewer resources to devote to managing performance than a larger
business. For that reason, use as much automation as possible. You can use IBM Performance
Management for Power Systems (PM for Power Systems) to have your performance data sent directly
to IBM where it will be compiled and generated into reports for you. This not only saves you time, but IBM
also makes suggestions to you when your server needs an upgrade.
The following is a list of recommended performance applications for a small business:
• IBM Navigator for i Performance interface: Display and manage performance data.
• Collection Services: Collect sample data at user-defined intervals for later analysis.
• IBM Performance Management for Power Systems (PM for Power Systems): Automate the collection,
archival, and analysis of system performance data.
• IBM Performance Tools for i: Gather, analyze, and maintain system performance information.
• IBM Navigator for i Monitors: Observe graphical representations of system performance, and automate
responses to predefined events or conditions.
Mid-sized business
The mid-sized business probably has more resources devoted to managing performance than the small
business. You may still want to automate as much as possible and can also benefit from using IBM
Performance Management for Power Systems (PM for Power Systems).
The following is a list of recommended performance applications for a mid-sized business:
• IBM Navigator for i Performance interface: Display and manage performance data.
• Collection Services: Collect sample data at user-defined intervals for later analysis.
• IBM Performance Management for Power Systems (PM for Power Systems): Automate the collection,
archival, and analysis of system performance data.
• IBM Performance Tools for i: Gather, analyze, and maintain system performance information.
• IBM Navigator for i Monitors: Observe graphical representations of system performance, and automate
responses to predefined events or conditions.
Large business
The large business has resources devoted to managing performance.
The following is a list of recommended performance applications for a large business:
• IBM Navigator for i Performance interface: Display and manage performance data.
• Collection Services: Collect sample data at user-defined intervals for later analysis.
• IBM Performance Management for Power Systems (PM for Power Systems): Automate the collection,
archival, and analysis of system performance data.
• IBM Performance Tools for i: Gather, analyze, and maintain system performance information.
• IBM Navigator for i Monitors: Observe graphical representations of system performance, and automate
responses to predefined events or conditions.
• IBM i Job Watcher: Collect detailed information about a specific job or thread resource.
• IBM i Disk Watcher: Collect detailed information about disk performance data.
• Performance explorer: Collect detailed information about a specific application or system resource.
Related concepts
IBM Navigator for i Performance interface
Use the IBM Navigator for i Performance interface to view, collect, and manage performance data by
bringing together various performance information and tools into one centralized place.
Collection Services
Collection Services provides for the collection of system and job level performance data. It is the primary
collector of performance data.
IBM i Job Watcher
IBM i Job Watcher provides for the collection of job data for any or all jobs, threads, and tasks on
the system. It provides call stacks, SQL statements, objects being waited on, Java JVM statistics, wait
statistics and more which are used to diagnose job related performance problems.
IBM i Disk Watcher
IBM i Disk Watcher provides for the collection of disk performance data to diagnose disk related
performance problems.
IBM Performance Management for Power Systems (PM for Power Systems) - support for IBM i
The IBM Performance Management for Power Systems (PM for Power Systems) in support of IBM i
offering automates the collection, archival, and analysis of system performance data and returns reports
to help you manage system resources and capacity.
Performance Explorer
Performance Explorer collects more detailed information about a specific application, program or system
resource, and provides detailed insight into a specific performance problem. This includes the capability
both to perform several types and levels of traces and to run detailed reports.
Related reference
IBM Performance Tools for i
The IBM Performance Tools for i licensed program product includes many features that supplement or
extend the capabilities of the basic performance tools that are available in the operating system.
IBM Navigator for i Monitors
IBM Navigator for i monitors track current information about the performance of your system.
Additionally, you can use them to carry out predefined actions when a specific event occurs. Monitors
continue to monitor and perform any threshold commands or actions you specified until you stop the
monitor.
Developing a good performance management strategy will help you manage your system's performance.
Tracking performance
Tracking your system performance over time allows you to plan for your system's growth and ensures that
you have data to help isolate and identify the cause of performance problems. Learn which applications to
use and how to routinely collect performance data.
Tracking system performance helps you identify trends that can help you tune your system configuration
and make the best choices about when and how to upgrade your system. Moreover, when problems occur,
it is essential to have performance data from before and after the incident to narrow down the cause of
the performance problem, and to find an appropriate resolution.
The system includes several applications for tracking performance trends and maintaining a historical
record of performance data. Most of these applications use the data collected by Collection Services. You
can use Collection Services to watch for trends in the following areas:
• Trends in system resource utilization. You can use this information to plan and specifically tailor system
configuration changes and upgrades.
• Identification of stress on physical components of the configuration.
• Balance between the use of system resource by interactive jobs and batch jobs during peak and normal
usage.
• Configuration changes. You can use Collection Services data to accurately predict the effect of changes
such as adding user groups, increasing the number of interactive jobs, and other configuration changes.
• Identification of jobs that might be causing problems with other activity on the system.
• Utilization level and trends for available communication lines.
The following tools will help you monitor your system performance over time:
• IBM Navigator for i Performance interface
• Collection Services
• IBM Performance Management for Power Systems (PM for Power Systems)
Related concepts
IBM Navigator for i Performance interface
Use the IBM Navigator for i Performance interface to view, collect, and manage performance data by
bringing together various performance information and tools into one centralized place.
Collection Services
Collection Services provides for the collection of system and job level performance data. It is the primary
collector of performance data.
IBM Performance Management for Power Systems (PM for Power Systems) - support for IBM i
The IBM Performance Management for Power Systems (PM for Power Systems) in support of IBM i
offering automates the collection, archival, and analysis of system performance data and returns reports
to help you manage system resources and capacity.
Related reference
IBM Navigator for i Monitors
IBM Navigator for i monitors track current information about the performance of your system.
Additionally, you can use them to carry out predefined actions when a specific event occurs. Monitors
continue to monitor and perform any threshold commands or actions you specified until you stop the
monitor.
Identifying and resolving common performance problems
Many different performance problems often affect common areas of the system. Learn how to research
and resolve problems in common areas, for example, backup and recovery.
When performance problems occur on the system, they often affect certain areas of the system first.
Refer to the following table for some methods available for researching performance on these system
areas.
Area: Main storage
Description: Investigate faulting and the wait-to-ineligible transitions.
Available tools:
• Performance Data Investigator found in IBM Navigator for i.
• Disk storage metrics within the IBM Navigator for i monitors.
• Work with System Status (WRKSYSSTS) command.
• The Memory Pools function under Work Management in IBM Navigator for i.
Area: Communications
Description: Find slow lines, errors on the line, or too many users for the line.
Available tools:
• Performance Data Investigator found in IBM Navigator for i.
• IBM Performance Tools for i Component Report.
• LAN utilization metrics within the IBM Navigator for i monitors.
Area: Software
Description: Investigate locks and mutual exclusions (mutexes).
Available tools:
• Performance Data Investigator found in IBM Navigator for i.
• IBM Performance Tools for i Locks report.
• IBM Performance Tools for i Trace report.
• Work with Object Locks (WRKOBJLCK) command.
• View details of suspected jobs under Work Management in IBM Navigator for i.
• Work with System Activity (WRKSYSACT) command.
• Display Performance Data (DSPPFRDTA) command.
Area: Backup and recovery
Description: Investigate areas that affect backup and recovery and save and restore operations.
Available tools:
• IBM i Performance Capabilities Reference (Save/Restore Performance chapter).
Related concepts
IBM Navigator for i Performance interface
Use the IBM Navigator for i Performance interface to view, collect, and manage performance data by
bringing together various performance information and tools into one centralized place.
Work management
Related reference
CL commands for performance
The operating system includes several CL commands to help you manage and maintain system performance.
Monitor metrics
To effectively monitor system performance, you must decide which aspects of system performance you
want to monitor. IBM Navigator for i offers various performance measurements, which are known as
metrics, to help you pinpoint different aspects of system performance.
IBM i Performance Capabilities Reference PDF
See the Save/Restore Performance chapter of the Performance Capabilities Reference PDF for information
about backup and recovery related performance.
Backup and recovery FAQ
This topic contains questions and answers about backup and recovery procedures and concepts.
• Job Watcher
• Disk Watcher
• Performance Explorer
Related concepts
Collection Services
Collection Services provides for the collection of system and job level performance data. It is the primary
collector of performance data.
IBM i Job Watcher
IBM i Job Watcher provides for the collection of job data for any or all jobs, threads, and tasks on
the system. It provides call stacks, SQL statements, objects being waited on, Java JVM statistics, wait
statistics and more which are used to diagnose job related performance problems.
IBM i Disk Watcher
IBM i Disk Watcher provides for the collection of disk performance data to diagnose disk related
performance problems.
Performance Explorer
Performance Explorer collects more detailed information about a specific application, program or system
resource, and provides detailed insight into a specific performance problem. This includes the capability
both to perform several types and levels of traces and to run detailed reports.
Collecting information about an application's performance
An application might be performing slowly for various reasons. You can use several of the tools included in
IBM i and other licensed programs to help you get more information.
Collecting information about an application's performance is quite different from collecting information
about system performance. Collecting application information can be done only with certain performance
applications such as Performance Explorer and Job Watcher. Alternatively, you can get an overview of
application performance by using IBM Performance Tools for i to track and analyze server jobs.
Note: Collecting an application's performance data can significantly affect the performance of your
system. Before beginning the collection, make sure that you have tried all other collection options.
The Start Performance Trace (STRPFRTRC) command collects multiprogramming and transaction data.
After running this command, you can export the data to a database file with the Dump Trace (DMPTRC)
command.
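For example, a typical sequence might look like the following sketch (MYLIB and MYTRACE are placeholder
names, and the exact parameters that are available depend on your release):
STRPFRTRC                        /* Start collecting trace data                  */
/* ... run the workload that you want to analyze ... */
ENDPFRTRC                        /* End the trace collection                     */
DMPTRC MBR(MYTRACE) LIB(MYLIB)   /* Dump the trace data into a database member   */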
Related concepts
IBM i Job Watcher
IBM i Job Watcher provides for the collection of job data for any or all jobs, threads, and tasks on
the system. It provides call stacks, SQL statements, objects being waited on, Java JVM statistics, wait
statistics and more which are used to diagnose job related performance problems.
Performance Explorer
Performance Explorer collects more detailed information about a specific application, program or system
resource, and provides detailed insight into a specific performance problem. This includes the capability
both to perform several types and levels of traces and to run detailed reports.
Related reference
IBM Navigator for i Monitors
IBM Navigator for i monitors track current information about the performance of your system.
Additionally, you can use them to carry out predefined actions when a specific event occurs. Monitors
continue to monitor and perform any threshold commands or actions you specified until you stop the
monitor.
IBM Performance Tools for i
The IBM Performance Tools for i licensed program product includes many features that supplement or
extend the capabilities of the basic performance tools that are available in the operating system.
Start Performance Trace (STRPFRTRC) command
See the Start Performance Trace (STRPFRTRC) command to collect Multiprogramming level (MPL) and
Transaction trace data.
Java performance in IBM i
IBM i provides several configuration options and resources for optimizing the performance of Java
applications or services on the system. Use this topic to learn about the Java environment and how
to get the best possible performance from Java-based applications.
Dumping trace data
Use the Dump Trace (DMPTRC) command to put information from an internal trace table into a database
file:
DMPTRC MBR(member-name) LIB(library-name)
You must specify a member name and a library name in which to store the data. You can collect sample-
based data with Collection Services at the same time that you collect trace information. When you
collect sample data and trace data together like this, you should place their data into consistently named
members. In other words, the names that you provide in the CRTPFRDTA TOMBR and TOLIB parameters
should be the same as the names that you provide in the DMPTRC MBR and LIB parameters.
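For example, a consistent naming sketch (MYLIB and MYTRACE are placeholder names, and FROMMGTCOL(*ACTIVE)
assumes that you are converting the currently active collection object):
CRTPFRDTA FROMMGTCOL(*ACTIVE) TOLIB(MYLIB) TOMBR(MYTRACE)   /* Sample data from Collection Services */
DMPTRC MBR(MYTRACE) LIB(MYLIB)                              /* Trace data from STRPFRTRC            */
Using the same member and library names makes it easier to correlate the sample data and the trace data
later during analysis.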
Related concepts
Collection Services
Collection Services provides for the collection of system and job level performance data. It is the primary
collector of performance data.
Related reference
Dump Trace (DMPTRC) command
See the Dump Trace (DMPTRC) command to put information from an internal trace table into a database file.
Dumping memory
The Dump Main Memory Information (DMPMEMINF) command dumps information about pages of main
memory to a file.
To dump memory data, issue the following command:
DMPMEMINF OUTFILE(MYLIBRARY/DMPMEMFILE)
The command to view the dump could be something like the following SQL:
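For example, a simple query to browse the dumped data might look like the following sketch (the column
layout of the output file varies by release, so SELECT * is used here):
SELECT *                            -- Column names vary by release
  FROM MYLIBRARY.DMPMEMFILE         -- Output file from the DMPMEMINF example above
  FETCH FIRST 100 ROWS ONLY         -- The file can contain a very large number of rows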
Related reference
Dump Main Memory Information (DMPMEMINF) command
See the Dump Main Memory Information (DMPMEMINF) command to dump information about pages of main
memory to a file.
2. Idle waits. Idle waits are a normal and expected wait condition. Idle waits occur when the thread
is waiting for external input. This input may come from a user, the network, or another application.
Until that input is received, there is no work to be done.
3. Blocked waits. Blocked waits are a result of serialization mechanisms to synchronize access to
shared resources. Blocked waits may be normal and expected. Examples include serialized access to
updating a row in a table, disk I/O operations, or communications I/O operations. However, blocked
waits may not be normal and it is these unexpected block points that are situations where wait
accounting can be used to analyze the wait conditions.
You can think of the lifetime of a thread or a task in a graphical manner, breaking out the time spent
running or waiting. This graphical description is called the "run-wait time signature". At a high level, the
signature is an alternating sequence of periods in which the thread is running on a CPU and periods in
which it is waiting.
Traditionally, the focus for improving the performance of an application was to have it use the CPU
as efficiently as possible. On IBM i with wait accounting, we can examine the time spent waiting and
understand what contributed to that wait time. If there are elements of waiting that can be reduced or
eliminated, then the overall performance can also be improved.
Nearly all of the wait conditions in the IBM i operating system have been identified and enumerated -
that is, each unique wait point is assigned a numerical value. This is possible because IBM has complete
control over both the licensed internal code and the operating system. As of the IBM i 6.1 release,
there are 268 unique wait conditions. Keeping track of over 250 unique wait conditions for every thread
and task would consume too much storage, so a grouping approach has been used. Each unique wait
condition is assigned to one of 32 groups, or "buckets". As threads or tasks go into and out of wait
conditions, the task dispatcher maps the wait condition to the appropriate group.
With wait accounting applied to the run-wait time signature, we can now identify the components that
make up the time the thread or task was waiting. For example, if the thread's wait time was due to reading
and writing data to disk, locking records for serialized access, and journaling the data, the signature would
show that wait time broken out into disk I/O, record lock, and journaling components. When you understand
the types of waits that are involved, you can start to ask yourself some questions. For this example, some of
the questions that could be asked are:
• Are disk reads causing page faults? If so, are my pool sizes appropriate?
• What programs are causing the disk reads and writes? Is there unnecessary I/O that can be reduced or
eliminated? Or can the I/O be done asynchronously?
• Is my record locking strategy optimal? Or am I locking records unnecessarily?
• What files are being journaled? Are all the journals required and optimally configured?
The following are the 32 wait groups or "buckets" that have been defined. The definition of the wait
groups varies from release to release and may change in the future.
1. Time dispatched on a CPU
2. CPU queuing
3. Reserved
4. Other waits
5. Disk page faults
6. Disk non-fault reads
7. Disk space usage contention
8. Disk operation start contention
9. Disk writes
10. Disk other
11. Journaling
12. Semaphore contention
13. Mutex contention
14. Machine level gate serialization - call IBM support
15. Seize contention - call IBM support
16. Database record lock contention
17. Object lock contention
18. Ineligible waits
19. Main storage pool contention - call IBM support
20. Journal save while active
21. Reserved
22. Reserved
23. Reserved
24. Socket transmits
25. Socket receives
26. Socket other
27. IFS
28. PASE
29. Data queue receives
30. Idle/waiting for work
31. Synchronization Token contention
32. Abnormal contention - call IBM support
You may see many of these wait groups surface if you do wait analysis on your application. Understanding
what your application is doing and why it is waiting in those situations can help you reduce or eliminate
unnecessary waits.
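As a starting point for this kind of analysis, you can query the wait bucket counters that Collection Services
writes to the QAPMJOBWT database file. The following is only a sketch (MYLIB is a placeholder for the library
that holds your Collection Services database files, and the column layout varies by release):
SELECT *                            -- One row per job, task, or thread per interval
  FROM MYLIB.QAPMJOBWT              -- Collection Services job wait data
  FETCH FIRST 100 ROWS ONLY         -- Browse a subset first; the file can be large
Tools such as IBM i Job Watcher and the Performance Data Investigator present the same wait bucket
information graphically.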
If we take group 16 (Database record lock contention), there are actually several different enumerated
waits within this group. They are:
• Read
• Update
• Weak
• Transfer
• Check
• Conflict exit
Call Stacks
IBM i also manages call stacks for every thread or task. This is independent of the wait accounting
information. The call stack shows the programs that have been invoked and can be very useful in
understanding the wait condition, because it reveals some of the logic that led up to either holding a
resource or wanting to get access to it. The combination of holder, waiter, and call stacks provides a very
powerful capability for analyzing wait conditions.
Scenario: Improving system performance after an upgrade or migration
Situation
You recently upgraded your system to the newest release. After completing the upgrade and resuming
normal operations, your system performance has decreased significantly. You would like to identify the
cause of the performance problem and restore your system to normal performance levels.
Details
Several problems may result in decreased performance after upgrading the operating system. You can use
the performance management tools included in IBM i and IBM Performance Tools for i licensed program
(5770-PT1) to get more information about the performance problem and to narrow down suspected
problems to a likely cause.
1. Check CPU utilization. Occasionally, a job will be unable to access some of its required resources after
an upgrade. This may result in a single job consuming an unacceptable amount of the CPU resources.
• Use the WRKSYSACT, WRKSYSSTS, or WRKACTJOB commands, or IBM Navigator for i monitors, to find
the total CPU utilization (see the command sketch after this list).
• If CPU utilization is high, for example, greater than 90%, check the amount of CPU utilized by active
jobs. If a single job is consuming more than 30% of the CPU resources, it may be missing file calls
or objects. You can then refer to the vendor, for vendor-supplied programs, or the job's owner or
programmer for additional support.
2. Start a performance trace with the STRPFRTRC command, and then use the system and component
reports to identify and correct the following possible problems:
• If the page fault rate for the machine pool is higher than 10 faults/second, give the machine pool
more memory until the fault rate falls below this level.
• If the disk utilization is greater than 40%, look at the waiting and service time. If these values are
acceptable, you may need to reduce workload to manage priorities.
• If the page faults in the user pool are unacceptably high, you might want to automatically tune
performance.
3. Run the job summary report and refer to the Seize lock conflict report. If the number of seize or lock
conflicts is high, ensure that the access path size is set to 1TB. If the seize or lock conflicts are on
a user profile, and if the referenced user profile owns many objects, reduce the number of objects
owned by that profile.
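A sketch of the commands from step 1 (parameter names are from memory and should be verified on your
release; WRKSYSACT requires the IBM Performance Tools for i licensed program):
WRKSYSACT                        /* Overall system activity, highest CPU consumers    */
WRKACTJOB SEQ(*CPUPCT)           /* Active jobs, sorted by CPU percentage             */
WRKSYSSTS                        /* System status, including pool faulting rates      */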
Related concepts
Performance Tools reports
Performance Tools reports provide information on data that was collected over time. Use these reports to
get more information about the performance and use of system resources.
Adjusting performance automatically
Most users should set up the system to make performance adjustments automatically. When new systems
are shipped, they are configured to adjust themselves automatically.
Related reference
STRPFRTRC command
See the Start Performance Trace (STRPFRTRC) command to collect Multiprogramming level (MPL) and
Transaction trace data.
The IBM Performance Management for Power Systems (PM for Power Systems) in support of IBM i
offering automates the collection, archival, and analysis of system performance data and returns reports
to help you manage system resources and capacity.
Related reference
CL commands for performance
The operating system includes several CL commands to help you manage and maintain system performance.
IBM Navigator for i Monitors
IBM Navigator for i monitors track current information about the performance of your system.
Additionally, you can use them to carry out predefined actions when a specific event occurs. Monitors
continue to monitor and perform any threshold commands or actions you specified until you stop the
monitor.
IBM Performance Tools for i
The IBM Performance Tools for i licensed program product includes many features that supplement or
extend the capabilities of the basic performance tools that are available in the operating system.
Tuning performance
When you have identified a performance problem, you will want to tune the system to fix it.
The primary aim of performance tuning is to make the most efficient use of the system resources.
Performance tuning is a way to adjust the performance of the system either manually or automatically.
Many options exist for tuning your system. Each system environment is unique in that it requires you to
observe performance and make adjustments that are best for your environment; in other words, you are
required to do routine performance monitoring.
In addition, you may also want to consider some tuning options that allow processes and threads to
achieve improved affinity for memory and processor resources.
Related reference
Performance system values: Thread affinity
See the thread affinity system value to specify whether secondary threads have affinity to the same group
of processors and memory as the initial thread.
System and user defaults system values: Processor multitasking
See the processor multitasking system value to specify whether processor multitasking is on, off, or
determined by the system.
To observe the system performance, you can use the Work with System Status (WRKSYSSTS), Work with
Disk Status (WRKDSKSTS), and Work with Active Jobs (WRKACTJOB) commands. With each observation
period, you should examine and evaluate the measurements of system performance against your
performance goals.
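For example, an observation period might begin with commands like these (a sketch; RESET(*YES) restarts
the elapsed-time statistics so that the display reflects only the new observation interval):
WRKSYSSTS                        /* System status and memory pool activity   */
WRKDSKSTS                        /* Disk unit status and percent busy        */
WRKACTJOB RESET(*YES)            /* Active jobs, with statistics reset       */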
1. Remove any irregular system activity. Examples of irregular activities that can cause severe performance
degradation are interactive program compilations, communications error recovery procedures (ERP),
open query file (OPNQRYF) operations, application errors, and sign-off activity.
2. Use the WRKSYSSTS, WRKDSKSTS, WRKACTJOB and WRKSYSACT CL commands to display
performance data.
3. Allow the system to collect data for a minimum of 5 minutes.
4. Evaluate the measures of performance against your performance goals. Typical measurements
include:
• Interactive throughput and response time, available from the WRKACTJOB display.
• Batch throughput. Observe the auxiliary input/output (AuxIO) and CPU percentage (CPU%) values
for active batch jobs.
• Spooled throughput. Observe the auxiliary input/output (AuxIO) and CPU percentage (CPU%) values
for active writers.
5. If you observe performance data that does not meet your expectations, tune your system based on the
new data. Be sure to:
• Measure and compare all key performance measurements.
• Make and evaluate adjustments one at a time.
Review performance
Once you have set good tuning values, you should periodically review them to ensure your system
continues to do well. Ongoing tuning consists of observing aspects of system performance and adjusting
to recommended guidelines.
To gather meaningful statistics, you should observe system performance during typical levels of activity.
For example, statistics gathered while no jobs are running on the system are of little value in assessing
system performance. If performance is not satisfactory in spite of your best efforts, you should evaluate
the capabilities of your configuration. To meet your objectives, consider the following:
• Processor upgrades
• Additional storage devices and controllers
• Additional main storage
• Application modification
By applying one or more of these approaches, you should achieve your objectives. If, after a reasonable
effort, you are still unable to meet your objectives, you should determine whether your objectives are
realistic for the type of work you are doing.
Determine what to tune
If your system performance has degraded and needs tuning, you need to identify the source of the
performance problem and make specific corrections.
Related reference
Researching a performance problem
There are many options available to help you identify and resolve performance problems. Learn how to
use the available tools and reports that can help you find the source of the performance problem.
• Machine (*MACHINE) memory pool size (QMCHPOOL system value)
• Base (*BASE) memory pool activity level (QBASACTLVL system value)
• Pool size and activity level for the shared pool *INTERACT
• Pool size and activity level for the shared pool *SPOOL
• Pool sizes and activity levels for the shared pools *SHRPOOL1-*SHRPOOL60
When dynamic adjustment is in effect (the QPFRADJ system value is set to 2 or 3), the job QPFRADJ that
runs under profile QSYS is seen as active on the system.
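For example, you can display or change the setting with the system value commands (a sketch; a value of
2 requests adjustment at IPL and automatic adjustment):
WRKSYSVAL SYSVAL(QPFRADJ)                /* Display the current setting         */
CHGSYSVAL SYSVAL(QPFRADJ) VALUE('2')     /* Adjust at IPL and automatically     */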
Related concepts
Memory pools
Client performance
While the system administrator often has little control of the client-side of the electronic business
network, you can use these recommendations to ensure that client devices are optimized for an electronic
business environment.
Clients consisting of a PC with a Web browser often represent the electronic business component that
administrators have the least direct control over. However, these components still have a significant effect
on the end-to-end response time for web applications.
To help ensure good end-to-end performance, client PCs should:
• Have adequate memory. Interfaces that use complex forms, graphics, and resource-intensive applets may
also place demands on the client's processor.
• Use a high-speed and optimized network connection. Many communication adapters on a client PC may
function while they are not optimized for their network environment. For more information, refer to the
documentation for your communication hardware.
• Use browsers that fully support the required technologies. Moreover, browser support and performance
should be a major concern when designing the Web interface.
Network performance
The network design, hardware resources, and traffic pressure often have a significant effect on the
performance of electronic business applications. You can use this topic for information on how to optimize
network performance, and tune server communication resources.
The network often plays a major role in the response time for web applications. Moreover, the
performance impact for network components is often complex and difficult to measure because network
traffic and the available bandwidth may change frequently and are affected by influences the system
administrator may not have direct control over. However, there are several resources available to help you
monitor and tune the communication resources on your server.
Refer to the following topics for more information:
Related concepts
IBM Navigator for i Performance interface
Use the IBM Navigator for i Performance interface to view, collect, and manage performance data by
bringing together various performance information and tools into one centralized place.
IBM Navigator for i Monitors
IBM Navigator for i monitors track current information about the performance of your system.
Additionally, you can use them to carry out predefined actions when a specific event occurs. Monitors
continue to monitor and perform any threshold commands or actions you specified until you stop the
monitor.
Tracking performance
Tracking your system performance over time allows you to plan for your system's growth and ensures that
you have data to help isolate and identify the cause of performance problems. Learn which applications to
use and how to routinely collect performance data.
Related reference
Performance data files: QAPMTCP
This Collection Services database file contains system-wide TCP/IP data.
Performance data files: QAPMTCPIFC
This Collection Services database file contains TCP/IP data that is related to individual TCP/IP interfaces.
IBM i Performance Capabilities Reference PDF
See the Communications Performance chapter of the Performance Capabilities Reference PDF to help you
plan for and manage communication resources.
Java performance in IBM i
IBM i provides several configuration options and resources for optimizing the performance of Java
applications or services on the system. Use this topic to learn about the Java environment and how
to get the best possible performance from Java-based applications.
Java is often the language of choice for web-based applications. However, Java applications may
require some optimization, both of the IBM i environment and of the Java application, to get optimal
performance.
Use the following resources to learn about the Java environment in IBM i and the available tips and tools
for analyzing and improving Java performance.
Related concepts
IBM Navigator for i Performance interface
Use the IBM Navigator for i Performance interface to view, collect, and manage performance data by
bringing together various performance information and tools into one centralized place.
Java
There are several important configuration choices and tools to help you get the best performance from
Java applications.
Related reference
Collecting information about an application's performance
An application might be performing slowly for various reasons. You can use several of the tools included in
IBM i and other licensed programs to help you get more information.
IBM i Performance Capabilities Reference PDF
See the Java Performance chapter of the Performance Capabilities Reference PDF to help you optimize the
performance of Java applications and learn performance tips for programming in Java.
Java and WebSphere Performance on IBM eServer iSeries Servers
Use this IBM Redbooks publication to learn how to plan for and configure your operating environment to
maximize Java and WebSphere performance, and to help you collect and analyze performance data.
WebSphere J2EE Application Development for the IBM eServer iSeries Server
This IBM Redbooks publication provides an introduction to J2EE, and offers suggestions and examples to
help you successfully implement J2EE applications on the server.
IBM i Performance Capabilities Reference PDF
See the Web Server and WebSphere Performance chapter of the Performance Capabilities Reference PDF
for HTTP server performance specifications, planning information, and performance tips.
IBM HTTP Server (powered by Apache): An Integrated Solution for IBM eServer iSeries servers
Use this IBM Redbooks publication to get an in-depth description of HTTP Server (powered by Apache) for
i5/OS, including examples for configuring HTTP Server in common usage scenarios.
AS/400 HTTP Server Performance and Capacity Planning
Use this IBM Redbooks publication to learn about HTTP server impacts on performance tuning and
planning. This publication also includes suggestions for using performance management tools to collect,
interpret, and respond to web server performance data.
WebSphere performance
WebSphere Application Server is the electronic business application deployment environment of choice.
Use this topic to learn how to plan for and optimize performance in a WebSphere environment.
Managing system performance in a WebSphere environment presents several challenges to the
administrator. Web-based transactions may consume more resources, and consume them differently than
traditional communication workloads.
Refer to the following topics and resources to learn how to plan for optimal performance, and to adjust
system resources in a WebSphere environment.
Related reference
Collection Services data files: QAPMWASAPP
This Collection Services database file contains information about applications that run on the IBM
WebSphere Application Server.
Collection Services data files: QAPMWASCFG
This Collection Services database file contains configuration information about the different server jobs
that run on the IBM WebSphere Application Server.
Collection Services data files: QAPMWASEJB
This Collection Services database file contains information about applications with enterprise JavaBeans
(EJBs) running on the IBM WebSphere Application Server.
Collection Services data files: QAPMWASRSC
This Collection Services database file contains information about pooled resources that are associated
with an IBM WebSphere Application Server.
Collection Services data files: QAPMWASSVR
This Collection Services database file contains information about the server jobs that run on the IBM
WebSphere Application Server.
WebSphere Application Server Performance web site
This website provides resources for each version of WebSphere Application Server, including many useful
performance tips and recommendations. This resource is valuable for environments using servlets,
JavaServer Pages (JSPs), and Enterprise JavaBeans (EJBs).
DB2 UDB/WebSphere Performance Tuning Guide
This IBM Redbooks publication provides an introduction to both the WebSphere and DB2 environments,
and offers suggestions, examples, and solutions to common performance problems that can help you
optimize WebSphere and DB2 performance.
Java and WebSphere Performance on IBM eServer iSeries Servers
Use this IBM Redbooks publication to learn how to plan for and configure your operating environment to
maximize Java and WebSphere performance, and to help you collect and analyze performance data.
IBM i Performance Capabilities Reference PDF
See the Web Server and WebSphere Performance chapter of the Performance Capabilities Reference PDF
for performance tips specific to WebSphere Application Server.
Each of the collectors has unique characteristics.
Collection Services
Collection Services provides for the collection of system and job level performance data. It is the
primary collector of performance data. You can run this continuously to know what is happening
with your system. Collection Services data is deposited into a management collection object and
then converted and put into database files. The interval data that is collected is specified by either
application-defined or user-defined settings.
IBM i Job Watcher
IBM i Job Watcher provides for the collection of job data for any or all jobs, threads, and tasks on the
system. It provides call stacks, SQL statements, objects being waited on, Java JVM statistics, wait
statistics and more which are used to diagnose job related performance problems.
IBM i Disk Watcher
IBM i Disk Watcher provides for the collection of disk performance data to diagnose disk related
performance problems.
Performance Explorer
Performance Explorer provides for the collection of detailed data at a program and application level
to diagnose problems. It also traces the flow of work in an application and can be used to diagnose
difficult performance problems. Application-defined Performance Explorer trace points, such as those in
Domino, NetServer, or WebSphere servers, specify the data that is collected. It is intended to be used
as directed by IBM. Performance Explorer data is deposited into a management collection object and
then converted and put into database files.
The performance data contained in any of the database files can be accessed through APIs or CL
commands. The performance data contained in some of the database files can be investigated and
analyzed using one or more of a variety of tools that are further described in Viewing and analyzing data.
Collection Services
Collection Services provides for the collection of system and job level performance data. It is the primary
collector of performance data.
Collection Services samples system and job level performance data continuously and automatically with
minimal system overhead. It can collect data at regular time intervals ranging from 15 seconds up to 1 hour
(the default is 15 minutes).
Data is collected from many system resources, such as:
• CPU
• Memory pools
• Disk (internal and external)
• Communications
Use Collection Services performance data to:
• Monitor how your system is running
• Investigate a reported performance problem
• Understand resource usage and how it changed over time
Collection Services can be configured and managed through the IBM Navigator for i Performance interface
or CL commands.
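For example, from a CL command line (a sketch only; the CFGPFRCOL parameters shown here are
assumptions and should be verified on your release):
CFGPFRCOL INTERVAL(15.00)        /* Collect sample data every 15 minutes          */
STRPFRCOL                        /* Start, or cycle, Collection Services          */
ENDPFRCOL                        /* End collection when it is no longer needed    */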
Related tasks
Activating PM Agent
PM Agent is a part of the operating system and you must activate it to use its collecting capabilities.
Related reference
Start Performance Collection (STRPFRCOL) command
See the Start Performance Collection (STRPFRCOL) command for information on how to start data
collection.
CL commands for performance
The operating system includes several CL commands to help you manage and maintain system performance.
Performance Management APIs
See the Performance Management APIs for information about how to use Performance Management APIs
to collect and manage performance data.
Collection Services data files
See the Collection Services data files topic for information about the database files that are created by
Collection Services that contain performance data.
How Collection Services works
Collection Services stores data for each collection in a single collection object from which you can create
as many different sets of database files as you need.
Storing the data in a single collection object results in lower system overhead when collecting
performance data. If you elect to create the database files during collection, Collection Services uses
a lower priority (50) batch job to update these files. This low collection overhead makes it practical to
collect detailed performance data at short intervals on a continuous basis. Collection Services enables
you to establish a network-wide system policy for collecting and retaining performance data and to
implement that policy automatically. For as long as you retain the management collection objects, if the
need arises, you have the capability to look back and analyze performance-related events down to the
level of detail that you collected.
The following list provides an overview of the Collection Services elements:
User interfaces
Several methods exist that allow you to access the different features of Collection Services. For
example, you can use CL commands, APIs, and the IBM Navigator for i Performance interface.
General properties
General properties define how a collection should be accomplished, and they control automatic
collection attributes.
Data categories
Data categories identify the types of data to collect. You can configure categories independently to
control what data is collected and how often the data is collected.
Collection profiles
Collection profiles provide a means to save and activate a particular category configuration.
Performance collector
The performance collector uses the general properties and category information to control the
collection of performance data. You can start and stop the performance collector, or configure it
to run automatically.
Collection Object
The collection object, a system object with a type of *MGTCOL, serves as an efficient storage medium
for large quantities of performance data.
Create Performance Data (CRTPFRDTA) command
The CRTPFRDTA command processes data that is stored in the management collection object and
generates the performance database files.
Performance database
The database files store the data that is processed by the CRTPFRDTA command. The files can be
divided into these categories: Performance data files that contain time interval data, configuration
data files, and trace data files.
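For example, to generate the database files from a retained collection object (a sketch; Q123456789 and
MYLIB are placeholder names for a *MGTCOL object and a target library):
CRTPFRDTA FROMMGTCOL(MYLIB/Q123456789) TOLIB(MYLIB)   /* Build the QAPM* database files */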
Related tasks
Creating database files
To create database files, follow these steps.
Related reference
Create Performance Data (CRTPFRDTA) command
See the Create Performance Data (CRTPFRDTA) command for information about creating performance
database files.
– Job (MI tasks and threads): This category contains information on every active task, job, and thread in
the system. The data that is collected is provided by the machine interface (MI).
– Job (operating system): This category contains information on every active job in the system. The
data that is collected is provided by the operating system.
– Disk storage: This category contains system storage unit data. It includes base storage unit
information and operational data for disk drives.
• Standard - The Standard collection profile includes the data categories that are typically needed by
the tools in IBM Performance Tools for i licensed program. The data categories in the Standard profile
include all the categories in the Minimum profile plus the following categories:
– Memory pool tuning: This category contains pool tuning configuration data for each system memory
pool.
– Subsystem: This category contains data on active subsystems and subsystem pools. Only the first
instance of this data is reported in the database if more than one instance is encountered.
– SNADS: This category contains transaction boundary information specific to active SNADS jobs in the
system.
– Local response time: This category contains response time information for workstations that are
connected to 5254 controllers. Response time data is reported for each workstation and is saved in a
set of response time buckets.
– APPN: This category contains data on the system's APPN support. The data contains both general
information and data classified according to transaction type and work activity.
– SNA: This category contains data on the system's SNA support. Controller, task, and session data are
reported for each active T2 task.
– TCP/IP (base): This category contains system-wide performance information for TCP/IP.
– User-defined transaction data: This category contains data for application-defined transactions
rather than IBM-defined transactions. You can create your own user-defined transactions.
– IBM Domino for i: This category is included in this profile when the IBM Domino for i licensed
program is installed on the system.
– IBM HTTP Server for i (powered by Apache): This category is included in this profile when the IBM
HTTP Server for i licensed program is installed on the system.
– External storage: This category contains non-standardized data for disk units that are externally
attached to an IBM i partition.
– System internal data: This category contains internal data for the system.
– Removable storage: This category contains data about removable storage devices that are connected
to the system, more specifically tape device data.
– SQL: This category contains performance information for SQL, more specifically SQL Plan Cache data.
• Standard plus protocol - The Standard plus protocol collection profile includes communications
protocol data categories and the data categories that are typically needed by the tools in IBM
Performance Tools for i licensed program. The data categories in the Standard plus protocol profile
include all the categories in the Standard profile with the addition of the following categories:
– Network server: This category contains information about network servers. For Integrated xSeries
Servers, data is reported for CPU utilization. For virtual I/O adapters on hosting partitions (partitions
that provide the physical resources), data is provided about the I/O activity that occurs within this
partition due to the virtual device support that it is providing on behalf of guest partitions.
– Communications (base): This category contains base protocol information for each communication
line that is available for use (varied on).
– Communications (station): This category contains station information for certain communication
lines. Data is reported for each station that is available for use (varied on). Protocols supporting this
data are Token Ring, Ethernet, DDI, Frame Relay, and X.25.
– Communications (SAP): This category contains Service Access Point (SAP) information for certain
communication lines. Data is reported for each configured SAP within lines available for use (varied
on). Protocols supporting this data are Token Ring, Ethernet, DDI, and Frame Relay.
– Data port services: This category contains performance data that is obtained from data port
services. Data port services is Licensed Internal Code that supports the transfer of large volumes
of data between a source system and one of any number of specified target systems in an IBM i
cluster environment. Data port services is used by Licensed Internal Code clients, such as remote independent auxiliary storage pool (ASP) mirroring.
– TCP/IP interface: This category contains information for each active TCP/IP interface.
– RDMA: This category contains performance data about network redundancy groups and Remote
Direct Memory Access (RDMA) links.
• Enhanced capacity planning - The data categories in the Enhanced capacity planning profile include
all the categories in the Standard plus protocol profile with the addition of the PEX Data - Processor
Efficiency data category. The PEX Data - Processor Efficiency data category contains the cycles
per instruction for Performance Explorer (PEX) data. Data is collected to enhance capacity planning
capabilities or for other purposes. Special considerations apply when you use this category:
– A Performance Explorer definition, QPMIPEXPEI, is created. If the Performance Explorer definition
exists, it is deleted and re-created.
– This category requires Collection Services to start a Performance Explorer (PEX) collection (session-
ID QPMINTPEXD). This collection can conflict with other Performance Explorer collections.
– Do not end or start the QPMINTPEXD session manually because it affects the validity of the data
collected.
– When collection of this category stops, it also ends the Performance Explorer collection for session
QPMINTPEXD.
IBM no longer recommends using this profile.
• Custom - This option allows you to customize not only the data categories that are collected, but also the collection interval. You can have different categories of data collected at different intervals.
Related tasks
Creating a Collection Services custom profile
Create a custom Collection Services profile by doing the following.
System monitor database files and collections
The set of QAPMSMxxx database files provides system monitor metrics. Monitored metrics are most often calculated as percentages (for example, CPU utilization) or rates (for example, bytes per second). Monitored metrics are also system-level metrics, which means that data for multiple entities (jobs, disks, and so on) is grouped, summarized, or averaged.
System monitor database files can be created for any Collection Services collection, even when system
monitoring is not enabled.
• Use the “Create Standard Summary Data (CRTPFRSUM)” parameter on the Configure Performance
Collection (CFGPFRCOL) command. This parameter enables the creation of the system monitor files for
the system created standard database file collection.
• Use the “Create Standard Summary Data (CRTPFRSUM)” parameter on the Create Performance Data
(CRTPFRDTA) command. This parameter enables the creation of system monitor files for the user
created standard database file collection.
Note: When the system monitor database files for a standard database file collection are created, the
interval in the QAPMSMxxx files is the same as specified for CRTPFRDTA to be consistent with the rest of
the collection.
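For example, the two approaches described in the preceding list might look like this (a sketch; the collection object and library names are illustrative):
CFGPFRCOL CRTPFRSUM(*YES)  /* system-created standard database file collection */
CRTPFRDTA FROMMGTCOL(QPFRDATA/Q123456789) CRTPFRSUM(*YES)  /* user-created collection */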
When Collection Services system monitoring is enabled, a second create performance data job (CRTPFRDTA2) is submitted to create a second Collection Services file-based collection that consists of the system monitor database files. This job (CRTPFRDTA2) operates similarly to the existing CRTPFRDTA job and uses the configured collection library. The differences between the system monitor CRTPFRDTA2 job and the CRTPFRDTA job are as follows:
• The interval for the CRTPFRDTA2 job is fixed at 15 seconds. The interval for the CRTPFRDTA job is the
configured default collection interval (usually 15 minutes).
• The CRTPFRDTA2 job only creates database files for the categories that are identified as a system
monitor category in the configuration. The CRTPFRDTA job generates database files for all categories
that are defined in the configured collection profile.
• The collection name for CRTPFRDTA2 is based on the *MGTCOL name except that the name begins with
“R” rather than “Q”.
In the collection that is created by the CRTPFRDTA2 job, both the system monitoring database files and
the normal collection database files are created for the collection categories that are selected for system
monitoring for the following reasons:
• Metrics that are defined in the system monitor files require that CRTPFRDTA process the categories that
contain the data. If the source data for a metric category is not processed, nothing can be output for the
metric.
• To support drill down capabilities within Performance Data Investigator.
To effectively monitor system performance, you must decide which aspects of system performance you
want to monitor. IBM Navigator for i offers various performance measurements, which are known as
metrics, to help you pinpoint different aspects of system performance.
Configure Performance Collection (CFGPFRCOL) command
See the Configure Performance Collection (CFGPFRCOL) command for information about configuring Collection Services.
QypsChgColSrvAttributes API
The Change Collection Services Attributes (QypsChgColSrvAttributes) API changes system or global collection attributes of Collection Services.
Collection Services data files: QAPMSMCMN
This Collection Services database file contains summarized metrics from communication protocol data (*CMNBASE collection category) that may be used in support of system monitoring.
Collection Services data files: QAPMSMDSK
This Collection Services database file contains summarized metrics from disk data (*DISK collection category) that may be used in support of system monitoring.
Collection Services data files: QAPMSMJMI
This Collection Services database file contains summarized metrics from job data (*JOBMI collection category) that may be used in support of system monitoring.
Collection Services data files: QAPMSMJOS
This Collection Services database file contains summarized metrics from job data (*JOBOS collection category) that may be used in support of system monitoring.
Collection Services data files: QAPMSMPOL
This Collection Services database file contains summarized metrics from pool data (*POOL collection category) that may be used in support of system monitoring.
Collection Services data files: QAPMSMSYS
This Collection Services database file contains summarized metrics from system data (*SYSLVL collection category) that may be used in support of system monitoring.
Data retention for historical data collections
The amount of historical data that you are allowed to keep depends on whether PM for Power Systems
(PM Agent) is active.
• If PM for Power Systems is not active, you can keep up to 7 days of detailed data and 1 month of
summary data.
• If PM for Power Systems is active, then you can keep up to 60 days of detailed data and 50 years of
summary data.
Expired historical data is deleted by the expiration thread that runs in the Collection Services collector job (QYPSPFRCOL). This thread handles the deletion of all expired Collection Services collections and also deletes any records in the historical data files that are older than the summary and detailed data retention periods. The summary and detailed data retention periods are defined in the Collection Services historical configuration settings.
Related concepts
Graph History concepts
Configure and generate historical performance data for use by the Graph History task in IBM Navigator
for i.
Related tasks
Configuring historical data for Graph History
Configure the historical performance data settings in Collection Services to generate historical
performance data for use by the Graph History task in IBM Navigator for i.
Creating historical data for Graph History
To create historical performance data immediately for use in Graph History, follow these steps.
Viewing historical data with Graph History
Use the Graph History task in IBM Navigator for i to view historical performance data.
Activating PM Agent
PM Agent is a part of the operating system and you must activate it to use its collecting capabilities.
Related reference
Configure Performance Collection (CFGPFRCOL) command
See the Configure Performance Collection (CFGPFRCOL) command for information about configuring Collection Services.
QypsChgColSrvAttributes API
The Change Collection Services Attributes (QypsChgColSrvAttributes) API changes system or global collection attributes of Collection Services.
Historical data files
After you register the category, Collection Services includes it in the list of available collection
categories.
4. Add the category to your Collection Services profile, and then cycle Collection Services.
5. Develop a program to query the collection object.
• Retrieve active management collection object name: QpmRtvActiveMgtcolName (Used only for
querying the collection object in real-time)
• Retrieve management collection object attributes: QpmRtvMgtcolAttrs
• Open management collection object: QpmOpenMgtcol
• Close management collection object: QpmCloseMgtcol
• Open management collection object repository: QpmOpenMgtcolRepo
• Close management collection object repository: QpmCloseMgtcolRepo
• Read management collection object data: QpmReadMgtcolData
Your customized collection program now runs at each collection interval, and the collected data is
archived in the collection objects.
You can also implement the Java versions of these APIs. The required Java classes are included in
ColSrv.jar, in the integrated file system (IFS) directory QIBM/ProdData/OS400/CollectionServices/lib.
Java applications should include this file in their classpath. For more information about the Java
implementation, download the javadocs in a .zip file.
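For example (a sketch; the class name MyCollectionQuery is illustrative), such a Java program might be run from CL with the jar on its class path:
RUNJVA CLASS(MyCollectionQuery) CLASSPATH('/QIBM/ProdData/OS400/CollectionServices/lib/ColSrv.jar:.')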
Query the collection object in real-time
If your application needs to query the collection object in real-time, it will need to synchronize the queries
with Collection Services. To do this, the application should create a data queue and register it with
Collection Services. Once registered, the collector sends a notification for each collection interval and for
the end of the collection cycle. The application should maintain the data queue, including removing the
data queue when finished, and handling abnormal termination. To register and deregister the data queue,
refer to the following APIs:
• Add collector notification: QypsAddCollectorNotification
• Remove collector notification: QypsRmvCollectorNotification
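For example (a sketch; the library, queue name, and maximum length shown are illustrative), the data queue that receives these notifications could be created with:
CRTDTAQ DTAQ(MYLIB/PFRNOTIFY) MAXLEN(512)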
Related reference
QpmCloseMgtcol API
The Close Management Collection Object (QpmCloseMgtcol) API closes a management collection object that was previously opened by the Open Management Collection Object (QpmOpenMgtcol) API.
QpmCloseMgtcolRepo API
The Close Management Collection Object Repository (QpmCloseMgtcolRepo) API closes a repository of a management collection object that was previously opened by the Open Management Collection Object Repository (QpmOpenMgtcolRepo) API.
QpmOpenMgtcol API
The Open Management Collection Object (QpmOpenMgtcol) API opens a specified management collection object for processing and returns a handle to the open management collection object.
QpmOpenMgtcolRepo API
The Open Management Collection Object Repository (QpmOpenMgtcolRepo) API opens a specified repository of a management collection object for processing.
QpmReadMgtcolData API
The Read Management Collection Object Data (QpmReadMgtcolData) API returns information about a specific record in a repository of a management collection object.
QpmRtvActiveMgtcolName API
The Retrieve Active Management Collection Object Name (QpmRtvActiveMgtcolName) API returns the object name and library name of an active management collection object.
QpmRtvMgtcolAttrs API
The Retrieve Management Collection Object Attributes (QpmRtvMgtcolAttrs) API returns information about attributes of a management collection object and repositories of a management collection object.
QypsAddCollectorNotification API
The Add Collector Notification (QypsAddCollectorNotification) API registers with a collector to provide notifications to a specified data queue for a collection event.
QypsDeregCollectorDataCategory API
The Deregister Collector Data Category (QypsDeregCollectorDataCategory) API removes a user-defined data category from Collection Services.
QypsRmvCollectorNotification API
The Remove Collector Notification (QypsRmvCollectorNotification) API removes a notification registration from a collector for a specified data queue and collection event.
QypsRegCollectorDataCategory API
The Register Collector Data Category (QypsRegCollectorDataCategory) API adds a user-defined data category to one or more collector definitions of Collection Services.
Start collection: The data collection program should initialize any interfaces or resources used during data collection. Optionally, it may also initialize a work area, provided by Collection Services, that preserves state information between collection intervals. If you want to include a control record prior to the collected data, the data collection program may also write a small amount of data to the data buffer. Typically, this control record would be used during data processing to help interpret the data.
Collection interval: Collection Services sends an interval request for each collection interval. The data collection program should collect data and return it in the data buffer. Collection Services then writes that data to the interval record in the collection object. If the amount of data is too large for the data buffer, the data collection program should set a "More data" flag. This action causes Collection Services to send another interval request with a modifier indicating that it is a continuation. Collection Services resets the more data flag before each call. This process is repeated until all the data is moved into the collection object.
End of collection: When the collection for the category containing the data collection program ends, Collection Services sends this request. The data collection program should perform any cleanup and can optionally return a collection control record. The data collection program should also send a return code that indicates the result of the collection.
Clean up and terminate (Shutdown): Collection Services sends this request if an abnormal termination is necessary. Operating system resources are freed automatically when the data collection program job ends, but any other shutdown operations should be performed by the data collection program. The data collection program can receive this request at any time.
For a detailed description of these parameters, the work area, data buffer, and return codes, refer to the
header file QPMDCPRM, which is located in QSYSINC.
Control: This optional record can be the first or last record that results from the data collection program, and may occur in both positions. Typically, it should contain any information needed to interpret the record data.
Interval: Each collection interval creates an interval record, even if it is empty. The interval record contains the data written to the data buffer during the collection interval. It must not exceed 4 GB in size.
Stop: Collection Services automatically creates this record to indicate the end of a data collection session. If the collections for the user-defined category were restarted without ending or cycling Collection Services, you can optionally include a control record followed by additional interval records after the stop record.
extern "C"
void DCPentry( Qpm_DC_Parm_t *request, char *dataBuffer,
char *workArea, int *returnCode )
{
static char testData[21] = "Just some test stuff";
int i;
/* Trace the request parameters */
printf( "DCPentry: format name: \"%8.8s\"; category name: \"%10.10s\";\n",
request->formatName, request->categoryName );
printf( " rsvd1: %4.4X; req type: %d; req mod: %d; buffer len: %d;\n",
*(short *)(request->rsvd1), request->requestType,
request->requestModifier, request->dataBufferLength);
printf( " prm offset: %d; prm len: %d; work len: %d; rsvd2: %8.8X;\n",
request->parmOffset, request->parmLength, request->workAreaLength,
*(int *)(request->rsvd2) );
printf( " rec key: \"%8.8s\"; timestamp: %8.8X %8.8X;\n",
request->intervalKey,
*(int *)(request->intervalTimestamp),
*(int *)(request->intervalTimestamp + 4) );
printf( " return len: %d; more data: %d; rsvd3: %8.8X %8.8X;\n",
request->bytesProvided, request->moreData,
*(int *)(request->rsvd3),
*(int *)(request->rsvd3 + 4) );
switch ( request->requestType )
{
/* Write control record in the beginning of collection */
case PM_DOBEGIN:
printf( "doBegin(%d)\n", request->requestModifier );
switch ( request->requestModifier)
{
case PM_CALL_NORMAL:
memcpy( dataBuffer, testData, 20 );
*(int *)workArea = 20;
request->moreData = PM_MORE_DATA;
request->bytesProvided = 20;
break;
case PM_CALL_CONTINUE:
if ( *(int *)workArea < 200 )
{
memcpy( dataBuffer, testData, 20 );
*(int *)workArea += 20;
request->moreData = PM_MORE_DATA;
request->bytesProvided = 20;
}
else
{
*(int *)workArea = 0;
request->moreData = PM_NO_MORE_DATA;
request->bytesProvided = 0;
}
break;
default:
*returnCode = -1;
return;
}
break;
/* Write control record in the end of collection */
case PM_DOEND:
printf( "doEnd(%d)\n", request->requestModifier );
switch ( request->requestModifier)
{
case PM_CALL_NORMAL:
memcpy( dataBuffer, testData, 20 );
*(int *)workArea = 20;
request->moreData = PM_MORE_DATA;
request->bytesProvided = 20;
break;
case PM_CALL_CONTINUE:
if ( *(int *)workArea < 200 )
{
memcpy( dataBuffer, testData, 20 );
*(int *)workArea += 20;
request->moreData = PM_MORE_DATA;
request->bytesProvided = 20;
}
else
{
*(int *)workArea = 0;
request->moreData = PM_NO_MORE_DATA;
request->bytesProvided = 0;
}
break;
default:
*returnCode = -1;
return;
}
break;
/* Write data for each collection interval */
case PM_DOCOLLECT:
printf( "doCollect(%d)\n", request->requestModifier );
for ( i = 0; i < 10000; i++ )
memcpy( dataBuffer + i * 20, testData, 20 );
request->bytesProvided = 10000 * 20;
switch ( request->requestModifier)
{
case PM_CALL_NORMAL:
*(time_t *)(workArea + 4) = time( NULL );
*(int *)workArea = 1;
request->moreData = PM_MORE_DATA;
break;
case PM_CALL_CONTINUE:
*(int *)workArea += 1;
if ( *(int *)workArea < 20 )
request->moreData = PM_MORE_DATA;
else
{
*(time_t *)(workArea + 8) = time( NULL );
printf( "doCollect() complete in %d secs (%d bytes transferred)\n",
*(time_t *)(workArea + 8) - *(time_t *)(workArea + 4), 10000 * 20 );
request->moreData = PM_NO_MORE_DATA;
}
break;
default:
*returnCode = -1;
return;
}
break;
/* Clean-up and terminate */
case PM_DOSHUTDOWN:
printf( "doShutdown\n" );
*returnCode = 0;
return;
break;
default:
*returnCode = -1;
return;
break;
}
}/* DCPentry() */
Related concepts
Collection program recommendations and requirements
Collection Services calls the data collection program once during the start of a collection cycle, once for
each collection interval, and again at the end of the collection cycle.
/* Excerpt: registering (or removing) a user-defined data category with
   Collection Services. The includes and the code that fills in pgmAttr
   and catAttr are omitted from this excerpt. */
int main( int argc, char *argv[] )
{
int CCSID = 0;
int RC = 0;
Qyps_USER_CAT_PROGRAM_ATTR *pgmAttr;
Qyps_USER_CAT_ATTR catAttr;
char collectorName[11] = "*PFR ";
char categoryName[11] = "TESTCAT ";
char collectorDefn[11] = "*CUSTOM "; /* Register to *CUSTOM profile only */
if ( argc > 2 )
{
int len = strlen( argv[2] );
QypsRegCollectorDataCategory( collectorName,
categoryName,
collectorDefn,
&CCSID,
(char*)pgmAttr,
(char*)&catAttr,
&RC
);
}
else
if( argc >= 2 && *argv[1] == 'D' )
QypsDeregCollectorDataCategory( collectorName, categoryName, &RC );
else
printf("Unrecognized option\n");
}/* main() */
Java sample code
import com.ibm.iseries.collectionservices.*;
class testmco2
{
public static void main( String argv[] )
{
String objectName = null;
String libraryName = null;
String repoName = null;
MgtcolObj mco = null;
int repoHandle = 0;
int argc = argv.length;
MgtcolObjAttributes attr = null;
MgtcolObjRepositoryEntry repoE = null;
MgtcolObjCollectionEntry collE = null;
int i,j;
if ( argc < 3 )
{
System.out.println("testmco2 objectName libraryName repoName");
System.exit(1);
}
objectName = argv[0];
libraryName = argv[1];
repoName = argv[2];
if ( ! objectName.equals( "*ACTIVE" ) )
mco = new MgtcolObj( objectName, libraryName );
else
try
{
mco = MgtcolObj.rtvActive();
} catch ( Exception e)
{
System.out.println("rtvActive(): Exception " + e );
System.exit(1);
}
System.out.println("Object name = " + mco.getName() );
System.out.println("Library name = " + mco.getLibrary() );
try
{
attr = mco.rtvAttributes( "MCOA0100" );
} catch ( Exception e)
{
System.out.println("rtvAttributes(): MCOA0100: Exception " +
e );
System.exit(1);
}
try
{
attr = mco.rtvAttributes( "MCOA0200" );
} catch ( Exception e)
{
System.out.println("rtvAttributes(): MCOA0200: Exception " + e );
System.exit(1);
}
if ( repoName.equals("NONE") )
return;
try
{
mco.open();
} catch ( Exception e)
{
System.out.println("open(): Exception " + e );
System.exit(1);
}
try
{
repoHandle = mco.openRepository( repoName, "MCOD0100" );
} catch ( Exception e)
{
System.out.println("openRepository(): Exception " + e );
mco.close();
System.exit(1);
}
System.out.println("repoHandle = " + repoHandle );
MgtcolObjReadOptions readOptions = new MgtcolObjReadOptions();  // assumed no-arg constructor
readOptions.option = MgtcolObjReadOptions.READ_NEXT;
readOptions.recKey = null;
readOptions.offset = 0;
readOptions.length = 0;
// The code that reads records from the repository using these options
// is omitted from this excerpt.
mco.closeRepository( repoHandle );
mco.close();
}/* main() */
User-defined transactions
Collection Services and Performance Explorer collect performance data that you define in your
applications.
With the provided APIs, you can integrate transaction data into the regularly scheduled sample
data collections using Collection Services, and get trace-level data about your transaction by running
Performance Explorer.
For detailed descriptions and usage notes, refer to the following API descriptions:
• Start Transaction (QYPESTRT, qypeStartTransaction) API
• End transaction (QYPEENDT, qypeEndTransaction) API
• Log transaction (QYPELOGT, qypeLogTransaction) API (Used only by Performance Explorer)
• Add trace point (QYPEADDT, qypeAddTracePoint) API (Used only by Performance Explorer)
Note: You only need to instrument your application once. Collection Services and Performance Explorer
use the same API calls to gather different types of performance data.
Your application can also report optional transaction counters, such as the number of SQL statements used for the transaction, or other incremental measurements. Your application should use the Start Transaction API to indicate the beginning of a new transaction, and should include a corresponding End Transaction API call to deliver the transaction data to Collection Services.
//**********************************************************************
// tnstst.C
//
// This example program illustrates the use
// of the Start/End Transaction APIs (qypeStartTransaction,
// qypeEndTransaction).
//
//
// This program can be invoked as follows:
// CALL lib/TNSTST PARM('threads' 'types' 'transactions' 'delay')
// where
// threads = number of threads to create (10000 max)
// types = number of transaction types for each thread
// transactions = number of transactions for each transaction
// type
// delay = delay time (millisecs) between starting and
// ending the transaction
//
// This program will create "threads" number of threads. Each thread
// will generate transactions in the same way. A thread will do
// "transactions" number of transactions for each transaction type,
// where a transaction is defined as a call to Start Transaction API,
// then a delay of "delay" millisecs, then a call to End Transaction
// API. Thus, each thread will do a total of "transactions" * "types"
// number of transactions. Each transaction type will be named
// "TRANSACTION_TYPE_nnn" where nnn ranges from 001 to "types". For
// transaction type n, there will be n-1 (16 max) user-provided
// counters reported, with counter m reporting m counts for each
// transaction.
//
// This program must be run in a job that allows multiple threads
// (interactive jobs typically do not allow multiple threads). One
// way to do this is to invoke the program using the SBMJOB command
// specifying ALWMLTTHD(*YES).
//
//**********************************************************************
#define _MULTI_THREADED
// Includes
#include "pthread.h"
#include "stdio.h"
#include "stdlib.h"
#include "string.h"
#include "qusec.h"
#include "lbcpynv.h"
#include "qypesvpg.h"
// Constants
#define maxThreads 10000
//**********************************************************************
//
// Transaction program to run in each secondary thread
//
//**********************************************************************
error_code_t errCode;
return NULL;
}
//**********************************************************************
//
// Main program to run in primary thread
//
//**********************************************************************
pthread_t threadHandle[maxThreads];
tnsPgmParm_t tnsPgmParm;
int rc;
int i;
// Verify 4 parms passed
if (argc != 5)
{
printf("Did not pass 4 parms\n");
return;
}
// Verify parms
if (threads > maxThreads)
{
printf("Too many threads requested\n");
return;
}
} /* end of Main */
import com.ibm.iseries.collectionservices.PerformanceDataReporter;
// parameters:
// number of TXs per thread
// number of threads
// log|nolog
// enable|disable
// transaction seconds
static
{
int i;
}/* static */
// process parameters
if ( args.length >= 5 )
try
{
numberOfTXPerThread = Integer.parseInt( args[0] );
numberOfThreads = Integer.parseInt( args[1] );
if ( args[2].equalsIgnoreCase( "log" ) )
log = true;
else
if ( args[2].equalsIgnoreCase( "nolog" ) )
log = false;
else
{
System.out.println( "Wrong value for 3rd parameter!" );
System.out.println( "\tshould be log|nolog" );
return;
}
if ( args[3].equalsIgnoreCase( "enable" ) )
enable = true;
else
if ( args[3].equalsIgnoreCase( "disable" ) )
enable = false;
else
{
System.out.println( "Wrong value for 4th parameter!" );
System.out.println( "\tshould be enable|disable" );
return;
}
} catch (Exception e)
{
System.out.println( "Oops! Cannot process parameters!" );
return;
}
else
{
System.out.println( "Incorrect Usage." );
System.out.println( "The correct usage is:" );
System.out.println( "java TestTXApi numberOfTXPerThread numberOfThreads
log|nolog enable|disable secsToDelay");
System.out.println("\tlog will make the program cut 1 log transaction per start / end
pair");
System.out.println("\tdisable will disable performance collection to minimize
overhead");
System.out.print("\nExample: \"java TestTXApi 10000 100 log enable 3\" will call " );
System.out.println("cause 10000 transactions for each of 100 threads");
System.out.println("with 3 seconds between start and end of transaction");
System.out.println("Plus it will place additional log call and will enable
reporting." );
return;
}
t.runTests( numberOfThreads );
}/* main() */
}/* prepareTests() */
}/* runTests() */
{
private int ordinal;
private int numberOfTxPerThread;
private boolean log;
private boolean enable;
private int secsToDelay;
}/* constructor */
}/* run() */
At specified time intervals, IBM i Job Watcher samples anywhere from one thread per job to all threads per job. IBM i Job Watcher gathers a variety of performance data, including detailed wait statistics for jobs, tasks, and threads.
There are 32 wait buckets which accumulate wait state data. These static wait buckets, used by both
Collection Services and IBM i Job Watcher, provide a stable view of the wait state data. In Collection
Services, data from these buckets is reported in files QAPMJOBWT and QAPMJOBWTG. In Job Watcher,
data from these buckets is reported in QAPYJWTDE and QAPYJWSTS.
Related concepts
A job's life
To understand the basics of IBM i work management, follow a simple batch job as it moves through the system.
IBM i Job Watcher
IBM i Job Watcher provides for the collection of job data for any or all jobs, threads, and tasks on
the system. It provides call stacks, SQL statements, objects being waited on, Java JVM statistics, wait
statistics and more which are used to diagnose job related performance problems.
The basics of IBM i Wait Accounting
Wait Accounting is the patented technology built into the IBM i operating system that tells you what a
thread or task is doing when it appears that it is not doing anything.
Related reference
Collection Services data files: QAPMJOBWT
This Collection Services database file contains information about job, task, and thread wait conditions.
Collection Services data files: QAPMJOBWTD
This Collection Services database file contains a description of the counter sets found in file QAPMJOBWT.
Collection Services data files: QAPMJOBWTG
This Collection Services database file contains information about job, task, and thread current wait conditions that is not available in the QAPMJOBWT file.
Job Accounting
This experience report provides a summary of information to make it easier to determine which Work Management interfaces to use when dealing with job attributes.
The size of one day's worth of data is directly proportional to the number of intervals collected per
collection period. For example, changing the interval rate from 15 minutes to 5 minutes increases the
number of intervals by a factor of 3 and increases the size by the same factor.
To continue this example, the following table shows the size of one *MGTCOL object produced each day
by Collection Services at each interval rate, using the default standard plus protocol profile.
Interval rate Intervals per collection Multiplier Size in MB
15 minutes 96 1 500
5 minutes 288 3 1500
1 minute 1440 15 7500
30 seconds 2880 30 15000
15 seconds 5760 60 30000
The size of the *MGTCOL object, in this example, can vary from 500 MB to 30 GB depending on the rate of
collection. You can predict a specific system's disk consumption for one day's collection interval through
actual observation of the size of the *MGTCOL objects created, using the default collection interval of 15
minutes and the standard plus protocol profile as the base and then using the multiplier from the above
table to determine the disk consumption at other collection intervals. For example, if observation of the
*MGTCOL object size reveals that the size of the object for a day's collection is 50 MB for 15-minute
intervals, then you could expect Collection Services to produce *MGTCOL objects with a size of 3 GB when
collecting data at 15-second intervals.
Note: Use caution when considering a collection interval as frequent as 15 seconds. Frequent collection
intervals can adversely impact system performance.
Retention period
The retention period also plays a significant role in the amount of disk resource that Collection Services
consumes. The default retention period is one day. However, practically speaking, given the default
values, the *MGTCOL object is deleted on the third day of collection past the day on which it was created.
Thus, on the third day of collection there is two days' worth of previously collected data plus the current
day's data on the system. Using the table above, this translates into having between 1 GB and 1.5 GB of
disk consumption at 15-minute intervals, and 60 to 90 GB of disk consumption at 15-second intervals on
the system during the third day and beyond.
The formula to calculate disk consumption based on the retention period value is:
(retention period in days + 2.5) * (size of one day's collection) = disk consumption
Note: 2.5 corresponds to two days of previous collection data, and an average of the current day (2 days + 1/2 day).
Using the above tables and formula, a retention period of 2 weeks gives you a disk consumption of 8.25
GB at 15-minute intervals and 495 GB at 15-second intervals for the example system.
It is important to understand the disk consumption by Collection Services to know the acceptable
collection interval and retention period for a given system. Knowing this can ensure that disk consumption
will not cause system problems. Remember to consider that a system monitor or a job monitor can
override a category's collection interval to graph data for a monitor. A system administrator must ensure
that monitors do not inadvertently collect data at intervals that cause excess data consumption.
• Enabled the collection of performance data for the partition you want it collected on. You only need
to collect this data on one partition, and this partition must be an IBM i partition. The CPU utilization
information that is collected will reflect work done in partitions that are running AIX and Linux as well as
IBM i, but AIX and Linux do not support collecting this data.
Enabling the collection of this performance data requires the setting of a configuration parameter on
the HMC or Integrated Virtualization Manager (IVM). On the HMC, there is an "Allow performance
information collection" checkbox on the processor configuration tab. Select this checkbox on the IBM
i partition that you want to collect this data. If you are using IVM, you use the chgsyscfg command,
specifying the allow_perf_collection (permission for the partition to retrieve shared processor pool
utilization) parameter. Valid values for the parameter are 0, do not allow authority (the default) and 1,
allow authority.
Once the performance data collection support is enabled, Collection Services will collect this additional
information. At each collection interval, Collection Services will collect partition configuration and
utilization information from the hypervisor. The data is stored in the Collection Services database file
QAPMLPARH. You also have the ability to get physical processor utilization in the Collection Services
database file QAPMSYSPRC.
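For a quick look at the raw records (a sketch; QPFRDATA is the default collection library and may differ on your system), a query like the following could be run:
RUNQRY QRY(*NONE) QRYFILE((QPFRDATA/QAPMLPARH))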
Displaying the data
You can use the Performance Data Investigator tool found in IBM Navigator for i to view the data collected
graphically on the web. These charts can be found in the "Physical System" folder under the Collection
Services content package. Some examples of charts using this data are:
• Logical Partitions Overview
• Donated Processor Time by Logical Partition
• Uncapped Processor Time Used by Logical Partition
• Physical Shared Processor Pool Utilization
• Physical Processors Utilization by Physical Processor
• Dedicated Processor Utilization by Logical Partition
• Physical Processors Utilization by Processor Status Overview
• Physical Processor Utilization by Processor Status Detail
Related tasks
Performance Data Investigator
Performance Data Investigator provides a web-based GUI over performance data with interactive charts
and tables. You can view and analyze performance data for each of the collectors (Collection Services,
IBM i Job Watcher, IBM i Disk Watcher, Performance Explorer, Database SQL Plan Cache, Database SQL
Performance Monitor, and Historical Data).
Related reference
Collection Services data files: QAPMLPARH
This Collection Services database file contains logical partition configuration and utilization data as it is known to the hypervisor.
Collection Services data files: QAPMSYSPRC
This Collection Services database file reports utilization data for a system's physical processor units that are based on data that is obtained from the hypervisor.
Short lifespan threads and tasks
Collection Services captures performance data for every job, task, and secondary thread that used
processor time during a sample interval. This data is reported via records in the QAPMJOBMI file.
There are times when secondary threads and tasks are created that do very little work and terminate quickly; their lifespans are generally less than one second. This can be a problem when it happens frequently and is ongoing. It can significantly increase the size of collection objects and QAPMJOBMI file members. It also increases the CPU used to capture the data and generate the files, as well as the resources used by tools that consume this data.
Although the resources consumed by any one short lifespan thread or task are not significant, as a group they can be a significant factor in total system utilization. Consequently, they cannot simply be ignored.
Beginning in IBM i 7.1, Collection Services will accumulate data for tasks and secondary threads whose
lifespan is less than a specific threshold. Short lifespan secondary threads will be accumulated by job
for any job that has such threads. Short lifespan tasks are accumulated by resource affinity domain. This
accumulated data will be reported at sample time similar to how other tasks or secondary threads are
reported.
The QAPMJOBMI file has a new field "Short lifespan entry count". This field will have a value that
is greater than zero for records that contain data for short lifespan threads or tasks. Its value is the
number of such entities that were accumulated within the interval for the indicated job or resource
affinity domain. The QAPMCONF file reports the short lifespan thresholds used during collection. See the
description for GKEY = "F1" in the QAPMCONF article.
By default, the threshold used for both tasks and threads is 1000 milliseconds (terminating threads and tasks whose lifespans are less than 1000 milliseconds are not individually reported). If you need to disable this processing or want to use different thresholds, you can use the following environment variables:
• QPM_TASK_SL_THRESHOLD controls short lifespan task processing.
• QPM_THREAD_SL_THRESHOLD controls short lifespan secondary thread processing.
The value associated with the environment variable is the threshold, in milliseconds, that should be applied. A value of 0 or null causes all threads or tasks to be reported. You must create these environment variables as system-level variables so that the Collection Services collector job, QYPSPFRCOL, can see them. The values are obtained only once at the start of a collection; you must cycle an active collection after making changes for the new values to be used.
The following is an example of creating the environment variables and setting the default value of 1000
milliseconds:
ADDENVVAR ENVVAR(QPM_TASK_SL_THRESHOLD) VALUE(1000) LEVEL(*SYS)
ADDENVVAR ENVVAR(QPM_THREAD_SL_THRESHOLD) VALUE(1000) LEVEL(*SYS)
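To verify the system-level variables afterward, a command like the following could be used:
WRKENVVAR LEVEL(*SYS)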
Related concepts
The basics of IBM i Wait Accounting
Wait Accounting is the patented technology built into the IBM i operating system that tells you what a thread or task is doing when it appears that it is not doing anything.
Related tasks
Managing IBM i Job Watcher
Manage IBM i Job Watcher by using IBM Navigator for i.
Related reference
Add Job Watcher Definition (ADDJWDFN)
See the Add Job Watcher Definition (ADDJWDFN) command for information about specifying the performance data that is to be collected during a Job Watcher collection.
Remove Job Watcher Definition (RMVJWDFN)
See the Remove Job Watcher Definition (RMVJWDFN) command for information about removing a Job Watcher definition from the system.
Start Job Watcher (STRJW)
See the Start Job Watcher (STRJW) command for information about starting a Job Watcher collection.
End Job Watcher (ENDJW)
See the End Job Watcher (ENDJW) command for information about ending a Job Watcher collection.
Performance Explorer
Performance Explorer collects more detailed information about a specific application, program or system
resource, and provides detailed insight into a specific performance problem. This includes the capability
both to perform several types and levels of traces and to run detailed reports.
Performance Explorer is a data collection tool that helps the user identify the causes of performance
problems that cannot be identified by collecting data using Collection Services or by doing general trend
analysis. Two reasons to use Performance Explorer include:
• Isolating performance problems to the system resource, application, program, procedure, or method
that is causing the problem
• Analyzing the performance of applications
The AS/400 Performance Explorer Tips and Techniques book provides additional examples of the
Performance Explorer functions and examples of the enhanced Performance Explorer trace support.
Performance Explorer is a tool that helps find the causes of performance problems that cannot be
identified by using tools that do general performance monitoring. As your computer environment grows
both in size and in complexity, it is reasonable for your performance analysis to gain in complexity
as well. The Performance Explorer addresses this growth in complexity by gathering data on complex
performance problems.
Note: Performance Explorer is the tool you need to use after you have tried the other tools. It gathers
specific forms of data that can more easily isolate the factors involved in a performance problem;
however, when you collect this data, you can significantly affect the performance of your system.
This tool is designed for application developers who are interested in understanding or improving the
performance of their programs. It is also useful for users knowledgeable in performance management to
help identify and isolate complex performance problems.
Related reference
Performance Tools PDF
This IBM Redbooks publication explains how to use performance tools to collect data about the performance of a system, job, or program. It also explains how to analyze and print the data to help identify and correct any problems.
When Performance Explorer is running, it creates only the files that are needed for the collection.
Note: You can collect Performance Explorer data and Collection Services data at the same time.
To learn more about Performance Explorer, refer to any of the following Performance Explorer topics.
Related concepts
Collection Services
Collection Services provides for the collection of system and job level performance data. It is the primary
collector of performance data.
Related tasks
Configuring Performance Explorer
To collect detailed trace information, you need to tailor Performance Explorer to work optimally with the
application process from which the trace is being taken.
Statistics type definitions
Identifies applications and IBM programs or modules that consume excessive CPU resources or that perform a high number of disk I/O operations. Typically, you use the statistics type to identify programs that should be investigated further as potential performance bottlenecks.
• Good for first order analysis of IBM i programs, procedures, and MI complex instructions.
– Gives number of invocations
– Gives both inline and cumulative CPU usage in microseconds
– Gives both inline and cumulative number of synchronous and asynchronous I/O
– Gives number of calls made
• Works well for short or long runs
• Size of the collected data is fairly small and constant for all runs
• Run time collection overhead of ILE procedures may be a problem due to the frequency of calls.
Although run time is degraded, the collected statistics are still accurate because Performance Explorer
removes most of the collection overhead from the data.
• Uses combined or separated data areas. The MRGJOB parameter on the ADDPEXDFN command
specifies whether all program statistics are accumulated in one data area, or kept separate (for
example, one data area for each job).
The statistics can be structured in either a hierarchical or flattened manner.
• A hierarchical structure organizes the statistics into a call tree form in which each node in the tree
represents a program procedure run by the job or task.
• A flattened structure organizes the statistics into a simple list of programs or procedures, each with its
own set of statistics.
Here is an example of a Performance Explorer statistics definition called MYSTATS that will show CPU and disk resource usage on a per program or procedure level.
ADDPEXDFN DFN(MYSTATS) /* The name of the definition. */
TYPE(*STATS) /* The type of definition. */
JOB(*ALL) /*All Jobs */
PGM((MYLIB/MYPGM MYMODULE MYPROCEDURE)) /* The name of the program to monitor. */
INTERVAL(1) /* 1-millisecond samples will be taken. */
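Once a definition exists, a collection could be started, ended, and printed with commands along these lines (a sketch; the session name MYSTATS1 is illustrative):
STRPEX SSNID(MYSTATS1) DFN(MYSTATS)
/* ... run the workload that you want to measure ... */
ENDPEX SSNID(MYSTATS1)
PRTPEXRPT MBR(MYSTATS1) TYPE(*STATS)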
• Job profile (specify the following on the ADDPEXDFN command: TYPE(*PROFILE) and PRFTYPE(*JOB))
– Gives detailed breakdown of where you are spending time in the set of jobs or tasks of the collection.
– Size of collection is relatively small but not constant. The size increases as the length of the run
increases.
– Can profile all jobs and tasks on the system or can narrow the scope of data collected to just a few
jobs or tasks of interest.
– Can vary overhead by changing sample interval. An interval of 2 milliseconds seems a good first
choice for benchmarks.
Here is an example of a Performance Explorer job profile definition called ALLJOBPROF that will show
usage for all your jobs.
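A job profile definition along these lines could be created with ADDPEXDFN (a sketch; the parameter values follow the bullets above, and the 2-millisecond interval is the suggested starting point):
ADDPEXDFN DFN(ALLJOBPROF) /* The name of the definition. */
TYPE(*PROFILE) /* The type of definition. */
PRFTYPE(*JOB) /* A job profile type will be monitored. */
JOB(*ALL) /*All jobs */
TASK(*ALL) /*All tasks */
INTERVAL(2) /* 2-millisecond samples will be taken. */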
Trace definitions
Gathers a historical trace of performance activity generated by one or more jobs on the system. The trace type gathers specific information about when and in what order events occurred, and collects detailed reference information about programs, Licensed Internal Code (LIC) tasks, IBM i jobs, and objects.
• Some common trace events are:
– Program and procedure calls and returns
– Storage, for example, allocate and deallocate.
– Disk I/O, for example, read operations and write operations.
– Java method, for example, entry and exit.
– Java, for example, object create and garbage collection.
– Journal, for example, start commit and end commit.
– Synchronization, for example, mutex lock and unlock or semaphore waits.
– Communications, for example, TCP, IP, or UDP.
• Longer runs collect more data.
Here is an example of a Performance Explorer trace definition called DISKTRACE that will show usage for
all disk events.
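A trace definition along these lines could be created with ADDPEXDFN; this is only a sketch, and the TRCTYPE, SLTEVT, and DSKEVT values shown are assumptions:
ADDPEXDFN DFN(DISKTRACE) /* The name of the definition. */
TYPE(*TRACE) /* The type of definition. */
JOB(*ALL) /*All jobs */
TASK(*ALL) /*All tasks */
TRCTYPE(*SLTEVT) /* Trace only selected events. */
SLTEVT(*YES) /* Allows individual events to be selected in addition to the TRCTYPE categories. */
DSKEVT((*ALL)) /* All disk events are to be traced. */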
Here is an example of a Performance Explorer trace definition, called HEAPEVENTS here, that traces heap (storage) events for all jobs and all tasks:
ADDPEXDFN DFN(HEAPEVENTS) /* The name of the definition. */
TYPE(*TRACE) /* The type of definition. */
JOB(*ALL) /*All jobs */
TASK(*ALL) /*All tasks */
MAXSTG(100000) /*Maximum storage. Set to 100000 because the default of 10000 KB is often too small for the large number of heap events that can be generated when tracing all jobs and all tasks.*/
TRCTYPE(*HEAP) /* Selects all heap events from the STGEVT (storage events) parameter. */
Related concepts
Performance Explorer reports
After you have collected performance data with a Performance Explorer session, you can view it by
running the included reports or by querying the database files directly.
Related tasks
Configuring Performance Explorer
To collect detailed trace information, you need to tailor Performance Explorer to work optimally with the
application process from which the trace is being taken.
Related reference
Add Performance Explorer Definition (ADDPEXDFN) command
See the Add Performance Explorer Definition (ADDPEXDFN) command for information about specifying the performance data that is to be collected during a Performance Explorer collection.
DSPFFD FILE(xxxxxxxxx)
where xxxxxxxxx is the name of the file that you want to display.
Performance Explorer Java method information data: QAYPEJVMI
Performance Explorer Java name information data: QAYPEJVNI
Licensed Internal Code (LIC) bracketing data: QAYPELBRKT
Machine interface (MI) complex instructions collected on: QAYPELCPLX
Jobs collected on: QAYPELJOB
Licensed Internal Code (LIC) modules to collect data on: QAYPELLIC
Metrics to collect data on: QAYPELMET
Machine interface (MI) program, module, or procedures collected on: QAYPELMI
Task names to collect data on: QAYPELNAMT
Task number to collect data on: QAYPELNUMT
Configured tasks: QAYPELTASK
Machine interface (MI) program bracketing data: QAYPEMBRKT
Machine interface (MI) complex instructions mapping: QAYPEMICPX
Addresses of machine interface (MI) pointer: QAYPEMIPTR
Machine interface (MI) user event data: QAYPEMIUSR
Portable Application Solutions Environment (PASE) event data: QAYPEPASE
Page fault event data: QAYPEPGFLT
Program profile data: QAYPEPPANE
Licensed Internal Code (LIC) address resolution mapping: QAYPEPROCI
Resource management process event data: QAYPERMPM
Resource management seize lock event data: QAYPERMSL
Reference information: QAYPEREF
Miscellaneous resolution data: QAYPERINF
Database level indicator: QAYPERLS
General information: QAYPERUNI
Segment address range (SAR) data: QAYPESAR
Segment address resolution mapping: QAYPESEGI
Basic statistics data: QAYPESTATS
Synchronization event data: QAYPESYNC
Process and task resolution mapping: QAYPETASKI
Trace job equivalent event data: QAYPETBRKT
Common trace data for all events: QAYPETIDX
Trace index data (by time and task): QAYPETIDXL
Trace index data (by time): QAYPETID2L
Task switch event data: QAYPETSKSW
User-defined bracketing hook data: QAYPEUSRDF
Performance Explorer stores its collected data in the QAVPETRCI file, which is located in the QPFR library.
Type the following command to view the contents for a single record:
DSPFFD FILE(QPFR/QAVPETRCI)
Related concepts
Performance Explorer definitions
The parameters and conditions that determine what data Performance Explorer collects and how it
collects it are configured and stored using Performance Explorer definitions. This topic explains how to
use these definitions and provides a sample illustrating a simple definition.
Related reference
Performance Explorer database files
The data that Performance Explorer collects is stored in Performance Explorer database files.
Performance Tools PDF
This IBM Redbooks publication explains how to use performance tools to collect data about the performance of a system, job, or program. It also explains how to analyze and print the data to help identify and correct any problems.
Print Performance Explorer Report (PRTPEXRPT) command
See the Print Performance Explorer Report (PRTPEXRPT) command for information about printing a formatted listing of the data within a Performance Explorer collection.
The parameters and conditions that determine what data Performance Explorer collects and how it
collects it are configured and stored using Performance Explorer definitions. This topic explains how to
use these definitions and provides a sample illustrating a simple definition.
Performance Explorer concepts
Performance Explorer works by collecting detailed information about a specified system process or
resource. This topic explains how Performance Explorer works, and how best to use it.
Related reference
Add PEX filter (ADDPEXFTR) command
See the Add Performance Explorer Filter (ADDPEXFTR) command for information about adding a Performance Explorer (PEX) filter to the system.
Start Performance Explorer (STRPEX) command
See the Start Performance Explorer (STRPEX) command for information about starting a Performance Explorer collection.
Print Performance Explorer Report (PRTPEXRPT) command
See the Print Performance Explorer Report (PRTPEXRPT) command for information about printing a formatted listing of the data within a Performance Explorer collection.
IBM Navigator for i Performance interface
Use the IBM Navigator for i Performance interface to view, collect, and manage performance data by
bringing together various performance information and tools into one centralized place.
Related concepts
IBM Navigator for i
See the IBM Navigator for i topic to learn about more functions available within IBM Navigator for i.
Investigate Data
Selecting the Investigate Data task launches the powerful Performance Data Investigator tool. With this
tool, you can view and analyze data that is stored in performance collections in chart or table form.
From the Investigate Data main page, you select the perspective and collection you want to analyze. Each
collector (Collection Services, Job Watcher, Disk Watcher, and Performance Explorer) has an associated
content package that contains predefined perspectives of that data collection type. There are also IBM-
shipped content packages for Health Indicators, Monitors, and Database. The Health Indicators package
contains perspectives that show the general health of your partition and allows user-defined thresholds
to be configured. It is also possible to have custom content packages and perspectives that were created
and saved by a user in the list.
The list of content packages found on the Investigate Data page is presented in a hierarchical format.
Each content package has a list of perspectives under it to provide a different interpretation (rendering) of
the data. A perspective defines one page of charts or tables that can be used to render the data you want
to analyze.
The Investigate Data page can be used to select the perspective and collection you want to analyze by
following these steps:
1. Expand the content package that you want to work with by clicking the square next to it, or by
selecting the content package name directly.
2. Perspectives are stored hierarchically in the content package. To navigate to a subfolder, click the
square next to the folder name.
3. When you find a perspective that you want to view, select it by clicking it. On the opposite side of the
page, you see a brief description of the perspective. At the bottom of the page, you see two option lists
to help you select a collection.
4. Choose the library that you want to work with using the Collection Library list. Selecting the library causes the Collection Name list to update with collections in the chosen library. Only collections that are valid for the selected perspective are included in the list.
5. Select the collection that you want to work with using the Collection Name list.
6. Click Display to view the collection data that is rendered in the chosen perspective.
Only collections that were created in IBM i 6.1 or later, or that were converted to the 6.1 format (by using the Performance Convert Collection task in IBM Navigator for i or the Convert Performance Collection (CVTPFRCOL) command), are available to analyze in Performance Data Investigator.
You might need to install some or all of the following, depending upon the level of function that you need:
• IBM Performance Tools for i (5770-PT1) Option 1 - Manager Feature
– Performance Explorer content package
– IBM i Disk Watcher functions and content package
– Database content package
– Batch Model functions and content package
• IBM Performance Tools for i (5770-PT1) Option 3 - Job Watcher
– IBM i Job Watcher functions and content package
Related concepts
The basics of IBM i Wait Accounting
Wait Accounting is the patented technology built into the IBM i operating system that tells you what a
thread or task is doing when it appears that it is not doing anything.
Collecting and displaying CPU utilization for all partitions
When using multiple partitions, it can be important to understand the overall utilization of processing
capability, across all partitions, regardless of whether the partition is running IBM i, AIX®, or Linux®. IBM i
provides a way to collect and display this data.
• Each perspective accessed through drilldown allows you to narrow your custom view
context. By selecting points on a chart or rows in a table, you are indicating which data sets should
be used to narrow the scope of future views. For instance, you may notice a potential performance
problem while viewing the CPU Utilization and Waits Overview perspective. You can narrow the scope of
interest by zooming in on a specific Date-Time range. You can then select another perspective from the
action list menu, such as CPU Utilization by Thread or Task, to gain additional relevant information. The
process of selecting data points and drilling down to another perspective is repeatable. You can use the
History menu at the top of the perspective to keep track of perspectives you have displayed as well as a
way to return to them.
• Because you have the ability to modify the way data is shown, a Save action is provided so that you can
reference the modified perspective in the future.
• By clicking Options, you can specify persistent user preferences to be used within Performance Data Investigator. One of these options, "Enable Design Mode", gives you the ability to create your own content packages and perspectives.
• There are many other interactive chart and table features and actions available to help make
Performance Data Investigator flexible and easy to use, including the ability to Export the data or Modify
the SQL used to produce the perspective.
Save Perspective
The Save Perspective feature allows you to save a modified perspective for future use.
As you investigate data in a collection through drilldown and context modification, the current context
perspective is modified. By saving this modified perspective, you will be able to return to it in the
future and quickly render any collection data (of the appropriate type) to the specifics of the perspective
created.
To perform the Save action from a customized table or chart, click the "Save As" button at the bottom, or use the "Save As" action from the Perspective menu at the top. The Save perspective page lets you specify a name and description to help identify this new perspective. When the save completes successfully, the perspective page is shown again with a message indicating a successful save and a URL that lets you return directly to this same perspective in the future.
The following are key points for this feature:
• The URL returned from the save can be shared with others, as long as they are using the IBM Navigator
for i on the same system.
• The saved perspective will be available from the Investigate Data page of Performance Data Investigator. The perspective can be saved either in an unlocked package on the system, where it is visible to all users, or in a user's private custom content package. The custom content package is located under the "Custom Perspectives - USERNAME" content package on the main Investigate Data page.
• The content package is stored in IFS at "QIBM/UserData/OS400/iSeriesNavigator/config/PML/CCP" in a file named "CCP_USERNAME.PML". Back up this file if you want to protect the saved perspectives on the system (a sample save command is sketched after this list).
• The saved perspectives can be used to analyze a different performance collection, simply by choosing
the collection from the library and name dropdown boxes. The context information must apply to the
new collection to be properly rendered. If an empty chart appears, verify and alter your perspective
context using the Change Context action.
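A minimal sketch of one way to back up the custom content package with CL follows; the save file name and library are assumptions, and the object path assumes the directory quoted above resides under the root file system:

CRTSAVF FILE(MYLIB/PDIBKP)                                                                                 /* Create a save file to hold the backup     */
SAV DEV('/QSYS.LIB/MYLIB.LIB/PDIBKP.FILE') OBJ(('/QIBM/UserData/OS400/iSeriesNavigator/config/PML/CCP'))   /* Save the custom content package directory */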
Related tasks
Change Context
The Change Context action allows you to inspect and modify the current context information for the chart
or table view.
Options
From the Options page, you can set persistent user preferences unique to Performance Data Investigator.
The options that can be set on this page are:
• Use Patterns - Specifies whether to use patterns where applicable in charts. This option is selected by
default.
• Show Charts - Sets the default to show charts whenever possible rather than tables. This option is
selected by default.
• Enable Design Mode - Enables advanced features that allow design and development of new content
packages. This option is not selected by default.
• Show Help - Enables help messages for many tasks. This option is selected by default.
• Show SQL error messages - Enables SQL error messages to be shown. This option is selected by default.
• Set Table Size - Allows you to specify the number of visible rows and columns shown for a table.
The default library specifies the library that will be used when a collection is selected. It can be set to one
of the following:
• Collection Services configured library
• Last visited library
• A specified library
The System Monitor options that can be set on this page are:
• Show thresholds: Specifies whether to show the thresholds in system monitor charts. This option is not
selected by default.
Refresh Perspectives
Use Refresh Perspectives to reload all the content packages on the system.
The Investigate Data page of Performance Data Investigator has a Refresh Perspectives button at the bottom. Clicking this button forces a reload of all the content packages on the system so that your view is updated. This is valuable when manually developing new content packages on the system.
Refresh Perspectives is only available when design mode is enabled.
Chart Features
There are many features available for Chart Views that allow you to further investigate your data and
modify the context of the chart view.
The interactive capability of charts makes them a powerful tool for analyzing performance data. Being
able to visualize graphed performance data can make peaks and valleys of activity stand out. The ability
to drill down into further details and specific time intervals is very useful. The following chart icons are
interactors which can aid you with analysis:
• By clicking the arrow icon, you will be able to select a point or bar on the chart. Selecting data
points allows Performance Data Investigator to help narrow future analysis based on the context you
find interesting. By selecting points on the chart and then an action from the action list menu, you can
drill down for further analysis on that data. This is the default interactor.
• By clicking the hand icon, you will be able to pan the chart by clicking and dragging around the
chart image. This interactor is useful when the chart is zoomed in close to a portion of the chart and you
want to move around without adjusting your zoom factor.
• By clicking the talk bubble icon, you will enable or disable tooltip information. Tooltips are
displayed when hovering over any chart data points and can be defined to show a number of interesting
bits of information. It is disabled by default.
• By clicking the icon that looks like a magnifying glass with a dotted rectangle behind it, you will
enable the zoom in action interactor. Zooming can be performed by clicking a point and dragging a
window around the portion of the chart you wish to further investigate. When you release the mouse
button, the perspective chart will be re-rendered to display only the area you have selected.
• By clicking the magnifying glass icon with the minus sign ("-") in it, you will incrementally zoom out
from your current zoom level until you return to the entire chart view for the perspective.
• By clicking the square icon with arrows pointing out in all directions, you reset the chart zoom to
the maximum level. This will show you the entire chart view for the perspective.
Only one chart interactor can be active at a time.
In addition, the "Show as Table" action is added to the chart action list menu. This action will change your
current view to render it as a table.
Table Features
There are many features available for Table Views that help you to further investigate your data.
Tables retain the capability to select interesting portions of your data by selecting entire rows. Tables can
easily be filtered, sorted, or searched for specific information. Switching back to a chart will reflect any
ordering and filtering changes made to the table.
There are several actions available that are specific to tables. The following are available by either clicking
the table icon features or by selecting a specific action from the action list menu.
• Select All - Selects all check boxes for all rows of the Select column. Selected rows can then be
manipulated by Performance Data Investigator through the actions on the action list menu.
• Deselect All - Clears all check marks in the Select column.
• Filter Row Show or Hide Toggle - The Filter row is hidden by default. Select this icon to show the filter
row to be able to filter the data. This allows you to refine the data displayed by specific parameters of
the column values.
• Clear All Filters - Removes any custom filters.
• Edit Sort - Allows you to sort the columns displayed in the table view based on the values in up to three
columns. Click this icon to perform the edit sort to the table view. The edit sort query will be displayed.
Select up to three column headings for first, second, and third sort positions. Then select ascending or
descending order for each sort to display data sorted in that manner of priority.
• Clear All Sorts - Removes all custom sorts that have been enabled.
The following actions are available only on the action list menu for tables:
• Show as chart - This action will change your current view to render it as a chart.
– This action is not available if "Show Charts" is not selected on the Options page.
– If you try to select the action "Show as chart" while viewing a table that does not have a data series
defined, this action will require you to define a data series before continuing.
• Columns - Add or remove columns to the table view. You can also rearrange the column order by moving
the column headings up or down.
• Show find toolbar - Allows you to perform a search within the table view.
• Restore defaults - Restores the table to the default sort and filtering.
The following actions are available by clicking icons found on the table:
• Select - By clicking the check box next to any row you can select a row in the table. Selecting a row
allows Performance Data Investigator to help narrow future analysis based on the context you find
interesting. The action selected from the action list menu will be performed on the selected rows.
• Sorting - By clicking the sort indicator (circumflex "^") in any column header, the table can be sorted in
ascending or descending order. You can also use the Edit Sort or Clear All Sorts icons to manipulate your
current sort criteria.
• Filtering - By clicking the Show Filter Row table icon, a filter row will be shown under the column
headings for the table. This allows you to refine the data displayed by specific parameters of the column
values. The filter row is hidden by default. To filter the rendered data based on a condition for a column,
click the "Filter" link for that column.
Export
The Export View function allows you to export a chart or table to a file location for later reference. Data
can be exported to an image (charts only), comma delimited, or tab delimited file.
Selecting the Export action from a chart or table view brings you to the Export page. From this page, you
can inspect and modify the following fields for the perspective view that you wish to save:
• Title - This is the title that will be used at the top of the file saved.
• Format - Select the format for saving the perspective view:
For a table, the choices are: comma delimited (*.csv) or tab delimited (*.txt) file formats
For a chart, you may select from: image (*.png), comma delimited (*.csv), or tab delimited (*.txt) formats
• Data Range - Allows you to change the data range in the view exported. The choices available are:
All data - This will export all of the data available in the current view.
Displayed data - This will export only the current visible data of the view.
User-defined range - When this option is selected, you can specify a first and last record number.
Record number refers to the index of a data element among all data for that data series. This will
export the specified range.
When OK is selected from the Export page, a new browser window may open to download the chart
or table view. You may also need to respond to a message bar on your browser to allow the file to be
downloaded. The file download window allows you to open the data or save it to your client.
Modify SQL
The Modify SQL action allows you to inspect and modify the SQL statements used to retrieve the data
from the performance collection in the current context.
A perspective view is rendered from the selected collection's data through a defined SQL statement. Modify SQL gives you the opportunity to change the query used to retrieve the data. Experience with
SQL development is advised before modifying the SQL statements. You can use this feature to understand
which database fields Performance Data Investigator uses to calculate the complex metrics displayed in a
view.
Be aware that the queries rely on SQL aliases to represent the specific database members needed to target the selected performance collection. The aliases are created in QTEMP and are named by concatenating the library, file, and member names into a single alias name. If you want to run these queries outside of Performance Data Investigator, the aliases need to be re-created in your own interactive session.
Once the SQL statement has been modified and the perspective has been displayed again, you can then
use the "Save As" action to save the updated perspective into a custom content package for future use.
The Modify SQL panel also features a Reset button which will reset the SQL statement to what it was
when the panel was loaded. There is also a check box "Allow Collection Choice" which when checked
(default behavior) will allow the query to run on any collection, not just your currently selected collection.
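As a sketch of re-creating one of the QTEMP aliases mentioned above before running a copied query in your own session (the library, file, member, and alias names are hypothetical; the real alias name is whatever appears in the FROM clause of the query you copied):

RUNSQL SQL('CREATE ALIAS QTEMP.MYLIB_QAPMJOBMI_Q123456789 FOR MYLIB.QAPMJOBMI (Q123456789)') COMMIT(*NONE)   /* Re-create the QTEMP alias over the collection member */

The same CREATE ALIAS statement can also be run from an interactive SQL session before running the copied query there.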
Related tasks
Creating and Editing a View
When design mode is enabled, an Edit View action is available if you are currently viewing a table or chart.
It is also possible to create, edit, and add new views to an unlocked perspective.
Creating a Perspective
When design mode is enabled, a "New Perspective" button will appear on the main Investigate Data page
as well as on the "Saving a custom perspective" panel. You can create a new custom perspective into an
unlocked content package or perspective group of your choice.
Change Context
The Change Context action allows you to inspect and modify the current context information for the chart
or table view.
The information displayed on this panel represents the context mined from prior perspectives which
affect the current table or chart. Altering the values will affect the current table or chart and future
perspectives which mine the same data. It will not, however, affect any prior perspectives. If you exit
the current perspective, any changes made using Change Context are lost, and the values will be reset
to their prior, mined values. Typically, if you plan to continue analysis and drilldown, closing the current perspective and selecting new data points is preferable to changing the context directly.
Related tasks
Save Perspective
The Save Perspective feature allows you to save a modified perspective for future use.
Modify SQL
The Modify SQL action allows you to inspect and modify the SQL statements used to retrieve the data
from the performance collection in the current context.
Data Series
The Data Series panel allows you to view and modify the data series used for a chart view, or define a new
data series to be used by a chart.
Thresholds
Thresholds provide a way to quickly glance at a chart and have a visual indicator of whether the values
shown are within guidelines, or whether action should be taken.
Creating a Perspective
When design mode is enabled, a "New Perspective" button will appear on the main Investigate Data page
as well as on the "Saving a custom perspective" panel. You can create a new custom perspective into an
unlocked content package or perspective group of your choice.
Creating a Custom Package
When design mode is enabled, you can create your own custom content packages. These packages can
contain perspectives tailored to your needs.
The interactive ability to create custom charts and tables can be very useful to an advanced user who
has a need to see performance data in a different manner from the content packages provided by IBM.
A content package typically contains an independently defined set of perspectives with some common
purpose. To create a new content package folder, click the "New Package..." icon at the top of the main Investigate Data panel, or at the top of the perspectives list in the "Saving a custom perspective" panel.
This will launch a new panel where you can specify a name and description for your new package.
Once a package is created, you can use the Edit button to edit the package information. While in the
Edit mode, you can also select a default perspective for your package (if there are perspectives currently
defined in your package). Also, you can select the "Locked" checkbox if you wish to lock your package
(and all of its descendants) so that it cannot be edited. If the package is left unlocked after creation, you
can use the Edit button to edit the package information or the Delete button to delete it.
Any content packages you create will appear in the main perspective hierarchy list. Once a package is
created, you can add new perspective groups and perspectives to it.
Creating a Folder
When design mode is enabled, a "New folder" button will appear on the main Investigate Data page as
well as on the "Saving a custom perspective" panel. This will allow you to organize any perspectives
created into logical groupings.
To create a perspective group, you must have either an unlocked content package or perspective group
selected in the tree. Clicking the "New folder" button will launch a panel that will allow you to type in a
name and description for the new folder. If a content package is selected when the "New folder" action
is taken, the perspective group will be created under the selected content package. If the "New folder"
action is taken while a perspective group is selected, the new perspective group will be nested below the
selected perspective group.
Once a perspective group is created, you can use the Edit button to edit the folder information. While
in the Edit mode, you can also select a default perspective for your perspective group (if there are
perspectives currently defined in your package). Also, you can select the "Locked" check box if you want
to lock your folder (and all of its descendants) so that it cannot be edited. If the folder is left unlocked
after creation, you can use the Edit button to edit the perspective group information or the Delete button
to delete it.
Any perspective groups you create will appear in the main perspective hierarchy list under the associated
content package. Once a perspective group is created, you can add new perspective groups and
perspectives to it. You may also choose to use the Move Up or Move Down buttons to arrange the order of
the perspective groups in your content package.
Creating a Perspective
When design mode is enabled, a "New Perspective" button will appear on the main Investigate Data page
as well as on the "Saving a custom perspective" panel. You can create a new custom perspective into an
unlocked content package or perspective group of your choice.
A perspective is one rendered panel of data, typically in the form of one or more charts or tables. A
perspective contains one or more (up to twelve) views. A view is a single chart or table. To create a
perspective, you must have either an unlocked content package or perspective group selected in the
perspective tree. You can then select the "New Perspective" button. This will launch a new panel where
you can specify a name and description for your perspective. You can select the "Locked" checkbox if you
wish to lock your perspective so that it cannot be edited. Also on this panel, you can add views to the
perspective.
If a perspective is created unlocked, you can use the "Edit" button to edit the perspective information.
While in edit mode, you can change the name, description, or locked status (from unlocked to locked). You
can also add, edit, or delete views associated with the perspective. An unlocked perspective can also be
deleted using the "Delete" button.
Performance Data Investigator uses XML files to store how perspectives are defined (such as how it mines
and renders data). The files are referred to as "Performance Markup Language" (PML). You can view or
edit the PML directly for an unlocked perspective by clicking on the "Advanced Edit" button from the main
Investigate Data page as well as on the "Saving a custom perspective" panel.
Any perspectives you create will appear in the main perspective hierarchy list under the associated
content package or perspective group.
Related tasks
Modify SQL
The Modify SQL action allows you to inspect and modify the SQL statements used to retrieve the data
from the performance collection in the current context.
Creating and Editing a View
When design mode is enabled, an Edit View action is available if you are currently viewing a table or chart.
It is also possible to create, edit, and add new views to an unlocked perspective.
Data Series
The Data Series panel allows you to view and modify the data series used for a chart view, or define a new
data series to be used by a chart.
The Data Series attributes are as follows. Some attributes may be locked in the case where the chart
already has the attribute specified.
• Domain - Specifies the field to be used for the independent axis used for this chart. If another data
series exists for the current chart, this field is locked to match the existing domain value.
• Range - Allows you to specify Range values to be used for this chart. The Available list shows all
possible ranges. The Add button adds the ranges selected in the Available list to the data series.
The Remove button removes the ranges selected in the Selected table from the data series. Use the
dropdown menus in the Selected table to specify the color, background, and pattern of each range.
• Type - Select the chart type (line or bar with variations).
• Breakdown - Each distinct value of the chosen breakdown field will produce a unique data series which
represents all of the range values for that breakdown value. This is the mechanism used to create
individual data series for each unique job on a system over time. For example, consider a small data set with columns Interval Number, Job Name, and Job CPU (a sample is sketched after this list). A line chart with a domain of Interval Number and a range of Job CPU would result in the following data points: (1, 10), (1, 15), (2, 20), (2, 30), (3, 30), (3, 45). Specifying a breakdown dimension of Job Name would produce two lines with the following data points: Series 1 = (1, 10), (2, 20), (3, 30) and Series 2 = (1, 15), (2, 30), (3, 45). It is a complicated but powerful feature that may take a little experimenting to fully understand.
• Tooltip fields - By selecting fields from this list, each point in the data series will include in its tool tip the
value of the fields you select for the current domain.
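As a sketch of the data set behind the breakdown example above (the job names are hypothetical; the interval and CPU values are implied by the data points listed):

Interval Number    Job Name    Job CPU
1                  JOBA        10
1                  JOBB        15
2                  JOBA        20
2                  JOBB        30
3                  JOBA        30
3                  JOBB        45

With no breakdown, all six rows are charted as one series; with a breakdown of Job Name, JOBA and JOBB each become their own line.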
The data series functions are only available for chart views. It may be necessary to add a data series when switching from a table view to a chart view. If you try to select the "Show as chart" action while viewing a table that does not have a data series defined, you are required to define a data series before continuing.
Related tasks
Creating and Editing a View
When design mode is enabled, an Edit View action is available if you are currently viewing a table or chart.
It is also possible to create, edit, and add new views to an unlocked perspective.
Thresholds
Thresholds provide a way to quickly glance at a chart and have a visual indicator of whether the values
shown are within guidelines, or whether action should be taken.
A threshold represents a boundary which, once crossed, indicates the data has reached a new state.
Threshold values are persistent and are stored according to user profile. The following can be specified for
a threshold:
• Name - The name to be displayed for this threshold
• Field - This is the field for which this threshold is defined.
• Color - This specifies the color to be used when drawing this threshold on the chart.
• Current Value - The current value represents the threshold value currently specified by the user. This
value can be easily changed, and will persist across application sessions. Future thresholds defined with
the same name and field will also use the same value. To reset the current value to the default value,
click the "Reset to Default Value" button.
• Default Value - The default value represents the value supplied when the content package was created.
This value will be used when the user does not intentionally override it by specifying a Current Value. To
force the default value to a new value for this threshold, click "Update to Current Value".
Related tasks
Creating and Editing a View
When design mode is enabled, an Edit View action is available if you are currently viewing a table or chart.
It is also possible to create, edit, and add new views to an unlocked perspective.
Graph History
Graph History allows you to view and analyze Collection Services Historical Data.
The Performance > Graph History task provides a graphical view of the historical performance data
collection created by Collection Services. You can view historical performance data by using the Graph
History task in IBM Navigator for i. To view the historical data, you must use Collection Services to collect
data and enable historical data creation in the Collection Services configuration.
• Summary data
Summary data is the system level or summarized metrics that are useful in identifying trends or detecting changes in a system over a long time. Summary data is created when the option to create historical data is turned on.
• Detail data
Detail data is the data from which metrics are derived and other relevant supplementary data. Detail
data is created when the historical detail data creation option is also on. The creation of detail data is
on by default, but can be turned off. This data is used to look deeper into a problem that was identified
by looking at summary historical data. Only the top number of contributors for each metric are stored as
historical detail data, as defined by the value of the historical detail data filter.
The amount of historical data that you are allowed to keep depends on whether PM for Power Systems
(PM Agent) is active.
• If PM for Power Systems is not active, you can keep up to 7 days of detail data and 1 month of summary
data.
• If PM for Power Systems is active, then you can keep up to 60 days of detail data and 50 years of
summary data.
Related concepts
Collection Services support for historical data
Collection Services can be configured to create and maintain a historical performance data collection.
The Performance > Graph History task in IBM Navigator for i can be used to analyze the historical
performance data in chart or table form.
Related tasks
Configuring historical data for Graph History
Configure the historical performance data settings in Collection Services to generate historical
performance data for use by the Graph History task in IBM Navigator for i.
Creating historical data for Graph History
To create historical performance data immediately for use in Graph History, follow these steps.
Viewing historical data with Graph History
Use the Graph History task in IBM Navigator for i to view historical performance data.
Activating PM Agent
PM Agent is a part of the operating system and you must activate it to use its collecting capabilities.
Related reference
Historical data files
9. Click OK.
Historical data is now automatically created for you at every Collection Services cycle time.
Related concepts
Graph History concepts
Configure and generate historical performance data for use by the Graph History task in IBM Navigator
for i.
Collection Services support for historical data
Collection Services can be configured to create and maintain a historical performance data collection.
The Performance > Graph History task in IBM Navigator for i can be used to analyze the historical
performance data in chart or table form.
Related tasks
Creating historical data for Graph History
To create historical performance data immediately for use in Graph History, follow these steps.
Viewing historical data with Graph History
Use the Graph History task in IBM Navigator for i to view historical performance data.
Activating PM Agent
PM Agent is a part of the operating system and you must activate it to use its collecting capabilities.
Viewing historical data with Graph History
Use the Graph History task in IBM Navigator for i to view historical performance data.
The Graph History task is included in IBM Navigator for i and is used to display the historical performance
data collection created by Collection Services. To view the historical performance data, follow these
steps:
1. Select Performance > Graph History from your IBM Navigator for i window. An initial chart is
displayed based on the default settings.
2. Select what you want to display by changing the fields in the context pane.
3. Click Refresh to display the graph.
After you launch Graph History, the chart displays a series of graphed collection points. These collection
points on the graph line are identified by two different graphics that correspond to the two levels of data:
• A square collection point represents a data point where detail data is available for displaying the top
contributors and properties information.
• A circular collection point represents a data point where only summary data is available, so no top
contributors or properties information can be shown.
To view the top contributors of a particular data point, select a square data point to display the Top
Contributors chart. To view the properties of one of the top contributors, select the bar next to a top
contributor to display the Properties pane for that top contributor.
Related concepts
Graph History concepts
Configure and generate historical performance data for use by the Graph History task in IBM Navigator
for i.
Collection Services support for historical data
Collection Services can be configured to create and maintain a historical performance data collection.
The Performance > Graph History task in IBM Navigator for i can be used to analyze the historical
performance data in chart or table form.
Related tasks
Configuring historical data for Graph History
Configure the historical performance data settings in Collection Services to generate historical
performance data for use by the Graph History task in IBM Navigator for i.
Creating historical data for Graph History
To create historical performance data immediately for use in Graph History, follow these steps.
Adding a report definition
To add a performance data report definition, follow these steps.
1. Select Performance > All Tasks > Performance Data Reports from your IBM Navigator for i window.
2. Click Add Definition.
Manage collections
Select the Manage Collections task to launch the collections table to view and work with the collections
on your system. The collections from Collection Services, IBM i Job Watcher, IBM i Disk Watcher,
Performance Explorer, and Batch Model are visible here.
Related reference
CL commands for performance
The operating system includes several CL commands to help you manage and maintain system performance.
Viewing a collection
To view a collection in IBM Navigator for i, follow one of the following series of steps.
1. Select Performance > Investigate Data from your IBM Navigator for i window.
2. Expand the content package that you are interested in.
3. Keep expanding the nodes in the tree until you navigate to the perspective that you would like to use.
4. Select the perspective.
5. Select the collection library.
6. Select the collection name.
7. Click Display.
You can also view a Collection Services, Disk Watcher, Job Watcher, or Performance Explorer file based
collection by doing the following steps:
1. Select Performance > Manage Collections from your IBM Navigator for i window. This action launches
the list of data collections on your system.
2. Select the collection that you want to view.
3. From the Actions menu, select Investigate Data. This action launches the Performance Data
Investigator tool. The selected collection data is rendered using the default perspective that is defined
by the content package.
You can also view a Batch Model file based collection by doing the following steps:
1. Select Performance > All Tasks > Sizing > Batch Model > Batch Models from your IBM Navigator for i
window. This action launches the list of data collections on your system.
2. Select a Batch Model file based collection with a status of Complete that you want to view.
3. From the Actions menu, select Investigate Results. This action launches the Performance Data
Investigator tool. The selected collection data is rendered using the default perspective that is defined
by the content package.
Related concepts
Investigate Data
Selecting the Investigate Data task launches the powerful Performance Data Investigator tool. With this
tool, you can view and analyze data that is stored in performance collections in chart or table form.
Copying a collection
To copy a collection, follow these steps.
1. Select Performance > Manage Collections from your IBM Navigator for i window.
2. Select the collection that you want to copy.
3. From the Actions menu, select Copy.
Related reference
Copy Performance Collection (CPYPFRCOL) command
See the Copy Performance Collection (CPYPFRCOL) command for information about copying a performance collection.
Deleting a collection
To delete a collection, follow these steps.
1. Select Performance > Manage Collections from your IBM Navigator for i window.
2. Select the collection that you want to delete.
3. From the Actions menu, select Delete.
Related reference
Delete Performance Collection (DLTPFRCOL) command
See the Delete Performance Collection (DLTPFRCOL) command for information about deleting a performance collection.
Saving a collection
To save a collection, follow these steps.
1. Select Performance > Manage Collections from your IBM Navigator for i window.
2. Select the collection that you want to save.
3. From the Actions menu, select Save.
Related reference
Save Performance Collection (SAVPFRCOL) command
See the Save Performance Collection (SAVPFRCOL) command for information about saving a performance collection.
Restoring a collection
To restore a collection, follow these steps.
1. Select Performance > Manage Collections from your IBM Navigator for i window.
2. From the Actions menu, select Maintain Collections > Restore.
Related reference
Restore Performance Collection (RSTPFRCOL) command
See the Restore Performance Collection (RSTPFRCOL) command for information about restoring a performance collection.
Converting a collection
To convert a collection that was collected in a previous release, follow these steps.
1. Select Performance > Manage Collections from your IBM Navigator for i window.
2. Select the collection that you want to convert.
3. From the Actions menu, select Maintain Collections > Convert.
Related reference
Convert Performance Collection (CVTPFRCOL) command
See the Convert Performance Collection (CVTPFRCOL) command for information about converting a performance collection.
Viewing collection properties
To view collection properties, follow these steps.
1. Select Performance > Manage Collections from your IBM Navigator for i window.
2. Select the collection that you want to view the properties for.
3. From the Actions menu, select Properties.
When you use Collection Services to collect performance data, you control what data is collected and how
often it is collected.
Related reference
Configure Performance Collection (CFGPFRCOL) command
See the Configure Performance Collection (CFGPFRCOL) command for information about configuring Collection Services.
Related tasks
Creating database files from Collection Services data
Use this information to manually or automatically create database files from Collection Services data.
Related reference
Create Performance Data (CRTPFRDTA) command
See the Create Performance Data (CRTPFRDTA) command for information about creating Collection Services database files.
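A minimal sketch of using this command follows; the management collection object and library names are assumptions (on most releases, *ACTIVE may also be accepted for the currently active collection), so prompt the command (F4) to confirm its parameters:

CRTPFRDTA FROMMGTCOL(MYLIB/Q123456789) TOLIB(MYLIB)   /* Create Collection Services database files from a management collection (*MGTCOL) object */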
1. Select Performance > All Tasks > Collectors > Disk Watcher from your IBM Navigator for i window.
2. Click Add Disk Watcher Definition.
Related reference
Add Disk Watcher Definition (ADDDWDFN)
See the Add Disk Watcher Definition (ADDDWDFN) command for information about adding a Disk Watcher definition to the system.
1. Select Performance > All Tasks > Collectors > Job Watcher from your IBM Navigator for i window.
2. Click Stop Job Watcher.
Related reference
End Job Watcher (ENDJW)
See the End Job Watcher (ENDJW) command for information about ending a Job Watcher collection.
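A minimal CL sketch of the same task follows; the session ID is an assumption, so substitute the session ID of the active Job Watcher collection on your system and prompt the command (F4) to confirm its parameters:

ENDJW SSNID(MYJWSSN)   /* End the active Job Watcher collection for this session */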
Batch Model
The Batch Model tool models the system utilization and run times of IBM i batch workloads.
You can use the Batch Model tool to help analyze and predict batch job performance on the IBM i and
help answer the question: “What can I do to my system to meet my overnight batch runtime requirements
(also known as the batch window)?”
A Batch Model collection is created based on existing performance data that is collected by IBM i
Collection Services. A Batch Model collection can then be changed and analyzed for various “what if”
scenarios, such as workload growth and hardware upgrades.
Investigate the results of Batch Model to view the batch window in a form that shows workload start/
stop times, dependencies between workloads, and amount of resources used. View the Batch Model
perspectives to locate times in the batch window when more efficient job scheduling can improve total
system throughput.
The Batch Model functions and content package require the installation of IBM Performance Tools for i
(5770-PT1) Option 1 - Manager Feature.
Creating a batch model
A Batch Model collection is created based on existing performance data that is collected by IBM i
Collection Services. During the process of creating a batch model, the measured data is analyzed and a
model is built over the measured data.
To create a batch model, follow these steps.
1. Select Performance > Manage Collections from your IBM Navigator for i window.
2. Select a Collection Services file based collection that you want to create a batch model for.
3. From the Actions menu, select Create Batch Model.
You can also create a batch model by following these steps:
1. Select Performance > All Tasks > Sizing > Batch Model from your IBM Navigator for i window.
2. Click Create Batch Model.
Note: The Create Batch Model action submits a batch job to create the model. Creating a batch model
can sometimes be a long-running operation. The creation is done when the status of the Batch Model collection is Complete.
Calibrating a batch model
You must calibrate a batch model to re-create the model after changes are made to the batch model calibration.
To calibrate a batch model, follow these steps.
1. Select Performance > Manage Collections from your IBM Navigator for i window.
2. Select a Batch Model file based collection with a status of Calibration Changed that you want to
calibrate.
3. From the Actions menu, select Calibrate.
You can also calibrate a batch model by doing the following steps:
1. Select Performance > All Tasks > Sizing > Batch Model from your IBM Navigator for i window.
2. Click Calibrate Batch Model.
Note: The Calibrate Batch Model action submits a batch job to re-create the model. Calibrating a batch
model can sometimes be a long-running operation. The calibration is done when the status of the Batch Model collection is Complete.
Merging batch models
Merging batch models combines all the workloads from two different data collections. This action is helpful when you need to merge data that was collected during different time periods or from different systems.
To merge two batch models, follow these steps.
1. Select Performance > Manage Collections from your IBM Navigator for i window.
2. Select a Batch Model file based collection that you want to merge.
3. From the Actions menu, select Merge.
You can also merge two batch models by doing the following steps:
1. Select Performance > All Tasks > Sizing > Batch Model from your IBM Navigator for i window.
2. Click Merge Batch Model.
Collection Services provides for the collection of system and job level performance data. It is the primary
collector of performance data.
IBM Navigator for i
See the IBM Navigator for i topic to learn about more functions available within IBM Navigator for i.
Monitor concepts
Monitors track near real-time performance data. Additionally, they continually monitor your system to run
a selected command when a specified threshold is reached. Find out how monitors work, what they can
monitor, and how they can respond to a performance situation.
System monitors use performance metrics that are stored in database files that are generated and
maintained by Collection Services. You can use Performance Data Investigator to view the performance
data that is gathered by the monitor. You can change the frequency of the data collection in the monitor
properties. The settings in the monitor properties override the settings in Collection Services if the
monitor requires the data to be collected more frequently.
Note: When monitors override the settings of Collection Services to gather performance data more
frequently, the settings are not undone when a monitor stops. You must manually change the Collection
Services settings if you no longer want to collect the data as often.
You can use monitors to track and research many different elements of system performance and can
have many different monitors that run simultaneously. Using multiple monitors together can provide a
sophisticated tool for observing and managing system performance. For example, when a new interactive
application is implemented, you might use a system monitor to prioritize a job's resource utilization and a
message monitor to alert you if a specified message occurs.
Monitors must be manually restarted after a partition IPL; they do not restart automatically.
Collection Services data files: QAPMSMJOS
This Collection Services database file contains summarized metrics from job data (*JOBOS collection category) that may be used in support of system monitoring.
Collection Services data files: QAPMSMPOL
This Collection Services database file contains summarized metrics from pool data (*POOL collection category) that may be used in support of system monitoring.
Collection Services data files: QAPMSMSYS
This Collection Services database file contains summarized metrics from system data (*SYSLVL collection category) that may be used in support of system monitoring.
Monitor metrics
To effectively monitor system performance, you must decide which aspects of system performance you
want to monitor. IBM Navigator for i offers various performance measurements, which are known as
metrics, to help you pinpoint different aspects of system performance.
When you configure a monitor, you can use any metric, a group of metrics, or all the metrics from the list
to be included in your monitor. Metric types that you can use in your monitor include the following.
Table 2. Metric groups and descriptions
• CPU Utilization - The percentage of available processing unit time that is consumed by jobs on your system. Choose from the following types of CPU Utilization metrics for use in your monitors:
  – CPU Utilization (Average): Configured CPU percent unscaled.
  – CPU Utilization (Interactive Jobs): Configured CPU percent that is consumed by interactive jobs.
  – CPU Utilization (Uncapped): Uncapped CPU percent unscaled. The amount of unscaled system CPU consumed relative to the maximum uncapped CPU the partition could consume, based on the number of virtual processors that are assigned to the partition and the capacity of the shared virtual pool.
  – CPU Utilization (SQL): SQL CPU percent unscaled. The amount of unscaled system CPU consumed performing work done on behalf of SQL operations, relative to the configured CPU time (processor units) available to the partition.
• Interactive Response Time (Average and Maximum) - The response time that interactive jobs experience on your system.
• Transaction Rate (Interactive) - The number of transactions per second completed on your system by interactive jobs (Job Type = 'I').
• Batch Logical Database I/O - The average number of logical database input/output (I/O) operations that are currently performed by batch jobs (Job Type = 'B') on the system.
• Disk Arm Utilization (Average and Maximum) - The disk unit percent busy for all disks.
• Disk Arm Utilization for System ASP (Average and Maximum) - The disk unit percent busy for disks in the system ASP.
• Disk Arm Utilization for User ASP (Average and Maximum) - The disk unit percent busy for all disks in user ASPs.
• Disk Arm Utilization for Independent ASP (Average and Maximum) - The disk unit percent busy for disks in independent ASPs.
• Disk Storage Utilization (Average and Maximum) - The percentage of disk storage that is full on your system during the time you collect the data.
• Disk Storage Utilization for System ASP (Average and Maximum) - The percentage of disk storage that is full in the system ASP during the time you collect the data.
• Disk Storage Utilization for User ASP (Average and Maximum) - The percentage of disk storage that is full in user ASPs during the time you collect the data.
• Disk Storage Utilization for Independent ASP (Average and Maximum) - The percentage of disk storage that is full in independent ASPs during the time you collect the data.
• Communications Line Utilization (Average and Maximum) - The amount of data that was sent and received on all your system communication lines.
• LAN Utilization (Maximum and Average) - The amount of data that was sent and received on all your local area network (LAN) communication lines.
• Machine Pool Faults - The number of faults per second occurring in the machine pool on the system.
• User Pool Faults (Maximum and Average) - The number of faults per second per pool.
• Temporary Storage Used - The total amount of temporary storage (megabytes) in use within the system. This metric includes both system and user temporary storage.
• Spool File Creation Rate - The number of spool files that are created per second.
• Shared Processor Pool Utilization (Virtual and Physical) - The amount of CPU consumed in the shared pool by all partitions that are using the pool, relative to the CPU available within the pool.
• Disk Response Time (Read and Write) - The response time that disk units experienced on your system.
• HTTP Requests Received Rate - The number of requests received per second for all HTTP servers.
• HTTP Requests Received (Maximum) - The largest number of HTTP requests received by a single server.
• HTTP Responses Sent Rate - The number of responses sent per second for all HTTP servers.
• HTTP Responses Sent (Maximum) - The largest number of HTTP responses sent by a single server.
• HTTP Non-Cached Requests Processed (Average and Maximum) - The number of non-cached requests processed for HTTP servers.
• HTTP Error Responses Sent (Average and Maximum) - The number of error responses sent for HTTP servers.
• HTTP Non-Cached Requests Processing Time (Total and Highest Average) - The processing time for non-cached requests for HTTP servers.
• HTTP Cached Requests Processing Time (Total and Highest Average) - The processing time for cached requests for HTTP servers.
If you need more help, click the Help button on the Create New System Monitor-Metrics page. After you become familiar with the IBM Navigator for i metrics, which metrics you select depends on the information needs of your computing environment. After you select metrics that target the information you are trying to see, you are ready to view and change detailed metric information for each metric you selected for your monitor.
Investigate system monitor data
Use this information to learn how to view system monitor performance data in Performance Data
Investigator in IBM Navigator for i.
You can use system monitors to gather and display near real-time performance data from your system.
System monitors use Performance Data Investigator to chart the performance data that is gathered by the
monitor. To view system monitor performance data, perform the following steps:
1. In IBM Navigator for i, select Monitors > System Monitors.
2. Select the name of the monitor whose performance data you want to view. From the Actions menu,
select Investigate Monitor Data.
3. Select the name of the perspective you want to view. This action launches the perspective in
Performance Data Investigator.
4. Use the Refresh button to update the perspective to show new data as it is collected for a monitor that
is active.
5. Click Done when you are done viewing the data.
Related tasks
Performance Data Investigator
Performance Data Investigator provides a web-based GUI over performance data with interactive charts
and tables. You can view and analyze performance data for each of the collectors (Collection Services,
IBM i Job Watcher, IBM i Disk Watcher, Performance Explorer, Database SQL Plan Cache, Database SQL
Performance Monitor, and Historical Data).
Related reference
Monitor metrics
To effectively monitor system performance, you must decide which aspects of system performance you
want to monitor. IBM Navigator for i offers various performance measurements, which are known as
metrics, to help you pinpoint different aspects of system performance.
Scenarios: IBM Navigator for i monitors
Use this information to see how you can use some of the different types of monitors to look at specific
aspects of your system's performance.
The monitors included in IBM Navigator for i provide a powerful set of tools for researching and managing
system performance. For an overview of the types of monitors that are provided by IBM Navigator for i,
see “IBM Navigator for i Monitors” on page 89.
See the following scenarios for detailed usage examples and sample configurations:
Situation
As a system administrator, you need to ensure that the system has enough resources to meet the current
demands of your users and business requirements. For your system, CPU utilization is an important
concern. You would like the system to alert you if the CPU utilization gets too high and to temporarily hold
any lower priority jobs until more resources become available.
To accomplish this, you can set up a system monitor that sends you a message if CPU utilization exceeds
80%. Moreover, it can also hold all the jobs in the QBATCH job queue until CPU utilization drops to 60%, at
which point the jobs are released, and normal operations resume.
Configuration example
To set up a system monitor, you need to define what metrics you want to track and what you want the
monitor to do when the metrics reach specified levels. To define a system monitor that accomplishes this
goal, complete the following steps:
1. In IBM Navigator for i, select Monitors > System Monitors. From the Actions menu, select Create
New System Monitor...
2. On the General page, enter a name and description for this monitor. Click Next.
3. Add and edit the CPU Utilization (Average) metric properties by performing the following steps:
a. To add the metric, select CPU Utilization (Average) from the list of Available Metrics, and click
Add. CPU Utilization (Average) is now listed under Metrics to monitor.
b. To edit the metric properties, click the CPU Utilization (Average) metric in the Metrics to monitor
list. This action opens the Configure Metric page where you can edit the properties of the metric.
c. For Collection interval, specify how often you would like to collect the data. This action overrides
the Collection Services setting. For this example, specify 30 seconds.
d. For Threshold 1, enter the following values to send an inquiry message if the CPU Utilization is
greater than or equal to 80%:
i) Select Enable threshold.
ii) For the threshold trigger value, specify >= 80 (greater than or equal to 80 percent busy).
iii) For Duration, specify 1 interval.
iv) For the IBM i command, specify an inquiry message command (a sample command is sketched after this procedure).
v) For the threshold reset value, specify < 60 (less than 60 percent busy). This action resets the
monitor when CPU utilization falls below 60%.
e. For Threshold 2, enter the following values to hold all the jobs in the QBATCH job queue when CPU
utilization stays above 80% for five collection intervals:
i) Select Enable threshold.
ii) For the threshold trigger value, specify >= 80 (greater than or equal to 80 percent busy).
iii) For Duration, specify 5 intervals.
iv) For the IBM i command, specify the following:
HLDJOBQ JOBQ(QBATCH)
v) For the threshold reset value, specify < 60 (less than 60 percent busy). This action resets the
monitor when CPU utilization falls below 60%.
vi) For Duration, specify 5 intervals.
vii) For the IBM i command, specify the following:
RLSJOBQ JOBQ(QBATCH)
This command releases the QBATCH job queue when CPU utilization stays below 60% for five
collection intervals.
4. Click OK to save the metric properties.
5. Click Next to view the monitor summary page.
6. Click Finish to save the monitor.
7. From the list of system monitors, right-click the new monitor and select Start.
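A minimal sketch of an inquiry message command that could be used for Threshold 1 follows; the message text and the use of the system operator as the recipient are assumptions:

SNDMSG MSG('CPU utilization has exceeded 80 percent') TOUSR(*SYSOPR) MSGTYPE(*INQ)   /* Send an inquiry message to the system operator */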
Results
The new monitor collects CPU utilization data, with new data points added every 30 seconds, according
to the specified collection interval. The monitor automatically carries out the specified threshold actions
whenever CPU utilization reaches 80%. The monitor will continue to run and perform threshold actions
until you stop the monitor.
Note: This monitor tracks only CPU utilization. However, you can include any number of the available
metrics in the same monitor, and each metric can have its own threshold values and actions. You can also
have several system monitors that run at the same time.
Scenario: Message monitor
Situation
As a system administrator, you need to be aware of inquiry messages as they occur across your system.
You can set up a message monitor to display any inquiry messages in your message queue that occur on
your system.
Configuration example
To set up a message monitor, you need to define the types of messages you would like to watch for and
what you would like the monitor to do when these messages occur. To set up a message monitor that
accomplishes this goal, complete the following steps:
1. In IBM Navigator for i, select Monitors > Message Monitors. From the Actions menu, select Create
New Message Monitor.
2. On the General page, enter a name and description for this monitor. Click Next.
3. On the Message Queue page, enter the following values:
a. For Message Queue to Monitor, specify QSYSOPR.
b. For Library, specify QSYS.
c. Click Next.
4. On the Message Set page, perform the following steps:
a. On the Message Set 1 tab, click Add.
b. On the Add A Message Set page, enter the following values:
i) Select Add a user defined set of messages.
ii) For Message Type, select Inquiry.
iii) Click OK.
c. Select Set the message trigger and reset.
d. For Trigger at the following message count, specify 1.
e. Click Next.
5. Click Next to view the monitor summary page.
6. Click Finish to save the monitor.
7. From the list of message monitors, right-click the new monitor and select Start.
Results
The new message monitor displays any inquiry messages sent to QSYSOPR.
Note: This monitor responds only to inquiry messages sent to QSYSOPR. However, you can include two
different sets of messages in a single monitor, and you can have several message monitors that run at the
same time. Message monitors can also carry out IBM i commands when specified messages are received.
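For example, a message monitor's command action could forward an alert to a specific workstation message queue with a command like the following sketch (the message text and the queue name OPERDSP01 are placeholders, not values taken from the monitor configured above):
SNDBRKMSG MSG('An inquiry message arrived on QSYSOPR. Please review and reply.') TOMSGQ(OPERDSP01)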
IBM Performance Management for Power Systems (PM for Power Systems) -
support for IBM i
The IBM Performance Management for Power Systems (PM for Power Systems) offering, in support of
IBM i, automates the collection, archival, and analysis of system performance data and returns reports
to help you manage system resources and capacity.
The PM for Power Systems offering includes the Performance Management Agent (PM Agent). The
PM Agent is a function of the operating system that provides automated collection of nonproprietary
Collection Services data, reduces the data, and sends the data to IBM. When you send your data to IBM,
you eliminate the need to store all the trending data yourself. IBM stores the data for you and provides
you with a series of reports and graphs that show your server's growth and performance. You can access
your reports electronically using a traditional browser.
This offering, when used with the IBM Systems Workload Estimator, allows you to better understand
how your business trends relate to the timing of required hardware upgrades, such as central processing
unit (CPU) or disk. The IBM Systems Workload Estimator can size a system consolidation or evaluate
upgrading a system with logical partitions, by having PM Agent send the data for multiple systems or
partitions to the IBM Systems Workload Estimator.
Related concepts
Collection Services
Collection Services provides for the collection of system and job level performance data. It is the primary
collector of performance data.
Related tasks
Size Next Upgrade
Use the Size Next Upgrade action to send data from your current session to Workload Estimator for use in
sizing a future system using current performance statistics.
Related reference
PM for Power Systems web site
For more information, see the PM for Power Systems website.
PM Agent concepts
Learn about the functions and benefits PM Agent can provide and about important implementation
considerations.
PM Agent uses Collection Services to gather the nonproprietary performance and capacity data from your
server and then sends the data to IBM. This information can include CPU utilization and disk capacity,
response time, throughput, application and user usage. When you send your data to IBM, you eliminate
the need to store all the trending data yourself. IBM stores the data for you and provides you with a series
of reports and graphs that show your server's growth and performance. You can access your reports
electronically using a traditional browser.
The most important requirement for establishing an accurate trend of the system utilization, workload,
and performance measurements is consistency. Ideally, performance data should be collected 24 hours
per day. Because of the relationship between PM Agent and Collection Services, you need to be aware of
the implications that can occur when you are using PM Agent.
Here are some guidelines to help you define your collections when you are using PM Agent:
• Collect data continuously with Collection Services.
PM Agent satisfies this requirement by collecting data 24 hours a day with Collection Services. PM
Agent collects performance data at 15-minute intervals; it uses the 15-minute default, but it does not
change the interval that Collection Services is already set to. A 15-minute interval is the recommended
interval.
• Select the Standard plus protocol profile.
Standard plus protocol is the default value for the collection profile; the collection profile indicates what
data is collected. Selecting this profile gathers the information needed for PM Agent reports. The
collection does not cycle (unless it is required to do so for other reasons).
• Avoid making interim changes to collection parameters when PM Agent is active.
For example, when you configure Collection Services in IBM Navigator for i, the Create database
files during collection field is checked as the default value. The change takes effect immediately. The
collection does not cycle (unless required to do so for other reasons).
Related reference
Collection Services collection profiles
Descriptions of the Collection Services collection profiles. The collection profiles define what is collected.
Configuring PM Agent
To start using PM Agent, you need to activate it, set up a transmission method, and customize the data
collection and storage.
PM Agent automates the collection of performance data through Collection Services. You can specify
which library to put the data in as long as the library resides on the base auxiliary storage pool (ASP). The
library should not be moved to an independent auxiliary storage pool because an independent auxiliary
storage pool can be varied off, which stops the PM Agent collection process. PM Agent creates the library
during activation if the library does not already exist.
To begin using PM Agent, you need to perform the following tasks:
Activating PM Agent
PM Agent is a part of the operating system and you must activate it to use its collecting capabilities.
You must start PM Agent to take advantage of its data collecting capabilities. You can start PM Agent by
using the following method:
Issue the Configure PM Agent (CFGPMAGT) command
Run the Configure PM Agent (CFGPMAGT) command from the command line.
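For example, from an IBM i command line you could type the command name and press F4 to prompt for and complete its parameters with values that fit your environment:
CFGPMAGT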
You can proceed to the next step in the setup process, which is to set up the Service Agent transmission
used to send data to IBM.
Related concepts
Setting up PM Agent transmission of data
You gather and send the performance data using the inventory function of the Electronic Service Agent.
Related tasks
Deactivating PM Agent
Learn how you can stop PM Agent.
• Press F6 (Create) to identify which servers will send their data to your host server.
• Complete the fields and press Enter.
PM Agent automatically schedules the transmission of data from the primary server to IBM the day after
data is received from a remote server. If the automatic scheduling does not fit your work management
scheme, you can manually schedule the transmission of the data from the primary server.
Keep this tip in mind when you schedule the transmission of your data: spread the transmissions evenly
across the week to minimize the performance impact on the host server. For example, in a network of
twelve servers, you might have three groups of four systems. You could schedule one group to send its
data on Monday, another on Wednesday, and the third on Friday. This evenly distributes the amount of
data that is sent to the host server.
Once you have configured your servers, you are ready to do the other tasks to manage PM Agent.
Related reference
Managing PM Agent
Now that you have set up your network, you can perform a variety of tasks with PM Agent.
Table 4. Primary server

DEVD(Q1PRMxxx)
Specifies the name of the device description. The name that is used here matches the device description name for the remote system.

RMTLOCNAME(Q1PRMxxx)
Specifies the name of the remote location. The name that is used here matches the LCLLOCNAME value of the remote server, where xxx is unique for each remote location.

ONLINE(*YES)
Specifies whether this device is varied online when the system is started or restarted.

LCLLOCNAME(Q1PLOC)
Specifies the local location name. This value matches the RMTLOCNAME of the remote server.

CTL(aaaaaa)
Specifies the name of the attached controller, where aaaaaa is a controller that attaches to the remote server.

MODE(Q1PMOD)
Specifies the mode name.

APPN(*NO)
Specifies if device is APPN-capable.
3. After you define the APPC devices, vary them on by using the Vary Configuration (VRYCFG) command.
On the remote server, type VRYCFG and press F4 to prompt for the parameters.

VRYCFG CFGOBJ(Q1PLOC)        /* Vary the existing Q1PLOC device offline */
       CFGTYPE(*DEV)
       STATUS(*OFF)

DLTDEVD DEVD(Q1PLOC)         /* Delete the old device description */

CRTDEVAPPC DEVD(Q1PLOC)      /* Re-create Q1PLOC as an APPC device description */
           RMTLOCNAME(Q1PLOC)
           ONLINE(*NO)
           LCLLOCNAME(name of remote system)
           RMTNETID(remote netid of primary (or central) system)
           CTL(name of controller that the device will be attached to)
           AUT(*EXCLUDE)

GRTOBJAUT OBJ(Q1PLOC)        /* Grant QUSER *CHANGE authority to the device description */
          OBJTYPE(*DEVD)
          USER(QUSER)
          AUT(*CHANGE)

VRYCFG CFGOBJ(Q1PLOC)        /* Vary the new device back online */
       CFGTYPE(*DEV)
       STATUS(*ON)
Related tasks
Working with remote servers in an APPC network
The host server receives PM Agent data from other servers and then sends the data to IBM. The remote
server sends PM Agent data to the host server.
Related reference
Create Controller Description (APPC) (CRTCTLAPPC) command
See the Create Controller Description (APPC) (CRTCTLAPPC) command for information about creating a
controller description for an advanced program-to-program communications (APPC) controller.
Change Controller Description (APPC) (CHGCTLAPPC) command
See the Change Controller Description (APPC) (CHGCTLAPPC) command for information about changing
a controller description for an advanced program-to-program communications (APPC) controller.
Display Controller Description (DSPCTLD) command
See the Display Controller Description (DSPCTLD) command for information about displaying a
controller description.
Managing PM Agent
Now that you have set up your network, you can perform a variety of tasks with PM Agent.
Customizing PM Agent
Now that you have set up your network, you may need to customize PM Agent to fit your needs.
The Work with PM Agent Customization display provides you with the ability to:
Establish global parameters for the operation of PM Agent software
The global parameters allow you to customize the following items. See the online help for a description of
these fields:
• Priority limits
• Trend and shift schedules
Define your PM Agent Global Parameters
To customize the global parameters, do the following steps:
1. Type GO PMAGT from the command line.
2. Type a 3 from the PM Agent menu to display the Work with PM Agent Customization display and press
Enter.
See Managing PM Agent for other tasks that you can perform with PM Agent.
Related reference
Managing PM Agent
Now that you have set up your network, you can perform a variety of tasks with PM Agent.
Managing PM Agent
Now that you have set up your network, you can perform a variety of tasks with PM Agent.
After you have set up your network to use PM Agent, you can perform the following tasks:
Related reference
End PM Agent (Q1PENDPM) API
The End PM Agent (Q1PENDPM) API is used to end the Performance Management Agent (PM Agent) jobs.
Deactivating PM Agent
Learn how you can stop PM Agent.
To stop PM Agent from running, use the following method:
Run the End PM Agent (Q1PENDPM) API from the command line to deactivate PM Agent.
Related tasks
Activating PM Agent
PM Agent is a part of the operating system and you must activate it to use its collecting capabilities.
Omitting items from IBM Performance Management for Power Systems (PM for Power Systems) analysis
Learn how to omit jobs, users, and communications lines when performing an analysis with IBM
Performance Management for Power Systems (PM for Power Systems).
The PM for Power Systems software application summary includes an analysis of items for batch jobs,
users, and communication lines. However, some jobs, users, or communication lines are not appropriate
for such an analysis. For example, you might want to omit jobs with longer than normal run times, such
as autostart jobs, from the run-time category.
You can omit groups of batch jobs and users from the analysis by using the generic omit function. For
example, to omit all jobs whose names start with MYAPP, specify MYAPP*.
To work with omissions, do the following steps:
1. Type GO PMAGT from the command line.
2. Type a 4 from the PM Agent Menu and press Enter. The Work with Omissions display appears.
3. Type the appropriate option number depending on which item you want to omit.
• Type a 1 to work with jobs.
• Type a 2 to work with users.
• Type a 3 to work with communications lines.
4. Type a 1 in the appropriate field to omit either a user or a job from a particular category. In the case of
a communications line, type the name of the line and then type a 1 in the appropriate field.
1. Type GO PMAGT from the command line.
2. Type a 2 (Work with automatically scheduled jobs).
3. Type a 2 (Change) next to the Q1PPMSUB job.
4. Change the date or time to a future date and time.
5. Press Enter. This change temporarily stops PM Agent from verifying that Collection Services is
collecting data. You must end what is currently being collected.
Note: PM Agent will not start, cycle, or change Collection Services until the date and time to which you set
the Q1PPMSUB job has been reached.
Related tasks
Scheduling jobs with PM Agent
Learn how to schedule jobs with PM Agent.
Viewing IBM Performance Management for Power Systems (PM for Power Systems) reports
See examples of the IBM Performance Management for Power Systems (PM for Power Systems) reports
and explanations of how to interpret those reports.
The output of the IBM Performance Management for Power Systems (PM for Power Systems) offering is
a set of management reports and graphs. The purpose of the reports and graphs is to give management
a clear understanding of the current performance of their servers and an accurate growth trend. To view
the reports and learn about some of their benefits and uses, visit the IBM Performance Management for
Power Systems (PM for Power Systems) web site.
Related reference
PM for Power Systems web site
For more information, see the PM for Power Systems website.
Display Performance Data
Display Performance Data allows you to interactively display sample performance data. To interactively display sample performance data, you can use the Display Performance Data (DSPPFRDTA) command or select the Display performance data option on the IBM Performance Tools menu. Display Performance Data is discussed in detail in the Performance Tools PDF.

Reports
The reports organize Collection Services performance data and trace data in a logical and useful format. The reports are discussed in detail in the Performance Tools PDF.

Graphics function
The Performance Tools graphics function allows you to work with performance data in a graphical format. You can display the graphs interactively, or you can print, plot, or save the data to a graphics data format (GDF) file for use by other utilities. This tool is discussed in detail in the Performance Tools PDF.

IBM i Job Watcher
The Job Watcher functions and content package in the IBM Navigator for i Performance interface is included in IBM Performance Tools for i (5770-PT1) Option 3 - Job Watcher.

IBM i Disk Watcher
The Disk Watcher functions and content package in the IBM Navigator for i Performance interface is included in IBM Performance Tools for i (5770-PT1) Option 1 - Manager Feature.

Performance Explorer
The Performance Explorer content package in the IBM Navigator for i Performance interface is included in IBM Performance Tools for i (5770-PT1) Option 1 - Manager Feature.

Database
The Database content package in the IBM Navigator for i Performance interface is included in IBM Performance Tools for i (5770-PT1) Option 1 - Manager Feature.

Batch Model
The Batch Model functions and content package in the IBM Navigator for i Performance interface is included in IBM Performance Tools for i (5770-PT1) Option 1 - Manager Feature.
Related concepts
Performance Tools reports
Performance Tools reports provide information on data that has been collected over time. Use these
reports to get additional information about the performance and use of system resources.
IBM Navigator for i Performance interface
Use the IBM Navigator for i Performance interface to view, collect, and manage performance data by
bringing together various performance information and tools into one centralized place.
Related reference
Work with System Activity (WRKSYSACT) command
See the Work with System Activity (WRKSYSACT) command for information about how to interactively
work with the jobs and tasks currently running in the system.
Performance Tools PDF
This IBM Redbooks publication explains how to use performance tools to collect data about the
performance of a system, job, or program. It also explains how to analyze and print the data to help
identify and correct any problems.
Example
Partition A has a capacity of 0.3 processor units and is defined to use one virtual processor. The collection
interval time is 300 seconds. The system is using 45 seconds of CPU (15 seconds by interactive jobs and
30 seconds by batch jobs). In this example, the available CPU time is 90 seconds (.3 of 300 seconds). The
total CPU utilization is 50%.
Prior to V5R3, when the numbers were scaled, system CPU usage is reported as 150 seconds. 150
seconds divided by 300 seconds of interval time results in 50% utilization. The interactive utilization is
15 seconds divided by 300 seconds, which is 5%. The batch utilization is 30 seconds divided by 300
seconds, which is 10%. The HVLPTASK is getting charged with 35% utilization (150 seconds minus 45
seconds), or 105 seconds divided by 300 seconds. These percentages give us a total of 50%.
Beginning in V5R3, the 45 seconds of usage is no longer scaled but is reported as is. The calculated CPU
time that is derived from the reported consumed CPU time divided by the reported available capacity
is 50% (45 seconds divided by 90 seconds). The interactive utilization percentage is 17% (15 seconds
divided by 90 seconds). The batch utilization percentage is 33% (30 seconds divided by 90 seconds).
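The same calculation can be summarized with the following formulas, using the numbers from this example:
\[ \text{available CPU time} = \text{capacity} \times \text{interval} = 0.3 \times 300\ \text{s} = 90\ \text{s} \]
\[ \text{total utilization} = \tfrac{45}{90} = 50\%, \qquad \text{interactive} = \tfrac{15}{90} \approx 17\%, \qquad \text{batch} = \tfrac{30}{90} \approx 33\% \]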
• If you have purchased the Manager feature, use the following command:
RSTLICPGM LICPGM(xxxxPT1) DEV(tape-device-name) OPTION(1)
• If you have purchased the Agent feature, use the following command:
RSTLICPGM LICPGM(xxxxPT1) DEV(tape-device-name) OPTION(2)
• In addition to installing either the Manager or the Agent feature, if you have purchased IBM i5/OS Job
Watcher, use the following command:
RSTLICPGM LICPGM(xxxxPT1) DEV(tape-device-name) OPTION(3)
If you have several CD-ROMs to install, the following situation may occur. After installing the first one,
you may receive a message saying that the licensed program is restored but no language objects were
restored. If this occurs, insert the next CD-ROM and enter the following:
RSTLICPGM LICPGM(xxxxPT1) DEV(device-name) RSTOBJ(*LNG)
Another method for installing the Performance Tools program is to type GO LICPGM and use the menu
options.
Performance Tools is a processor-based program. The usage type is concurrent, and the program is
installed with a usage limit *NOMAX.
This program is discussed in detail in the Performance Tools book.
Related reference
Performance Tools PDF
This IBM Redbooks publication explains how to use performance tools to collect data about the
performance of a system, job, or program. It also explains how to analyze and print the data to help
identify and correct any problems.
Table 6. Overview of Performance Tools reports (continued)

Lock Report
Description: Uses trace data to provide information about lock and seize conflicts during system operation. With this information you can determine if jobs are being delayed during processing because of unsatisfied lock requests or internal machine seize conflicts. These conditions are also called waits. If they are occurring, you can determine which objects the jobs are waiting for and the length of the wait.
What is shown: File, record, or object contention by time; the holding job or object name; the requesting job or object name.
How you use the information: Problem analysis. Reduction or elimination of object contention.

Batch Job Trace Report
Description: Uses trace data to show the progression of different job types (for example, batch jobs) traced through time. Resources utilized, exceptions, and state transitions are reported.
What is shown: Job class time-slice end and trace data.
How you use the information: Problem analysis and batch job progress.

Job Interval Report
Description: Uses Collection Services data to show information on all or selected intervals and jobs, including detail and summary information for interactive jobs and for noninteractive jobs. Because the report can be long, you may want to limit the output by selecting the intervals and jobs you want to include.
What is shown: Jobs by interval.
How you use the information: Job data.
Performance explorer and Collection Services are separate collecting agents. Each one produces its own
set of database files that contain grouped sets of collected data. You can run both collections at the same
time.
Non-Interactive Workload

Job Type       Number     Logical DB       ------ Printer ------    Communications   CPU Per       Logical I/O
               Of Jobs    I/O Count        Lines        Pages       I/O Count        Logical I/O   /Second
-----------    --------   --------------   ----------   ---------   --------------   -----------   -----------
Batch            18,151   1,030,253,068    18,656,603   544,032     1,531,738        .0001         95,526.4
Spool                70   1,066            14,933       369         0                .0285         .0
AutoStart            56   426,047          1,692,060    41,502      178,288          .0008         39.5
COLLECTION            1   2,910            0            0           0                .0171         .2
SQL                 192   3,252,232        3,519        88          0                .0003         301.5
MGMTCENTRAL           2   12,229           0            0           0                .0046         1.1
Total            18,903   1,033,969,357    20,367,115   585,991     1,713,007
Average                                                                              .0003         95,871.0
Average CPU Utilization . . . . . . . . . . . .: 61.0
CPU 1 Utilization . . . . . . . . . . . . . . .: 55.4
CPU 2 Utilization . . . . . . . . . . . . . . .: 57.9
CPU 3 Utilization . . . . . . . . . . . . . . .: 61.5
CPU 4 Utilization . . . . . . . . . . . . . . .: 62.2
CPU 5 Utilization . . . . . . . . . . . . . . .: 62.0
CPU 6 Utilization . . . . . . . . . . . . . . .: 60.1
CPU 7 Utilization . . . . . . . . . . . . . . .: 61.7
CPU 8 Utilization . . . . . . . . . . . . . . .: 63.1
CPU 9 Utilization . . . . . . . . . . . . . . .: 55.4
CPU 10 Utilization. . . . . . . . . . . . . . .: 56.0
CPU 11 Utilization. . . . . . . . . . . . . . .: 59.9
CPU 12 Utilization. . . . . . . . . . . . . . .: 60.6
CPU 13 Utilization. . . . . . . . . . . . . . .: 60.9
CPU 14 Utilization. . . . . . . . . . . . . . .: 62.5
CPU 15 Utilization. . . . . . . . . . . . . . .: 63.7
CPU 16 Utilization. . . . . . . . . . . . . . .: 64.1
CPU 17 Utilization. . . . . . . . . . . . . . .: 54.7
CPU 18 Utilization. . . . . . . . . . . . . . .: 57.3
CPU 19 Utilization. . . . . . . . . . . . . . .: 59.8
CPU 20 Utilization. . . . . . . . . . . . . . .: 60.6
CPU 21 Utilization. . . . . . . . . . . . . . .: 61.6
CPU 22 Utilization. . . . . . . . . . . . . . .: 62.9
CPU 23 Utilization. . . . . . . . . . . . . . .: 63.9
CPU 24 Utilization. . . . . . . . . . . . . . .: 64.7
CPU 25 Utilization. . . . . . . . . . . . . . .: 57.0
CPU 26 Utilization. . . . . . . . . . . . . . .: 55.2
CPU 27 Utilization. . . . . . . . . . . . . . .: 66.2
CPU 28 Utilization. . . . . . . . . . . . . . .: 61.1
CPU 29 Utilization. . . . . . . . . . . . . . .: 62.4
CPU 30 Utilization. . . . . . . . . . . . . . .: 63.2
CPU 31 Utilization. . . . . . . . . . . . . . .: 66.2
CPU 32 Utilization. . . . . . . . . . . . . . .: 66.4
Use the following commands to print reports for trace data that you collected with the Start Performance
Trace (STRPFRTRC) and Trace Internal (TRCINT) commands:
• Print Transaction Report (PRTTNSRPT)
• Print Lock Report (PRTLCKRPT)
• Print Job Trace Report (PRTTRCRPT)
Note: You must use the End Performance Trace (ENDPFRTRC) command to stop the collection of
performance trace data and then optionally write performance trace data to a database file before you
can print the Transaction reports.
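As a rough sketch of the overall flow from the commands above (the member name TRACE01 is a placeholder, the DTAOPT(*LIB) parameter shown for ENDPFRTRC is an assumption, and prompting each command with F4 is the reliable way to confirm its parameters; the member named on PRTTNSRPT must be the member that the trace data was written to):
STRPFRTRC
ENDPFRTRC DTAOPT(*LIB)
PRTTNSRPT MBR(TRACE01)
Run the workload that you want to analyze between the STRPFRTRC and ENDPFRTRC commands.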
Related reference
CL commands for performance
The operating system includes several CL commands to help you manage and maintain system
performance.
Async I/O Per Second
(Job Interval) The average number of asynchronous disk I/O operations started per second by the
selected noninteractive jobs during the interval.
Async Max
(Transaction) Listed under Average DIO/Transaction, the maximum number of asynchronous DBR,
NDBR, and WRT I/O requests encountered for any single transaction by that job. If the job is not an
interactive or autostart job type, the total disk I/O for the job is listed here.
Async Sum
(Transaction) Listed under Average DIO/Transaction, the sum of the averages of the asynchronous
DBR, NDBR, and WRT requests (the average number of asynchronous I/O requests per transaction for
the job).
Asynchronous DBR
(System, Job Interval, Pool Interval) The average number of asynchronous database read operations
on the disk per transaction for the job during the intervals. This is calculated by dividing the
asynchronous database read count by the transactions processed. This field is not printed if the
jobs in the system did not process any transactions. For the Resource Utilization section of the System
Report, it is the number of asynchronous database read operations per second.
Note: The asynchronous I/O operations are performed by system asynchronous I/O tasks.
Asynchronous DBW
(System, Job Interval) The average number of asynchronous database write operations on the
disk per transaction for the selected jobs during the interval. This is calculated by dividing the
asynchronous database write count by the transactions processed. This field is not printed if the
jobs in the system did not process any transactions. For the Resource Utilization section of the System
Report, it is the number of asynchronous database write operations per second.
Note: The asynchronous I/O operations are performed by system asynchronous I/O tasks.
Asynchronous disk I/O per transaction
(System) The average number of asynchronous physical disk I/O operations per interactive
transaction.
Asynchronous NDBR
(System, Job Interval, Pool Interval) The average number of asynchronous nondatabase read
operations per transaction for the jobs in the system during the interval. This is calculated from the
asynchronous nondatabase read count divided by the transactions processed. This field is not printed
if the jobs in the system did not process any transactions. For the Resource Utilization section of the
System Report, it is the asynchronous nondatabase read operations per second.
Note: The asynchronous I/O operations are performed by system asynchronous I/O tasks.
Asynchronous NDBW
(System, Job Interval, Pool Interval) The average number of asynchronous nondatabase write
operations per transaction for the jobs in the system during the interval. This is calculated from the
asynchronous nondatabase write count divided by the transactions processed. This field is not printed
if the jobs in the system did not process any transactions. For the Resource Utilization section of the
System Report, it is the number of asynchronous nondatabase write operations per second.
Note: The asynchronous I/O operations are performed by system asynchronous I/O tasks.
Avail Local Storage (K)
(Resource Interval) The number of kilobytes of free local storage in the IOP.
Available Storage
(Component) Available local storage (in bytes). The average number of bytes of available main storage
in the IOP. The free local storage is probably not contiguous because it has been broken into small pieces.
Average
(Transaction) The average value of the item described in the column for all transactions.
Avg Rsp (Sec)
(Transaction) The average transaction response time in seconds.
Avg Rsp /Tns
(Transaction) The average response per transaction (in seconds) for the transactions that fell into the
given category.
Avg Rsp Time
(Component) Average transaction response time.
Avg Sec Locks
(Transaction) The average length of a lock in seconds attributed to interactive or noninteractive
waiters.
Avg Sec Seizes
(Transaction) The average length of a seize in seconds attributed to interactive or noninteractive
waiters.
Avg Time per Service
(Resource Interval) The amount of time a disk arm uses to process a given request.
Avg Util
(System, Resource Interval) On the Disk Utilization Summary of the Resource Report, the average
percentage of available time that disks were busy. It is a composite average for all disks on the
system. On the Communications Summary of the System Report, the average percentage of line
capacity used during the measured time interval.
Batch asynchronous I/O per second
(System) The average number of asynchronous physical disk I/O operations per second of batch
processing.
Batch CPU seconds per I/O
(System) The average number of system processing unit seconds used by all batch jobs for each I/O
performed by a batch job.
Batch CPU Utilization
(Component) Percentage of available processing unit time used by the jobs that the system considers
to be batch.
Note: For a multiple-processor system, this is the average use across all processors.
Batch impact factor
(System) Batch workload adjustment for modeling purposes.
Batch permanent writes per second
(System) The average number of permanent write operations per second of batch processing.
Batch synchronous I/O per second
(System) The average number of synchronous physical disk I/O operations per second of batch
processing.
BCPU / Synchronous DIO
(Transaction) The average number of batch processor unit seconds per synchronous disk I/O
operation.
Bin
(Transaction) The number of binary overflow exceptions.
Binary Overflow
(Component) Number of binary overflows per second.
BMPL - Cur and Inl
(Transaction) The number of jobs currently in the activity level (beginning current multiprogramming
level), and the number of jobs on the ineligible queue (beginning ineligible multiprogramming level)
for the storage pool that the job ran in when the job left the wait state (the beginning of the
transaction).
Note: Multiprogramming level (MPL) is used interchangeably with activity level.
Cmn
(Job Interval) The number of communications I/O operations performed by the selected interactive
jobs during the interval.
Cmn I/O
(Component) Number of communications operations (Get, Put).
Cmn I/O Per Second
(Job Interval) The average number of communications I/O operations performed per second by the
selected noninteractive jobs during the interval.
Collision Detect
(Resource Interval) The number of times that the terminal equipment (TE) detected that its
transmitted frame had been corrupted by another TE attempting to use the same bus.
Commit Ops
(Component) Commit operations performed. Includes application and system-provided referential
integrity commits.
Communications I/O Count
(System) Number of communications I/O operations.
Communications I/O Get
(System) Number of communication get operations per transaction.
Communications I/O Put
(System) Number of communication put operations per transaction.
Communications Lines
(System, Component, Job Interval, Pool Interval) For the Report Selection Criteria, the list
of communications lines selected to be included (SLTLINE parameter) or excluded (OMTLINE
parameter). These are the communications line names you specify.
Control Units
(System, Component, Job Interval, Pool Interval) The list of control units selected to be included
(SLTCTL parameter) or excluded (OMTCTL parameter). These are the controller names you specify.
Count
(Transaction, Lock) The number of occurrences of the item in the column. For example, in a lock
report, it is the number of locks or seizes that occurred.
CPU
(Transaction) The total processing unit seconds used by the jobs with a given priority.
CPU
(Job Trace) The approximation of the CPU used on this trace entry. This is a calculated value based on
the time used and the CPU model being run.
CPU /Tns
(Transaction, Job Interval) The amount of available processing unit time per transaction in seconds.
CPU Model
(System) The processing unit model number.
CPU per I/O Async
(System) CPU use per asynchronous I/O.
CPU per I/O Sync
(System) CPU use per synchronous I/O.
CPU per Logical I/O
(System) Processing unit time used for each logical disk I/O operation.
CPU QM
(Transaction) The simple processing unit queuing multiplier.
CPU Sec
(Transaction) The processing unit time used by the job in this state.
CPU Sec /Sync DIO
(Transaction) The ratio of CPU seconds divided by synchronous disk I/O requests for each type of job.
example, in CPU by Priority for All Jobs for Total Trace Period (System Summary Data), it is the unit
time used by the jobs with a priority higher than or equal to the given priority.
Cum Pct Tns
(Transaction) Cumulative CPU percent per transaction. For system summary data, it is the cumulative
CPU percentage of all transactions that have an average response time per transaction equal to or
less than the given category. For Interactive Program Transactions Statistics, it is the cumulative
CPU percentage of all transactions through the listed program. For Job Statistics section, it is the
cumulative CPU percentage of total transactions through the listed job. For Interactive Program
Statistics section, it is the cumulative CPU percentage of all transactions through the listed program.
Cum Util
(System) Cumulative CPU use (a running total).
Note: This is taken from the individual jobs and may differ slightly from the total processing unit use
on the workload page.
Cur Inl MPL
(Transaction) The number of jobs waiting for an activity level (ineligible) in the storage pool.
Cur MPL
(Transaction) The number of jobs holding an activity level in the storage pool.
Current User
(Job) The user under which the job was running at the end of each interval.
DASD Ops/Sec
(Component) Disk operations per second.
DASD Ops Per Sec Reads
(Resource) Number of reads per second
DASD Ops Per Sec Writes
(Resource) Number of writes per second
Datagrams Received
(Component) The total number of input datagrams received from interfaces. This number includes
those that were received in error.
DB
(Job Trace) The number of physical database reads that occurred for the entry.
DB Fault
(System, Component) Average number of database faults per second.
DB Pages
(System, Component) Average number of database pages read per second.
DB Read
(Transaction) When listed in Physical I/O Counts column, it is the number of database read requests
while the job was in that state. When listed in the Sync Disk I/O Rqs/Tns column, it is the average
number of synchronous database read requests per transaction.
DB READS
(Job Trace) The number of physical database reads that occurred.
DB Write
(Transaction) When listed in the Sync Disk I/O Rqs/Tns column, it is the average number of
synchronous database write requests per transaction.
DB Wrt
(Transaction) When listed in the Physical I/O Counts column, it is the number of database write
requests while the job was in that state. When listed in the Synchronous Disk I/O Counts column, it is
the number of synchronous database write requests per transaction.
DDM I/O
(Component, Job Interval) The number of logical database I/O operations for a distributed data
management (DDM) server job.
MB
Millions of bytes available on the disk.
Percent
Percent of space available on the disk.
Disk Controllers
(System) The number of disk storage controllers for this IOP.
Disk Feature
(System) The type of disk (9332, 9335, and so on).
Disk I/O Async
(System, Component) Total number of asynchronous disk I/O operations.
Disk I/O Logical
(Component) The number of logical disk operations, such as gets and puts.
Disk I/O per Second
(System) Average number of physical disk I/O operations per second.
Disk I/O Reads /Sec
(Resource Interval) The average number of disk read operations per second by the disk IOP.
Disk I/O Requests
(Transaction) The total number of synchronous and asynchronous disk I/O requests issued by the jobs
during the trace period.
Disk I/O Sync
(System, Component) Total number of synchronous disk I/O operations.
Disk I/O Writes /Sec
(Resource Interval) The average number of disk write operations per second by the disk IOP.
Disk IOPs
(System) The number of disk IOP controllers.
Disk mirroring
(System) Indicates whether disk mirroring is active.
Disk Space Used
(Resource Interval) The total disk space used in gigabytes for the entire system.
Disk transfer size (KB)
(System) The average number of kilobytes transferred per disk operation.
Disk utilization
(System) The fraction of the time interval that the disk arms were performing I/O operations.
Dsk CPU Util
(System, Resource Interval) The percentage of CPU used by the disk unit.
Dtgm Req Transm Dscrd
(Component) The percentage of IP datagrams that are discarded for the following reasons:
• No route was found to transmit the datagrams to their destination.
• Lack of buffer space.
Dtgm Req for Transm Tot
(Component) The total number of IP datagrams that local IP user-protocols supplied to IP in requests
for transmission.
Elapsed Seconds
(Transaction, Component) The elapsed time in seconds. For the Batch Job Analysis section of the
Transaction Report, it is the number of seconds elapsed from when the job started to when the
job ended. For the Concurrent Batch Job Statistics section of the Transaction Report, it is the total
elapsed time of all jobs in that job set.
operations. The counts are meaningful when the processing unit time required to process them
affects system performance. A variation in the counts may indicate a system change that could affect
performance. For example, a large variation in seize or lock counts may indicate a job scheduling
problem or indicate that contention exists between an old application and a new one that uses the
same resources.
Note: To see the seize and lock counts, you should collect the trace data by using the Start
Performance Trace (STRPFRTRC) command. Run the Print Transaction Report (PRTTNSRPT) to list
the objects and jobs that are holding the locks.
Exceptional wait
(System) The average exceptional wait time, in seconds, per transaction. An exceptional wait is that
portion of internal response time that cannot be attributed to the use of the processor and disk. An
exceptional wait is caused by contention for internal resources of the system, for example, waiting for
a lock on a database record.
Constant
The portion of exceptional wait time held constant as throughput increases.
Variable
The portion of exceptional wait time that varies as throughput increases.
Excp
(Component, Transaction) For the Component Report, it is the total number of program exceptions
that occurred per second. For the Transaction Report, a Y in this column means that the transaction
had exceptions. The types of exceptions that are included are process access group exceptions, and
decimal, binary, and floating point overflow. See the Transition Report to see which exceptions the
transaction had.
Excp Wait
(Transaction) The amount of exceptional wait time for the jobs in the job set in seconds.
Excp Wait /Tns
(Transaction) The average exceptional wait time, in seconds, per transaction. This value is the sum of
those waits listed under the Exceptional Wait Breakdown by Job Type part.
Excp Wait Sec
(Transaction) The total amount of exceptional wait time in seconds for the job.
Excs ACTM /Tns
(Transaction) The average time, in seconds, of the excess activity level time per transaction (for
example, time spent in the active state but not using the processing unit). If enough activity levels
are available and there is plenty of interactive work of higher priority to do, a job waits longer for
processing unit cycles. If the value is greater than .3, look at jobs that correspond to particular
applications for more information. By looking at these jobs, you might be able to determine which
application's jobs are contributing most to this value. Use the Transaction and Transition Reports for
these jobs for additional information. The formula for excessive activity-level time is shown below:
Active Time - [
(multiplier X CPU X Beginning Activity Level) +
(Number of synchronous disk I/O operations X .010)]
Note: If the beginning activity level is greater than 1, the multiplier equals 0.5. If the beginning activity
level is any other value, the multiplier equals 1.
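As an illustration of this formula with assumed values (an active time of 2.0 seconds, 0.3 CPU seconds, a beginning activity level of 2, and 20 synchronous disk I/O operations, so the multiplier is 0.5):
\[ 2.0 - [(0.5 \times 0.3 \times 2) + (20 \times 0.010)] = 2.0 - (0.3 + 0.2) = 1.5\ \text{seconds} \]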
EXIT
(Job Trace) The instruction number in the program where the program gave up control.
Expert Cache
(System, Component) Directs the system to determine which objects or portions of objects should
remain in a shared main storage pool based on the reference patterns of data within the object. Expert
cache uses a storage management tuner, which runs independently of the system dynamic tuner, to
examine overall paging characteristics and history of the pool. Some values that you might see in this
column are associated with the Work with Shared Pools (WRKSHRPOOL) command:
Table 7.
Function ID Description
DATA Data trace record
CALL Call external
XCTL Transfer control
EVENT Event handler invocation
EXTXHINV External exception handler invocation
INTXHINV Internal exception handler invocation
INTXHRET Return from internal exception handler
INVEXIT Invocation exit
RETURN Return external
ITRMXRSG Invocation ended due to resignaling exception
EXTXHRET Return external or from a procedure instruction
PTRMTPP Termination phase end
PTRMUNX End process due to an unhandled exception
NOTUSED This type is a non-valid trace type
ITERM Invocation ended
CANCLINV Cancel invocation instruction
Functional Areas
(System, Component, Transaction, Job Interval, Pool Interval) For Report Selection Criteria, the
list of functional areas selected to be included (SLTFCNARA parameter) or excluded (OMTFCNARA
parameter).
/H
(System, Resource Interval) The line speed of the protocol reported as half duplex. This indicator
applies to the line speeds for an Ethernet (ELAN) line, a token-ring (TRLAN) line, or an asynchronous transfer
mode line.
HDW
(Transaction) Listed in the Wait Code column, Hold Wait (job suspended or system request). The job
released a lock it had on the object named on the next detail line of the report (OBJECT --). The job
that was waiting for the object is named on this line (WAITER --) along with the amount of time the job
spent waiting for the lock to be released.
High Srv Time
(Resource Interval) The highest average service time in seconds for a disk arm in the system.
High Srv Unit
The disk arm with the highest service time.
High Util
(Resource Interval) The percentage of use for the disk arm that has the highest utilization.
High Util Unit
(Component, Resource Interval) The disk arm with the highest utilization.
High Utilization Disk
(Component) Percent of utilization of the most utilized disk arm during this interval.
High Utilization Unit
(Component) Disk arm that had the most utilization during this interval.
Holder Job Name
(Transaction) The name of the job that held the object.
the percent of CPU used in the IOP. The percent does not necessarily mean that the IOP is doing any
data transfers. Some of the percent can be attributed to overhead of an active line.
IOP Name/Line
(System, Resource Interval) Input/output (IOP) processor resource name and model number line.
IOP Name(Model)
(Resource Interval) The input/output processor (IOP) identification and the model number in
parentheses.
IOP Name
(System, Component) Input/Output processor (IOP) resource name.
IOP Name Network Interface
(Resource Interval) The IOP name of the network interface.
IOP Processor Util Comm
(Component, Resource) Utilization of IOP due to communications activity.
IOP Processor Util LWSC
(Component, Resource) Utilization of IOP due to local workstation activity.
IOP Processor Util DASD
(Component, Resource) Utilization of IOP due to DASD activity.
IOP Processor Util Total
(Component, Resource Interval) The total percent of utilization for each local workstation, disk, and
communications IOP.
IOP Util
(System) For the Disk Utilization section of the System Report, it is the percentage of utilization for
each input/output processor (IOP).
Note: For the multifunction I/O processors, this is utilization due to disk activity only, not
communications activity. For the System Model Parameter section it is the fraction of the time interval
the disk IOP was performing I/O operations.
Itv End
(Component, Transaction, Job Interval, Pool Interval, Resource Interval) The time (hour and minute)
when the data was collected. For the Exception Occurrence Summary and Interval Counts of the
Component Report, it is the ending time for the sample interval in which Collection Services recorded
the exception.
Job Maximum A-I
(Pool Interval) The highest number of active-state to ineligible-state transitions by a selected job in
the pool or subsystem.
Job Maximum A-W
(Pool) The highest number of active-to-wait state transitions by a selected job in the pool or
subsystem.
Job Maximum CPU Util
(Pool Interval) The highest percentage of available processing unit time used by a selected job in the
pool or subsystem.
Job Maximum Phy I/O
(Pool Interval) The highest number of physical disk input and output operations by a selected job in
the pool or subsystem.
Job Maximum Rsp
(Pool Interval) The highest response time in seconds per transaction by a selected job in the pool or
subsystem. The response time is the amount of time spent waiting for and using the resources divided
by the number of transactions.
Job Maximum Tns
(Pool Interval) The highest number of transactions by a selected job in the pool or subsystem.
I
Interactive. Interactive includes twinaxial data link control (TDLC), 5250 remote workstation, and
3270 remote workstation. For the Transaction Report, this includes twinaxial data link control
(TDLC), 5250 remote workstation, 3270 remote workstation, SNA pass-through, and 5250 Telnet.
L
Licensed Internal Code task
M
Subsystem monitor
P
SNA pass-through and 5250 Telnet pass-through. On the Transaction Report, these jobs appear as
I (interactive).
R
Spool reader
S
System
W
Spool writer, which includes the spool write job, and if Advanced Function Printing (AFP) is
specified, the print driver job.
WP
Spool print driver (Transaction only)
X
Start system job
Possible job subtype values include the following:
D
Batch immediate job
E
Evoke (communications batch)
J
Pre-start job
P
Print driver job
T
Multiple requester terminal (MRT) (System/36 environment only)
3
System/36
Noninteractive job types include:
• Autostart
• Batch
• Evoke
• IBM i Access-Bch
• Server
• Spool
• Distributed data management (DDM) server
Special interactive job categories include:
• Interactive
• Multiple requester terminal (MRT)
• Pass-through
transaction summary PGMNAME data. However, if the Wait Code column has a value, the program in
the column labeled Last is the one that caused the trace record. If there is no program name in a
column, the program name was the same as the previous one in the column, and the name is omitted.
Length of Wait
(Lock) The number of milliseconds the requester waited for the locked object.
Lgl I/O /Sec
(Job Interval) The average number of logical disk I/O operations performed per second by the job
during the interval. This is calculated from the logical disk I/O count divided by the elapsed time.
Library
(System, Transaction) The library that contains the object.
LIBRARY
(Job Trace) The library name that contains the program associated with the trace entry.
Line Count
(Job Interval) The number of lines printed by the selected noninteractive jobs during the interval.
Line Descriptn
(Resource Interval) Line description name.
Line Errors
(Resource Interval) The total of all detected errors. Check the condition of the line if this value
increases greatly over time.
Line Speed
(System, Resource Interval) The line speed in kilobits (1 kilobit = 1000 bits) per second.
Line Type/Line Name
(Component, System) The type and name of the line description that is used by the interface. For
interfaces that do not use a line description, the Line Name field is shown as *LOOPBACK, *OPC,
or *VIRTUALIP with no Line Type specified.
Line Util
(Resource Interval) The percent of available line capacity used by transmit and receive operations.
Line Util Trans/Recd
(Resource Interval) The percent used of the data transmission capacity of the communications line.
The number of bits transmitted and the number of bits received, during the interval, divided by the
line speed.
LKRL
(Transaction) Lock Released. The job released a lock it had on the object named on the next detail line
of the report (OBJECT --). The job that was waiting for the object is named on this line (WAITER --)
along with the amount of time the job spent waiting for the lock to be released.
LKW
(Transaction) Listed in the Wait Code column, Lock Wait. If there are a number of these, or you see
entries with a significant length of time in the ACTIVE/RSP* column, additional analysis is necessary.
The LKWT report lines that precede this LKW report line show you what object is being waited on, and
who has the object.
LKWT
(Transaction) Listed in the Wait Code column, Lock Conflict Wait. The job is waiting on a lock conflict.
The time (*/ time /*) is the duration of the lock conflict and, though not equal to the LKW time, should
be very close to it. The holder of the lock is named at the right of the report line (HOLDER --). The
object being locked is named on the next report line (OBJECT --).
Local End Code Violation
(Resource Interval) The number of times an unintended code violation was detected by the terminal
equipment (TE) for frames received at the interface for the ISDN S/T reference point.
Local Not Ready
(Resource Interval) The percent of all receive-not-ready frames that were transmitted by the host
system. A large percentage often means the host cannot process data fast enough (congestion).
Main storage (MB)
(System) The total main storage size, as measured in megabytes. These codes are in the wait code
column, but they are not wait codes. They indicate transaction boundary trace records.
Max Util
(System) Consistent use at or above the threshold value given will affect system performance and
cause longer response times or less throughput.
Maximum
(Transaction) The maximum value of the item that occurred in the column.
Member
(System, Transaction) For the System Report, this is the name of the performance data member that
was specified on the TOMBR parameter of the Create Performance Data (CRTPFRDTA) command. For
the Transaction Report, the member that was involved in the conflict.
Minimum
(Transaction) The minimum value of the item that occurred in the column.
MRT Max Time
(System) The time spent waiting, after MRTMAX is reached, by jobs routed to a multiple requester
terminal.
Note: No value appears in this column if job type is not MRT.
MSGS
(Job Trace) The number of messages sent to the job during each transaction.
MTU size (bytes)
(System) The size of the largest datagram that can be sent or received on the interface. The size is
specified in octets (bytes). For interfaces that are used for transmitting network datagrams, this is the
size of the largest network datagram that can be sent on the interface.
Nbr A-I
(Transaction) The number of active-to-ineligible state transitions by the job. This column shows the
number of times that the job exceeded the time-slice value assigned to the job, and had to wait for an
activity-level slot before the system could begin processing the transaction. If a value appears in this
column, check the work that the job was doing, and determine if changes to the time-slice value are
necessary.
Nbr Disk Units
(System) The number of disk units assigned to the reported partition.
Nbr Evt
(Transaction) The number of event waits that occurred during the job processing.
Nbr Jobs
(Transaction) The number of jobs.
Nbr Sign offs
(Transaction) The number of jobs that signed off during the interval.
Nbr Sign ons
(Transaction) The number of jobs that signed on during the interval.
Nbr Tns
(Transaction) The number of transactions in a given category.
Note: The values for transaction counts and other transaction-related information shown on the
reports you produce using the Print Transaction Report (PRTTNSRPT) command may vary from the
values shown on the reports you produce using the Print System Report (PRTSYSRPT) and Print
Component Report (PRTCPTRPT) commands. These differences are caused because the PRTTNSRPT
command uses trace data as input, while the PRTSYSRPT and PRTCPTRPT commands use sample
data as input.
If there are significant differences in the values for transaction-related information shown on these
reports, do not use the data until you investigate why these differences exist.
Number of Jobs
(System) Number of jobs.
Number of Packets Received with Errors
(System) The total number of packets that were received with errors or discarded for other reasons.
For example, a packet could be discarded to free up buffer space.
Number Seizes
(Transaction) The number of seizes attributed to interactive or noninteractive waiters.
Number Sze Cft
(Transaction) The number of seize/lock conflicts that occurred during the job processing. If this
number is high, look at the Transaction and Transition Reports for the job to see how long the conflicts
lasted, the qualified name of the job that held the object, the name and type of object being held, and
what the job was waiting for.
Number Sze Conflict
(Transaction) The number of times the job had a seize conflict.
Number Tns
(System, Transaction) Total number of transactions processed. For example, in the System Report it
is the total number of transactions processed by jobs in this pool. In the Transaction Report it is the
number of transactions associated with the program.
Number Traces
(Batch Job Trace) Number of traces.
Number Transactions
(System) Total number of transactions processed.
Object File
(Transaction) The file that contains the object.
Object Library
(Transaction) The library that contains the object.
Object Member
(Transaction) The member that was involved in the conflict.
Object Name
(Lock) The name of the locked object.
Object RRN
(Transaction) The relative record number of the record involved in the conflict.
Object Type
(Transaction, Lock) The type of the locked object. The following are possible object types:
AG
Access group
CB
Commit block
CBLK
Commit block
CD
Controller description
CLS
Class
CMD
Command
CTLD
Controller description
CTX
Context
PROG
Program
PRTIMG
Print image
QDAG
Composite piece - access group
QDDS
Composite piece - data space
QDDSI
Composite piece - data space index
QTAG
Temporary - access group
QTDS
Temporary - data space
QTDSI
Temporary - data space index
SBSD
Subsystem description
TBL
Table
Omit Parameters
(System, Component, Transaction, Job Interval, Pool Interval) The criteria used to choose the
data records to be excluded from the report. The criteria are generally specified using an OMTxxx
parameter of the command. Only nondefault values (something other than *NONE) are printed. If a
parameter was not specified, it does not appear on the report.
Op per Second
(System) Average number of disk operations per second.
Other Wait /Tns
(Transaction) The average time, in seconds, spent waiting that was not in any of the previous
categories per transaction. For example, the time spent waiting during a save/restore operation when
the system requested new media (tape or diskette).
Outgoing Calls Pct Retry
(Resource Interval) The percentage of outgoing calls that were rejected by the network.
Outgoing Calls Total
(Resource Interval) The total number of outgoing call attempts.
Over commitment ratio
(System) The main storage over commitment ratio (OCR).
PAG
(Transaction) The number of process access group faults.
PAG Fault
(Component, Job Interval) In the Exception Occurrence Summary of the Component Report, it is the
total number of times the program access group (PAG) was referred to, but was not in main storage.
The Licensed Internal Code no longer uses process access groups for caching data. Because of this
implementation, the value will always be 0 for more current releases. In the Exception Occurrence
Summary of the Component Report, it is the number of faults involving the process access group per
second.
Page Count
(Job Interval) The number of pages printed by the selected noninteractive jobs during the interval.
Pct Tns
(Transaction) The percentage of the total transactions. For the System Summary section of the Job
Summary Report, the transactions are within the given trace period with the given purge attribute. For
the Interactive Program Transaction Statistics section of the Job Summary Report, the percentage of
transactions that were associated with a program. For the Job Statistics section, it is the percentage
of total transactions that were due to this job. For the Interactive Program Statistics section, it is all
transactions that were associated to a program.
Pct UDP Datagrams Error
(Component) The percentage of User Datagram Protocol (UDP) datagrams for which there was no
application at the destination port or that could not be delivered for other reasons.
Percent Errored Seconds
(Resource Interval) The percentage of seconds in which at least one Detected Access Transmission
(DTSE) in or out error occurred.
Percent Frames Received in Error
(Resource Interval) The percent of all received frames that were received in error. Errors can occur
when the host system has an error or cannot process received data fast enough (congestion).
Percent Full
(System) Percentage of disk space capacity in use.
Percent I Frames Trnsmitd in Error
(Resource Interval) The percent of transmitted information frames that required retransmission.
Retransmissions can occur when a remote device has an error or cannot process received data fast
enough (congestion).
Percent Severely Errored Seconds
(Resource Interval) The percent of seconds in which at least three Detected Access Transmission
(DTSE) in or out errors occurred.
Percent transactions (dynamic no)
(System) A measure of system main storage utilization. The percent of all interactive transactions that
were done with the purge attribute of dynamic NO.
Percent transactions (purge no)
(System) A measure of system main storage utilization. The percent of all interactive transactions that
were done with the purge attribute of NO.
Percent transactions (purge yes)
(System) A measure of system main storage utilization. The percent of all interactive transactions that
were done with the purge attribute of YES.
Percent Util
(System) Average disk arm utilization (busy). Consistent use at or above the threshold value provided
for disk arm utilization affects system performance, which causes longer response times or less
throughput.
Note: The percent busy value is calculated from data measured in the I/O processor. When comparing
this value with percent busy reported by the Work with Disk Status (WRKDSKSTS) command, some
differences may exist. The WRKDSKSTS command estimates percent busy based on the number of
I/O requests, amount of data transferred, and type of disk unit.
The system-wide average utilization does not include data for mirrored arms during measurement
intervals in which those arms are in resuming or suspended status.
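As a quick cross-check of the percent busy values described in the note above, the current WRKDSKSTS estimate can be captured for comparison. This is a sketch; printing the display is simply one convenient way to keep the numbers.
WRKDSKSTS OUTPUT(*PRINT)   /* print the current disk status, including percent busy estimates */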
Perm Size
(Component) Kilobytes placed within the permanent area; these are traditional journal entries which
can be retrieved and displayed.
Perm Write
(Component, Job Interval) The number of permanent write operations performed for the selected jobs
during the interval.
Permanent writes per transaction
(System) The average number of permanent write operations per interactive transaction.
PROGRAM
(Job Trace) The name of the program for the entry.
PROGRAM CALL
(Job Trace) The number of non-QSYS library programs called during the step. This is not the number
of times that the program named in the PROGRAM NAME field was called.
PROGRAM DATABASE I/O
(Job Trace) The number of times the IBM-supplied database modules were used during the
transaction. The database module names have had the QDB prefix removed (PUT instead of QDBPUT).
The type of logical I/O operation performed by each is as follows:
GETDR
Get direct
GETSQ
Get sequential
GETKY
Get by key
GETM
Get multiple
PUT, PUTM
Add a record
UDR
Update, delete, or release a record
PROGRAM INIT
(Job Trace) The number of times that the IBM-supplied initialization program was called during the
transaction. For RPG programs this is QRGXINIT, for COBOL it is QCRMAIN. Each time the user
program ends with LR (RPG) or END (COBOL), the IBM-supplied program is also called. This is not
the number of times the program named in the PROGRAM NAME field was initialized. QCRMAIN
is used for functions other than program initialization (for example, blocked record I/O, some data
conversions).
Program Name
(Transaction) For the Job Summary section of the Transaction Report, the name of the program
in control at the start of the transaction. Other programs may be used during the transaction. For
the Transaction Report section, the name of the program active at the start of the transaction. If
ADR=UNKNWN (address unknown) is shown under the column, the program was deleted before the
trace data was dumped to the database file. If ADR=000000 is shown under the column, there was
not enough trace data to determine the program name, or there was no program active at that level in
the job when the trace record was created.
PROGRAM NAME
(Job Trace) The name of the last program called that was not in the library QSYS before the end of a
transaction.
Protocol
(System) Line protocol.
• SDLC
• ASYNC
• BSC
• X25
• TRLAN
• ELAN (Ethernet)
• IDLC
• DDI
• FRLY
Reset Packets Trnsmitd
(Resource Interval) The number of reset packets transmitted by the network.
Response
(System) Average system response (service) time.
Response Sec Avg and Max
(Transaction) The average (AVG) and maximum (MAX) transaction response time, in seconds, for
the job. The average response time is calculated as the sum of the time between each pair of
wait-to-active and active-to-wait transitions divided by the number of pairs that were encountered for
the job. The MAX response time is the largest response time in the job.
Response Seconds
(System) Average response time in seconds per transaction.
Responses sent
(System, Component) The number of responses of all types sent by the server.
Rsp
(Component) Average interactive transaction response time in seconds.
Rsp Time
(Component, Resource Interval) The average external response time (in seconds). For the Local Work
Station IOP Utilizations section of the Resource Interval Report, it is the response time for work
stations on this controller. For the Remote Work Stations section of the Component Report, it is the
response time for this work station.
Rsp Timer Ended
(Resource Interval) The number of times the response timer ended waiting for a response from a
remote device.
Rsp/Tns
(Component, Transaction, Job Interval) The average response time (seconds) per transaction. For
the Job Summary section of the Job Interval Report, it is the response time per transaction for the
selected interactive jobs during the interval (the amount of time spent waiting for or using the system
resources divided by the number of transactions processed). This number will not be accurate unless
at least several seconds were spent processing transactions.
S/L
(Transaction) Whether the conflict was a seize (S) or lock (L) conflict.
SECONDS
(Job Trace) The approximate time the job was waiting or active.
Segments Pct Rtrns
(Component) The percentage of segments retransmitted. This number is the TCP segments that were
transmitted and that contain one or more previously transmitted octets (bytes).
Segments Rcvd per Second
(Component) The number of segments received per second. This number includes those received in
error and those received on currently established connections.
Segments Sent per Second
(Component) The number of segments sent per second. This number includes those sent on currently
established connections and excludes those that contain only retransmitted octets (bytes).
Seize and Lock Conflicts
(Batch Job Trace) Number of seize conflicts and lock waits.
Seize Conflict
(Component) Number of seize exceptions per second. For more detailed information, issue the Start
Performance Trace (STRPFRTRC) command, and use the PRTTNSRPT or PRTLCKRPT commands. This
count could be very high, even under normal system operation. Use the count as a monitor. If there
are large variations or changes, explore these variations in more detail.
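A minimal sketch of that flow follows. The member and library names are placeholders, and the parameters shown for ending the trace and printing the lock report are assumptions that may need adjusting for your environment.
STRPFRTRC                                /* start the performance trace                          */
/* ... let the workload run, then end the trace so the data can be reported ...                  */
ENDPFRTRC
PRTLCKRPT MBR(TRACEDTA) LIB(QPFRDATA)    /* print seize and lock conflict detail from the trace  */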
Seize Hold Time
(Transaction) The amount of time that the transaction held up other jobs in the system by a seize or
lock on an object.
SHARE CLS
(Job Trace) The number of shared closes for all types of files.
SHARE OPN
(Job Trace) The number of shared opens for all types of files.
SMAPP ReTune
(Component) System-managed access path protection tuning adjustments.
SMAPP System
(Component) SMAPP-induced journal entries deposited in system-provided (default) journals.
SMAPP User
(Component) SMAPP-induced journal entries deposited in user-provided journals.
SOTn
(Transaction) Listed in the Wait Code column, Start of transaction n. These codes are in the wait code
column, but they are not wait codes. They indicate transaction boundary trace records.
Spool CPU seconds per I/O
(System) The average number of system processing unit seconds used by all spool jobs for each I/O
performed by a spool job.
Spool database reads per second
(System) The average number of read operations to database files per second of spool processing.
Spool I/O per second
(System) The average number of physical disk I/O operations per second of spool processing.
SQL CPU Util
Percentage of available CPU time used to perform work done on behalf of SQL operations. This field
applies to all systems running IBM i 7.2 or later.
Srv Time
(Component) Average disk service time per request in seconds not including the disk wait time.
SSL Inbound Connections
(System) The number of SSL inbound connections accepted by the server.
Start
(Transaction) The time the job started.
Started
(Transaction) The time of the first record in the trace data, in the form HH.MM.SS (hours, minutes,
seconds).
State
(Transaction) The three possible job states are:
• W--(Wait state) not holding an activity level.
• A--(Active or wait state) holding an activity level.
• I--(Ineligible state) waiting for an activity level.
The table below shows the possible job state transitions. For example, from W to A is yes, which
means it is possible for a job to change from the wait state to the active state.
Table 8. Job state transitions
                  To state
From state      A       W       I
A               yes     yes     yes
W               yes             yes
I               yes
Sum
The sum of the averages of the synchronous DB READ, DB WRITE, NDB READ, and NDB WRITE
requests (the average number of synchronous I/O requests per transaction for the job).
Sync I/O /Elp Sec
(Transaction) The average number of synchronous disk I/O requests for all jobs, per second of elapsed
time used by the jobs.
Sync I/O /Sec
(Job Interval) The average number of synchronous disk I/O operations performed per second by the
job during the interval. This is calculated from the synchronous disk I/O count divided by the elapsed
time.
Sync I/O Per Second
(Job Interval) The average number of synchronous disk I/O operations performed per second by the
selected noninteractive jobs during the interval.
Synchronous DBR
(System, Transaction, Job Interval, Pool Interval) The average number of synchronous database read
operations. It is the total synchronous database reads divided by the total transactions. For the Pool
Interval and Job Interval Reports, it is calculated per transaction for the job during the intervals. For
the System Report, it is calculated per second. For the Transaction (Job Summary) it is calculated
per transaction. Listed under Average DIO/Transaction, the average number of synchronous database
read requests per transaction. This field is not printed if the jobs in the system did not process any
transactions.
Synchronous DBW
(System, Transaction, Job Interval, Pool Interval) The average number of synchronous database write
operations. It is the total synchronous database writes divided by the total transactions. For the Pool
Interval and Job Interval Reports, it is calculated per transaction for the job during the intervals. For
the System Report, it is calculated per second. For the Transaction (Job Summary) it is calculated
per transaction. Listed under Average DIO/Transaction, the average number of synchronous database
write requests per transaction. This field is not printed if the jobs in the system did not process any
transactions.
Synchronous DIO / Act Sec
(System, Transaction) The number of synchronous disk I/O operations per active second. The active
time is the elapsed time minus the wait times.
Synchronous DIO / Ded Sec
(Transaction) The estimated number of synchronous disk I/O operations per second as if the job were
running in dedicated mode. Dedicated mode means that no other job would be active or in contention
for resources in the system.
Synchronous DIO / Elp Sec
(Transaction) The number of synchronous disk I/O operations per elapsed second.
Synchronous Disk I/O Counts
(Transaction) The next five columns provide information about the number of synchronous disk I/O
requests per transaction:
DB Read
The number of synchronous database read requests per transaction.
DB Wrt
The number of synchronous database write requests per transaction.
NDB Read
The number of synchronous nondatabase read requests per transaction.
NDB Wrt
The number of synchronous nondatabase write requests per transaction.
Sum
The sum of the synchronous DB Read, DB Wrt, NDB Read, and NDB Wrt requests (the number of
synchronous I/O requests per transaction).
Teraspace EAO
(Component) Listed in the Exception Occurrence summary and Interval Counts. A teraspace effective
address overflow (EAO) occurs when computing a teraspace address that crosses a 16-boundary. A
quick estimate indicates that a 1% performance degradation would occur if there were 2,300 EAOs
per second.
Thread
(Job Summary, Transaction, Transition) A thread is a unique flow of control within a process. Every job
has an initial thread associated with it. Each job can start one or more secondary threads. The system
assigns the thread number to a job as follows:
• The system assigns thread IDs sequentially. When a job is started that uses a job structure that
was previously active, the thread ID that is assigned to the initial thread is the next number in the
sequence.
• The first thread of a job is assigned a number.
• Any additional threads from the same job are assigned a number that is incremented by 1.
A thread value greater than 1 does not necessarily mean the job has had that many threads active
at the same time. To determine how many threads are currently active for the same job, use the
WRKACTJOB, WRKSBSJOB, or WRKUSRJOB commands to find the multiple three-part identifiers
with the same job name.
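For example, assuming a job named MYJOB started by user MYUSER (placeholder names), either of the following lists the active occurrences so the individual threads can then be displayed from the resulting panels:
WRKACTJOB JOB(MYJOB)                    /* active jobs with this simple job name */
WRKUSRJOB USER(MYUSER) STATUS(*ACTIVE)  /* active jobs started by this user      */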
Threads active
(System) The number of threads doing work when the data was sampled.
Threads idle
(System) The number of idle threads when the data was sampled.
Time
(Transaction) The time when the transaction completed, or when a seize or lock conflict occurred.
Also, a column heading that shows the time the transition from one state to another occurred, in the
HH.MM.SS.mmm arrangement.
TIME
(Job Trace) The time of day for the trace entry. The time is sequentially given in hours, minutes,
seconds, and microseconds.
Tns
(Component, Pool Interval) The total number of transactions processed by the selected jobs in the
pool or subsystem.
Tns Count
(Component, Job Interval) The number of transactions performed by the selected interactive jobs
during the interval.
Tns/Hour
(Component, Transaction, Job Interval) The average number of transactions per hour processed by
the selected interactive jobs during the interval.
Tns/Hour Rate
(System) Average number of transactions per hour.
TOD of Wait
(Lock) The time of day of the start of the conflict.
Tot
(Transaction) Listed in Physical I/O Counts column, the total number of DB Read, DB Wrt, NDB Read,
and NDB Wrt requests.
Total CPU Utilization
(System) In shared processor partitions, individual CPU utilization rows are not printed.
Note: This value is taken from a system counter. Other processing unit uses are taken from the
individual job work control blocks (WCBs). These totals may differ slightly. For uncapped partitions,
Total CPU utilization might exceed 100 percent.
Total CPU Utilization (Interactive Feature)
(System) The CPU Utilization (Interactive Feature) shows the CPU utilization for all jobs doing 5250
workstation I/O operations relative to the capacity of the system for interactive work. Depending on
the system and associated features purchased, the interactive capacity is equal to or less than the
total capacity of the system.
Total CPU Utilization (SQL)
(System) Shows you the SQL activity on your systems. This field applies to all systems running IBM i
7.2 or later.
Total Data Characters Received
(Resource Interval) The number of data characters received successfully.
Total Data Characters Transmitted
(Resource Interval) The number of data characters transmitted successfully.
Total Datagrams Requested for Transmission
(Component) The percentage of IP datagrams that are discarded because of the following reasons:
• No route was found to transmit the datagrams to their destination.
• Lack of buffer space.
Total fields per transaction
(System) The average number of display station fields either read from or written to per interactive
transaction.
Total Frames Recd
(Resource Interval) The number of frames received, including frames with errors and frames that are
not valid.
Total I Frames Trnsmitd
(Resource Interval) The total number of information frames transmitted.
Total I/O
(System) Sum of the read and write operations.
Total PDUs Received
(Resource Interval) The number of protocol data units (PDUs) received during the time interval.
Note: A protocol data unit (PDU) for asynchronous communications is a variable-length unit of data
that is ended by a protocol control character or by the size of the buffer.
Total Physical I/O per Second
(Resource Interval) The average number of physical disk I/O operations performed per second by the
disk arm.
Total Responses
(Component, Resource Interval) The total number of transactions counted along with the average
response time for all active work stations or devices on this controller for the report period.
Total Seize/Wait Time
(Component) The response time in milliseconds for each job.
Total Tns
(Component) Number of transactions processed in this pool.
Transaction Response Time (Sec/Tns)
(Transaction) The response time in seconds for each transaction. This value includes no
communications line time. Response times measured at the work station exceed this time by the
data transmission time (the time required to transmit data from the work station to the processing
unit and to transmit the response data back to the work station from the processing unit).
Transactions per hour (local)
(System) The interactive transactions per hour attributed to local display stations.
Transactions per hour (remote)
(System) The interactive transactions per hour attributed to remote display stations.
Transient Size
(Component) Kilobytes placed within the journal transient area; these are hidden journal entries
produced by the system.
Transmit/Receive/Average Line Util
(Resource Interval) In duplex mode, the percentage of transmit line capacity used, the percentage of
receive line capacity used, and the average of the transmit and receive capacities.
TSE
(Transaction) Listed in the Wait Code column, Time Slice End. The program shown in the stack entry
labeled LAST is the program that went to time slice end.
Typ
(Component, Transaction) The system job type and subtype. The Component Report allows only
one character in this column. The Transaction Report allows two characters. The Transaction Report
reports the job type and job subtype directly from the QAPMJOBS fields. The Component Report takes
the job type and job subtype values and converts them to a character that may or may not be the value
from the QAPMJOBS field. The possible job types are:
A
Autostart
B
Batch
BD
Batch immediate (Transaction only)
Note: The batch immediate values are shown as BCI on the Work with Active Job display and as
BATCHI on the Work with Subsystem Job display.
BE
Batch evoke (Transaction only)
P
Print driver job
T
Multiple requester terminal (MRT) (System/36 environment only)
3
System/36
Notes:
1. Job subtypes do not appear on the Component Report.
2. If the job type is blank or you want to reassign it, use the Change Job Type (CHGJOBTYP)
command to assign an appropriate job type.
Type
(System, Transaction, Job Interval) One of the transaction types listed in the description of the DTNTY
field.
(System)
The disk type.
(Transaction)
The type and subtype of the job.
(Transaction)
For the Seize/Lock Conflicts by Object section, the type of seize/lock conflict.
UDP Datagrams Received
(Component) The total number of User Datagram Protocol (UDP) datagrams delivered to UDP users.
UDP Datagrams Sent
(Component) The total number of User Datagram Protocol (UDP) datagrams sent from this entity.
Uncap CPU Avail
(Component) Percentage of CPU time available to a partition in the shared processors pool during the
interval in addition to its configured CPU. This value is relative to the configured CPU available for the
particular partition.
Unicast Packets Received
(System) The total number of subnetwork-unicast packets delivered to a higher-layer protocol. The
number includes only packets received on the specified interface.
Unicast Packets Sent
(System) The total number of packets that higher-level protocols requested to be transmitted to a
subnetwork-unicast address. This number includes those packets that were discarded or were not
sent.
Unit
(System, Component, Resource Interval) The number assigned by the system to identify a specific
disk unit or arm. An 'A' or 'B' following the unit number indicates that the disk unit is mirrored. (For
example, 0001A and 0001B are a mirrored pair.)
Unit Name
The resource name of the disk arm.
User ID
(System, Component, Transaction, Job Interval, Pool) The list of users selected to be included
(SLTUSRID parameter) or excluded (OMTUSRID parameter).
User Name
(Component, Transaction, Job Interval, Batch Job Trace) Name of the user involved (submitted the
job, had a conflict, and so on.)
User Name/Thread
(Component, Transaction) If the job information contains a secondary thread, then this column shows
the thread identifier. If the job information does not contain a secondary thread, then the column
shows the user name. The system assigns the thread number to a job in the same way as described under Thread.
A thread value greater than 1 does not necessarily mean the job has had that many threads active
at the same time. To determine how many threads are currently active for the same job, use the
WRKACTJOB, WRKSBSJOB, or WRKUSRJOB commands to find the multiple three-part identifiers
with the same job name.
User Starts
(Component) The number of start journal operations initiated by the user.
User Stops
(Component) The number of stop journal operations initiated by the user.
User Total
(Component) The total number of journal deposits resulting from system-journaled objects.
Util
(Component, Resource Interval) The percent of utilization for each local work station, disk, or
communications IOP, controller, or drive.
Note: The system-wide average utilization does not include data for mirrored arms during
measurement intervals in which those arms are in resuming or suspended status.
Util 2
(Component, Resource) Utilization of co-processor.
Value
(Transaction) For the Individual Transaction Statistics section of the Job Summary report, it is the
value of the data being compared for the transaction. For the Longest Seize/Lock Conflicts section, it is
the number of seconds in which the seize or lock conflict occurred.
Verify
(Component) Number of verify exceptions per second. Verify exceptions occur when a pointer needs
to be resolved, when blocked MI instructions are used at security levels 10, 20, or 30, and when
an unresolved symbolic name is called. This count could be very high, even under normal system
operation. Use the count as a monitor. If there are large variations or changes, explore these variations
in more detail.
VP
(System) The number of virtual processors active in the reported partition.
Vrt Shr Proc Pool ID
(System) Virtual Shared Processor Pool ID. This column is only printed for the IBM i partition.
W-I Wait/Tns
(Transaction) The average time, in seconds, of wait-to-ineligible time per transaction. This value is an
indication of what effect the activity level has on response time. If this value is low, the number
of wait-to-ineligible transitions probably has little effect on response time. If the value is high,
adding additional interactive pool storage and increasing the interactive pool activity level should
improve response time. If you are unable to increase the interactive pool storage (due to limited
available storage), increasing the activity level may also improve response time. However, increasing
the activity level might result in excessive faulting within the storage pool.
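As a sketch of those adjustments, the interactive shared pool can be changed with a command such as the following; the pool size and activity level shown are placeholder assumptions, not recommendations, and WRKSHRPOOL can be used to review the current values interactively first.
CHGSHRPOOL POOL(*INTERACT) SIZE(2000000) ACTLVL(40)  /* size is in kilobytes */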
Wait Code
(Transaction) The job state transition that causes the trace record to be produced. The values can be
as follows:
EVT
Event Wait. A long wait that occurs when waiting on a message queue.
EOTn
End of transaction for transaction type n. These codes are in the wait code column, but they
are not wait codes. They indicate transaction boundary trace records.
EORn
End of response time for transaction n. These codes are in the wait code column, but they are not
wait codes. They indicate transaction boundary trace records.
Error Responses
(Component) The number of responses in error.
HDW
Hold Wait (job suspended or system request).
LKRL
Lock Released. The job released a lock it had on the object named on the next detail line of the
report (OBJECT --). The job that was waiting for the object is named on this line (WAITER --) along
with the amount of time the job spent waiting for the lock to be released.
LKW
Lock Wait. If there are a number of these, or you see entries with a significant length of time in the
ACTIVE/RSP* column, additional analysis is necessary. The LKWT report lines that precede this
LKW report line show you what object is being waited on, and who has the object.
LKWT
Lock Conflict Wait. The job is waiting on a lock conflict. The time (*/ time /*) is the duration of the
lock conflict and, though not equal to the LKW time, should be very close to it. The holder of the
lock is named at the right of the report line (HOLDER --). The object being locked is named on the
next report line (OBJECT --).
SOTn
Start of transaction n. These codes are in the wait code column, but they are not wait codes. They
indicate transaction boundary trace records.
SWX
Short Wait Extended. The short wait has exceeded a 2-second limit and the system has put the
transaction into a long wait. This long wait must be charged to the transaction response time. In
other words, this active-to-wait transition does not reflect a transaction boundary.
SZWG
(Transaction) Listed in the Wait Code column, Seize Wait Granted. The job was waiting on a seize
conflict. The original holder released the lock that it had on the object, and the lock was then
granted to the waiting job. The job that was waiting for the object is named on this line (WAITER
--) along with the amount of time the job spent waiting for the seize conflict to be released. The
object that is held is named on the next line of the report (OBJECT --).
SZWT
Seize/Lock Conflict Wait. The job is waiting on a seize/lock conflict. The time (*/ time /*) is the
duration of the seize/lock conflict, and is included in the active time that follows it on the report.
The holder of the lock is named at the right of the report line (HOLDER --). The object being held is
named on the next report line (OBJECT --).
TSE
Time Slice End. The program shown in the stack entry labeled LAST is the program that went
to time slice end. Every time a job uses 0.5 seconds of CPU time (0.2 seconds on the faster
processors) between long waits, the system checks if there are jobs of equal priority on the CPU
queue. If there are, then the next job with equal priority is granted the CPU and the interrupted
job is moved to the queue as the last of equal priority. The job, however, retains its activity level.
This is an internal time slice end. When a job reaches the external time slice value, there can
Report page number
Identifies the page of the report.
Perf data from time to time at interval
Indicates the time period over which the data was collected and at what interval.
User-selected report title
Indicates the name assigned to the report by a user.
Member
Indicates the performance data member used in the report. This name corresponds to the name used
on the MBR parameter of the Create Performance Data (CRTPFRDTA) command.
Library
Identifies the library where the performance data used for a particular report is located.
Model/Serial
Indicates the model and serial number of the server on which the performance data for the report was
collected. The serial number can be 10 characters.
Main storage size
Indicates the size of the main storage on the server on which the performance data was collected.
Started
Indicates the date and time Collection Services started collecting performance data for the report.
Depending on whether or not you select specific intervals or a specific starting time, you could see the
following:
• If you specify no intervals at which to run the report, the start date and time is the date and time at
which the data was collected.
• If you specify specific intervals at which to run the report, the start date and time is the date and
time at which the data was collected.
Note: For the System Report only, you should consult the Report Selection Criteria section to find
out which intervals were selected.
Stopped
The date and time Collection Services stopped collecting performance data for this report. Depending
on whether or not you select specific intervals or a specific ending time, you could see the following:
• If you specify no intervals at which to run the report, the stop date and time is the date and time at
which the data was collected.
• If you specify specific intervals at which to run the report, the stop date and time is the date and
time at which the data was collected.
Note: For the System Report only, you should consult the Report Selection Criteria section to find
out which intervals were selected.
System name
Indicates the name of the server on which the performance data was collected for the report.
Version/Release level
x/x.0 indicates which version and release level of the operating system the server was running at
the time the performance data was collected.
Partition ID
Identifies the ID of the partition on which the collection was run. This change accommodates the
logical partition implementation. Here are some of the values that you might see:
• If your system is not partitioned (which is the default) or you used Collection Services to collect and
print the performance data for the primary partition of a logical partition system, this value is 00.
• If you collected data with the Start Performance Monitor (STRPFRMON) command in a previous
release, the value for the partition ID is 00.
• If you used Collection Services to collect and print the performance data in any secondary partition
of a logical partition system, this value is the same as the partition ID that is shown on the Work with
System Partitions display under the Start Service Tools (STRSST) command.
Scenarios: Performance
One of the best ways to learn about performance management is to see examples that illustrate how you
can use these applications or tools in your business environment.
Situation
You recently upgraded your system to the newest release. After completing the upgrade and resuming
normal operations, your system performance has decreased significantly. You would like to identify the
cause of the performance problem and restore your system to normal performance levels.
Details
Several problems may result in decreased performance after upgrading the operating system. You can use
the performance management tools included in IBM i and in the IBM Performance Tools for i licensed program
(5770-PT1) to get more information about the performance problem and to narrow down suspected
problems to a likely cause.
1. Check CPU utilization. Occasionally, a job will be unable to access some of its required resources after
an upgrade. This may result in a single job consuming an unacceptable amount of the CPU resources.
• Use WRKSYSACT, WRKSYSSTS, WRKACTJOB, or IBM Navigator for i monitors to find the total CPU
utilization.
• If CPU utilization is high, for example, greater than 90%, check the amount of CPU utilized by active
jobs. If a single job is consuming more than 30% of the CPU resources, it may be missing file calls
or objects. You can then refer to the vendor, for vendor-supplied programs, or the job's owner or
programmer for additional support.
2. Start a performance trace with the STRPFRTRC command, and then use the system and component
reports to identify and correct the following possible problems:
• If the page fault rate for the machine pool is higher than 10 faults/second, give the machine pool
more memory until the fault rate falls below this level.
• If the disk utilization is greater than 40%, look at the waiting and service time. If these values are
acceptable, you may need to reduce workload to manage priorities.
• If the page faults in the user pool are unacceptably high, you might want to automatically tune
performance.
3. Run the job summary report and refer to the Seize lock conflict report. If the number of seize or lock
conflicts is high, ensure that the access path size is set to 1TB. If the seize or lock conflicts are on
a user profile, and if the referenced user profile owns many objects, reduce the number of objects
owned by that profile.
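The corrective actions named in steps 2 and 3 correspond to ordinary CL commands. The following sketch is illustrative only; the pool size, file name, and library name are placeholder assumptions, not recommendations.
CHGSHRPOOL POOL(*MACHINE) SIZE(1500000)     /* step 2: give the machine pool more memory (kilobytes) */
CHGLF FILE(MYLIB/MYLF) ACCPTHSIZ(*MAX1TB)   /* step 3: allow a 1 TB access path size on the file      */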
Situation
As a system administrator, you need to ensure that the system has enough resources to meet the current
demands of your users and business requirements. For your system, CPU utilization is an important
concern. You would like the system to alert you if the CPU utilization gets too high and to temporarily hold
any lower priority jobs until more resources become available.
To accomplish this, you can set up a system monitor that sends you a message if CPU utilization exceeds
80%. Moreover, it can also hold all the jobs in the QBATCH job queue until CPU utilization drops to 60%, at
which point the jobs are released, and normal operations resume.
Configuration example
To set up a system monitor, you need to define what metrics you want to track and what you want the
monitor to do when the metrics reach specified levels. To define a system monitor that accomplishes this
goal, complete the following steps:
1. In IBM Navigator for i, select Monitors > System Monitors. From the Actions menu, select Create
New System Monitor...
2. On the General page, enter a name and description for this monitor. Click Next.
3. Add and edit the CPU Utilization (Average) metric properties by performing the following steps:
a. To add the metric, select CPU Utilization (Average) from the list of Available Metrics, and click
Add. CPU Utilization (Average) is now listed under Metrics to monitor.
b. To edit the metric properties, click the CPU Utilization (Average) metric in the Metrics to monitor
list. This action opens the Configure Metric page where you can edit the properties of the metric.
c. For Collection interval, specify how often you would like to collect the data. This action overrides
the Collection Services setting. For this example, specify 30 seconds.
d. For Threshold 1, enter the following values to send an inquiry message if the CPU Utilization is
greater than or equal to 80%:
i) Select Enable threshold.
ii) For the threshold trigger value, specify >= 80 (greater than or equal to 80 percent busy).
iii) For Duration, specify 1 interval.
iv) For the IBM i command, specify a command that sends you an inquiry message, for example:
SNDMSG MSG('CPU utilization has exceeded 80%') TOUSR(*SYSOPR) MSGTYPE(*INQ)
v) For the threshold reset value, specify < 60 (less than 60 percent busy). This action resets the
monitor when CPU utilization falls below 60%.
e. For Threshold 2, enter the following values to hold all the jobs in the QBATCH job queue when CPU
utilization stays above 80% for five collection intervals:
i) Select Enable threshold.
ii) For the threshold trigger value, specify >= 80 (greater than or equal to 80 percent busy).
iii) For Duration, specify 5 intervals.
iv) For the IBM i command, specify the following:
HLDJOBQ JOBQ(QBATCH)
v) For the threshold reset value, specify < 60 (less than 60 percent busy). This action resets the
monitor when CPU utilization falls below 60%.
vi) For Duration, specify 5 intervals.
vii) For the IBM i command, specify the following:
RLSJOBQ JOBQ(QBATCH)
This command releases the QBATCH job queue when CPU utilization stays below 60% for five
collection intervals.
4. Click OK to save the metric properties.
5. Click Next to view the monitor summary page.
6. Click Finish to save the monitor.
7. From the list of system monitors, right-click the new monitor and select Start.
Results
The new monitor collects the CPU utilization, with new data points added every 30 seconds, according
to the specified collection interval. The monitor automatically carries out the specified threshold actions
whenever CPU utilization reaches 80%. The monitor will continue to run and perform threshold actions
until you stop the monitor.
Note: This monitor tracks only CPU utilization. However, you can include any number of the available
metrics in the same monitor, and each metric can have its own threshold values and actions. You can also
have several system monitors that run at the same time.
Situation
As a system administrator, you need to be aware of inquiry messages as they occur across your system.
You can set up a message monitor to display any inquiry messages in your message queue that occur on
your system.
Configuration example
To set up a message monitor, you need to define the types of messages you would like to watch for and
what you would like the monitor to do when these messages occur. To set up a message monitor that
accomplishes this goal, complete the following steps:
1. In IBM Navigator for i, select Monitors > Message Monitors. From the Actions menu, select Create
New Message Monitor.
2. On the General page, enter a name and description for this monitor. Click Next.
3. On the Message Queue page, enter the following values:
a. For Message Queue to Monitor, specify QSYSOPR.
b. For Library, specify QSYS.
c. Click Next.
4. On the Message Set page, perform the following steps:
a. On the Message Set 1 tab, click Add.
b. On the Add A Message Set page, enter the following values:
i) Select Add a user defined set of messages.
ii) For Message Type, select Inquiry.
iii) Click OK.
c. Select Set the message trigger and reset.
d. For Trigger at the following message count, specify 1.
e. Click Next.
5. Click Next to view the monitor summary page.
6. Click Finish to save the monitor.
7. From the list of message monitors, right-click the new monitor and select Start.
Results
The new message monitor displays any inquiry messages sent to QSYSOPR.
Note: This monitor responds only to inquiry messages sent to QSYSOPR. However, you can include two
different sets of messages in a single monitor, and you can have several message monitors that run at the
same time. Message monitors can also carry out IBM i commands when specified messages are received.
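For instance, a message set could run a command such as the following when the trigger count is reached; the user profile name is a placeholder:
SNDMSG MSG('An inquiry message arrived on QSYSOPR') TOUSR(ADMIN01)  /* notify an administrator */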
Manuals
IBM Redbooks
• IBM eServer iSeries Universal Connection for Electronic Support and Services
This document introduces Universal Connection. It also explains how to use the variety of support tools
that report inventories of software and hardware on your machine to IBM so you can get personalized
electronic support, based on your system data.
Websites
• IBM i Technology Updates - Performance Tools
See the Performance Tools topic of the IBM i Technology Updates website on IBM Support to read
about the most recent enhancements to various IBM i performance tools. The topic includes updates
for the performance data collectors (Collection Services, Disk Watcher, Job Watcher, and Performance
Explorer), the performance components of IBM Navigator for i, and the IBM Performance Tools for i
(5770-PT1) licensed program. This website also includes a resources page with links to a wide variety
of performance reference materials: forums, blogs, Redbooks, white papers, presentations, articles, and
websites.
For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property
Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of
the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Corporation
Software Interoperability Coordinator, Department YBWA
3605 Highway 52 N
Rochester, MN 55901
U.S.A.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
"Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.
Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or
trademarks of Adobe Systems Incorporated in the United States, and/or other countries.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon,
Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or
its subsidiaries in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks of Oracle, Inc. in the United States, other
countries, or both.
Other product and service names might be trademarks of IBM or other companies.