IBM i and IBM Storwize Family: A Practical Guide to Usage Scenarios
Sabine Jordan
Mario Kisslinger
Rodrigo Jungi Suzuki
ibm.com/redbooks
International Technical Support Organization
April 2014
SG24-8197-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX® Informix® Redbooks®
DB2® Power Systems™ Redbooks (logo) ®
DS8000® POWER7® Storwize®
Easy Tier® PowerHA® System Storage®
FlashCopy® PowerVM® SystemMirror®
IBM® Real-time Compression™ Tivoli®
ITIL is a registered trademark, and a registered community trademark of The Minister for the Cabinet Office,
and is registered in the U.S. Patent and Trademark Office.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
The use of external storage and the benefits of virtualization have become topics of discussion in
the IBM® i area during the last several years. The question tends to be: what are the
advantages of using external storage that is attached to an IBM i environment as
opposed to using internal storage? The use of IBM PowerVM® virtualization technology
to virtualize Power server processors and memory also became common in IBM i
environments. However, virtualized access to external storage and network resources by
using a VIO server is still not widely used.
This IBM Redbooks® publication gives a broad overview of the IBM Storwize® family
products and their features and functions. It describes the setup that is required on the
storage side and describes and positions the different options for attaching IBM Storwize
family products to an IBM i environment. Basic setup and configuration of a VIO server
specifically for the needs of an IBM i environment is also described.
The information that is provided in this book is useful for clients, IBM Business Partners, and
IBM service professionals who need to understand how to install and configure their IBM i
environment with attachment to the Storwize family products.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Rochester Center.
This edition of this IBM Redbooks project was led by Debbie Landon of the International
Technical Support Organization, Rochester Center.
David Bhaskaran
Jenny Dervin
Steven Finnes
Brian Nordland
Marilin Rodriguez
Kiswanto Thayib
Kristopher Whitney
Keith Zblewski
IBM Rochester, IBM i Development team
Ingo Dimmer
IBM Germany, Advanced Technical Skills (ATS) team
David Painter
IBM United Kingdom, STG Lab Services
Mike Schambureck
IBM US, IBM i Technology Center (iTC)
Alison Pate
IBM US, Advanced Technical Skills (ATS) team
Ann Lund
International Technical Support Organization
Find out more about the residency program, browse the residency index, and apply online at
this website:
http://www.ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form that is found at this website:
http://www.ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
External storage addresses the pain points of internal storage while retaining most of its
benefits. External storage features the following advantages:
Storage area network (SAN) storage can be placed in different pools for production and
test workloads. It can also be implemented in different storage tiers; for example, with
different reliability and performance demands.
Availability and disk protection are managed by the storage administrator.
It is much easier to move or replicate the storage into other areas to minimize the risk of
disaster.
Storage can be segmented into more and smaller parts, with the flexibility to move storage
between partitions.
A smaller footprint in the data center leads to reduced power and cooling costs.
The investment cycles for storage and server can be decoupled, which allows longer use of the
external storage when the IBM Power System is replaced.
As a rule, the advantages of using external storage grow as more partitions must be
deployed. Every logical disk unit is distributed across many physical disks and automatically
protected against failure. It is possible to set up new workloads when they are needed,
without the demand for more disk adapters or storage boxes.
With the support for external storage in IBM PowerHA SystemMirror for IBM i or the built-in
replication functions of the IBM Storwize Family, a broad range of disaster recovery and high
availability options are available.
IBM PowerVM Live Partition Mobility also depends on external storage. With this product,
logical partitions can be moved to another IBM Power System without the need for
downtime. This capability can be used for planned outages, such as maintenance windows or migrations.
For more information about possible combinations of Host Bus Adapters, specific IBM Power
Systems, and firmware levels, see the IBM System Storage® Interoperation Center (SSIC) at
this website:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
Figure 1-1 Direct attachment of IBM Storwize V7000 to IBM Power System
Important: Direct attachment is restricted to the 4 Gb adapters (feature 5774 or the low-profile
adapter, feature 5276) and requires IBM i 7.1 TR6 with the latest Program Temporary Fixes (PTFs).
Direct attached storage is a good choice for systems with up to two logical partitions. The
number of partitions is limited by the number of Fibre Channel ports in the storage system. It is possible
to use fewer Host Bus Adapters at the client partition, but this limits the number of active
paths for the disks to one, which is not considered a best practice and can be error prone.
Figure 1-2 Two native attached IBM Storwize V7000 to two IBM Power Systems
The native attach option allows for a greater number of IBM i partitions (compared to the
direct attach option) because all Host Bus Adapters are connected to the switch. The
maximum number of partitions is limited by the number of SAN switch ports and slots in the IBM
Power System.
The use of VIOS as a virtualization layer provides a highly flexible and customizable solution
where it is possible to use fewer physical adapters as compared to direct attached and native
attached configurations.
Disks can be attached directly to the VIOS partitions (without the use of a switch) or they can
be provided through the SAN fabric. In either case, the disks are mapped directly to the VIOS
where they are shown as hdiskX devices. Afterward, a mapping from VIOS to the IBM i client
must be configured.
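For reference, this mapping step is done on the VIOS with the mkvdev command. The following is a minimal sketch only; the hdisk, vhost adapter, and device names are assumptions for illustration:
$ mkvdev -vdev hdisk1 -vadapter vhost0 -dev vtscsi_ibmi1
The resulting virtual target device can then be checked with the lsmap -all command (see Example 3-5 for a sample listing).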
Note: It is also possible to provide internal disks through VIOS to the IBM i partitions by
using vSCSI and mirroring these disks through the IBM i operating system for resilience.
Virtual Fibre Channel client adapters are assigned to the IBM i partitions. The IBM Power
server hypervisor automatically generates a pair of virtual World Wide Port Numbers
(WWPNs) for this adapter that is used for SAN zoning and the host attachment within the
SAN storage.
Note: An NPIV capable Host Bus Adapter and an NPIV enabled SAN switch are required.
The transparent connection of NPIV allows for better performance than virtual SCSI,
simplified SAN setup, and multipath connections to the storage by adding them to different
zones, as described in 2.5.3, “Multipath” on page 52.
The NPIV setup is the most flexible configuration for a large number of partitions on a single
IBM Power System.
The SAN Volume Controller can be a single point of control to provision LUNs to your entire
SAN environment. It can virtualize a variety of storage systems. For more information, see the
following IBM System Storage SAN Volume Controller website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/
You can use the IBM SAN Volume Controller with two or more nodes, which are arranged in a
cluster. A cluster starts with one pair of nodes and can grow to up to four SAN Volume Controller node
pairs.
The main function of the IBM SAN Volume Controller is to virtualize the disk drives that are provided
by different storage systems connected at the back end. The main function of the
IBM Storwize V3700 and IBM Storwize V7000 is to provide disks from their expansion
enclosures. Although these are the most common Storwize family usage situations, the IBM
Storwize V3700 and Storwize V7000 can also virtualize external storage systems.
One of the main differences between the IBM Storwize V3700 and the IBM Storwize V7000 is
capacity. The Storwize V3700 supports up to four expansion units and can have up to 240 TB
of capacity. The Storwize V7000 can support up to nine expansion units, reaching up to
480 TB of capacity.
The IBM Storwize V3700 is an entry-level storage system that is targeted to meet the
requirements of the small business market, and the IBM Storwize V7000 is targeted toward
more complex environments. There are differences in the cache per controller, maximum
number of expansion enclosures, number of host interfaces, and so on.
If you need the Real-time Compression™ function, it is supported on the IBM Storwize V7000.
For more information, see 2.4, “IBM Storwize V7000 and hardware compression” on page 37.
There is also an IBM Storwize V3500 storage system. It consists of one control enclosure.
The control enclosure contains disk drives and two nodes, which are attached to the SAN
fabric, and reaches up to 36 TB of storage capacity.
Important: The IBM Storwize V3500 is available only in the following countries and
regions:
People's Republic of China
Hong Kong S.A.R. of the PRC
Macao S.A.R. of the PRC
Taiwan
For more information about the features, benefits, and specifications of the IBM Storwize
family, see the following websites:
IBM System Storage SAN Volume Controller:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/
IBM Storwize V7000 and Storwize V7000 Unified Disk Systems:
http://www-03.ibm.com/systems/storage/disk/storwize_v7000/index.html
IBM Storwize V3700:
http://www-03.ibm.com/systems/storage/disk/storwize_v3700/index.html
IBM Storwize V3500:
http://www-03.ibm.com/systems/hk/storage/disk/storwize_v3500/index.html
Table 1-1 lists a series of videos that are available for the IBM Storwize family.
An IBM Storwize V7000 system consists of machine type 2076 rack-mounted enclosures.
The control enclosure contains the main processing units that control the entire system and
can be connected to expansion enclosures. The expansion enclosures can hold up to 12
3.5-inch drives or up to 24 2.5-inch drives.
Machine type and model 2076-112 can hold up to 12 3.5-inch drives and the 2076-124 can
hold up to 24 2.5-inch drives.
Figure 1-9 Storwize V7000 control enclosure rear view, models 112 and 124
Figure 1-10 shows the rear view of the control enclosure for the machine type and models
2076-312 and 324, which includes 10 Gbps Ethernet ports capabilities.
Machine type and model 2076-312 can hold up to 12 3.5-inch drives and the 2076-324 can
hold up to 24 2.5-inch drives.
Figure 1-10 Storwize V7000 control enclosure rear view, models 312 and 324
Figure 1-11 Storwize V7000 expansion enclosure front view, models 2076-112, 212, and 312
Figure 1-12 Storwize V7000 expansion enclosure front view, models 2076-124, 224, and 324
The expansion enclosure has two power supply slots numbered as 1 and two canister slots
numbered as 2 as shown in Figure 1-13.
Each canister has two SAS ports that must be connected if you are adding an expansion
enclosure.
Note: Control enclosures have Ethernet ports, Fibre Channel ports, and USB ports.
Expansion enclosures do not have these ports.
The Storwize V7000 uses a USB key to start the initial configuration. The USB key comes
with the Storwize V7000 and contains a program that is called InitTool.exe to start the
initialization.
Tip: If you do not have the USB key that is shipped with the Storwize V7000, you can use
another USB key. You can also download the InitTool.exe file from this website:
http://www-933.ibm.com/support/fixcentral/swg/selectFixes?parent=Mid-range+disk+systems&product=ibm/Storage_Disk/IBM+Storwize+V7000+(2076)&release=All&platform=All&function=all
Complete the following steps to start the initial configuration of an IBM Storwize V7000:
1. Insert the USB key into a personal computer that is running the Microsoft Windows
operating system. The initialization program starts automatically if the system is
configured to autorun for USB keys; otherwise, open the USB key from the Windows system and click
InitTool.exe.
2. In the IBM Storwize V7000 Initialization Tool window (see Figure 2-1), select Initialize a
new system using the USB key and click Next.
Tip: Normally, the Storwize V7000 batteries contain sufficient power for the system to start. If the
batteries do not have sufficient charge, the system cannot start and you must wait until the
system becomes available.
3. Enter the system name, time zone, date, and time, as shown in Figure 2-4. Click Next.
Figure 2-4 IBM Storwize V7000 system name, time zone, and date setup
d. Enter the IBM support email address to receive the notifications, as shown in
Figure 2-9. Click Next.
f. After the support notification is configured, you can edit the configuration by clicking
Edit or you can continue with the setup by clicking Next, as shown in Figure 2-11.
7. An Add Enclosure pop-up window opens, as shown in Figure 2-13. Wait until the task is
finished and then click Close.
Note: All available disks are configured into RAID sets and hot spare drives that are
based on the number of drives that are found. However, this might not be what you
intended to configure. Check the intended configuration before you click Finish.
– Leave the check box cleared, click Finish, and continue with the storage configuration.
9. After you click Finish, wait until the setup Wizard is complete, as shown in Figure 2-15.
3. To create a storage pool, click the New Pool button. In the Create Pool window, enter the
pool name and click Next, as shown in Figure 2-19.
5. When the Discover Devices task completes, click Close, as shown in Figure 2-21.
6. Select the MDisks and click Finish to create the storage pool.
The storage pool is created. The next steps describe how to create the volumes to be used by
the systems.
2. In the Volumes window, click New Volume. The New Volume window is displayed, as
shown in Figure 2-23.
4. Figure 2-25 shows the output of the volume creation task. Click Close.
5. In the Volumes window, you can see the new volume, as shown in Figure 2-26.
For more information about hardware compression, see 2.4, “IBM Storwize V7000 and
hardware compression” on page 37.
7. Right-click the new volume and select Properties to see the volume details, as shown in
Figure 2-33.
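As an alternative to the GUI steps above, the storage pool and volume can also be created from the Storwize CLI. The following is a minimal sketch only; the pool name, volume name, I/O group, extent size, and volume size are assumptions for illustration:
svctask mkmdiskgrp -name IBMI_POOL -ext 1024
svctask mkvdisk -mdiskgrp IBMI_POOL -iogrp 0 -size 80 -unit gb -name IBMI_LUN_01
MDisks can be added to such a pool with the svctask addmdisk command.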
4. In this example, Fibre Channel Host was selected. Enter the host name and add the
WWPN. In the Advanced Settings section, select the settings that are related to your
Storwize V7000 I/O Group configuration and then select the Host Type. Click Create Host,
as shown in Figure 2-36.
2. In the Volumes window, right-click the volume and select Map to Host, as shown in
Figure 2-39.
The volume is now mapped to the server. Ask the server administrator to verify that the new
disks are now recognized.
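The same host definition and volume mapping can also be done from the CLI. The following is a minimal sketch only; the host name, WWPN, and volume ID are placeholder assumptions:
svctask mkhost -name IBMI_HOST1 -fcwwpn 2101001B32A3D94C
svctask mkvdiskhostmap -host IBMI_HOST1 0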
On the Storwize V7000, the IBM Real-time Compression feature is licensed by enclosures; on
the IBM SAN Volume Controller, it is licensed by storage space (in terabytes).
On average, an IBM i 7.1 system can achieve a data compression ratio of approximately 1:3,
which is about a 66% compression savings. For an example, see 2.4.2, “Creating a volume
mirrored copy on a compressed volume” on page 39.
The Comprestimator tool can analyze data and provide an estimated compression rate. By
using this tool, clients can determine which data is a good candidate for Real-time
Compression. You can download this tool from this website:
http://www-304.ibm.com/support/customercare/sas/f/comprestimator/home.html
Important: The IBM Comprestimator Utility does not run on IBM i; therefore, it can be used
only with VIOS attached storage and not direct attached storage.
Example 2-1 shows the use of this utility on a PowerVM Virtual I/O Server to provide an
estimate of the compression savings for an IBM i load source volume that is mapped to the
Virtual I/O Server.
In this example of a 20 GB IBM i load source volume on the Storwize V7000, the utility
estimates a thin-provisioning savings of 22.9% and a compression savings of 74.9%. This
results in a compressed size of 3.9 GB, which is a total capacity savings of 80.6%. That is,
the required physical storage capacity is predicted to be reduced by 80.6% as a result
of the combined savings from thin provisioning and compression.
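These reported figures are consistent with the savings combining multiplicatively: 20 GB x (1 - 0.229) x (1 - 0.749) is approximately 3.9 GB, and 1 - 3.9/20 is approximately 0.806, which matches the reported 80.6% total capacity savings.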
Note: A Storwize V7000 compressed volume is always a thinly provisioned volume from an
architectural perspective, even when it is created with a real size of 100%. Therefore, the
reported compression savings are in addition to the savings that are provided by thin
provisioning, which stores areas of zeros with minimal capacity.
You also must ensure that you have sufficient disks that are configured to maintain
performance of the IBM i workload.
As an alternative to the GUI procedure, the addvdiskcopy CLI command can be used for
creating a compressed mirrored VDisk from an existing VDisk. The following CLI command
shows an example of creating a compressed mirrored VDisk from an existing VDisk ID 10 in
mdiskgroup 1, which initially allocates 2% real capacity and uses the thin-provisioning auto
expand option:
svctask addvdiskcopy -mdiskgrp 1 -rsize 2% -autoexpand -compressed 10
The VDisk copy creation synchronization progress can be monitored from the Running Tasks
information in the Storwize V7000 GUI or with the CLI command that is shown in
Example 2-2, which also shows the estimated completion time.
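As a point of reference, the synchronization progress and the estimated completion time can typically also be listed from the CLI with the lsvdisksyncprogress command. This is a sketch of that kind of query only; the exact command and output in Example 2-2 may differ:
svcinfo lsvdisksyncprogress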
Figure 2-43 IBM i Work with Disk Status display
From the audit log entry that is shown in Example 2-3, you can see that the mirrored copy
creation started at 14:21, with an estimated completion time of 17:11. This process takes
about 2 hours and 50 minutes to finish for the 20 GB LUN that uses the default volume synchronization rate of
50%, which provides a synchronization data rate of approximately 2 MBps.
Change the synchronization rate to 100% by clicking Edit or by using the svctask chvdisk
-syncrate 100 <vdisk_id> command for the remaining three 20 GB LUNs from ASP1, which
are on a V7000 RAID 10 array with 8x 10k RPM SAS drives that are each utilized by IBM i at
51.2%. This results in the following VDisk mirror creation times:
10 minutes for a single compressed volume with up to 16% V7000 RTC CPU usage
10 minutes for two compressed volumes with up to 32% V7000 RTC CPU usage
When the first compressed volume is created, some of the Storwize V7000 and SAN Volume
Controller CPU cores and memory within the I/O group are reserved for use by the real-time
compression engine. On a four-core system with Storwize V7000 or SAN Volume Controller
CF8 nodes, three cores are reserved for the compression engine. On a system with six-core SAN Volume
Controller CG8 nodes, four cores are reserved.
These resources are returned for general system use when the last compressed
volume is deleted. It is considered a best practice to enable compression on four-core or six-core
systems only if the current system CPU usage is less than 25% for a four-core system and
less than 50% for a six-core system. Otherwise, enabling compression can have significant performance
implications for all hosts that are served by the storage unit.
In the first scenario of using Storwize V7000 VDisk mirroring, both volume copies (that is, the
compressed and non-compressed copies), were on the same RAID 10 rank that uses 8x 300
GB 10k RPM SAS drives. For the second scenario with a split VDisk mirroring, each copy was
on its own dedicated RAID rank configuration.
Note: The small Storwize V7000 disk configuration that is used in this example was
saturated by the IBM i workload. Therefore, it is not representative of a properly sized IBM
i SAN storage configuration. However, this example still shows a compression performance
evaluation.
Figure 2-46 IBM Storwize V7000 performance for non-compressed primary mirrored VDisks
After the primary copy (which designates the preferred copy of the mirrored VDisk to read
from) was switched from the non-compressed volume to the compressed volume by using the
svctask chvdisk -primary <copy_ID> <vdisk_ID> command, you can see a significant
increase in read I/O performance with 4681 read I/Os per second and 1062 write I/Os per
second at service times of 1 ms and 12 ms, as shown in Figure 2-47 on page 45.
Note: Depending on the host workload characteristics, a significant read I/O performance
improvement can be achieved when compressed volumes are used because, with
compression, less data must be transferred for cache read misses and write de-stages
from cache to disk.
This result can be verified by comparing the disk backend read I/O rate for the MDisks, which
dropped from 966 read I/Os per second for the non-compressed primary (see Figure 2-46) to
only 181 read IOs per second for the compressed primary, as shown in Figure 2-47 on
page 45.
In this section, a second scenario of comparing the Storwize V7000 disk I/O performance with
IBM i database workload for non-mirrored non-compressed versus non-mirrored compressed
volumes is reviewed. The previously mirrored volumes were split into two independent copies
by migrating one mirrored copy to another equally configured RAID rank (MDisk) in another
storage pool (mdiskgrp) by using the svctask migratevdisk command and by splitting the
mirror by using the svctask splitvdiskcopy command.
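For reference, the following is a minimal sketch of these two CLI steps; the VDisk ID, copy ID, target storage pool, and new volume name are assumptions for illustration:
svctask migratevdisk -vdisk 10 -copy 1 -mdiskgrp 2
svctask splitvdiskcopy -copy 1 -name IBMI_LUN_SPLIT 10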
IBM i storage management uses I/O bundling to limit the amount of physical disk I/O. That is,
multiple logical I/Os can be coalesced into a single physical disk I/O with a larger transfer
size. The degree of IBM i I/O bundling also depends on the disk I/O service times. From this
perspective, the IBM i disk I/O transfer sizes can vary for different storage configurations such
that it is more accurate here to compare the achieved data rates (MBps) than the I/O
throughput (I/O per second).
Figure 2-48 Work with Disk Status display, non-compressed volumes
Figure 2-49 shows the Work with Disk Status display for compressed volumes, which shows
the IBM i I/O bundling effect with slightly smaller write transfers being used for the lower
performing non-compressed volumes.
Figure 2-49 Work with Disk Status display, compressed volumes
Figure 2-50 IBM Storwize V7000 performance for split mirror uncompressed volumes (IOPS)
Figure 2-51 IBM Storwize V7000 performance for split mirror compressed volumes (IOPS)
As described in “Scenario 1: Volume mirroring” on page 44, significant disk I/O performance
improvements are possible by using compression when the V7000/SAN Volume Controller
disk backend is already highly utilized. This is because, with compression, less data must
be read from and written to the disk backend, which causes a lower disk usage rate and,
therefore, better disk response times.
Looking at the V7000 disk backend data rates, you can see a significant reduction of the
backend workload from 78 MBps reads and 172 MBps writes for the case of non-compressed
volumes that is shown in Figure 2-52 on page 49, down to only 14 MBps reads and
16 MBps writes for compressed volumes, as shown in Figure 2-53 on page 49.
However, as can also be seen from Figure 2-53 on page 49, with the system CPU utilization
reaching around 50% and the real-time compression CPU utilization peaking at 56%, a
proper sizing of the Storwize V7000 system with regard to CPU utilization as described in
2.4.2, “Creating a volume mirrored copy on a compressed volume” on page 39 becomes
more important when the compression feature is used.
Figure 2-53 IBM Storwize V7000 performance for split mirror compressed volumes (MBps)
From a performance perspective for the tested configuration and workload, a 23% increase in
IBM i database disk throughput with even lower response times was observed when V7000
compressed volumes were used versus non-compressed volumes. This improvement was
gained for a V7000/SAN Volume Controller disk backend, which was already highly utilized
when the non-compressed volumes were used.
The default configuration option in the Storwize V7000 GUI is RAID 5 with a default array
width of 7+P for SAS HDDs, RAID 6 for Nearline HDDs with a default array width of 10+P+Q,
and RAID 1 with a default array width of 2 for SSDs.
Note: Disk Magic is a tool that is used to size IBM disk subsystems that are based on the
performance and objectives of the workloads that are hosted on the disk subsystem. IBM
Disk Magic is a product to be used only by IBM personnel or an IBM Business Partner.
For more information about Disk Magic sizing for IBM i, see Chapter 5, Sizing for Midrange
Storage, in the IBM Redbooks publication IBM i and Midrange External Storage, SG24-7668.
The same connection considerations apply when you are connecting by using the native
connection option without VIOS. The following best practices guidelines are useful:
Isolate host connections from remote copy connections (Metro Mirror or Global Mirror)
where possible.
Isolate other host connections from IBM i host connections on a host port basis.
Always have symmetric pathing by connection type. That is, use the same number of
paths on all host adapters that are used by each connection type.
Size the number of host adapters that are needed based on expected aggregate
maximum bandwidth and maximum IOPS (use the Disk Magic tool or other common
sizing methods that are based on actual or expected workload).
Disk Magic tool: The Disk Magic tool supports multipathing only over two paths.
You might want to consider more than two paths for workloads where there is a high wait time
or where high I/O rates are expected to LUNs.
Multipath for a LUN is achieved by connecting the LUN to two or more ports that belong to
different adapters in an IBM i partition, as shown in the following examples:
With native connection, although it is not a requirement, it is considered a recommended
practice for availability reasons to have the ports for multipath be on different physical
adapters in IBM i.
With VIOS NPIV connection, the virtual Fibre Channel adapters for multipath should be
evenly spread across the different VIOS.
With VIOS vSCSI connection, the virtual SCSI adapters for multipath should be evenly
spread across the different VIOS.
Figure: Multipath with a natively connected Storwize V7000, a VIOS NPIV connected Storwize V7000, and a VIOS virtual SCSI connected Storwize V7000 (IBM i multipath through virtual FC connections, virtual SCSI connections, and hdisk devices)
IBM i multipath provides resiliency in case the hardware for one of the paths fails. It also
provides performance improvement as multipath uses I/O load balancing in a round-robin
mode among the paths.
Every LUN in Storwize V7000 uses one V7000 node as a preferred node. The I/O traffic to or
from the particular LUN normally goes through the preferred node. If that node fails, the I/O is
transferred to the remaining node.
For clarity, this description is limited to a multipath setup with two WWPNs. By using the preferred
practice of switch zoning, you can achieve a situation where four paths are established from
a LUN to the IBM i partition, with two of the paths going through adapter 1 (in NPIV, also through
VIOS 1) and two of the paths going through adapter 2 (in NPIV, also through VIOS 2). Of
the two paths that go through each adapter, one goes through the preferred node and the other
goes through the non-preferred node. Therefore, two of the four paths are active, each of
them going through a different adapter, and a different VIOS if NPIV is used. Two of the paths are
passive, each of them also going through a different adapter, and a different VIOS if NPIV is used.
IBM i multipathing uses a round-robin algorithm to balance the I/O among the paths that are
active.
Figure 2-56 shows the detailed view of paths in multipath with native or VIOS NPIV
connection to the Storwize V7000. The solid lines refer to active paths, and the dotted lines
refer to the passive paths. The red lines represent one switch zone and the green lines represent the
other switch zone.
The LUN reports as device hdisk in each VIOS. The I/O rate from VIOS (device hdisk) to the
LUN uses all the paths that are established from VIOS to Storwize V7000. Multipath across
these paths and load balancing and I/O through a preferred node are handled by the VIOS
multipath driver. The two hdisks that represent the LUN in each VIOS are mapped to IBM i
through different virtual SCSI adapters. Each of the hdisks reports in to IBM i as a different
path to the same LUN (disk unit). IBM i establishes multipath to the LUN by using both paths.
Both paths are active and the load balancing in a round-robin algorithm is used for the I/O
traffic.
IBM i uses two paths to the same LUN, each path through one VIOS to the relevant hdisk
connected with a virtual SCSI adapter. Both paths are active and the IBM i load balancing
algorithm is used for I/O traffic. Each VIOS has eight connections to the Storwize V7000;
therefore, eight paths are established from each VIOS to the LUN. The I/O traffic through
these paths is handled by the VIOS multipath driver.
When a connection with VIOS virtual SCSI is used, it is recommended to zone one physical port in
VIOS with all available ports in the Storwize V7000, or with as many ports as possible, to allow
load balancing. A maximum of eight paths are available from VIOS to the
Storwize V7000. The Storwize V7000 ports that are zoned with one VIOS port should be evenly spread
between the Storwize V7000 node canisters.
Spreading workloads across all components maximizes the utilization of the hardware
components. This includes spreading workloads across all the available resources. However,
it is always possible when sharing resources that performance problems can arise because of
contention on these resources.
To protect critical workloads, you can isolate them to minimize the chance that non-critical
workloads can affect the performance of critical workloads.
A storage pool is a collection of managed disks from which volumes are created and
presented to the IBM i system as LUNs. The primary property of a storage pool is the extent
size, which is 1 GB by default with Storwize V7000 release 7.1 and 256 MB in earlier
versions. This extent size is the smallest unit of allocation from the pool.
When you add managed disks to a pool, they should have the same performance
characteristics:
Same RAID level.
Roughly the same number of drives per array.
Workload isolation is most easily accomplished where each ASP or LPAR has its own
managed storage pool. This configuration ensures that you can place data where you intend
with I/O activity balanced between the two nodes or controllers on the Storwize V7000.
Important: Make sure that you isolate critical workloads. A best practice is to have only
IBM i LUNs on any storage pool (rather than mixed with non-IBM i). If you mix production
and development workloads in storage pools, make sure that you understand that this might
affect production performance.
All of the disk arms in the disk pools are shared among all the LUNs that are defined in that
disk pool. Figure 2-62 shows an example of a Storwize V7000 disk pool with three disk arrays
of V7000 internal disk arms (MDisks) and a LUN that was created in the disk pool, with the
LUN using an extent from each disk array in turn.
LUN size
The maximum LUN size that is supported by IBM i is 2 TB minus 1 byte; that is, 2 TB LUNs are not
supported by IBM i.
The number of LUNs that are defined often is related to the wait time component of the
response time. If there are insufficient LUNs, wait time often increases. The sizing process
determines the correct number of LUNs that are required to access the needed capacity,
which meets performance objectives.
For any ASP, define all the LUNs to be the same size where 80 GB is the recommended
minimum LUN size. A minimum of six LUNs for each ASP or LPAR is considered a best
practice.
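For example, a 1 TB ASP that is built from equal 80 GB LUNs requires 13 LUNs (1024 GB / 80 GB is approximately 12.8, rounded up), which also satisfies the six-LUN minimum.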
To support future product enhancements, it is considered a best practice that load source
devices be created with at least 80 GB. A smaller number of larger LUNs reduces the number
of I/O ports that are required on the IBM i and the Storwize V7000. In an IASP environment,
you might use larger LUNs in the IASPs, but SYSBAS might require more, smaller LUNs to
maintain performance.
The Disk Magic tool does not always accurately predict the effective capacity of the ranks
depending on the DDM size that is selected and the number of spares that are assigned. The
IBM Capacity Magic tool can be used to verify capacity and space utilization plans.
To calculate the number of LUNs that you require, you can use a tool, such as the IBM
Systems Workload Estimator, which is found at the following website:
http://www-912.ibm.com/wle/EstimatorServlet
The use of SSDs with the Storwize V7000 is through Easy Tier. Even if you do not plan to
install SSDs, you can still use Easy Tier to evaluate your workload and provide information
about the benefit that you might gain by adding SSDs. Easy Tier is included with the Storwize
V7000; however, on a Storwize V3700 it requires you to purchase a license and obtain a
license key.
When Easy Tier automated management is used, it is important to allow Easy Tier some
space to move data. Do not allocate 100% of the pool capacity, but leave some capacity
deallocated to allow for Easy Tier migrations.
There also is an option to create a disk pool of SSDs in Storwize V7000 and to create an
IBM i ASP that uses disk capacity from an SSD pool. The applications that are running in that
ASP can experience a performance boost.
Note: IBM i data relocation methods (such as ASP balancing and media preference) are
not available to use with SSD in Storwize V7000.
When you are installing the IBM i operating system with disk storage on a Storwize V7000,
you can select from one of the available Storwize V7000 LUNs when the installation prompts
you to select the load source disk.
When you are migrating from internal disk drives or from another storage system to the
Storwize V7000, you can use IBM i ASP balancing to migrate all of the disk capacity, except
the LoadSource. After the non-LoadSource data is migrated to the Storwize V7000 with ASP
balancing, you can then migrate the LoadSource by using the DST copy disk unit data
function to copy from the previous disk unit to the LUN in the Storwize V7000. The Storwize V7000
LUN must be of equal or greater size than the disk unit that was previously used for the
LoadSource. This type of migration can be done with the Storwize V7000 connections of
native, VIOS NPIV, and VIOS vSCSI.
LUN size is flexible. Choose the LUN size that gives you enough LUNs for good performance
according to the Disk Magic tool. A best practice is to start modeling at 80 GB.
It is equally important to ensure that the sizing requirements for your SAN configuration also
take into account the other resources that are required when Copy Services is enabled. Use
the Disk Magic tool to model the overheads of replication (Global Mirror and Metro Mirror),
particularly if you are planning to enable Metro Mirror.
It is also a good idea to perform a bandwidth sizing for Global Mirror and Metro Mirror.
Note: The Disk Magic tool does not support modeling FlashCopy® (which is also known as
Point-in-Time Copy functions). Make sure that you do not size the system to maximum
recommended utilizations if you want to also use FlashCopy snapshots for backups.
Each set of reports includes print files for the following reports:
System report: Disk utilization (required)
Component report: Disk activity (required)
Resource Interval report: Disk utilization detail (required)
System report: Storage pool utilization (optional)
If you have multiple servers that are attached to a storage subsystem, particularly if you have
other platforms that are attached in addition to IBM i, it is essential that you have a
performance tool that enables you to monitor the performance from the storage subsystem
perspective.
The IBM Tivoli® Storage Productivity Center licensed product provides a comprehensive tool
for managing the performance of the Storwize V7000. Collect data from all attached storage
subsystems in 15-minute intervals; if there is a performance problem, IBM asks for this data.
There is a simple performance management reporting interface that is available through the
IBM Storwize V7000 GUI. This provides a subset of the performance metrics that are
available from the IBM Tivoli Storage Productivity Center tool.
For more information about the IBM Tivoli Storage Productivity Center tool, see this website:
http://www-03.ibm.com/software/products/us/en/tivostorprodcent/
Change Volumes are not currently supported by PowerHA, so it is essential that you size the
bandwidth to accommodate the peaks or otherwise risk an effect to production performance.
There is a limit of 256 Global Mirror with change volume relationships per system.
The current zoning guidelines for mirroring installations advise that a maximum of two ports
on each Storwize V7000 node canister be used for mirroring, with the remaining two ports on
the node not having any visibility to any other cluster. If you are experiencing performance
issues when mirroring is operating, implementing zoning in this fashion might help to alleviate
this situation.
http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html
Note: The IBM DS8000 Storage system supports IBM i with a native disk format of 520
bytes.
IBM i performs the following change to the data layout to support the 512-byte blocks (sectors) of
external storage: for every page (8 x 520-byte sectors), it uses an additional ninth sector. It stores
the 8-byte headers of the 520-byte sectors in the ninth sector and therefore changes the
previous 8 x 520-byte blocks to 9 x 512-byte blocks. The data that was previously stored in 8
sectors is now spread across 9 sectors, so the required disk capacity on the Storwize V7000 is
9/8 of the IBM i usable capacity. Conversely, the usable capacity in IBM i is 8/9 of the
allocated capacity in the Storwize V7000.
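For example, if 90 GB of capacity is allocated on the Storwize V7000, approximately 80 GB (90 x 8/9) is usable by IBM i; equivalently, an IBM i configuration that requires 80 GB of usable capacity needs approximately 90 GB (80 x 9/8) to be configured on the Storwize V7000.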
Therefore, when a Storwize V7000 is attached to IBM i (whether through vSCSI, NPIV, or
native attachment), this mapping of 520:512 byte blocks means that you have a capacity
overhead and lose around 11% of the configured SAN storage capacity, which is not available
for IBM i user data.
The configuration of vSCSI and N-Port ID Virtualization (NPIV) host attachment are also
explained by using both GUI and CLI interfaces, as the VIOS handles I/O requests to the
storage.
Finally, the network setup between the client partitions and the external network is described.
This explanation starts with a simple setup that uses one virtual network for the
communication between the partitions and the external network and ends with multiple virtual
networks to isolate traffic between the partitions, forwarding these VLAN tags to the external
network, and balancing the VIOS partitions for network load.
If your setup depends on multipath disk drive connections and vSCSI, the Subsystem Device
Driver Path Control Module (SDDPCM) driver is required. The SDDPCM driver provides
specific SAN storage features, such as load balancing, that are not included in the native MPIO
multipath driver.
The following steps are not explained in detail. For more information about how to create a
logical partition and assign resources to it, see the IBM Redbooks publication IBM Power
Systems HMC Implementation and Usage Guide, SG24-7491:
1. Create the logical partition from the HMC. Select the IBM Power server that hosts the
VIOS from the HMC GUI on the left.
2. Click Configuration → Create Partition → VIO Server and assign the necessary
resources for storage and network.
3. Assign a minimum set of physical adapters to the VIOS partitions for presenting the
storage to the client partitions as needed. Add Ethernet adapters for VIOS management
and Ethernet virtualization.
Number of client LPARs (hosts) per physical Fibre Channel port (NPIV): 64
Number of physical volumes (LUNs) per volume group (storage pool): 1024
Important: When it comes to network setup, make sure that communication with the HMC
is possible by using the specified IP addresses. This is used for the setup of the partitions
and is necessary to communicate with the partitions after the setup to add resources
dynamically.
1. On the HMC GUI, activate the VIOS partition by selecting the partition and clicking
Activate → Profile, as shown in Figure 3-1.
The setup wizard starts the partition and scans for network hardware.
Important: Make sure that no console is connected to this partition because this
causes an error in hardware detection.
3. There are three possible ways to select a source for the installation media. A good
approach for IBM i clients is to use the physical media, which must be inserted into the
HMC optical drive. As an alternative, image files that are stored on a remote FTP site can be
imported; for example, from an existing IBM i partition.
If physical media is used for the installation, continue to step 8 on page 68.
5. In the Import Virtual I/O Server Image window (see Figure 3-4), enter all of the necessary
information and click OK.
6. Figure 3-5 shows the loading panel. Wait until the files are uploaded to the HMC and then
continue with the installation.
Tip: It is possible to manage the Virtual I/O Server Image Repository from the left side
of the HMC in the HMC Management menu.
7. Select the newly created repository to begin the installation process. Verify that the
network information is correct and then continue with step 10 on page 68.
8. As shown in Figure 3-6, all of the necessary information to install the VIOS partition from
physical media in the HMC optical drive was entered. Select the Virtual I/O Server
installation source of DVD and specify the correct network adapter and all the network
details for your VIOS partition. Click OK.
9. After all of the information is copied from the first installation media, a window opens, in
which you are prompted for the second media to be inserted into the HMC optical drive.
Exchange the DVD and click OK to continue the installation.
10.You see the message that is shown in Figure 3-7 after the installation is complete.
Important: Make sure that the VIOS installation media is inserted into the IBM Power
System optical drive and that the adapter that is connected to the drive is assigned to the
VIOS partition.
2. In the Activate Logical Partition window (see Figure 3-9), click Advanced. Select SMS and
click OK to change the boot mode. Activate the VIOS partition by clicking OK.
Figure 3-9 Starting the VIOS partition without starting the installation wizard
d. Browse to the VIOS partition and select it, as shown in Example 3-1.
Open in progress..
e. Select Continue with Install and press Enter. The system reboots after the
installation is complete.
6. Finish the installation by configuring an IP address to the VIOS partition, as shown in
Example 3-2.
PLATFORM SPECIFIC
Name: ethernet
Node: ethernet@0
Device Type: network
Physical Location: U78AA.001.WZSJG7K-P1-C7-T1
$ cfgassist
* Hostname [z1436cvios3]
* Internet ADDRESS (dotted decimal) [9.5.216.252]
The limitations of vSCSI and NPIV are listed in Table 3-1 on page 65. If the focus of the setup
is for performance, it is considered a best practice to use NPIV instead of vSCSI.
$ oem_platform_level
AIX Version:
6.1.0.0
$ tar xf devices.sddpcm.61.rte.tar
$ cfgassist install_update
Install Software
$ shutdown -restart
2. Transfer the downloaded files to the VIOS partition by using FTP. Run the commands that
are shown in Example 3-3. They include a change of environment from the VIOS command set
to the broader selection of AIX commands to find the AIX version. Then, return to the
VIOS environment, extract the downloaded archive, open the wizard to install the
SDDPCM drivers, and restart the VIOS.
Attention: If IBM i partitions are running and no secondary VIOS partition is configured
for failover, a restart of the VIOS partition affects them all.
3. Test whether the SDDPCM driver is working by switching to the OEM setup environment with
the oem_setup_env command and then running the pcmpath query device command.
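A minimal sketch of this check follows; the device output varies by configuration and is not shown here:
$ oem_setup_env
# pcmpath query device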
The disk is now visible within IBM i and can be added to the ASP as normal.
Tip: The location code V1-C6 that is shown in Example 3-5 references the VIOS
partition ID 1 (V1) and virtual adapter ID 6 (C6).
Example 3-5 Displaying disk mappings and adding a disk to the host
$ lsmap -all
$ lsmap -all
VTD vtscsi0
Status Available
LUN 0x8100000000000000
Backing device hdisk1
Physloc
U78AA.001.WZSJ2AV-P1-C2-T1-W500507680210DD40-L1000000000000
Mirrored false
For the virtual client adapters, the HMC generates two World Wide Port Numbers (WWPNs).
The first WWPN is used for the SAN zoning. The second WWPN is important for Live
Partition Mobility (LPM) and is used to create the partition profile at the remote system.
The server-client pairs can be mapped to the same physical HBA and are visible to the SAN
and to the physical HBA WWPNs. Often, only the virtual WWPNs are used for further SAN
zoning operations because they are connected to the IBM i client partition.
4. Figure 3-16 shows the active connections for the host bus adapter port fcs0. The
connections can be modified by selecting and clearing the check boxes. Click OK to save
the changes.
All disks that are zoned to the IBM i partition and connected to the IBM i virtual WWPN from
the IBM Storwize Family SAN storage are now visible within IBM i and can be added to the
ASP as normal. If disks are added or removed from the SAN, the changes are immediately
visible to the IBM i partition without further configuration of the VIOS.
In Example 3-7, the active mappings are displayed and the virtual server host bus adapter
is mapped to the IBM i partition. The physical location code of the adapter is set to V1-C7
because it is a virtual adapter with the adapter ID 7 on the VIOS partition.
Example 3-7 Displaying disk mappings and adding a virtual server port to the host bus adapter
$lsmap -all -npiv
Status:NOT_LOGGED_IN
FC name: FC loc code:
Ports logged in:0
Flags:1<NOT_MAPPED,NOT_CONNECTED>
VFC client name: VFC client DRC:
Status:LOGGED_IN
FC name:fcs0 FC loc code:U78AA.001.WZSJ2AV-P1-C2-T1
Ports logged in:1
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:DC02 VFC client DRC:U8205.E6C.06F759R-V4-C7
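The mapping that produces the LOGGED_IN state that is shown in Example 3-7 is created with the vfcmap command. The following is a minimal sketch only; the vfchost and fcs device names are assumptions for illustration:
$ vfcmap -vadapter vfchost0 -fcp fcs0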
Figure 3-18 on page 80 displays the complete VIOS networking setup, including all of these
steps.
(Figure 3-18 content: POWER Hypervisor with a control channel (VID 42) and ha_mode=sharing; VIOS1 (primary) with trunk adapter A (VIDs 2500 and 3983) and trunk adapter B (VID 3984) at priority 1; VIOS2 (backup) with trunk adapter C (VIDs 2500 and 3983) and trunk adapter D (VID 3984) at priority 2; connected to the external Ethernet network)
3.5.1 Optional step: LACP setup for adapter redundancy in each VIOS
The link aggregation control protocol (LACP) allows for Ethernet port redundancy. This
depends on the correct switch configuration. It is also a good idea to coordinate this
configuration with the network department.
A wizard guides you through the setup of LACP. Example 3-8 shows the LACP configuration
for ent0 and ent1.
Complete the following steps to set up the Ethernet adapter by using the HMC GUI:
1. From the HMC GUI, select Systems Management → Servers → IBM Power System.
2. Select the primary VIOS partition.
3. In the Tasks area, click Configuration → Manage Profiles → profile name.
5. In this step, a virtual trunk Ethernet adapter is created that can bridge traffic to the external
network and allow for multiple VLANs, as shown in Figure 3-20 on page 82.
If you do not plan on having multiple virtual networks, do not select the IEEE 802.1q
compatible adapter option, which adds the VLAN tagging options.
Tip: It is considered a best practice to use only one default VLAN for client traffic and
another internal VLAN for the shared Ethernet adapter high availability.
Tip: VLAN IDs can range 0 - 4095. The IDs 0 and 4095 are reserved. VLAN ID 1 is the
default VLAN ID for most hardware providers. Do not use these IDs for VLAN setups to
avoid unexpected behavior.
It is important that the VLAN ID that is added to the packets that are leaving the client
partitions (the default VLAN ID number) is not removed on entering the Virtual I/O Server.
This is what happens when a default configuration is used.
It is for this reason that the default VLAN ID on the Virtual I/O Servers virtual Ethernet
adapter (as shown in Figure 3-20 on page 82) must be set to an unused VLAN ID (in this
example, 42 is used) that is not used outside the system.
If a packet arrives with this VLAN ID (42), the VLAN ID tag is stripped off (untagged) and
cannot be forwarded to the correct external VLAN. If this was sent through the shared
Ethernet adapter to the external physical network, it arrives at the Ethernet switch as
untagged.
Click OK to save the settings and close the window.
6. Repeat steps 3 on page 80 through step 5 on page 81 for the secondary VIOS. Use a
priority value of two for the virtual trunk Ethernet adapter.
7. Add another virtual Ethernet adapter (as shown in Figure 3-21) for the control channel of
the SEA with as high a VLAN ID as possible. Make sure to coordinate with the network
department. When the SEA is created, it checks for a virtual Ethernet adapter, with no
access to the external network, and starts at the highest VLAN ID.
Tip: With an active RMC connection between VIOS and HMC, all changes to an active
partition can be done by using DLPAR operations. Take care to save all changes to the
LPAR profile afterward.
Tip: You can select each VLAN ID to get a list of the associated adapter resource name
in every LPAR.
5. The wizard returns to the Virtual Network Management window. If you select the VLAN ID
of 4094, you can see that this VLAN acts now as the control channel VLAN for the newly
created SEA, as shown in Figure 3-24.
Figure 3-24 HMC virtual network editor with control channel VLAN
The SEA configuration is now finished and client partitions with virtual Ethernet adapters
on VLAN 42 can access the external network. VLAN tags are not yet exported to the
external network. If you decide to use other VLAN tags in VLAN 2500 and 3983, continue
with 3.5.4, “Transporting VLANs to the external network” on page 85.
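For reference, the HMC wizard performs the equivalent of creating the SEA on the VIOS command line. The following is a minimal sketch only; the physical adapter, trunk adapter, control channel adapter, and default VLAN ID are assumptions for illustration:
$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 42 -attr ha_mode=auto ctl_chan=ent4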
Important: The two steps that are shown in Example 3-9 must be run in each VIOS
partition, starting with the primary site.
Example 3-9 Adding a trunk adapter to the SEA and changing the ha_mode attribute
ent6: SEA
ent2: trunk adapter which is already part of the SEA
ent5: new trunk adapter adding to the SEA
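The command lines of Example 3-9 are not reproduced here. As a hedged sketch only (assuming the
adapter names that are listed above and that VLAN load sharing between the two Virtual I/O
Servers is wanted), the two steps typically resemble the following chdev invocations:
$ chdev -dev ent6 -attr virt_adapters=ent2,ent5
$ chdev -dev ent6 -attr ha_mode=sharing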
Example 3-10 Exporting the VLAN IDs using the SEA adapter ent6
$ mkvdev -vlan ent6 -tagid 2500
ent7 Available
en7
et7
$ mkvdev -vlan ent6 -tagid 3983
ent8 Available
en8
et8
$ mkvdev -vlan ent6 -tagid 3984
ent9 Available
en9
et9
3.6 Customizing
This section describes how to ease navigation within the VIOS shell. This section also shows
how to set up the SNMP daemon to monitor VIOS performance remotely. The following topics
are covered:
3.6.1, “Changing ksh to bash style” on page 86
3.6.2, “Configuring monitoring by using SNMP” on page 86
3.6.3, “Creating an image repository for client D-IPL” on page 87
Use the o key to insert a new line and change to editing mode. Use the Esc key to exit editing
mode.
Outside of editing mode, you can use vi commands to navigate the file. Enter :d to delete a line
and :x to save the file and exit.
export ENV=/usr/ios/cli/.kshrc
export PS1="$(ioscli hostname)$ "
alias __A=`echo "\020"` #ctrl-p => up
alias __B=`echo "\016"` #ctrl-n => down
alias __C=`echo "\006"` #ctrl-f => right
alias __D=`echo "\002"` #ctrl-b => left
alias __H=`echo "\001"` # Home
alias __F=`echo "\005"` # End
set -o emacs
# exit
Add the next four lines to the end of the file, save, and exit:
VACM_GROUP group1 SNMPV2c public -
VACM_ACCESS group1 - - noAuthNoPriv SNMPv2c defaultView - defaultView -
COMMUNITY public public noAuthNoPriv 0.0.0.0 0.0.0.0 -
VACM_VIEW defaultView 1 - included -
# vi /etc/rc.tcpip
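The changes to make in /etc/rc.tcpip are not included in the extracted text. As an assumption
based on standard AIX behavior, the goal is typically to have the SNMP daemon start at boot and
to start it immediately; a sketch follows (run from the root shell that is reached with
oem_setup_env):
# Make sure that the snmpd start entry in /etc/rc.tcpip is not commented out
# (the exact wording can differ between AIX levels):
start /usr/sbin/snmpd "$src_running"
# Start the daemon immediately without a reboot:
startsrc -s snmpd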
Attention: Unlike image catalogs in IBM i, the images in the repository are not autoloaded.
Before you start, a virtual DVD drive must be created on the VIOS partition by using the VIOS
command line. Complete the following steps:
1. Create a virtual server SCSI adapter on the VIOS partition profile and a matching virtual
client SCSI adapter on the client partition that is used as alternative Load Source device.
2. Run the mkvdev command from the VIOS CLI for the created device (as shown in
Example 3-13 on page 88) to create a virtual optical device.
Tip: Use the -dev parameter with the mkvdev command to set a device name, such as
vtopt_iLPAR1 for later reference.
VTD vtopt0
Status Available
LUN 0x8100000000000000
Backing device
Physloc
Mirrored N/A
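Example 3-13 is not reproduced here. As a sketch only (assuming that vhost0 is the resource name
of the virtual server SCSI adapter that was created in step 1; the -dev name from the Tip can be
added as well), the virtual optical device that produces a listing like the one above can be
created and verified as follows:
$ mkvdev -fbo -vadapter vhost0
vtopt0 Available
$ lsmap -vadapter vhost0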
4. Figure 3-26 shows the Create Media Library window. Enter the media library size. The
maximum size depends on the VIOS disk that is used. Ensure that it is large enough to
store the image files. Click OK.
5. Transfer the image file by using FTP or an SCP client to the VIOS server.
Tip: You can delete the image file from the VIOS after it is copied from its location to the
repository.
8. Select the partition and virtual optical drive. Select the Force reassignment on running
partitions option and click OK, as shown in Figure 3-30. Use read-only as the access
mode to load the image into multiple virtual drives.
Example 3-14 Creating a repository, adding an image, and loading the image
$ lssp
Pool Size(mb) Free(mb) Alloc Size(mb) BDs Type
rootvg 51136 27008 64 0 LVPOOL
$ mkrep -sp rootvg -size 26G
Virtual Media Repository Created
Repository created within "VMLibrary" logical volume
$ mkvopt -name IBASE71G -file I_BASE_01_710G.iso -ro
$ loadopt -disk IBASE71G -vtd vtopt0
$ lsrep
Size(mb) Free(mb) Parent Pool Parent Size Parent Free
26515 24134 rootvg 51136 384
3.7 Troubleshooting
This section describes how to improve VIOS performance and find bottlenecks. This section
also shows how to restart the Resource Monitoring and Control (RMC) connection and which
information is necessary for a support call and how to gather that information.
Figure 3-32 shows information about average and maximum I/O activity within the
measured interval.
Figure 3-34 displays the average CPU usage. The VIOS footprint is small if you use NPIV.
The CPU load increases with a higher virtualized network load.
Figure 3-35 shows the memory usage. If the VIOS starts to page, increase the size of the
memory by using a DLPAR operation without the need for downtime.
Alternatively, the VIOS Advisor tool can be downloaded from this website:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home/wiki/Power%20Systems/page/VIOS%20Advisor
The tool allows for a longer collection period of up to 1440 minutes and provides greater detail
on I/O activity and network configuration.
# /usr/sbin/rsct/install/bin/recfgct
0513-071 The ctcas Subsystem has been added.
0513-071 The ctrmc Subsystem has been added.
0513-059 The ctrmc Subsystem has been started. Subsystem PID is 9502788.
# /usr/sbin/rsct/bin/rmcdomainstatus -s ctrmc
Management Domain Status: Management Control Points
I A 0x69e3763d06415474 0002 9.5.103.98
I A 0x48864326026f84b4 0001 9.5.104.205
# exit
Gather information about the type, model, and serial number of the IBM Power System.
Use the commands that are shown in Example 3-17 to collect information about the installed
operating system levels.
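Example 3-17 is not reproduced here. As a sketch, the VIOS level and the underlying AIX level can
be collected with commands such as the following (the oem_setup_env step is needed only for the
AIX-level command):
$ ioslevel
$ oem_setup_env
# oslevel -s
# exit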
To collect information about the virtual adapter configuration (including the error logs), run the
snap command and download the resulting snap.pax.Z file by using FTP to your workstation.
The process resembles what is shown in Example 3-18 and runs for several minutes.
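Example 3-18 is not reproduced here. As a sketch, the collection on the VIOS typically amounts to
running the snap command from the padmin shell and then downloading the archive that it creates
(the file location shown here is an assumption; check the command output for the actual path):
$ snap
$ ls /home/padmin/snap.pax.Z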
An unplanned outage whose duration or recovery time exceeds business expectations
can have severe implications, including unexpected loss of reputation, client loyalty, and
revenue. Clients who did not effectively plan for the risk of an unplanned outage, never
completed the installation of a high availability (HA) solution, or did not have a tested tape
recovery plan in place are especially exposed to negative business effects.
This chapter describes various usage scenarios for IBM PowerHA SystemMirror for i with the
SAN Volume Controller and IBM Storwize storage servers. This chapter also describes the
possibilities, positions them against each other, and gives you an overview of how to configure
them.
Because data resiliency is managed within the IBM i storage management architecture, there
is no operator involvement, just as there is no operator involvement with RAID 5 or disk
mirroring.
As shown in Figure 4-1, the architecture of PowerHA SystemMirror for i provides several
different implementation possibilities.
Metro Mirror: SAN hardware replication, synchronous; IBM DS8000 or Storwize family.
Global Mirror: SAN hardware replication, asynchronous; IBM DS8000 or Storwize family.
LUN Level Switching: a single copy of the IASP switched between LPARs or servers; IBM DS8000 or Storwize family.
FlashCopy: SAN hardware point-in-time replication; IBM DS8000 or Storwize family.
Geographic Mirroring: IBM i storage management page-level replication, synchronous or asynchronous; internal or external storage; supports direct, VIOS, and IBM i hosted storage.
Geographic mirroring offers IBM i clients an IBM i-based page-level replication solution for
implementing high availability and disaster recovery with any kind of IBM i-supported internal
or external storage solution. With IBM System Storage DS8000, SAN Volume Controller, or
IBM Storwize family storage servers, clients can use storage-based remote replication
functions for high availability and disaster recovery, LUN-level switching for local high
availability, and FlashCopy for reducing save window outages by enabling the creation of a
copy that is attached to a separate partition for offline backup to tape.
Which of these solutions makes the most sense in your environment depends largely on the
storage infrastructure that is in place.
For more information about the Storwize Copy Services functions, see the IBM Redbooks
publication, IBM System Storage SAN Volume Controller and Storwize V7000 Replication
Family Services, SG24-7574.
Figure 4-2 shows a typical setup with PowerHA and Metro Mirror. Each system has its own
system ASP that contains the operating system, license programs, and some other objects.
The IASP that contains application programs and data is mirrored from the primary storage
system to the secondary storage system. Data is kept synchronous on a page level.
Metro Mirror is commonly used with PowerHA to achieve a high availability and disaster
recovery solution for metropolitan distances up to 300 km when Storwize V3700, V5000,
V7000, or SAN Volume Controller is used.
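For reference, the remote copy objects can also be inspected, and if necessary created, directly
from the Storwize or SAN Volume Controller CLI. The following is a sketch only: the volume,
partnership, and relationship names are made up, and in the scenarios in this chapter the copy
sessions are otherwise handled through the PowerHA interfaces that are described later:
lspartnership
lsrcrelationship
mkrcrelationship -master IASP1_LUN -aux IASP1_LUN_DR -cluster REMOTE_SVC -name IASP1_MM
Adding the -global option to mkrcrelationship creates a Global Mirror relationship instead of a
Metro Mirror relationship.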
Consistency of the data at the remote site is always maintained. However, during a failure
condition, the data at the remote site might be missing recent updates that were not sent or
that were in-flight when a replication failure occurred. Therefore, the use of journaling to allow
for proper crash consistent data recovery is of key importance. It is also considered a best
practice to ensure crash consistent data recovery even when remote replication is not used.
This allows you to recover from a loss of transient modified data in IBM i main storage after a
potential sudden server crash due to a severe failure condition.
The Storwize family uses a log file for Global Mirror to maintain write ordering. The log also
helps prevent host write I/O performance impacts when the host writes to a disk sector that is
being transmitted to the remote site, or that is still waiting to be transmitted because of
bandwidth limits. The Storwize family also uses shared sequence numbers to aggregate
multiple concurrent (and dependent) write I/Os to minimize its Global Mirror processing
overhead.
Global Mirror is commonly used with PowerHA to achieve disaster recovery where you want
to put some distance between your sites. You want to combine it with a local high availability
solution because the asynchronous implementation of Global Mirror does not ensure that the
last transaction reached the backup system.
It is supported for NPIV attachment and for native attachment of the IBM Storwize family. It is
also supported for an SAN Volume Controller split cluster environment, which can make it an
attractive solution, especially in heterogeneous environments where SAN Volume Controller
split cluster is used as the basis for a cross-platform, two-site, high-availability solution.
The initial support for split cluster that was delivered in version 5.1 contained the restriction
that all communication between the SAN Volume Controller node ports cannot traverse
Inter-Switch Links (ISLs). This limited the maximum supported distance between failure
domains. Starting with SAN Volume Controller version 6.3, the ISL restriction was removed,
which allowed the distance between failure domains to be extended to 300 kilometers.
Additionally, in SAN Volume Controller version 6.3, the maximum supported distance for
non-ISL configurations was extended to 40 kilometers.
The SAN Volume Controller split cluster configuration provides a continuous availability
platform whereby host access is maintained if any single failure domain is lost. This
availability is accomplished through the inherent active architecture of the SAN Volume
Controller with the use of volume mirroring. During a failure, the SAN Volume Controller
nodes and associated mirror copy of the data remain online and available to service all host
I/O.
The split cluster configuration uses the SAN Volume Controller volume mirroring function.
Volume mirroring allows the creation of one volume with two copies of MDisk extents. The two
data copies, if placed in different MDisk groups, allow volume mirroring to eliminate the effect
on volume availability if one or more MDisks fail. The resynchronization between both copies
is incremental and is started by the SAN Volume Controller automatically. A mirrored volume
has the same functions and behavior as a standard volume. In the SAN Volume Controller
software stack, volume mirroring is below the cache and copy services. Therefore,
FlashCopy, Metro Mirror, and Global Mirror have no awareness that a volume is mirrored. All
operations that can be run on non-mirrored volumes can also be run on mirrored volumes.
Important: Although SAN Volume Controller split cluster is a supported environment with
PowerHA, there is no automatic change of the SAN Volume Controller preferred node
when a failover or switchover is done. Because of the latency that is involved when you are
reading from the remote SAN Volume Controller node, make sure that the distance
between the SAN Volume Controller nodes is close enough to meet your disk response
time expectations. In addition, you must have separate Fibre Channel adapters for the
system ASP and the IASP. Also, ensure that you set the preferred node correctly when you
are creating the vDisks on the SAN Volume Controller because changing this setting later
on requires the attached IBM i LPAR to be powered down.
4.1.4 FlashCopy
The IBM Storwize family FlashCopy function provides the capability to perform a point-in-time
copy of one or more volumes (VDisks). In contrast to the remote copy functions of Metro
Mirror and Global Mirror, which are intended primarily for disaster recovery and
high-availability purposes, FlashCopy typically is used for online backup or creating a clone of
a system or IASP for development, testing, reporting, or data mining purposes.
FlashCopy also is supported across heterogeneous storage systems that are attached to the
IBM Storwize family, but only within the same Storwize family system. Up to 4096 FlashCopy
relationships are supported per IBM Storwize family system, and up to 256 copies are
supported per FlashCopy source volume. With IBM Storwize family version 6.2 and later, a
FlashCopy target volume can also be a non-active remote copy primary volume, which eases
restores from a previous FlashCopy in a remote copy environment by using the FlashCopy
reverse function.
FlashCopy is commonly used to minimize downtimes for backup operations to tape. Instead
of saving data from the production system, you create a FlashCopy of that data and attach it
to another system where the actual save to tape is done, as shown in Figure 4-4 on page 103.
In addition, FlashCopy can be used to provide a kind of disaster recovery option when large
changes are made to your application. Create the FlashCopy before you start the application
change. When you find that something does not work as expected, you can revert to the state
before the application changes without waiting for a lengthy restore operation.
For more information about how an IASP is suspended for a brief period, the FlashCopy is
taken, and the BRMS setup is done, see Chapter 6, “BRMS and FlashCopy” on page 173.
Generating the SSH key requires the following IBM i products to be installed:
5770-SS1 option 33 (Portable Application Solution Environment)
5733-SC1 base and option 1 (IBM Portable Utilities for i and OpenSSH, OpenSSL, zlib)
Generating the key file is done on IBM i from Qshell. Example 4-1 shows the commands in
Qshell to generate the key and the files that are created. Ensure that the directory you specify
in the keygen command exists before the command is issued. Also, the user profile you are
using when you are generating the key can have a maximum length of eight characters only.
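Example 4-1 is not reproduced here. As a sketch only (the directory matches the key file path
that is used in the copy descriptions later in this chapter; adjust it to your environment, and
note that the empty -N value creates the key without a passphrase), the key pair can be
generated from Qshell as follows:
> mkdir -p /QIBM/UserData/HASM/hads/.ssh
> ssh-keygen -t rsa -f /QIBM/UserData/HASM/hads/.ssh/id_rsa -N ''
This creates the private key file id_rsa and the public key file id_rsa.pub in that directory.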
You then must import the id_rsa.pub file into a user on the Storwize family system. This user
must have a minimum role of Copy Operator to perform the functions that are needed for
PowerHA to work properly. If you want to use LUN level switching, make sure that the user
has a minimum role of Administrator because PowerHA must change host attachments when
switching the IASP to the secondary system.
Transfer the file to your PC and import it to the Storwize user, as shown in Figure 4-5.
Figure 4-5 Importing SSH public key file to SAN Volume Controller user
Make sure to distribute the id_rsa file to the same directory on all nodes in your cluster. The
user profile QHAUSRPRF must have at least *R data authority to the key file on each system.
4.2.2 SAN Volume Controller split cluster and LUN level switching
This section provides an example of configuring SAN Volume Controller split cluster and LUN
level switching. In this example, it is assumed that SAN Volume Controller split cluster was set
up correctly. Also, ensure that SAN zoning is set up so that the IASP can be reached from the
primary cluster node and from the secondary cluster node.
Complete the following steps by using the GUI. These steps cover creating the cluster,
creating the cluster resource group, and creating the ASP copy descriptions.
To complete these steps by using a CLI, see "Using CLI commands to set up SAN Volume
Controller split cluster and LUN level switching" on page 136:
1. Log in to the IBM Systems Director Navigator for i by using the following URL:
http://your_server_IP_address:2001
3. As shown in Figure 4-7, there is no configured cluster. Click Create a new Cluster to start
the cluster creation wizard.
4. On the Create Cluster welcome panel, click Next to continue to the cluster setup wizard.
5. As shown in Figure 4-8, on the Create Cluster Name and Version panel, you must provide
a name for the cluster. With current fixes installed, the PowerHA version is 2.2. Click Next.
Figure 4-8 Cluster creation, specifying the cluster name and version
Tip: It is a common practice to use two different IP connections for the cluster heartbeat
so that if there is a partial network failure, cluster communication is still available.
7. In the Create Cluster Additional Nodes panel, specify the information for the second node
in the cluster. Click Add, as shown in Figure 4-10, and specify the node name and the IP
addresses for the second cluster node. You can also specify whether clustering is started
after the creation of the cluster is finished.
9. In the Create Cluster Message Queue panel, you must decide whether you want to use a
cluster message queue, as shown in Figure 4-12. When a message queue is specified, a
message is sent to that message queue on the secondary system when there is a failure
on the primary system. You can also specify how long that message waits for an answer
and what the standard action is after that time passes. Click Next.
11.Status messages appear as the cluster is created, as shown in Figure 4-14. Click Close to
return to the PowerHA main menu.
12.From the PowerHA main menu (as shown in Figure 4-15), select Cluster Nodes to edit
the cluster nodes that were created.
14.Right-click each node in the cluster and select Properties, as shown in Figure 4-17.
15.Figure 4-18 shows the properties for cluster node Z1436B23 from our example. There is
no entry for the device domain. Click Edit.
Note: If an IASP exists on one of the cluster nodes, that node must be added first into
the device domain. Only the first node in a device domain can have an existing IASP.
Nodes with an existing IASP cannot be added to an existing device domain.
17.Device domains must be defined for each node in the cluster. As shown in Figure 4-20, the
cluster node overview shows the active cluster nodes with their device domains.
19.In the PowerHA Cluster Resource Groups window, select Create Cluster Resource
Group from the Select Action drop-down menu, as shown in Figure 4-22.
20.In the Create Cluster Resource Group Name and Type panel, specify a name for the
cluster resource group. The type must be Device, as shown in Figure 4-23, because you
want to switch an IASP device that uses this cluster resource group. Click Next.
Figure 4-23 Create Cluster Resource Group, specifying the name and type
Figure 4-24 Create Cluster Resource Group, recovery domain for the primary node
22.Add the secondary node into the recovery domain. Specify Backup as the role. Ensure
that the site name you provide here is identical to the site name that you provided for the
primary node, as shown in Figure 4-25 on page 121. Click Add to add the node into the
recovery domain.
Figure 4-25 Create Cluster Resource Group, recovery domain for the backup node
24.In the Create Cluster Resource Group Exit Program panel, you can define a cluster
resource group exit program. This program can be used to start functions that are specific
to your environment when failover or switchover is done; for example, to start a certain
application. For this example, an exit program is not used, as shown in Figure 4-27. Click
Next.
Figure 4-28 Create Cluster Resource Group, defining devices that are controlled by the cluster
26.You can add multiple devices into one cluster resource group. Click Next when the devices
you need are part of the cluster resource group, as shown in Figure 4-29.
Figure 4-30 Create Cluster Resource Group, specifying a failover message queue
28.The Create Cluster Resource Group Summary panel that is shown in Figure 4-31 displays
the configuration options that were entered. Click Finish to create the cluster resource
group.
The cluster resource group is created and is in an inactive status, as shown in Figure 4-32.
30.In the PowerHA Independent ASPs window, select ASP Sessions and Copy
Descriptions from the Select Action drop-down menu, as shown in Figure 4-34.
32.In the Add SVC ASP Copy Description window, you must define the connection to the
SAN Volume Controller and the disk range and host connections from the SAN Volume
Controller setup for the LUNs of the IASP. As shown in Figure 4-36, enter the following
information:
– Enter a name for the ASP Copy Description.
– The ASP Device, the Cluster Resource Group, and the Site name must be the names
that were specified previously in our example.
– User Id is the user ID on the SAN Volume Controller that you imported the SSH key
into. This field is case-sensitive. Make sure that you enter the user ID exactly as you
specified it in the SAN Volume Controller.
– The Secure Shell Key File points to the SSH private key that is stored in the same
directory on all nodes in the cluster.
– The IP Address is the network address of the SAN Volume Controller.
– Enter the disk range that is used for the IASP by clicking Add to add the information to
your configuration setup. This information can be found in the SAN Volume Controller
GUI. You can also add several disk ranges here. Make sure that the Use All Host
Identifiers option is selected.
Figure 4-36 SAN Volume Controller ASP Copy Description creation, adding a virtual disk range
Figure 4-37 SAN Volume Controller ASP Copy description creation, adding a host identifier for the primary node
34.Click Add to add the Recovery Domain Entry, as shown in Figure 4-38. Make sure that
you add the host identifier information for each node in the cluster.
Figure 4-38 SAN Volume Controller ASP Copy description creation, adding the primary node
Figure 4-39 SAN Volume Controller ASP Copy description creation, recovery domain completed
Figure 4-40 shows the completed setup. Click Back to Independent ASPS.
Figure 4-40 SAN Volume Controller ASP Copy description creation finished
36.Start the cluster resource group. From the main PowerHA menu, click Cluster Resource
Groups.
Using CLI commands to set up SAN Volume Controller split cluster and
LUN level switching
You can use the CLI commands that are shown in Example 4-2 to complete this setup.
Example 4-2 CLI commands to set up the SAN Volume Controller split cluster and LUN level switching
On the backup system:
CRTDEVASP DEVD(IASP1) RSRCNAME(IASP1)
STRCRG CLUSTER(SVC_CLU) CRG(SVC_CRG)
For more information about Metro Mirror, Global Mirror, and LUN level switching, see 4.1.1,
“Metro Mirror” on page 99, 4.1.2, “Global Mirror” on page 99, and 4.1.3, “LUN level switching”
on page 100.
To complete these steps by using a CLI interface, see "Using CLI commands to set up LUN
level switching with Global Mirror in a three-node cluster" on page 149.
3. In the Add Cluster Node window, enter the node name and the cluster IP addresses that
are used for cluster communication, as shown in Figure 4-43. Click OK.
Figure 4-43 Three-node cluster, specifying the third cluster node details
4. As shown in Figure 4-44, the cluster node is added and started. Click Close.
5. As shown in Figure 4-45, our example now has a three-node cluster. The third node was
not added to the device domain. You can add the node manually to the device domain or
PowerHA can handle this for you when the cluster node is added into the cluster resource
group. Click Back to PowerHA to return to the main PowerHA menu.
9. In the Add Recovery Domain Node window (as shown in Figure 4-48), enter the following
information:
– Node name of the third node.
– The node role is set to Backup with an order number of 2. In this example, there
already is a primary node and a first backup node that is defined in the cluster setup.
– Because this is a different site from the site that was used for LUN level switching, you
must specify a different site name here. Think of the site name as a representation for
each copy of the IASP in the recovery domain.
Click OK.
10.The new node is not yet added into the cluster device domain, as indicated by the message
that is shown in Figure 4-49. Click OK and the new node is automatically added to the
same device domain that is used by the other nodes in the recovery domain.
12.A status message is then displayed that shows that the third node was added to the
recovery domain, as shown in Figure 4-51. Click Close.
13.The recovery domain now consists of three cluster nodes, as shown in Figure 4-52. Click
Back to Cluster Resource Groups and return to the main PowerHA menu.
16.In the PowerHA Independent ASPs, ASP Sessions and Copy Descriptions window, select
Add SVC ASP Copy Description from the Select Action pull-down menu (as shown in
Figure 4-54) because this example is working with an SAN Volume Controller/Storwize
V7000 environment.
Figure 4-54 Three-node cluster: Adding a SAN Volume Controller copy description
18.Enter the numbers for your virtual disk range from the SAN Volume Controller, as shown in
Figure 4-56. Click Add to add this information into the ASP copy description. You can
specify multiple disk ranges here. Click OK.
Figure 4-57 Three-node cluster: Defining a SAN Volume Controller ASP session
20.In the Start SVC ASP Session window, specify the following information, as shown in
Figure 4-58:
– Type: Defines the relationship between the ASP copy descriptions. This example set
up a Global Mirror connection between the primary and the secondary LUNs.
– Specify the Cluster Resource Group that is used.
– You can also decide whether replication between the LUNs is reversed after doing a
switchover or a failover. By default, replication is not reversed after a failover so that the
original data in the primary IASP is preserved to provide a chance for error analysis.
Click OK.
Figure 4-58 Three-node cluster: Starting an SAN Volume Controller ASP session
The three-node cluster that uses LUN level switching for local high availability and Global
Mirror for remote disaster recovery is now ready for use.
Using CLI commands to set up LUN level switching with Global Mirror in
a three-node cluster
If you prefer to set up this environment by using CL commands, use the commands that are
shown in Example 4-3.
The main differences are the configuration on the Storwize system, where you specify
Metro Mirror instead of Global Mirror. You must set up a cluster with two nodes in it. You then
create the cluster resource group. Make sure to specify different site names for the node
identifiers in the recovery domain. Create two SAN Volume Controller copy descriptions that
point to the LUNs that are used as Metro Mirror source volumes and Metro Mirror target
volumes. When you are starting the SAN Volume Controller session, make sure to specify
*METROMIR as the session type, as shown in the sketch that follows.
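As a sketch only (the session name METRO1 and the copy description names METRO_SRC and
METRO_TGT are made up for illustration), starting such a session resembles the following command:
STRSVCSSN SSN(METRO1) TYPE(*METROMIR) ASPCPY((METRO_SRC METRO_TGT))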
For more information about setup instructions for this environment, see PowerHA
SystemMirror for IBM i Cookbook, SG24-7994.
5. Add the ranges of the newly created LUN ranges to the corresponding ASP copy
descriptions. You can use the Change SAN Volume Controller Copy Description
(CHGSVCCPYD) command or use the Navigator GUI by completing the following steps:
a. From the Independent ASPs link in the PowerHA GUI, select ASP Sessions and Copy
Descriptions from the Select Action pull-down menu, as shown in Figure 4-59.
c. In the Properties window, click Edit to change the ASP Copy Description, as shown in
Figure 4-61.
e. Enter the disk range of the newly added LUNs and click Add, as shown in Figure 4-63.
Chapter 5. Live Partition Mobility
This chapter describes the general steps that are part of a Live Partition Mobility operation. It
gives an overview of the prerequisites to use Live Partition Mobility and guides you through
the setup process.
Because Live Partition Mobility works only when the current production system is up and
running, it is not a replacement for another high availability solution. Instead, it is another
feature that you want to use with your high availability solution.
Live Partition Mobility can help in reducing planned downtimes for hardware maintenance or
replacement of systems and in moving logical partitions to another system to achieve a more
balanced workload distribution. As the complete logical partition is moved to a new system
and nothing remains on the original system, Live Partition Mobility cannot help in scenarios
where you need access to one environment when you are performing productive work on
another environment. This includes operating system maintenance or application
maintenance.
The following steps provide an overview of the process that is involved in using Live Partition
Mobility to move a running LPAR to a new system:
1. As a starting point in Figure 5-1, the IBM i client partition is running on System #1. On
System #2, there is a Virtual I/O Server that is running that is connected to the storage
system that System #1 is using. As shown in Figure 5-1, there is no configuration that is
available on System #2 for the IBM i LPAR that we want to migrate to that system.
The environment is checked for required resources, such as main memory or processor
resources.
Figure 5-1 Live Partition Mobility, checking the environment for required resources
2. If this check is successful, a shell LPAR is created on System #2, as shown in Figure 5-2.
Figure 5-2 Live Partition Mobility, a shell LPAR is created on System #2
3. Virtual SCSI or NPIV devices are created in the virtual target VIOS server and in the shell
partition to give the partition access to the storage subsystem that is used by System #1,
as shown in Figure 5-3.
Figure 5-3 Live Partition Mobility, virtual SCSI or NPIV devices are created on the target VIOS and the shell partition
5. When the system reaches a state where only a small number of memory pages were not
migrated, System #1 is suspended, as shown in Figure 5-5. From this point, no work is
possible on System #1. Users notice that their transactions seem to be frozen. The
remaining memory pages are then moved from System #1 to System #2.
Figure 5-5 Live Partition Mobility, System #1 is suspended and the remaining memory pages are moved to System #2
If no licenses are available on the secondary system, the 70-day grace period
applies and you can use the system for 70 consecutive days. When you are performing
multiple Live Partition Mobility operations, this 70-day grace period restarts after each
migration. To work in this way, licenses must be available on the source system. The source
and target systems must belong to the same company or must be leased by the same
company. In addition, the source system must be in a processor group that is higher than or
equal to that of the target system. To receive support when working on the target system, it is
recommended to have at least one IBM i license with a valid software maintenance
contract on that system.
5.3.1 Requirements
The following minimum prerequisites for the use of Live Partition Mobility in an IBM i
environment must be met on the source system and target system:
POWER7® server with firmware level 740_40 or 730_51, or higher
HMC V7R7.5 or later
VIOS 2.2.1.4 (FP25-SP2) or later
IBM i 7.1 Technology Refresh 4 or later
PowerVM Enterprise Edition
All VIOS virtual adapters assigned as required
If NPIV is used, both logical WWPNs of a virtual FC server adapter must be zoned
because the second WWPN is used on the destination server.
The VLAN ID for the IBM i client that is to be migrated must be identical on the source and target system.
All I/O for the LPAR that you want to move to another system must be virtual and all disk
storage for that LPAR must be attached externally by using VSCSI or NPIV. Access to that
storage must be shared between VIO servers on the source and the target system.
No physical adapters can be assigned to the IBM i LPAR. The IBM i LPAR must be set to
Restricted I/O, which is done in the Properties section of the LPAR, as shown in Figure 5-8.
Setting this parameter requires an IPL of the LPAR. It prevents the addition of physical
adapters to the LPAR.
The attribute for the mover service partition is set in the properties of the VIO server LPAR, as
shown in Figure 5-10. The attribute becomes active immediately and does not require an IPL
of the LPAR.
All VIO servers that are taking part in the Live Partition Mobility activity must have a working
RMC connection. If VIO servers are cloned by using copy operations, this is not the case
because their ct_node_id is not unique. In this case, the error message that is shown in
Figure 5-11 is displayed. For more information about the steps that are required to set the
ct_node_id, see 3.7.2, "Restarting the RMC connection" on page 95.
Figure 5-11 Live Partition Mobility, VIO server without RMC connection
Ensure that the memory region sizes of the source and the target system are identical. If this
setting is different on the systems that are involved, the validation process displays the error
message that is shown in Figure 5-12. If you must change this setting, open the ASMI
interface for the system that you want to change. The value can be found by clicking
Performance Setup → Logical Memory Block Size. Changing the memory region size
requires a complete shutdown of the server.
Figure 5-12 Live Partition Mobility, error for different memory region sizes
2. Select the destination system to which you want to move the LPAR, as shown in
Figure 5-14. Click Validate.
3. After the validation finishes without errors, you can start the migration by clicking Migrate
as shown in Figure 5-15. If there are multiple VIO servers available on the target system to
provide I/O for the migrated LPAR, the Live Partition Mobility process provides a
suggestion for the virtual storage assignment. This predefined setting can also be
changed.
4. During the migration, a status window displays the progress, as shown in Figure 5-16.
Figure 5-18 Live Partition Mobility error message if the virtual optical device is not deallocated
Tape devices must be varied off from the IBM i side. If a tape is still varied on, this does not
result in an error message during the validation phase. During the actual migration, the
error message that is shown in Figure 5-19 is displayed and the Live Partition Mobility
operation stops.
Figure 5-19 Live Partition Mobility error message if tape device is varied on
In addition, the information that is shown in Figure 5-20 can be found in the QSYSOPR
message queue of the LPAR that you attempted to move.
Figure 5-20 Live Partition Mobility QSYSOPR message if tape device not varied off
Figure 5-21 Live Partition Mobility warning message regarding slot numbers
If you mapped two virtual Fibre Channel adapters to one physical port for the LPAR on the
source system, the error message that is shown in Figure 5-22 is displayed. The same
error message appears if the Live Partition Mobility operation results in a setup on the
target system where two virtual Fibre Channel adapters are mapped to one physical port.
Figure 5-22 Live Partition Mobility error message for physical device location not found
Chapter 6. BRMS and FlashCopy
This chapter provides information about how an IASP is suspended for a brief period, the
FlashCopy is taken, and the BRMS setup is done.
Figure 6-1 PowerHA environment with backup LPAR z1436b22, primary LPAR z1436b23, and high availability LPAR z1436c23 (each with ASP1), with backup and restore running on the backup LPAR
PowerHA provides this integration. There are IBM i commands available with PowerHA that
communicate with the storage system by using Secure Shell (SSH) communication to run the
required operations.
Preparation must be done on another LPAR. You cannot attach this copy of the IASP to your
backup node. The FlashCopy consistency groups and FlashCopy mappings are automatically
created when you start a FlashCopy session from the IBM i.
6.2.2 Changes in PowerHA to add the third node into the cluster
As shown in the example in Figure 6-1 on page 174, the two-node cluster for high availability
is already set up with node z1436b23 as the primary system and node z1436c23 as the
secondary system. To use FlashCopy and BRMS, complete the following steps to extend that
configuration to include a third node z1436b22:
1. On the third node, create a device description for the IASP, as shown in the following
example:
CRTDEVASP DEVD(IASP1) RSRCNAME(IASP1)
2. On one of the existing cluster nodes, add the third node into the cluster, as shown in the
following example:
ADDCLUNODE CLUSTER(SVC_CLU) NODE(Z1436B22 ('9.5.216.230' '192.168.10.22'))
3. Add the new cluster node into the existing device domain, as shown in the following
example:
ADDDEVDMNE CLUSTER(SVC_CLU) DEVDMN(SVC_DMN) NODE(Z1436B22)
4. Add an SAN Volume Controller copy description that points to the newly created LUNs, as
shown in the following example:
ADDSVCCPYD ASPCPY(FLASHSVCB) ASPDEV(IASP1) CRG(*NONE) SITE(*NONE)
NODE(Z1436B22) SVCHOST(powerha '/QIBM/UserData/HASM/hads/.ssh/id_rsa'
'9.5.216.236') VRTDSKRNG((8 11 *ALL))
Note: For the target copy description that is to be used for FlashCopy, the cluster
resource group and cluster resource group site must be *NONE. The node identifier
must be set to the cluster node name that owned the target copy of the IASP.
A FlashCopy can now be taken by using the STRSVCSSN command, as shown in Figure 6-2.
Figure 6-2 Using the STRSVCSSN command to start FlashCopy
Issuing an End SVC ASP Session (ENDSVCSSN) command, as shown in Figure 6-3, ends the
FlashCopy relationship. The consistency group is deleted by default.
Figure 6-3 Using the ENDSVCSSN command to end FlashCopy
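As a sketch of the command strings behind Figure 6-2 and Figure 6-3 (the session name and copy
description names are the ones that are used in this chapter; remaining parameters are left at
their defaults):
STRSVCSSN SSN(FLASH1) TYPE(*FLASHCOPY) ASPCPY((IASP_MM FLASHSVCB))
ENDSVCSSN SSN(FLASH1)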
BRMS provides the IBM i server with support for policy-oriented setup and the running of
backup, recovery, archive, and other removable-media-related operations. BRMS uses a
consistent set of intuitive concepts and operations that can be used to develop and implement
a backup strategy that is tailored to your business requirements.
For more information about BRMS, see Backup Recovery and Media Services for OS/400: A
Practical Approach, SG24-4840.
3. In the Change Network Group display (see Figure 6-4), change the communication
method to *IP and add your IBM i LPARs as nodes to the BRMS network. The host name
must be reachable in the network. Add a host table entry, if necessary.
Tip: If the add operation fails, change the DDM attributes to allow a connection without a
password by using the following command:
CHGDDMTCPA PWDRQD(*USRID)
4. Import the BRMS database information from the leading BRMS system to the secondary
partitions by using the following command (ignore all messages):
INZBRM OPTION(*NETSYS) FROMSYS(APPN.Z1436B22)
5. From the system that backs up the IASP, define which systems receive the save
information and define that they also own the save history. If the option to change the save
history owner is implemented, it appears as though the target system performed the
backup. This is useful for the PRTRCYRPT and STRRCYBRM commands. Complete the following
steps:
a. To add a specific system sync, run the following command to change the system name
to make it look as though the backup was done by this system and synchronize the
reference date and time:
CALL QBRM/Q1AOLD PARM('HSTUPDSYNC' '*ADD' 'Z1436B23' 'APPN' 'IASP1'
'*CHGSYSNAM')
b. To add a specific system sync, run the following command to keep the name of who did
the backup and synchronize the reference date and time:
CALL QBRM/Q1AOLD PARM('HSTUPDSYNC' '*ADD' 'Z1436C23' 'APPN' 'IASP1'
'*NORMAL')
c. To display what is set up, run the following command:
CALL QBRM/Q1AOLD PARM('HSTUPDSYNC' '*DISPLAY')
d. To remove a specific system, use the following command:
CALL QBRM/Q1AOLD PARM('HSTUPDSYNC' '*REMOVE' 'Z1436C23' 'APPN' 'IASP1')
Complete the following steps to create a media class and a media policy:
1. Run the Work with Media Classes command WRKCLSBRM to open the media classes menu.
2. Add the media class, as shown in Figure 6-5. Repeat this step as necessary for your
normal BRMS save routines.
3. Run the WRKPCYBRM TYPE(*MED) command to add a new policy for the save operation, as
shown in Figure 6-6. This example uses a retention period of one day. In most cases, you
might prefer longer retention periods.
Use the Work with Backup Control Groups command (WRKCTLGBRM) to create BRMS control
groups. Complete the following steps:
1. As shown in Example 6-1, create control groups on the primary and high availability node,
which is used to create and remove the FlashCopy of the IASP1. Also, the recovery
reports are printed on the primary node.
Example 6-1 Creating two BRMS control groups with CRTIASPCPY and RMVIASPCPY
CRTIASPCPY *EXIT CHGASPACT ASPDEV(IASP1) OPTION(*SUSPEND) SSPTIMO(3)
*EXIT STRSVCSSN SSN(FLASH1) TYPE(*FLASHCOPY) ASPCPY((IASP_MM FLASHSVCB))
*EXIT CHGASPACT ASPDEV(IASP1) OPTION(*RESUME)
2. On the backup node, create the control group that is used to back up the IASP copy, as
shown in Example 6-2.
3. On the backup LPAR, the production LPAR was added to the list of owners for the saved
object, as described in 6.3.1, “Setting up the BRMS network group” on page 176.
4. Run the BRMS control groups by using the Start Backup using BRM (STRBKUBRM)
command in the correct order to save the IASP objects, as shown in the sketch that follows.
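For example, starting the control group that creates the FlashCopy on the production LPAR
resembles the following sketch (the control group name is taken from Example 6-1; the save and
remove control groups are started the same way on their respective LPARs):
STRBKUBRM CTLGRP(CRTIASPCPY)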
Figure 6-8 shows that after running the following Start Recovery using BRM (STRRCYBRM)
command with *LCL as the originating system, all information about the saved IASP objects
from the backup LPAR is available:
STRRCYBRM OPTION(*ALLUSR) ACTION(*RESTORE) FROMSYS(*LCL)
Figure 6-8 STRRCYBRM command on the production system
The PowerHA Tools for IBM i complement and extend the PowerHA and IBM storage
capabilities for high availability (HA) and disaster recovery (DR).
For more information about PowerHA Tools for IBM i, see the following IBM Systems Lab
Services and Training website:
http://www.ibm.com/systems/services/labservices
Smart Assist for PowerHA on IBM i: Provides operator commands and scripts to supplement
PowerHA installation and ongoing operations for IASP enabled applications. It simplifies
deployment and ongoing management of HA for critical IBM i applications. It is supported with
Storwize, DS8000, and internal IBM i storage.
IBM Lab Services Offerings for PowerHA for IBM i
Table A-2 lists the PowerHA for IBM i service offerings that are available from IBM Systems
Lab Services.
IBM i High Availability Architecture and Design Workshop: An experienced IBM i consultant
conducts a planning and design workshop to review solutions and alternatives to meet HA and DR
and backup and recovery requirements. The consultant provides an architecture and
implementation plan to meet these requirements.
PowerHA for IBM i Bandwidth Analysis: An experienced IBM i consultant reviews network
bandwidth requirements for implementing storage data replication. IBM reviews I/O data patterns
and provides a bandwidth estimate to build into the business and project plan for clients who
are deploying PowerHA for IBM i.
IBM i Independent Auxiliary Storage Pool (IASP) Workshop: An experienced IBM i consultant
provides jumpstart services for migrating applications into an IASP. Training includes enabling
applications for IASPs, clustering techniques, and managing PowerHA and HA and DR solution
options with IASPs.
PowerHA for IBM i Implementation Services: An experienced IBM consultant provides services to
implement an HA and DR solution for IBM Power Systems servers with IBM Storage. Depending on
specific business requirements, the end-to-end solution implementation can include a
combination of PowerHA for IBM i and PowerHA Tools for IBM i, and appropriate storage software,
such as Metro Mirror, Global Mirror, or FlashCopy.
The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Some of the publications that are referenced in this list might be available in
softcopy only:
PowerHA SystemMirror for IBM i Cookbook, SG24-7994
IBM Power Systems HMC Implementation and Usage Guide, SG24-7491
Simple Configuration Example for Storwize V7000 FlashCopy and PowerHA SystemMirror
for i, REDP-4923
IBM i and Midrange External Storage, SG24-7668
IBM System Storage SAN Volume Controller and Storwize V7000 Replication Family
Services, SG24-7574
Backup Recovery and Media Services for OS/400: A Practical Approach, SG24-4840
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, draft, and other materials, at this website:
http://www.ibm.com/redbooks
Other publications
Another relevant source of more information is IBM Real-time Compression Evaluation User
Guide, S7003988, which is available at this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S7003988&myns=s028&mynp=OCST3FR7&mync=E
Online resources
These websites are also relevant as further information sources:
IBM System Storage Interoperation Center (SSIC):
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
IBM System Storage SAN Volume Controller:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/
IBM Storwize V3500:
http://www-03.ibm.com/systems/hk/storage/disk/storwize_v3500/index.html
Back cover

VIOS explained from an IBM i perspective
Using PowerHA together with IBM Storwize Family
Configuring BRMS in a high available solution

The use of external storage and the benefits of virtualization became a topic of discussion in the IBM i area during the last several years. The question tends to be, what are the advantages of the use of external storage that is attached to an IBM i environment as opposed to the use of internal storage. The use of IBM PowerVM virtualization technology to virtualize Power server processors and memory also became common in IBM i environments. However, virtualized access to external storage and network resources by using a VIO server is still not widely used.

This IBM Redbooks publication gives a broad overview of the IBM Storwize family products and their features and functions. It describes the setup that is required on the storage side and describes and positions the different options for attaching IBM Storwize family products to an IBM i environment. Basic setup and configuration of a VIO server specifically for the needs of an IBM i environment is also described.

In addition, different configuration options for a combined setup of IBM PowerHA SystemMirror for i and the Storwize family products are described and positioned against each other. Detailed examples are provided for the setup process that is required for these environments.

The information that is provided in this book is useful for clients, IBM Business Partners, and IBM service professionals who need to understand how to install and configure their IBM i environment with attachment to the Storwize family products.